==================== Test output for //tensorflow/python/distribute:multi_worker_continuous_run_test_cpu:
Running tests under Python 3.11.2: /usr/local/bin/python3
[ RUN ] MultiWorkerContinuousRunTest.testAllReduceContinuousRun_test_mode_eager
INFO:tensorflow:Using local port 21634
I0331 21:55:27.997832 281473481601920 test_util.py:3794] Using local port 21634
INFO:tensorflow:Using local port 21561
I0331 21:55:27.998430 281473481601920 test_util.py:3794] Using local port 21561
INFO:tensorflow:Using local port 24185
I0331 21:55:27.998805 281473481601920 test_util.py:3794] Using local port 24185
INFO:tensorflow:Using local port 20525
I0331 21:55:27.999176 281473481601920 test_util.py:3794] Using local port 20525
INFO:tensorflow:Using local port 21040
I0331 21:55:27.999543 281473481601920 test_util.py:3794] Using local port 21040
[worker-0]: I0331 21:55:31.923409 281473026782080 multi_process_runner.py:840] Subprocess with PID 1067256 (worker, 0) is now being started.
[worker-0]: I0331 21:55:31.923853 281473026782080 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21634", "localhost:21561", "localhost:24185", "localhost:20525", "localhost:21040"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}'
[worker-1]: I0331 21:55:31.951224 281473026782080 multi_process_runner.py:840] Subprocess with PID 1067440 (worker, 1) is now being started.
[worker-1]: I0331 21:55:31.951629 281473026782080 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21634", "localhost:21561", "localhost:24185", "localhost:20525", "localhost:21040"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}'
[worker-2]: I0331 21:55:32.050868 281473026782080 multi_process_runner.py:840] Subprocess with PID 1067478 (worker, 2) is now being started.
[worker-2]: I0331 21:55:32.051301 281473026782080 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21634", "localhost:21561", "localhost:24185", "localhost:20525", "localhost:21040"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}'
[worker-1]: 2023-03-31 21:55:32.076708: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:21561
[worker-3]: I0331 21:55:32.081012 281473026782080 multi_process_runner.py:840] Subprocess with PID 1067547 (worker, 3) is now being started.
[worker-3]: I0331 21:55:32.081398 281473026782080 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21634", "localhost:21561", "localhost:24185", "localhost:20525", "localhost:21040"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}'
[worker-0]: 2023-03-31 21:55:32.089201: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:21634
[worker-0]: 2023-03-31 21:55:32.216377: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:535] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 10714557196362942821
[worker-0]: 2023-03-31 21:55:32.216584: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:535] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 14136456063399151498
[worker-1]: 2023-03-31 21:55:32.217474: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-0]: 2023-03-31 21:55:32.217088: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-4]: I0331 21:55:32.281373 281473026782080 multi_process_runner.py:840] Subprocess with PID 1067711 (worker, 4) is now being started.
[worker-4]: I0331 21:55:32.281759 281473026782080 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21634", "localhost:21561", "localhost:24185", "localhost:20525", "localhost:21040"]}, "task": {"type": "worker", "index": 4}, "rpc_layer": "grpc"}'
[worker-2]: 2023-03-31 21:55:32.367697: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:24185
[worker-0]: 2023-03-31 21:55:32.377902: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:535] /job:worker/replica:0/task:2 has connected to coordination service. Incarnation: 12394148466196502625
[worker-2]: 2023-03-31 21:55:32.378346: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-3]: 2023-03-31 21:55:32.646732: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:20525
[worker-0]: 2023-03-31 21:55:32.650158: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:535] /job:worker/replica:0/task:3 has connected to coordination service. Incarnation: 6469725793135511180
[worker-3]: 2023-03-31 21:55:32.666169: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-4]: 2023-03-31 21:55:32.871787: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:21040
[worker-0]: 2023-03-31 21:55:32.886633: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:535] /job:worker/replica:0/task:4 has connected to coordination service. Incarnation: 17661310720648198693
[worker-4]: 2023-03-31 21:55:32.896571: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
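For reference, the TF_CONFIG strings logged above for the five workers have the structure rebuilt by the minimal sketch below; the port list is taken from the log, and the make_tf_config helper is illustrative rather than part of multi_worker_continuous_run_test.py.

import json
import os

# Minimal sketch rebuilding the TF_CONFIG values logged above. The ports are
# the ones the test picked; make_tf_config is a hypothetical helper, not code
# from the test itself.
_WORKERS = [
    "localhost:21634", "localhost:21561", "localhost:24185",
    "localhost:20525", "localhost:21040",
]

def make_tf_config(task_index):
    return json.dumps({
        "cluster": {"worker": _WORKERS},
        "task": {"type": "worker", "index": task_index},
        "rpc_layer": "grpc",
    })

# Each worker subprocess sees its own index, e.g. worker 0:
os.environ["TF_CONFIG"] = make_tf_config(0)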
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:32.900573 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:32.916940 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:32.918650 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:32.916840 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:32.952439 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) 
[worker-4]: I0331 21:55:33.048266 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:33.048782 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:33.049070 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:33.076863 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:33.077435 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:33.077725 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:33.158879 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:33.159386 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:33.159684 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:33.171678 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:33.172102 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:33.172288 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:33.175732 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:33.176241 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:33.176442 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-4]: I0331 21:55:33.651992 281473026782080 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0331 21:55:33.701572 281473026782080 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0331 21:55:33.672123 281473026782080 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0331 21:55:33.796165 281473026782080 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0331 21:55:33.868116 281473026782080 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-4]: W0331 21:55:33.963840 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:33.968141 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:33.969895 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-2]: W0331 21:55:33.980222 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:33.977005 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:33.983291 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:33.983608 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:33.983777 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: W0331 21:55:33.987797 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: I0331 21:55:33.976197 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:33.988583 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:33.988762 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:33.989563 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:33.995650 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:33.995955 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:33.996132 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:33.998801 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:33.996978 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:34.003229 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:34.003552 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:34.003711 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:34.006757 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:34.012694 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:34.012959 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
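The per-worker setup logged above, and the reduce that fails in the tracebacks further down (run_reduce at multi_worker_continuous_run_test.py line 89, `strategy.reduce(reduce_util.ReduceOp.MEAN, t_in, axis=None)`), roughly correspond to the sketch below; it is a simplified stand-in for the test's worker_step_fn/run_reduce, assuming TF_CONFIG is already set as shown earlier, not the test's actual code.

import tensorflow as tf

# Minimal sketch, assuming TF_CONFIG is set for one of the five workers.
# MultiWorkerMirroredStrategy reads TF_CONFIG, starts the gRPC server, and
# enables collective ops across the cluster (group_size = 5 in the log).
strategy = tf.distribute.MultiWorkerMirroredStrategy()

@tf.function
def run_reduce():
  # The MEAN reduce is what appears as the CollectiveReduceV2 node in the
  # errors below; t_in here is a placeholder input, not the test's tensor.
  t_in = tf.fill([2, 2], 1.0)
  return strategy.reduce(tf.distribute.ReduceOp.MEAN, t_in, axis=None)

# Every worker runs the same step; the reduce only completes once all five
# workers have joined the collective group.
t_out = run_reduce()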
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-3]: I0331 21:55:34.013113 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21634', 'localhost:21561', 'localhost:24185', 'localhost:20525', 'localhost:21040']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0331 21:55:34.019675 281473026782080 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0331 21:55:34.019681 281473026782080 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0331 21:55:34.031875 281473026782080 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0331 21:55:34.032231 281473026782080 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-4]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-4]: I0331 21:55:34.052050 281473026782080 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 5, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: 2023-03-31 21:55:34.118857: E tensorflow/core/distributed_runtime/worker.cc:431] Bad status from CompleteGroupDistributed: INTERNAL: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)
[worker-0]: 2023-03-31 21:55:34.119055: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort INTERNAL: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)
[worker-0]: 2023-03-31 21:55:34.119114: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:0/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INTERNAL: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)
[worker-0]: The error could be from a previous operation. Restart your program to reset.
[worker-0]: [[{{node CollectiveReduceV2}}]] [type.googleapis.com/tensorflow.DerivedStatus='']
[worker-2]: 2023-03-31 21:55:34.119337: E tensorflow/core/distributed_runtime/worker.cc:431] Bad status from CompleteGroupDistributed: INTERNAL: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)
[worker-2]: Additional GRPC error information from remote target /job:worker/replica:0/task:0:
[worker-2]: :{"created":"@1680299734.119233089","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)","grpc_status":13}
[worker-0]: 2023-03-31 21:55:34.156340: E tensorflow/core/distributed_runtime/worker.cc:431] Bad status from CompleteGroupDistributed: INTERNAL: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)
[worker-0]: The error could be from a previous operation. Restart your program to reset. [type.googleapis.com/tensorflow.DerivedStatus='']
[worker-1]: 2023-03-31 21:55:34.156958: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort INTERNAL: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)
[worker-1]: The error could be from a previous operation. Restart your program to reset.
[worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0:
[worker-1]: :{"created":"@1680299734.156807147","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13} [type.googleapis.com/tensorflow.DerivedStatus='']
[worker-1]: 2023-03-31 21:55:34.157027: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:1/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INTERNAL: Collective ops is aborted by: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)
[worker-1]: The error could be from a previous operation. Restart your program to reset.
[worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0:
[worker-1]: :{"created":"@1680299734.156807147","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13}
[worker-1]: The error could be from a previous operation.
Restart your program to reset. [worker-1]: [[{{node CollectiveReduceV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-0]: 2023-03-31 21:55:34.157191: E tensorflow/core/distributed_runtime/worker.cc:431] Bad status from CompleteGroupDistributed: INTERNAL: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-0]: The error could be from a previous operation. Restart your program to reset. [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-4]: 2023-03-31 21:55:34.176418: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort INTERNAL: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-4]: The error could be from a previous operation. Restart your program to reset. [worker-4]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-4]: :{"created":"@1680299734.176211560","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13} [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-4]: 2023-03-31 21:55:34.176500: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:4/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INTERNAL: Collective ops is aborted by: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-4]: The error could be from a previous operation. Restart your program to reset. [worker-4]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-4]: :{"created":"@1680299734.176211560","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13} [worker-4]: The error could be from a previous operation. Restart your program to reset. [worker-4]: [[{{node CollectiveReduceV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-0]: 2023-03-31 21:55:34.193035: E tensorflow/core/distributed_runtime/worker.cc:431] Bad status from CompleteGroupDistributed: INTERNAL: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-0]: The error could be from a previous operation. Restart your program to reset. 
[type.googleapis.com/tensorflow.DerivedStatus=''] [worker-3]: 2023-03-31 21:55:34.193535: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort INTERNAL: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-3]: The error could be from a previous operation. Restart your program to reset. [worker-3]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-3]: :{"created":"@1680299734.193387116","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13} [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-3]: 2023-03-31 21:55:34.193597: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:3/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INTERNAL: Collective ops is aborted by: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-3]: The error could be from a previous operation. Restart your program to reset. [worker-3]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-3]: :{"created":"@1680299734.193387116","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13} [worker-3]: The error could be from a previous operation. Restart your program to reset. [worker-3]: [[{{node CollectiveReduceV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-0]: 2023-03-31 21:55:34.207019: E tensorflow/core/distributed_runtime/worker.cc:431] Bad status from CompleteGroupDistributed: INTERNAL: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-0]: The error could be from a previous operation. Restart your program to reset. [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-2]: 2023-03-31 21:55:34.207560: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort INTERNAL: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-2]: The error could be from a previous operation. Restart your program to reset. 
[worker-2]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-2]: :{"created":"@1680299734.207375572","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13} [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-2]: 2023-03-31 21:55:34.207623: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:2/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INTERNAL: Collective ops is aborted by: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-2]: The error could be from a previous operation. Restart your program to reset. [worker-2]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-2]: :{"created":"@1680299734.207375572","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13} [worker-2]: The error could be from a previous operation. Restart your program to reset. [worker-2]: [[{{node CollectiveReduceV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-0]: Process _Process-2: [worker-0]: Traceback (most recent call last): [worker-0]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-0]: self.run() [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-0]: return self._actual_run() [worker-0]: ^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-0]: app.run(lambda _: self._run_impl()) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/app.py", line 312, in run [worker-0]: _run_main(main, args) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-0]: sys.exit(main(argv)) [worker-0]: ^^^^^^^^^^ [worker-0]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-0]: app.run(lambda _: self._run_impl()) [worker-0]: ^^^^^^^^^^^^^^^^ [worker-0]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-0]: self._target(*self._args, **self._kwargs) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-0]: six.reraise(*info.exc_info) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/six_archive/six.py", line 719, in reraise [worker-0]: raise value [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-0]: return_value = fn(*args, **kwargs) [worker-0]: ^^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn [worker-0]: worker_step_fn(worker_id) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn [worker-0]: t_out = run_reduce() [worker-0]: ^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/util/traceback_utils.py", line 141, in error_handler [worker-0]: return fn(*args, **kwargs) [worker-0]: ^^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 840, in __call__ [worker-0]: result = self._call(*args, **kwds) [worker-0]: ^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 912, in _call [worker-0]: return self._concrete_variable_creation_fn._call_flat( # pylint: disable=protected-access [worker-0]: 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/monomorphic_function.py", line 1352, in _call_flat [worker-0]: return self._build_call_outputs(self._inference_function.call( [worker-0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 176, in call [worker-0]: outputs = execute.execute( [worker-0]: ^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/execute.py", line 53, in quick_execute [worker-0]: tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, [worker-0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-0]: tensorflow.python.framework.errors_impl.InternalError: Graph execution error: [worker-0]: [worker-0]: Detected at node 'CollectiveReduceV2' defined at (most recent call last): [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 154, in [worker-0]: multi_process_runner.test_main() [worker-0]: File "", line 1, in [worker-0]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274, in main [worker-0]: code = _serve_one(child_r, fds, [worker-0]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 313, in _serve_one [worker-0]: code = spawn._main(child_r, parent_sentinel) [worker-0]: File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133, in _main [worker-0]: return self._bootstrap(parent_sentinel) [worker-0]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-0]: self.run() [worker-0]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-0]: self._target(*self._args, **self._kwargs) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn [worker-0]: worker_step_fn(worker_id) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn [worker-0]: t_out = run_reduce() [worker-0]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 89, in run_reduce [worker-0]: return strategy.reduce(reduce_util.ReduceOp.MEAN, t_in, axis=None) [worker-0]: Node: 'CollectiveReduceV2' [worker-0]: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-0]: The error could be from a previous operation. Restart your program to reset. [worker-0]: [[{{node CollectiveReduceV2}}]] [Op:__inference_run_reduce_67] [worker-2]: Process _Process-4: [worker-2]: Traceback (most recent call last): [worker-2]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-2]: self.run() [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-2]: return self._actual_run() [worker-2]: ^^^^^^^^^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-2]: app.run(lambda _: self._run_impl()) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/app.py", line 312, in run [worker-2]: _run_main(main, args) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-2]: sys.exit(main(argv)) [worker-2]: ^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-2]: app.run(lambda _: self._run_impl()) [worker-2]: ^^^^^^^^^^^^^^^^ [worker-2]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-2]: self._target(*self._args, **self._kwargs) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-2]: six.reraise(*info.exc_info) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/six_archive/six.py", line 719, in reraise [worker-2]: raise value [worker-2]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-2]: return_value = fn(*args, **kwargs) [worker-2]: ^^^^^^^^^^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn [worker-2]: worker_step_fn(worker_id) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn [worker-2]: t_out = run_reduce() [worker-2]: ^^^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/util/traceback_utils.py", line 141, in error_handler [worker-2]: return fn(*args, **kwargs) [worker-2]: ^^^^^^^^^^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 840, in __call__ [worker-2]: result = self._call(*args, **kwds) [worker-2]: ^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 912, in _call [worker-2]: return self._concrete_variable_creation_fn._call_flat( # pylint: disable=protected-access [worker-2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/monomorphic_function.py", line 1352, in _call_flat [worker-2]: return self._build_call_outputs(self._inference_function.call( [worker-2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 176, in call [worker-2]: outputs = execute.execute( [worker-2]: ^^^^^^^^^^^^^^^^ [worker-2]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/execute.py", line 53, in quick_execute [worker-2]: tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, [worker-2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-2]: tensorflow.python.framework.errors_impl.InternalError: Graph execution error: [worker-2]: [worker-2]: Detected at node 'CollectiveReduceV2' defined at (most recent call last): [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 154, in [worker-2]: multi_process_runner.test_main() [worker-2]: File "", line 1, in [worker-2]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274, in main [worker-2]: code = _serve_one(child_r, fds, [worker-2]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 313, in _serve_one [worker-2]: code = spawn._main(child_r, parent_sentinel) [worker-2]: File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133, in _main [worker-2]: return self._bootstrap(parent_sentinel) [worker-2]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-2]: self.run() [worker-2]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-2]: self._target(*self._args, **self._kwargs) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn [worker-2]: worker_step_fn(worker_id) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn [worker-2]: t_out = run_reduce() [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 89, in run_reduce [worker-2]: return strategy.reduce(reduce_util.ReduceOp.MEAN, t_in, axis=None) [worker-2]: Node: 'CollectiveReduceV2' [worker-2]: Collective ops is aborted by: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-2]: The error could be from a previous operation. Restart your program to reset. 
[worker-2]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-2]: :{"created":"@1680299734.207375572","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13} [worker-2]: The error could be from a previous operation. Restart your program to reset. [worker-2]: [[{{node CollectiveReduceV2}}]] [Op:__inference_run_reduce_67] [worker-4]: Process _Process-6: [worker-4]: Traceback (most recent call last): [worker-4]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-4]: self.run() [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-4]: return self._actual_run() [worker-4]: ^^^^^^^^^^^^^^^^^^ [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-4]: app.run(lambda _: self._run_impl()) [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/app.py", line 312, in run [worker-4]: _run_main(main, args) [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-4]: sys.exit(main(argv)) [worker-4]: ^^^^^^^^^^ [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-4]: app.run(lambda _: self._run_impl()) [worker-4]: ^^^^^^^^^^^^^^^^ [worker-4]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-4]: self._target(*self._args, **self._kwargs) [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-4]: six.reraise(*info.exc_info) [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/six_archive/six.py", line 719, in reraise [worker-4]: raise value [worker-4]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-4]: return_value = fn(*args, **kwargs) [worker-4]: ^^^^^^^^^^^^^^^^^^^ [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn [worker-4]: worker_step_fn(worker_id) [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn [worker-4]: t_out = run_reduce() [worker-4]: ^^^^^^^^^^^^ [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/util/traceback_utils.py", line 141, in error_handler [worker-4]: return fn(*args, **kwargs) [worker-4]: ^^^^^^^^^^^^^^^^^^^ [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 840, in __call__ [worker-4]: result = self._call(*args, **kwds) [worker-4]: ^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 912, in _call [worker-4]: return self._concrete_variable_creation_fn._call_flat( # pylint: disable=protected-access [worker-4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/monomorphic_function.py", line 1352, in _call_flat [worker-4]: return self._build_call_outputs(self._inference_function.call( [worker-4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 176, in call [worker-4]: outputs = execute.execute( [worker-4]: ^^^^^^^^^^^^^^^^ [worker-4]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/execute.py", line 53, in quick_execute [worker-4]: tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, [worker-4]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-4]: tensorflow.python.framework.errors_impl.InternalError: Graph execution error: [worker-4]: [worker-4]: Detected at node 'CollectiveReduceV2' defined at (most recent call last): [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 154, in [worker-4]: multi_process_runner.test_main() [worker-4]: File "", line 1, in [worker-4]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274, in main [worker-4]: code = _serve_one(child_r, fds, [worker-4]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 313, in _serve_one [worker-4]: code = spawn._main(child_r, parent_sentinel) [worker-4]: File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133, in _main [worker-4]: return self._bootstrap(parent_sentinel) [worker-4]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-4]: self.run() [worker-4]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-4]: self._target(*self._args, **self._kwargs) [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn [worker-4]: worker_step_fn(worker_id) [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn [worker-4]: t_out = run_reduce() [worker-4]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 89, in run_reduce [worker-4]: return strategy.reduce(reduce_util.ReduceOp.MEAN, t_in, axis=None) [worker-4]: Node: 'CollectiveReduceV2' [worker-4]: Collective ops is aborted by: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-4]: The error could be from a previous operation. Restart your program to reset. 
[worker-4]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-4]: :{"created":"@1680299734.176211560","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13} [worker-4]: The error could be from a previous operation. Restart your program to reset. [worker-4]: [[{{node CollectiveReduceV2}}]] [Op:__inference_run_reduce_67] [worker-1]: Process _Process-3: [worker-1]: Traceback (most recent call last): [worker-1]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-1]: self.run() [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-1]: return self._actual_run() [worker-1]: ^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-1]: app.run(lambda _: self._run_impl()) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/app.py", line 312, in run [worker-1]: _run_main(main, args) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-1]: sys.exit(main(argv)) [worker-1]: ^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-1]: app.run(lambda _: self._run_impl()) [worker-1]: ^^^^^^^^^^^^^^^^ [worker-1]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-1]: self._target(*self._args, **self._kwargs) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-1]: six.reraise(*info.exc_info) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/six_archive/six.py", line 719, in reraise [worker-1]: raise value [worker-1]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-1]: return_value = fn(*args, **kwargs) [worker-1]: ^^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn [worker-1]: worker_step_fn(worker_id) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn [worker-1]: t_out = run_reduce() [worker-1]: ^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/util/traceback_utils.py", line 141, in error_handler [worker-1]: return fn(*args, **kwargs) [worker-1]: ^^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 840, in __call__ [worker-1]: result = self._call(*args, **kwds) [worker-1]: ^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 912, in _call [worker-1]: return self._concrete_variable_creation_fn._call_flat( # pylint: disable=protected-access [worker-1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/monomorphic_function.py", line 1352, in _call_flat [worker-1]: return self._build_call_outputs(self._inference_function.call( [worker-1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 176, in call [worker-1]: outputs = execute.execute( [worker-1]: ^^^^^^^^^^^^^^^^ [worker-1]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/execute.py", line 53, in quick_execute [worker-1]: tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, [worker-1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-1]: tensorflow.python.framework.errors_impl.InternalError: Graph execution error: [worker-1]: [worker-1]: Detected at node 'CollectiveReduceV2' defined at (most recent call last): [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 154, in [worker-1]: multi_process_runner.test_main() [worker-1]: File "", line 1, in [worker-1]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274, in main [worker-1]: code = _serve_one(child_r, fds, [worker-1]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 313, in _serve_one [worker-1]: code = spawn._main(child_r, parent_sentinel) [worker-1]: File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133, in _main [worker-1]: return self._bootstrap(parent_sentinel) [worker-1]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-1]: self.run() [worker-1]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-1]: self._target(*self._args, **self._kwargs) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn [worker-1]: worker_step_fn(worker_id) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn [worker-1]: t_out = run_reduce() [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 89, in run_reduce [worker-1]: return strategy.reduce(reduce_util.ReduceOp.MEAN, t_in, axis=None) [worker-1]: Node: 'CollectiveReduceV2' [worker-1]: Collective ops is aborted by: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-1]: The error could be from a previous operation. Restart your program to reset. 
[worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299734.156807147","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13} [worker-1]: The error could be from a previous operation. Restart your program to reset. [worker-1]: [[{{node CollectiveReduceV2}}]] [Op:__inference_run_reduce_67] [worker-3]: Process _Process-5: [worker-3]: Traceback (most recent call last): [worker-3]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-3]: self.run() [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-3]: return self._actual_run() [worker-3]: ^^^^^^^^^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-3]: app.run(lambda _: self._run_impl()) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/app.py", line 312, in run [worker-3]: _run_main(main, args) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-3]: sys.exit(main(argv)) [worker-3]: ^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-3]: app.run(lambda _: self._run_impl()) [worker-3]: ^^^^^^^^^^^^^^^^ [worker-3]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-3]: self._target(*self._args, **self._kwargs) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-3]: six.reraise(*info.exc_info) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/six_archive/six.py", line 719, in reraise [worker-3]: raise value [worker-3]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-3]: return_value = fn(*args, **kwargs) [worker-3]: ^^^^^^^^^^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn [worker-3]: worker_step_fn(worker_id) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn [worker-3]: t_out = run_reduce() [worker-3]: ^^^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/util/traceback_utils.py", line 141, in error_handler [worker-3]: return fn(*args, **kwargs) [worker-3]: ^^^^^^^^^^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 840, in __call__ [worker-3]: result = self._call(*args, **kwds) [worker-3]: ^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 912, in _call [worker-3]: return self._concrete_variable_creation_fn._call_flat( # pylint: disable=protected-access [worker-3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/monomorphic_function.py", line 1352, in _call_flat [worker-3]: return self._build_call_outputs(self._inference_function.call( [worker-3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 176, in call [worker-3]: outputs = execute.execute( [worker-3]: ^^^^^^^^^^^^^^^^ [worker-3]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/execute.py", line 53, in quick_execute [worker-3]: tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, [worker-3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-3]: tensorflow.python.framework.errors_impl.InternalError: Graph execution error: [worker-3]: [worker-3]: Detected at node 'CollectiveReduceV2' defined at (most recent call last): [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 154, in [worker-3]: multi_process_runner.test_main() [worker-3]: File "", line 1, in [worker-3]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274, in main [worker-3]: code = _serve_one(child_r, fds, [worker-3]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 313, in _serve_one [worker-3]: code = spawn._main(child_r, parent_sentinel) [worker-3]: File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133, in _main [worker-3]: return self._bootstrap(parent_sentinel) [worker-3]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-3]: self.run() [worker-3]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-3]: self._target(*self._args, **self._kwargs) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn [worker-3]: worker_step_fn(worker_id) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn [worker-3]: t_out = run_reduce() [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 89, in run_reduce [worker-3]: return strategy.reduce(reduce_util.ReduceOp.MEAN, t_in, axis=None) [worker-3]: Node: 'CollectiveReduceV2' [worker-3]: Collective ops is aborted by: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-3]: The error could be from a previous operation. Restart your program to reset. 
[worker-3]: Additional GRPC error information from remote target /job:worker/replica:0/task:0:
[worker-3]: :{"created":"@1680299734.193387116","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":13}
[worker-3]: The error could be from a previous operation. Restart your program to reset.
[worker-3]: [[{{node CollectiveReduceV2}}]] [Op:__inference_run_reduce_67]
I0331 21:55:36.890147 281473481601920 multi_process_runner.py:646] worker-0 exit code: 1
I0331 21:55:36.890399 281473481601920 multi_process_runner.py:646] worker-1 exit code: 1
I0331 21:55:36.890539 281473481601920 multi_process_runner.py:646] worker-2 exit code: 1
I0331 21:55:36.890681 281473481601920 multi_process_runner.py:646] worker-3 exit code: 1
I0331 21:55:36.890788 281473481601920 multi_process_runner.py:646] worker-4 exit code: 1
[ FAILED ] MultiWorkerContinuousRunTest.testAllReduceContinuousRun_test_mode_eager
INFO:tensorflow:time(__main__.MultiWorkerContinuousRunTest.testAllReduceContinuousRun_test_mode_eager): 8.93s
I0331 21:55:36.923848 281473481601920 test_util.py:2462] time(__main__.MultiWorkerContinuousRunTest.testAllReduceContinuousRun_test_mode_eager): 8.93s
[ RUN ] MultiWorkerContinuousRunTest.testVariableInitializationWithChangingShape_test_mode_eager
INFO:tensorflow:Using local port 19006
I0331 21:55:36.925060 281473481601920 test_util.py:3794] Using local port 19006
INFO:tensorflow:Using local port 17719
I0331 21:55:36.925435 281473481601920 test_util.py:3794] Using local port 17719
INFO:tensorflow:Using local port 18671
I0331 21:55:36.925789 281473481601920 test_util.py:3794] Using local port 18671
INFO:tensorflow:Using local port 22574
I0331 21:55:36.926151 281473481601920 test_util.py:3794] Using local port 22574
INFO:tensorflow:Using local port 19647
I0331 21:55:36.926489 281473481601920 test_util.py:3794] Using local port 19647
[worker-0]: I0331 21:55:37.051198 281473026782080 multi_process_runner.py:840] Subprocess with PID 1087626 (worker, 0) is now being started.
[worker-0]: I0331 21:55:37.051584 281473026782080 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:19006", "localhost:17719", "localhost:18671", "localhost:22574", "localhost:19647"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}'
[worker-0]: 2023-03-31 21:55:37.286994: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:19006
[worker-0]: 2023-03-31 21:55:37.336350: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:535] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 15398279817510230038
[worker-0]: 2023-03-31 21:55:37.337244: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-2]: I0331 21:55:37.481753 281473026782080 multi_process_runner.py:840] Subprocess with PID 1088805 (worker, 2) is now being started.
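The [ FAILED ] result above is identical on every worker: the collective reduce is aborted because /job:worker/replica:0/task:1 reports joining group_key=1 with a group size of 2 while the other tasks registered the group with size 5, so each pending CollectiveReduceV2 fails and all five subprocesses exit with code 1. The traceback frames (worker_fn -> worker_step_fn -> run_reduce in multi_worker_continuous_run_test.py) show the failing call is a strategy.reduce inside a tf.function. A minimal sketch of that step, with a hypothetical input tensor and assuming a MultiWorkerMirroredStrategy built from the TF_CONFIG shown in the log, is:

import tensorflow as tf

def worker_step_fn(strategy, worker_id):
  # Hypothetical per-worker input; only the call structure below comes from the
  # traceback frames in the log above.
  t_in = tf.fill([4], float(worker_id))

  @tf.function
  def run_reduce():
    # This is the call that surfaces as CollectiveReduceV2 in the error above:
    # all five workers must join the same collective group for it to complete.
    return strategy.reduce(tf.distribute.ReduceOp.MEAN, t_in, axis=None)

  return run_reduce()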
[worker-2]: I0331 21:55:37.482124 281473026782080 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:19006", "localhost:17719", "localhost:18671", "localhost:22574", "localhost:19647"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}' [worker-1]: I0331 21:55:37.502250 281473026782080 multi_process_runner.py:840] Subprocess with PID 1088773 (worker, 1) is now being started. [worker-1]: I0331 21:55:37.502638 281473026782080 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:19006", "localhost:17719", "localhost:18671", "localhost:22574", "localhost:19647"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}' [worker-4]: I0331 21:55:37.547462 281473026782080 multi_process_runner.py:840] Subprocess with PID 1089090 (worker, 4) is now being started. [worker-4]: I0331 21:55:37.547887 281473026782080 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:19006", "localhost:17719", "localhost:18671", "localhost:22574", "localhost:19647"]}, "task": {"type": "worker", "index": 4}, "rpc_layer": "grpc"}' [worker-3]: I0331 21:55:37.560991 281473026782080 multi_process_runner.py:840] Subprocess with PID 1089057 (worker, 3) is now being started. [worker-3]: I0331 21:55:37.561381 281473026782080 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:19006", "localhost:17719", "localhost:18671", "localhost:22574", "localhost:19647"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}' [worker-1]: 2023-03-31 21:55:37.762625: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:17719 [worker-0]: 2023-03-31 21:55:37.775107: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:535] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 11437271572539809317 [worker-1]: 2023-03-31 21:55:37.780105: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-2]: 2023-03-31 21:55:37.807476: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:18671 [worker-0]: 2023-03-31 21:55:37.811913: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:535] /job:worker/replica:0/task:2 has connected to coordination service. Incarnation: 9416756309273548373 [worker-2]: 2023-03-31 21:55:37.826148: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-3]: 2023-03-31 21:55:37.903761: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:22574 [worker-0]: 2023-03-31 21:55:37.936441: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:535] /job:worker/replica:0/task:3 has connected to coordination service. Incarnation: 3945335210301308717 [worker-3]: 2023-03-31 21:55:37.937026: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-4]: 2023-03-31 21:55:38.167134: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:19647 [worker-0]: 2023-03-31 21:55:38.170698: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:535] /job:worker/replica:0/task:4 has connected to coordination service. 
Incarnation: 10002399166695184480 [worker-4]: 2023-03-31 21:55:38.171087: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:38.196258 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:38.201501 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:38.206924 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:38.226932 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:38.254548 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', 
'/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:38.295443 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:38.295829 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:38.296025 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:38.305271 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:38.305757 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:38.305953 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:38.316871 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:38.317241 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
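The surrounding records show each worker building a MultiWorkerMirroredStrategy (CollectiveAllReduceStrategy) from the TF_CONFIG it was launched with: a five-entry worker cluster, its own task index, and CommunicationImplementation.AUTO. Outside this test harness, an equivalent setup is usually written roughly as below; the ports are taken from the log, and the task index is a placeholder that each worker process would set to its own value.

import json
import os

import tensorflow as tf

# Must be set before the strategy is constructed; mirrors the TF_CONFIG records above.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["localhost:19006", "localhost:17719", "localhost:18671",
                           "localhost:22574", "localhost:19647"]},
    "task": {"type": "worker", "index": 0},
    "rpc_layer": "grpc",
})

# AUTO lets TensorFlow choose the collective implementation, matching
# "communication = CommunicationImplementation.AUTO" in the log.
options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.AUTO)
strategy = tf.distribute.MultiWorkerMirroredStrategy(communication_options=options)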
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:38.317425 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:38.403029 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:38.403390 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:38.403581 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:38.510244 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:38.510603 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:38.510796 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-0]: W0331 21:55:38.513293 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. 
We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-4]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-1]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-3]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-4]: W0331 21:55:38.513544 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-1]: W0331 21:55:38.513338 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: W0331 21:55:38.513227 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-3]: W0331 21:55:38.513441 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-0]: W0331 21:55:38.585585 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
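The warning repeated by every worker above is the strategy's standard advice: dispatching replica code eagerly goes through mirrored_run.py on each call, so the per-step work should be wrapped in a tf.function. A minimal illustration with a made-up replica computation, assuming `strategy` was created as in the earlier TF_CONFIG sketch:

import tensorflow as tf

# Assumes `strategy` is the MultiWorkerMirroredStrategy from the sketch above.
@tf.function  # wrapping the step is what the warning above asks for
def train_step(x):
  def replica_fn(v):
    return v * 2.0  # hypothetical per-replica computation
  per_replica = strategy.run(replica_fn, args=(x,))
  return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)

# result = train_step(tf.constant(1.0))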
[worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:38.586828 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:38.592069 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:38.592347 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:38.592511 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: W0331 21:55:38.599446 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:38.600820 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:38.606607 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:38.606903 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:38.607066 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: W0331 21:55:38.620310 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:38.624701 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:38.622433 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:38.628194 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:38.628480 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:38.628643 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:38.626026 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:38.631350 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:38.631601 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:38.631759 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:38.639380 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:38.641179 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:38.646466 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:38.646705 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:38.646862 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-3]: W0331 21:55:38.647701 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-0]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-0]: W0331 21:55:38.647855 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-4]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-4]: W0331 21:55:38.647926 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: W0331 21:55:38.649051 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-1]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. 
[worker-1]: W0331 21:55:38.649152 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-3]: W0331 21:55:38.689393 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:38.689463 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: W0331 21:55:38.688963 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:38.689032 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:38.690092 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:38.690121 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: W0331 21:55:38.689481 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:38.690451 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:38.690607 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:38.690704 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:38.695229 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-2]: I0331 21:55:38.695482 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: I0331 21:55:38.695455 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:38.695652 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:38.695700 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:38.695638 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-1]: I0331 21:55:38.695858 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:38.695528 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:38.695228 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-3]: I0331 21:55:38.695777 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:38.695481 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:38.695926 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:38.695908 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:38.696069 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-4]: W0331 21:55:38.696884 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-3]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-3]: W0331 21:55:38.697049 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-1]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-1]: W0331 21:55:38.697211 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: W0331 21:55:38.697012 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. 
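The warning repeated throughout this run also names the fix: keep `strategy.run` (or `call_for_each_replica` / `experimental_run`) inside a `tf.function`, so the collective all-reduce is traced into a graph rather than dispatched op by op in eager mode. A minimal sketch of that pattern, assuming `strategy` is the MultiWorkerMirroredStrategy built in the sketch above and using a placeholder computation rather than the test's actual step:

import tensorflow as tf

@tf.function  # tracing here avoids the eager-overhead path the warning describes
def all_reduce_step(local_value):
    def replica_fn(v):
        # Real per-replica work would go here; identity keeps the sketch minimal.
        return tf.identity(v)

    # `strategy` is assumed to be in scope from the construction sketch above.
    per_replica = strategy.run(replica_fn, args=(local_value,))
    # Sum the per-replica results across the workers' replicas.
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)

# On each worker: result = all_reduce_step(tf.constant(1.0))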
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:38.695634 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-0]: W0331 21:55:38.696955 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: W0331 21:55:38.740592 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:38.740648 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:38.741787 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:38.746638 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. 
[worker-0]: I0331 21:55:38.741724 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:38.746629 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:38.746890 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:38.746890 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: I0331 21:55:38.747037 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:38.747057 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: W0331 21:55:38.749747 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: W0331 21:55:38.749775 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:38.750948 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:38.751090 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-1]: I0331 21:55:38.755845 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-3]: I0331 21:55:38.755896 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-3]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:38.756097 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: I0331 21:55:38.756160 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:38.756279 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:38.756319 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: W0331 21:55:38.758371 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:38.759683 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:38.764847 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:38.765106 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:38.765265 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-3]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-3]: W0331 21:55:38.766418 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-1]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-4]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-4]: W0331 21:55:38.766173 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: W0331 21:55:38.766333 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-0]: W0331 21:55:38.766273 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. 
[worker-1]: W0331 21:55:38.766596 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: W0331 21:55:38.826790 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:38.836607 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: W0331 21:55:38.837205 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: I0331 21:55:38.837781 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:38.843082 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:38.843342 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:38.843502 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:38.843385 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:38.845205 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:38.850055 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:38.850301 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:38.850450 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:38.846558 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:38.851648 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:38.851903 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:38.852052 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:38.851478 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:38.856601 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:38.856863 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:38.857011 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: W0331 21:55:38.867791 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-3]: W0331 21:55:38.876328 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. 
[worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:38.869670 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:38.874963 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:38.875213 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:38.875363 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: W0331 21:55:38.877064 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-1]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-1]: W0331 21:55:38.877594 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-4]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-4]: W0331 21:55:38.878522 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. 
We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-0]: WARNING:tensorflow:Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-0]: W0331 21:55:38.886497 281473026782080 mirrored_run.py:87] Using CollectiveAllReduceStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. [worker-2]: W0331 21:55:38.961791 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:38.973995 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:38.985576 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: W0331 21:55:38.997205 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:38.999857 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:39.004930 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:39.005182 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:39.005332 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:39.006409 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:39.011302 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:39.011580 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:39.011729 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:39.012681 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:39.017582 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:39.017837 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:39.017992 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:39.016290 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:39.019349 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:39.024588 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:39.024861 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:39.025010 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:39.026095 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:39.031367 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:39.031647 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:39.031798 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: W0331 21:55:39.133872 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:39.135210 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:39.140102 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:39.140346 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:39.140491 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: W0331 21:55:39.145652 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:39.147459 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:39.152500 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:39.152757 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:39.152909 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:39.152773 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:39.155450 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:39.160848 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:39.161121 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:39.161276 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:39.170878 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:39.179694 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:39.180933 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:39.181140 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:39.186245 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:39.186521 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:39.186669 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:39.186253 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:39.186522 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:39.186669 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:39.275837 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:39.277284 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:39.282272 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:39.282544 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:39.282705 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: W0331 21:55:39.286404 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:39.287358 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:39.287873 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:39.292904 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:39.293161 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:39.293318 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: W0331 21:55:39.298437 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:39.310683 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:39.311916 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:39.317244 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:39.317522 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:39.317673 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:39.327525 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:39.333406 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:39.333694 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:39.333847 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:39.335860 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:39.341667 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:39.341943 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:39.342093 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:39.430090 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
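Each worker also logs the context.py warning about enabling collective ops after program startup several times. That pattern is consistent with a long-lived worker process that tears down and rebuilds the strategy repeatedly, so collective ops are re-enabled well after the process has started. The loop below is a schematic reconstruction of that behaviour under an assumed TF_CONFIG, not the test's actual code.

    import tensorflow as tf

    # Schematic: rebuilding the strategy inside an already-running worker
    # process re-enables collective ops each time, which is what triggers the
    # repeated context.py warning seen in the logs above.
    for step in range(3):
        strategy = tf.distribute.MultiWorkerMirroredStrategy()
        # Five workers with one CPU device each, as logged: num_workers = 5
        # and one replica per worker.
        assert strategy.num_replicas_in_sync == 5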
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:39.431481 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: W0331 21:55:39.434867 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: W0331 21:55:39.435450 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: I0331 21:55:39.435994 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:39.436529 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:39.436802 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:39.436953 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:39.441610 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:39.441878 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:39.442040 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:39.436535 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:39.441426 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:39.441681 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:39.441832 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: W0331 21:55:39.455976 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:39.455972 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:39.457318 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:39.462827 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-1]: I0331 21:55:39.457172 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:39.463113 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:39.463278 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:39.462827 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:39.463113 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:39.463264 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: W0331 21:55:39.506818 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:39.508112 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:39.513446 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:39.513711 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:39.513869 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:39.522057 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:39.523571 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:39.528984 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:39.529263 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:39.529410 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: W0331 21:55:39.540564 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:39.541915 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:39.547249 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:39.547527 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:39.547688 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: W0331 21:55:39.549228 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:39.550517 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:39.556102 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:39.556409 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:39.556569 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:39.566600 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
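Once a strategy is up, the multi-worker collective ops it enabled are exercised by running an all-reduce across the five single-CPU replicas. The snippet below is a minimal sketch of such a step under a strategy configured as logged above (assuming TF_CONFIG is set on every worker); it is not the test body itself.

    import tensorflow as tf

    strategy = tf.distribute.MultiWorkerMirroredStrategy()

    @tf.function
    def one_step(value):
        def replica_fn(v):
            # Sum the per-replica value across all replicas in the cluster.
            return tf.distribute.get_replica_context().all_reduce(
                tf.distribute.ReduceOp.SUM, v)
        return strategy.run(replica_fn, args=(value,))

    # With each of the five workers contributing 1.0, every replica receives
    # the reduced value 5.0.
    result = one_step(tf.constant(1.0))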
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:39.567953 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:39.573231 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:39.573508 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:39.573668 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:39.622285 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:39.623709 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:39.628793 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:39.629058 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:39.629216 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: W0331 21:55:39.634223 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:39.635442 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:39.641092 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-3]: W0331 21:55:39.637667 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:39.639088 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:39.641352 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:39.641515 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:39.644225 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:39.644500 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:39.644660 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: W0331 21:55:39.653226 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: W0331 21:55:39.653878 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:39.661420 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:39.666855 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:39.667151 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:39.667304 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:39.654627 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:39.660099 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:39.660374 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:39.660521 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:39.979311 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: W0331 21:55:39.991133 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:39.991358 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:39.992261 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:39.997284 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:39.997533 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:39.997685 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: W0331 21:55:40.002717 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: W0331 21:55:40.014385 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:40.026604 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:40.031865 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:40.032119 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:40.032264 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:40.033201 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:40.038755 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:40.038196 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:40.038463 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
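The strategies here are all built with CommunicationImplementation.AUTO, which leaves the collective implementation to be picked at runtime; on a CPU-only cluster such as this one that generally resolves to a ring-based implementation. If you wanted to pin it explicitly (an illustrative choice, not something this test does), the options object accepts it directly:

    import tensorflow as tf

    # Request the ring implementation instead of AUTO; NCCL would only apply
    # on GPU clusters.
    options = tf.distribute.experimental.CommunicationOptions(
        implementation=tf.distribute.experimental.CommunicationImplementation.RING)
    strategy = tf.distribute.MultiWorkerMirroredStrategy(
        communication_options=options)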
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:40.038614 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:40.043961 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:40.044220 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:40.044366 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:40.039619 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:40.044454 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:40.044695 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.044842 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:40.148833 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:40.157279 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: W0331 21:55:40.157695 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:40.158600 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:40.163319 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:40.163544 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:40.163695 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:40.159132 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:40.163799 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:40.164039 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:40.164189 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: W0331 21:55:40.166049 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:40.167731 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:40.172730 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:40.172994 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:40.173147 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:40.168897 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:40.174099 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:40.179120 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:40.179383 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.179535 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:40.184556 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:40.189679 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:40.189918 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:40.190078 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: W0331 21:55:40.235851 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:40.235700 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: W0331 21:55:40.248008 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:40.249465 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:40.254406 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:40.254649 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:40.254793 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:40.255847 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:40.260755 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:40.260996 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:40.261139 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:40.262211 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:40.267450 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:40.267711 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:40.267855 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:40.279746 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:40.281180 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:40.286046 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:40.286355 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.286500 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:40.309634 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:40.311091 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:40.316462 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:40.316747 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:40.316905 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:40.365059 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: W0331 21:55:40.365327 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:40.366380 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:40.366845 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: W0331 21:55:40.368091 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: W0331 21:55:40.367437 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:40.368963 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:40.369348 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:40.371389 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:40.371662 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:40.371812 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:40.372006 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:40.371874 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:40.372141 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:40.372292 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:40.373583 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:40.374398 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:40.374691 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:40.374839 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:40.374310 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:40.374607 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:40.374754 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:40.378709 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:40.378983 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.379131 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:40.434558 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: W0331 21:55:40.435030 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:40.435090 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: W0331 21:55:40.435345 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
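The recurring context.py warning ("Enabling collective ops after program startup may cause error when accessing previously created tensors.") is logged when collective ops are enabled again in an eager context that is already initialized, which is consistent with each worker rebuilding the strategy repeatedly inside one process. Schematically (an assumed pattern, not the test's actual code; the iteration count is illustrative):

    import tensorflow as tf

    def build_strategy():
        # Each construction re-enables collective ops in the running eager
        # context, which is what emits the context.py warning seen above.
        return tf.distribute.MultiWorkerMirroredStrategy()

    for _ in range(10):  # iteration count chosen arbitrarily for the sketch
        strategy = build_strategy()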
[worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:40.435939 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:40.436409 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:40.436445 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: W0331 21:55:40.435433 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:40.437013 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-3]: I0331 21:55:40.437010 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:40.441414 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:40.441529 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:40.441670 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-0]: I0331 21:55:40.441734 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:40.441815 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: I0331 21:55:40.441894 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:40.441976 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: I0331 21:55:40.441970 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-4]: I0331 21:55:40.442228 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.442287 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-2]: I0331 21:55:40.442132 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Check health not enabled. [worker-3]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:40.442527 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: I0331 21:55:40.442580 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:40.442677 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.442733 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:40.489998 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: W0331 21:55:40.490051 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:40.493058 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: W0331 21:55:40.493613 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: W0331 21:55:40.493979 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:40.494419 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:40.495629 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-3]: I0331 21:55:40.495230 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:40.491414 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:40.496979 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:40.497309 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:40.497460 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:40.491396 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:40.496995 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:40.497325 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:40.497476 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:40.499719 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:40.500032 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:40.500196 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:40.500545 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:40.500857 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.501031 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:40.500956 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:40.501272 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:40.501437 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:40.564376 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-3]: W0331 21:55:40.565647 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-2]: W0331 21:55:40.565694 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:40.565900 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:40.567133 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:40.567371 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:40.571884 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:40.572238 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:40.572402 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:40.572523 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:40.572859 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: I0331 21:55:40.572738 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:40.573021 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.573080 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.573243 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: W0331 21:55:40.602096 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:40.616842 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:40.618226 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:40.623397 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:40.623620 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:40.623762 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:40.621385 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:40.626376 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:40.626638 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:40.626785 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: W0331 21:55:40.730594 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: W0331 21:55:40.752833 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: W0331 21:55:40.753533 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: I0331 21:55:40.754307 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:40.760285 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:40.760563 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:40.760711 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:40.756163 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:40.762344 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:40.762705 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:40.762857 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:40.764010 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:40.770435 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:40.770803 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:40.770958 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: W0331 21:55:40.792835 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:40.794336 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: W0331 21:55:40.795709 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:40.800280 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:40.800634 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:40.800798 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:40.797904 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:40.803408 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:40.803741 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.803894 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: W0331 21:55:40.857065 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: W0331 21:55:40.862014 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-4]: W0331 21:55:40.865864 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-2]: I0331 21:55:40.858671 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0331 21:55:40.865296 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0331 21:55:40.865634 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0331 21:55:40.865787 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 2, num_workers = 5, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: I0331 21:55:40.863796 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0331 21:55:40.869814 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0331 21:55:40.870295 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0331 21:55:40.870558 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 1, num_workers = 5, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: I0331 21:55:40.873625 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:4/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0'] [worker-4]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: I0331 21:55:40.879099 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:4/device:CPU:0',) [worker-4]: INFO:tensorflow:Check health not enabled. [worker-4]: I0331 21:55:40.879360 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-4]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-4]: I0331 21:55:40.879514 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 4, num_workers = 5, local_devices = ('/job:worker/task:4/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: W0331 21:55:40.885179 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:40.886655 281473026782080 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-0]: I0331 21:55:40.888230 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: I0331 21:55:40.887511 281473026782080 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:4/device:CPU:0'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0331 21:55:40.893329 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0331 21:55:40.893617 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0331 21:55:40.893768 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 3, num_workers = 5, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0331 21:55:40.894786 281473026782080 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0331 21:55:40.895164 281473026782080 collective_all_reduce_strategy.py:575] Check health not enabled. 
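The records above show all five workers building the same CollectiveAllReduceStrategy (MultiWorkerMirroredStrategy) from their TF_CONFIG: an identical cluster_spec, a different task_id per process, one CPU device each, and communication = CommunicationImplementation.AUTO. A minimal sketch of what each subprocess does with the public tf.distribute API follows; the addresses are copied from the log, while the variable names and the explicitly spelled-out AUTO option are illustrative, not the test's actual code.

    import json
    import os

    import tensorflow as tf

    # Every worker process sets its own TF_CONFIG before creating the strategy;
    # only the task index differs between processes (0..4 for this 5-worker cluster).
    os.environ["TF_CONFIG"] = json.dumps({
        "cluster": {"worker": ["localhost:19006", "localhost:17719", "localhost:18671",
                               "localhost:22574", "localhost:19647"]},
        "task": {"type": "worker", "index": 0},
    })

    # AUTO lets the runtime pick RING or NCCL, matching the
    # "communication = CommunicationImplementation.AUTO" records above. Creating the
    # strategy starts this worker's gRPC server (the "Started server with target"
    # records) and enables collective ops across the cluster.
    strategy = tf.distribute.MultiWorkerMirroredStrategy(
        communication_options=tf.distribute.experimental.CommunicationOptions(
            implementation=tf.distribute.experimental.CommunicationImplementation.AUTO))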
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0331 21:55:40.895332 281473026782080 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:19006', 'localhost:17719', 'localhost:18671', 'localhost:22574', 'localhost:19647']}, task_type = 'worker', task_id = 0, num_workers = 5, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO I0331 21:55:42.196524 281473481601920 multi_process_runner.py:646] worker-0 exit code: 0 I0331 21:55:42.196748 281473481601920 multi_process_runner.py:646] worker-1 exit code: 0 I0331 21:55:42.196863 281473481601920 multi_process_runner.py:646] worker-2 exit code: 0 I0331 21:55:42.196968 281473481601920 multi_process_runner.py:646] worker-3 exit code: 0 I0331 21:55:42.197064 281473481601920 multi_process_runner.py:646] worker-4 exit code: 0 I0331 21:55:42.199269 281473481601920 multi_process_runner.py:662] Joining log reading threads. I0331 21:55:42.199447 281473481601920 multi_process_runner.py:665] Joined log reading threads. INFO:tensorflow:time(__main__.MultiWorkerContinuousRunTest.testVariableInitializationWithChangingShape_test_mode_eager): 5.28s I0331 21:55:42.200181 281473481601920 test_util.py:2462] time(__main__.MultiWorkerContinuousRunTest.testVariableInitializationWithChangingShape_test_mode_eager): 5.28s [ OK ] MultiWorkerContinuousRunTest.testVariableInitializationWithChangingShape_test_mode_eager [ RUN ] MultiWorkerContinuousRunTest.test_session [ SKIPPED ] MultiWorkerContinuousRunTest.test_session ====================================================================== ERROR: testAllReduceContinuousRun_test_mode_eager (__main__.MultiWorkerContinuousRunTest) MultiWorkerContinuousRunTest.testAllReduceContinuousRun_test_mode_eager testAllReduceContinuousRun_test_mode_eager(mode='eager') ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/absl_py/absl/testing/parameterized.py", line 314, in bound_param_test return test_method(self, **testcase_params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360, in decorated execute_test_method() File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343, in execute_test_method test_method(**kwargs_to_pass) File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/combinations.py", line 559, in decorator test_method(self, **kwargs) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 106, in testAllReduceContinuousRun multi_process_runner.run( File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1329, in run return runner.join(timeout) ^^^^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 649, in join self._reraise_if_subprocess_error(process_statuses) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 565, in _reraise_if_subprocess_error six.reraise(*process_status.exc_info) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/six_archive/six.py", line 719, in reraise raise value File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained return_value = fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn worker_step_fn(worker_id) ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn t_out = run_reduce() ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/util/traceback_utils.py", line 141, in error_handler return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^ File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 840, in __call__ result = self._call(*args, **kwds) ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 912, in _call return self._concrete_variable_creation_fn._call_flat( # pylint: disable=protected-access ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/monomorphic_function.py", line 1352, in _call_flat return self._build_call_outputs(self._inference_function.call( ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 176, in call outputs = execute.execute( ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/execute.py", line 53, in quick_execute tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ^^^^^^^^^^^^^^^^^ tensorflow.python.framework.errors_impl.InternalError: Graph execution error: Detected at node 'CollectiveReduceV2' defined at (most recent call last): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 154, in multi_process_runner.test_main() File "", line 1, in File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274, in main code = _serve_one(child_r, fds, File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 313, in _serve_one code = spawn._main(child_r, parent_sentinel) File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133, in _main return self._bootstrap(parent_sentinel) File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 103, in worker_fn worker_step_fn(worker_id) File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 91, in worker_step_fn t_out = run_reduce() File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_worker_continuous_run_test.py", line 89, in run_reduce return strategy.reduce(reduce_util.ReduceOp.MEAN, t_in, axis=None) Node: 'CollectiveReduceV2' Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) The error could be from a previous operation. Restart your program to reset. [[{{node CollectiveReduceV2}}]] [Op:__inference_run_reduce_67] ---------------------------------------------------------------------- Ran 3 tests in 14.205s FAILED (errors=1, skipped=1) ================================================================================ ==================== Test output for //tensorflow/python/distribute:cross_device_ops_test_cpu (shard 1 of 4): Running tests under Python 3.11.2: /usr/local/bin/python3 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:35.303246 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: I0331 21:55:35.360097 281473209627520 multi_process_runner.py:840] Subprocess with PID 1074890 (worker, 0) is now being started. [worker-0]: I0331 21:55:35.360467 281473209627520 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:18799"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}' I0331 21:55:35.457891 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: I0331 21:55:35.470115 281473209627520 multi_process_runner.py:840] Subprocess with PID 1075980 (worker, 0) is now being started. [worker-0]: I0331 21:55:35.470498 281473209627520 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22145", "localhost:21413"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}' I0331 21:55:35.473288 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: I0331 21:55:35.590497 281473209627520 multi_process_runner.py:840] Subprocess with PID 1076435 (worker, 1) is now being started. 
[worker-1]: I0331 21:55:35.590866 281473209627520 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22145", "localhost:21413"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}' [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 5.1s I0331 21:55:35.594514 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 5.1s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:35.596053 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.597683 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.607324 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.06s I0331 21:55:35.657404 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.06s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:35.659018 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.660557 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.661896 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:35.662467 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:35.663724 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.665070 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 
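The cross_device_ops_test harness drives each parameterized case through multi_process_runner: the launcher logs "Waiting for the result from worker-N" while each subprocess is started with a TF_CONFIG describing a one- or two-worker cluster, as in the records above. A rough sketch of that pattern, assuming the internal test utilities named in the traceback (their exact signatures are an assumption here, not taken from the log):

    from tensorflow.python.distribute import multi_process_runner
    from tensorflow.python.distribute import multi_worker_test_base

    def worker_fn():
        # Runs once in every subprocess; each subprocess sees its own TF_CONFIG
        # carrying the shared cluster spec plus its own task index.
        pass

    # Hypothetical 2-worker cluster, mirroring the two-address TF_CONFIG above.
    cluster_spec = multi_worker_test_base.create_cluster_spec(num_workers=2)
    multi_process_runner.run(worker_fn, cluster_spec)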
I0331 21:55:35.666158 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:35.666898 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:35.668034 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.669359 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.670438 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:35.670936 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:35.672097 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.673937 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.675032 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:35.675558 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:35.676758 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.678828 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.679312 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 
21:55:35.684132 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: 2023-03-31 21:55:35.813571: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:18799 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.28s I0331 21:55:35.956406 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.28s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:35.957847 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.958642 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.960551 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.961216 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:35.961914 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:35.963230 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.963716 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
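Many of the parameterized cases above are reported as [ SKIPPED ] on this CPU-only target, in particular every combination that requires one or two GPUs, along with the NCCL cases; only some RING/CPU combinations actually run. A hypothetical guard of the kind such parameterized tests rely on (illustrative only, not the test's code):

    import unittest

    import tensorflow as tf

    class GpuGuardExample(unittest.TestCase):

        def test_needs_two_gpus(self):
            # Skip when the runner has fewer GPUs than the case requires.
            required_gpus = 2
            if len(tf.config.list_physical_devices("GPU")) < required_gpus:
                self.skipTest("requires %d GPUs" % required_gpus)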
I0331 21:55:35.965295 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.965728 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:35.966212 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:35.967324 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.967825 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.969248 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.969698 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:35.970380 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:35.971466 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.971952 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:35.973343 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.973803 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:35.974265 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:35.975343 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.975759 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.977261 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.977686 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:35.978220 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:35.979279 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.979606 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:35.980901 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.981299 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:35.981917 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:35.982935 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.983238 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.984634 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.985034 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:35.985470 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:35.986535 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.986830 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:35.988028 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.988444 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:35.988893 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:35.989933 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.990195 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.991345 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.991746 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:35.996509 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: tf.py_func is deprecated in TF V2. Instead, there are two [worker-0]: options available in V2. [worker-0]: - tf.py_function takes a python function which manipulates tf eager [worker-0]: tensors instead of numpy arrays. It's easy to convert a tf eager tensor to [worker-0]: an ndarray (just call tensor.numpy()) but having access to eager tensors [worker-0]: means `tf.py_function`s can use accelerators such as GPUs as well as [worker-0]: being differentiable using a gradient tape. [worker-0]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func [worker-0]: (it is not differentiable, and manipulates numpy arrays). It drops the [worker-0]: stateful argument making all functions stateful. [worker-0]: [worker-0]: W0331 21:55:36.352572 281473209627520 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: tf.py_func is deprecated in TF V2. Instead, there are two [worker-0]: options available in V2. [worker-0]: - tf.py_function takes a python function which manipulates tf eager [worker-0]: tensors instead of numpy arrays. 
It's easy to convert a tf eager tensor to [worker-0]: an ndarray (just call tensor.numpy()) but having access to eager tensors [worker-0]: means `tf.py_function`s can use accelerators such as GPUs as well as [worker-0]: being differentiable using a gradient tape. [worker-0]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func [worker-0]: (it is not differentiable, and manipulates numpy arrays). It drops the [worker-0]: stateful argument making all functions stateful. [worker-0]: INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.39s I0331 21:55:36.376834 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.39s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.378440 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.380791 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.379070 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.387230 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.01s I0331 21:55:36.387834 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.389102 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.389643 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
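The deprecation warning above spells out the TF 2 replacements for tf.py_func: tf.py_function, which receives eager tensors and can be differentiated through, and tf.numpy_function, which keeps the old NumPy-array semantics. A small self-contained example of both (illustrative only, unrelated to the test body):

    import numpy as np
    import tensorflow as tf

    def add_one(x):
        # With tf.numpy_function the wrapped function receives NumPy arrays.
        return x + np.float32(1.0)

    # Keeps tf.py_func-style behaviour: NumPy in/out, not differentiable.
    y = tf.numpy_function(add_one, [tf.constant([1.0, 2.0])], Tout=tf.float32)

    # Operates on eager tensors and supports gradients through the call.
    z = tf.py_function(lambda t: t * 2.0, [tf.constant(3.0)], Tout=tf.float32)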
I0331 21:55:36.390961 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.391951 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.392378 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.393427 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.393869 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.395061 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.396027 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:36.396701 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.397723 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.406583 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.408130 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.408565 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.01s I0331 21:55:36.409135 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.410244 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.411379 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.412620 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.413201 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.01s I0331 21:55:36.414990 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.416288 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.416740 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.418021 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.419205 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:36.419893 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.420953 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.421574 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.422929 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.423914 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:36.424339 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.425350 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.425875 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.427179 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.428119 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.428535 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.429543 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.429845 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.431009 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.431413 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:36.435682 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.01s I0331 21:55:36.442561 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.01s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.443915 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.444388 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.445724 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.446158 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:36.446566 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.447536 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.447785 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.448892 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.449262 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.449626 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.450540 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.450765 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.451776 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.452091 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:36.452632 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.453545 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.453779 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.454767 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.455082 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:36.455441 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.456363 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.456768 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.457814 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.458643 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.459012 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.459941 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.460321 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.461384 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.462235 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:36.462774 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.463706 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.464099 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.465176 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.466015 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:36.466412 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.467352 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.467744 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.468845 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.469720 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.470098 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.471049 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.471447 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.472543 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.473413 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:36.496536 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.04s I0331 21:55:36.514266 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.04s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.515780 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.516284 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.517795 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.520194 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.01s I0331 21:55:36.520658 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.521720 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.522314 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.523538 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.524459 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.524857 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.525840 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.526273 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.527483 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.529544 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:36.530147 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.531120 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.534681 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.540190 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.546262 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.02s I0331 21:55:36.546847 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.02s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.548069 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.557320 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.558905 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.559986 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.01s I0331 21:55:36.560454 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.561527 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.561945 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.563094 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.563964 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:36.564517 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.565455 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.565843 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.567111 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.567974 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:36.568341 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.569271 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.569643 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.570669 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.571451 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.571793 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.572682 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.573050 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.574079 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.574868 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:36.578788 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.01s I0331 21:55:36.585891 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.01s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.587443 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.587952 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.589459 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.590509 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:36.590957 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.591998 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.592432 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.593642 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.594578 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.594993 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.595990 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.596426 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.597637 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.598554 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:36.599156 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.600138 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.600545 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.601700 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.602555 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:36.602913 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.603862 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.604239 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.605305 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.606126 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.606491 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.607395 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.607767 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.608807 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.609598 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s I0331 21:55:36.610108 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.611000 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.611369 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.612405 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.613221 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s I0331 21:55:36.613562 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.614454 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.614823 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.615855 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.616657 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.617002 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:36.617891 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.618254 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.619274 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.620047 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:36.636589 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.05s I0331 21:55:36.664838 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0): 0.05s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 I0331 21:55:36.666458 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.667397 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.670478 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.672884 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.01s I0331 21:55:36.673356 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:36.674488 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.674925 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.676330 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.677263 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:36.677675 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:36.678701 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.679128 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.680343 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.681258 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s I0331 21:55:36.681847 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:36.682842 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.683257 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.684432 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.685325 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:36.685715 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:36.686689 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.687113 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.688287 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.689214 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s I0331 21:55:36.689604 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:36.690568 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.690985 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.692157 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.693056 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s I0331 21:55:36.693857 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:36.694829 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.695247 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.696439 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.697027 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:36.698225 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:36.699216 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.700968 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.702237 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.703209 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s I0331 21:55:36.703628 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:36.704622 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.705050 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:36.706262 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:36.707204 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:36.711658 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 1, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: I0331 21:55:36.965557 281473209627520 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 1, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 2 all_reduces, num_devices = 1, group_size = 1, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: I0331 21:55:37.112233 281473209627520 cross_device_ops.py:1151] Collective all_reduce tensors: 2 all_reduces, num_devices = 1, group_size = 1, implementation = CommunicationImplementation.RING, num_packs = 1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0): 0.47s I0331 21:55:37.175103 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0): 0.47s [ OK ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:37.176669 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.177038 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.178549 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.178968 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:37.179392 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:37.180427 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.180723 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
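The "Collective all_reduce tensors" records above come from the RING all-reduce exercised by the passing testAllReduceDense case. A minimal user-level sketch of the same kind of reduction, assuming the public tf.distribute.MultiWorkerMirroredStrategy API rather than the internal cross_device_ops helpers the test drives directly:

import tensorflow as tf

# Select the RING collective implementation, mirroring the
# CommunicationImplementationRING parameter in the test names.
options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.RING)

# Constructing the strategy enables collective ops; doing it at program startup,
# before other tensors exist, avoids the context.py:866 warning seen in this log.
strategy = tf.distribute.MultiWorkerMirroredStrategy(communication_options=options)

def one_step():
  # Each replica contributes the same dense value in this toy example.
  return tf.constant([1.0, 2.0, 3.0])

per_replica = strategy.run(one_step)
# Mean all-reduce across replicas/workers, matching the ReduceOpMEAN cases.
reduced = strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica, axis=None)
print(reduced)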
I0331 21:55:37.181853 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.182223 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s I0331 21:55:37.182604 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceMixedDenseAndSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:37.183573 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.183847 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.185008 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.185403 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceMixedDenseAndSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceMixedDenseAndSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:37.186009 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceMixedDenseAndSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:37.187168 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.187715 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:37.188889 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.189806 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:37.190367 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:37.191345 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.191753 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.192844 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.193372 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:37.193750 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:37.194701 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.194969 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.196260 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.196834 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:37.197227 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:37.198201 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.198635 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:37.199847 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.200782 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:37.201365 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:37.202352 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.202774 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.203941 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.204909 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:37.205312 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:37.206321 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.206760 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.207972 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.208953 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:37.209416 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:37.210474 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.210952 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:37.212272 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.213102 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:37.217772 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: INFO:tensorflow:Collective all_reduce IndexedSlices: 1 all_reduces, num_devices =1, group_size = 1, implementation = CommunicationImplementation.RING [worker-0]: I0331 21:55:37.474728 281473209627520 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 1 all_reduces, num_devices =1, group_size = 1, implementation = CommunicationImplementation.RING [worker-0]: INFO:tensorflow:Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 1, implementation = CommunicationImplementation.RING [worker-0]: I0331 21:55:38.013617 281473209627520 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 1, implementation = CommunicationImplementation.RING INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 1.59s I0331 21:55:38.798920 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 1.59s [ OK ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:38.800478 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.800839 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.802865 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.805331 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.01s I0331 21:55:38.805873 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:38.807176 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.807694 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
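The [ OK ] sparse case above all-reduces tf.IndexedSlices values (the representation TensorFlow uses for sparse gradients), per the "Collective all_reduce IndexedSlices" lines from worker-0. A minimal sketch of what a SUM over two IndexedSlices amounts to, kept local and independent of the distributed machinery:

import tensorflow as tf

a = tf.IndexedSlices(values=tf.constant([[1.0, 1.0]]),
                     indices=tf.constant([0]),
                     dense_shape=tf.constant([3, 2]))
b = tf.IndexedSlices(values=tf.constant([[2.0, 2.0]]),
                     indices=tf.constant([2]),
                     dense_shape=tf.constant([3, 2]))

# Densify to inspect the result; rows 0 and 2 carry the two slices.
summed = tf.convert_to_tensor(a) + tf.convert_to_tensor(b)
print(summed.numpy())
# [[1. 1.]
#  [0. 0.]
#  [2. 2.]]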
I0331 21:55:38.809068 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.819545 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s I0331 21:55:38.820169 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:38.821459 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.821823 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.837149 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.837612 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.03s I0331 21:55:38.849289 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.03s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:38.850659 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.851012 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:38.852637 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.853160 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:38.853622 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:38.854745 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.855212 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.856600 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.858294 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:38.858772 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:38.859890 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.860246 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:38.861507 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.862081 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:38.862688 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:38.863730 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.864178 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.865474 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.866464 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:38.866913 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:38.867952 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.868404 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:38.869639 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.870633 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:38.871088 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:38.872144 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.872437 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.873597 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.874193 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:38.874790 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:38.875822 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.876333 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:38.877766 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.878891 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:38.879343 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:38.880396 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.880851 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.882111 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.883226 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:38.883663 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:38.884725 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.885074 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:38.886432 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.887833 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:38.888493 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:38.889597 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.890087 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.893440 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.893926 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.01s I0331 21:55:38.894433 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:38.895593 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.895947 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:38.897298 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.898158 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:38.898629 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:38.899773 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.900567 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.901835 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:38.902869 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:38.908230 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.31s I0331 21:55:39.210826 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.31s [ OK ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:39.212364 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.219664 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
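The [ OK ] BatchReduceDense case above reduces a batch of tensors in one collective launch rather than issuing one reduce per tensor. A rough sketch of the batched-reduce idea using the public strategy.extended.batch_reduce_to API on a single-device tf.distribute.MirroredStrategy for simplicity; this is illustrative and not the test's own call path:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

values = [tf.constant(1.0), tf.constant(2.0)]
# One batched SUM reduction; each value doubles as its own destination.
pairs = [(v, v) for v in values]
reduced = strategy.extended.batch_reduce_to(tf.distribute.ReduceOp.SUM, pairs)
print(reduced)  # two reduced values; with one device they equal the inputs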
I0331 21:55:39.221528 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:39.222698 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.01s I0331 21:55:39.223221 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:39.224417 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.224899 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.226202 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:39.227221 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:39.227664 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:39.228755 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.229222 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:39.230489 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:39.231504 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:39.236195 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: E0331 21:55:39.247968367 1075980 server_chttp2.cc:40] {"created":"@1680299739.247850680","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1680299739.247845304","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":341,"referenced_errors":[{"created":"@1680299739.247811451","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299739.247781832","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1680299739.247844059","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299739.247834793","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]} [worker-0]: 2023-03-31 21:55:39.248081: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:601] UNKNOWN: Could not start gRPC server [worker-1]: 2023-03-31 21:55:39.280632: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:21413 [worker-1]: 2023-03-31 21:55:39.967527: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort FAILED_PRECONDITION: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299739.967376432","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. 
This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster.","grpc_status":9} [worker-0]: 2023-03-31 21:55:40.281128: E tensorflow/core/common_runtime/eager/context_distributed_manager.cc:699] Could not start gRPC server I0331 21:55:40.284266 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ FAILED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 1.07s I0331 21:55:40.295184 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 1.07s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:40.296653 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:40.297199 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:40.298539 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.298999 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:40.298939 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:40.300521 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:40.301645 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:40.301949 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:40.303080 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.303480 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:40.303379 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
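The [ FAILED ] case above starts with worker-0's gRPC server failing to bind its port (errno 98, "Address already in use"), after which worker-1 aborts with the FAILED_PRECONDITION incarnation mismatch. A minimal sketch of the usual way to ask the OS for a free local TCP port (bind to port 0 and read back the assignment); this is illustrative, not the harness's own port-picking code:

import socket

def pick_unused_port():
  with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("localhost", 0))   # port 0: the kernel assigns an unused port
    return s.getsockname()[1]  # note: it can be re-taken once the socket closes

print(pick_unused_port())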
[ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:40.304141 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:40.305134 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:40.305400 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:40.306490 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.307201 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:40.313395 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s I0331 21:55:40.314493 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:40.315851 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:40.316184 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:40.317527 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.317946 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:40.318011 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:40.319113 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:40.320199 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:40.320520 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:40.321727 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.322196 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:40.322212 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:40.323198 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:40.324293 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:40.324773 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:40.326021 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.326531 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:40.326514 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:40.327391 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:40.328490 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:40.328811 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:40.329993 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.330448 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 [worker-1]: W0331 21:55:40.330453 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:40.331562 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:40.332696 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:40.333039 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:40.334221 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.334716 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:40.334570 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:40.335424 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:40.336493 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.338045 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.338467 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:40.339242 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:40.340349 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:40.336820 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:40.338397 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:40.340692 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:40.341876 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:40.342310 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:40.342198 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:40.348175 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: E0331 21:55:40.350409224 1075980 server_chttp2.cc:40] {"created":"@1680299740.350291222","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1680299740.350286546","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":341,"referenced_errors":[{"created":"@1680299740.350253283","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299740.350219429","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1680299740.350285071","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299740.350277185","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]} [worker-0]: 2023-03-31 21:55:40.350504: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:601] UNKNOWN: Could not start gRPC server [worker-0]: 2023-03-31 21:55:40.350708: E tensorflow/core/common_runtime/eager/context_distributed_manager.cc:699] Could not start gRPC server I0331 21:55:40.353312 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: INFO:tensorflow:Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-1]: I0331 21:55:41.009763 281473209627520 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: tf.py_func is deprecated in TF V2. Instead, there are two [worker-1]: options available in V2. [worker-1]: - tf.py_function takes a python function which manipulates tf eager [worker-1]: tensors instead of numpy arrays. It's easy to convert a tf eager tensor to [worker-1]: an ndarray (just call tensor.numpy()) but having access to eager tensors [worker-1]: means `tf.py_function`s can use accelerators such as GPUs as well as [worker-1]: being differentiable using a gradient tape. [worker-1]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func [worker-1]: (it is not differentiable, and manipulates numpy arrays). It drops the [worker-1]: stateful argument making all functions stateful. 
[worker-1]: [worker-1]: W0331 21:55:41.362512 281473209627520 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: tf.py_func is deprecated in TF V2. Instead, there are two [worker-1]: options available in V2. [worker-1]: - tf.py_function takes a python function which manipulates tf eager [worker-1]: tensors instead of numpy arrays. It's easy to convert a tf eager tensor to [worker-1]: an ndarray (just call tensor.numpy()) but having access to eager tensors [worker-1]: means `tf.py_function`s can use accelerators such as GPUs as well as [worker-1]: being differentiable using a gradient tape. [worker-1]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func [worker-1]: (it is not differentiable, and manipulates numpy arrays). It drops the [worker-1]: stateful argument making all functions stateful. [worker-1]: [worker-1]: 2023-03-31 21:55:41.586735: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort FAILED_PRECONDITION: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299741.586635704","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster.","grpc_status":9} [worker-1]: 2023-03-31 21:55:41.586825: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:1/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): FAILED_PRECONDITION: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299741.586635704","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster.","grpc_status":9} [worker-1]: The error could be from a previous operation. Restart your program to reset. 
[worker-1]: [[{{node CollectiveGatherV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] [ FAILED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 2.02s I0331 21:55:42.359180 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 2.02s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:42.360431 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.360966 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.362452 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.363098 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.363028 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:42.368836 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_1_requiredgpus_2 I0331 21:55:42.370066 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.372219 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.370620 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
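Annotation: the worker-0 records above show the gRPC server failing to bind with errno 98 ("Address already in use"), after which the eager context reports "Could not start gRPC server". The bind failure itself is reproducible with nothing beyond the standard socket module; a minimal sketch (the port is chosen by the OS, not taken from the log):

# Reproduces the errno 98 condition behind the "Could not start gRPC server"
# records above: two listening sockets cannot bind the same local port.
import errno
import socket

first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("localhost", 0))          # let the OS pick a free port
port = first.getsockname()[1]
first.listen(1)

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
  second.bind(("localhost", port))    # same port -> EADDRINUSE, as in the log
except OSError as exc:
  assert exc.errno == errno.EADDRINUSE
  print("bind failed:", exc)
finally:
  second.close()
  first.close()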
I0331 21:55:42.372861 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_1_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_1_requiredgpus_2): 0.0s I0331 21:55:42.373736 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_1_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testInputsAreFunctionArgs_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:42.374805 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.375241 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.376563 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.378380 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testInputsAreFunctionArgs_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testInputsAreFunctionArgs_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:42.378788 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testInputsAreFunctionArgs_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testMultiThreadedCollectiveLaunchNoInterleave_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_2 I0331 21:55:42.379810 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.380265 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.381540 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.382096 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testMultiThreadedCollectiveLaunchNoInterleave_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testMultiThreadedCollectiveLaunchNoInterleave_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_2): 0.0s I0331 21:55:42.382862 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testMultiThreadedCollectiveLaunchNoInterleave_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testNcclOrdering_test_numprocesses_1_requiredgpus_2 I0331 21:55:42.383823 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.384190 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:42.385362 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.385851 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.372728 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:42.377676 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:42.381977 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:42.385754 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testNcclOrdering_test_numprocesses_1_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testNcclOrdering_test_numprocesses_1_requiredgpus_2): 0.0s I0331 21:55:42.386635 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testNcclOrdering_test_numprocesses_1_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:42.387617 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.388000 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.389269 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.389815 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.389722 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:42.390806 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:42.391778 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.392203 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.393487 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.394021 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.393934 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:42.394929 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:42.395934 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.396372 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.408628 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.409280 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.409149 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s I0331 21:55:42.410238 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:42.411348 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.411870 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.413304 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.413883 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.413770 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
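Annotation: many of the SKIPPED cases around this point carry required_gpus_1 or required_gpus_2 parameters while this is the _cpu target, so they presumably bail out before launching any collective. A hedged sketch of the usual guard pattern, assuming a hypothetical test method (the real test's parameterization machinery is not reproduced here):

# Hedged sketch of a GPU-availability skip guard; the method name and the
# required_gpus argument are illustrative, not the real test's decorators.
import unittest
import tensorflow as tf

class _SketchTest(unittest.TestCase):

  def _maybe_skip(self, required_gpus):
    available = len(tf.config.list_physical_devices("GPU"))
    if available < required_gpus:
      self.skipTest(f"needs {required_gpus} GPU(s), found {available}")

  def test_reduce_dense_requires_gpus(self):
    self._maybe_skip(required_gpus=1)
    # ... collective reduce body would go here ...

if __name__ == "__main__":
  unittest.main()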
[ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:42.414881 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:42.415899 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.416327 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.417742 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.418229 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.418153 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:42.419069 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:42.420087 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.420497 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.421789 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.422293 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.422203 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:42.423091 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:42.424069 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.424451 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.425690 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:42.426062 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.426447 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:42.427187 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:42.428198 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.428635 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.429904 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.430483 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.430371 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:42.431286 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:42.432277 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.432738 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.434012 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.434598 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.434475 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:42.435395 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:42.436441 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.437009 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.438270 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.438811 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.438705 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:42.439872 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:42.440861 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.441321 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.442954 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.443614 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.443432 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:42.444413 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:42.445656 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.446096 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.447642 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.448235 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.448245 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:42.449170 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:42.450191 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.450611 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.451868 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.452404 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.452417 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.457455 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.18s I0331 21:55:42.625235 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.18s [ OK ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:42.626588 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.627275 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.630112 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.630973 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.631421 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.01s I0331 21:55:42.632423 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:42.633542 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.634039 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.635356 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.635908 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.635766 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s I0331 21:55:42.639643 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:42.640804 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.641303 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.642695 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.643268 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.643178 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
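Annotation: the parameter names in the cases above (CommunicationImplementationRING/NCCL/AUTO, ReduceOpSUM/MEAN) correspond to the public tf.distribute options. A minimal, hedged sketch of selecting the RING implementation for a multi-worker strategy; it assumes a correctly populated TF_CONFIG in the environment (without one the strategy typically falls back to a single local worker) and does not reproduce the test's own cluster bring-up:

# Hedged sketch: choosing the RING collective implementation via the public
# tf.distribute API. Variable and function names are illustrative.
import tensorflow as tf

options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.RING)
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=options)

@tf.function
def replica_sum():
  # Each replica contributes a ones tensor; the collective reduces with SUM.
  value = tf.ones([2])
  return tf.distribute.get_replica_context().all_reduce(
      tf.distribute.ReduceOp.SUM, value)

per_replica = strategy.run(replica_sum)
print(strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None))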
I0331 21:55:42.648550 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: E0331 21:55:42.652862349 1075980 server_chttp2.cc:40] {"created":"@1680299742.652747002","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1680299742.652741487","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":341,"referenced_errors":[{"created":"@1680299742.652709513","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299742.652671229","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1680299742.652740777","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299742.652733181","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]} [worker-0]: 2023-03-31 21:55:42.652955: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:601] UNKNOWN: Could not start gRPC server [worker-0]: 2023-03-31 21:55:42.653176: E tensorflow/core/common_runtime/eager/context_distributed_manager.cc:699] Could not start gRPC server I0331 21:55:42.657771 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: 2023-03-31 21:55:42.687642: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort INVALID_ARGUMENT: Shape mismatch in the collective instance 100. Op at device /job:worker/replica:0/task:1/device:CPU:0 expected shape [] but another member in the group expected shape [2]. This is likely due to different input shapes at different members of the collective op. [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299742.687544842","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Shape mismatch in the collective instance 100. Op at device /job:worker/replica:0/task:1/device:CPU:0 expected shape [] but another member in the group expected shape [2]. 
This is likely due to different input shapes at different members of the collective op.","grpc_status":3} [ FAILED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.06s I0331 21:55:42.696908 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.06s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:42.698208 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.699021 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.700610 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.701518 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.702221 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.01s I0331 21:55:42.704470 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:42.705659 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.706345 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.707886 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.708541 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.708502 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
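Annotation: the INVALID_ARGUMENT record above states the invariant directly: every member of a collective instance must pass the op an identically shaped tensor (here one member supplied a scalar, shape [], while another supplied shape [2]). A hedged sketch of the well-formed case, using the public replica-context all-reduce on a single-host stand-in strategy (names are illustrative):

# Hedged sketch: all replicas must feed identically shaped tensors to an
# all-reduce; mixing a scalar with a [2] vector produces the shape-mismatch
# error recorded above.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # single-host stand-in

@tf.function
def well_formed():
  ctx = tf.distribute.get_replica_context()
  value = tf.ones([2])  # same shape on every replica
  return ctx.all_reduce(tf.distribute.ReduceOp.SUM, value)

print(strategy.run(well_formed))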
[ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:42.709529 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:42.710629 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.711079 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.712440 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.713036 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.712875 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:42.713835 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:42.714892 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.715294 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.716598 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.717198 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.717029 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:42.718395 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:42.719416 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.719837 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.721117 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:42.721586 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.721897 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:42.722715 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:42.723741 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.724183 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.725450 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.726002 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.725871 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:42.726892 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:42.727918 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.728316 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.729557 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.730129 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.729989 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:42.730886 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:42.731897 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.732300 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.733510 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.734075 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.734041 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:42.735096 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:42.736086 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.736503 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.737723 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.738224 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.738144 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:42.739025 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:42.740002 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.740409 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.741633 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.742147 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.742040 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:42.742887 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:42.743858 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.744240 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.745429 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:42.745922 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:42.745819 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.751376 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: E0331 21:55:42.753989449 1075980 server_chttp2.cc:40] {"created":"@1680299742.753871857","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1680299742.753867771","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":341,"referenced_errors":[{"created":"@1680299742.753834663","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299742.753805340","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1680299742.753866776","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299742.753859286","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]} [worker-0]: 2023-03-31 21:55:42.754082: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:601] UNKNOWN: Could not start gRPC server [worker-0]: 2023-03-31 21:55:42.754557: E tensorflow/core/common_runtime/eager/context_distributed_manager.cc:699] Could not start gRPC server I0331 21:55:42.757544 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: INFO:tensorflow:Collective all_reduce IndexedSlices: 1 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-1]: I0331 
21:55:42.915148 281473209627520 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 1 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [ FAILED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 0.96s I0331 21:55:43.700345 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 0.96s [ RUN ] CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:43.701728 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:43.702267 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: 2023-03-31 21:55:43.688432: E tensorflow/core/common_runtime/ring_alg.cc:291] Aborting RingGather with DEADLINE_EXCEEDED: Collective ops is aborted by: Collective has timed out during execution. [worker-1]: The error could be from a previous operation. Restart your program to reset. [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299743.688248591","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Collective has timed out during execution.\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":4} [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-1]: 2023-03-31 21:55:43.688472: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort DEADLINE_EXCEEDED: Collective ops is aborted by: Collective has timed out during execution. [worker-1]: The error could be from a previous operation. Restart your program to reset. [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299743.688248591","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Collective has timed out during execution.\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":4} [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-1]: 2023-03-31 21:55:43.688769: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:1/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): DEADLINE_EXCEEDED: Collective ops is aborted by: Collective ops is aborted by: Collective has timed out during execution. [worker-1]: The error could be from a previous operation. Restart your program to reset. 
[worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299743.688248591","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Collective has timed out during execution.\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":4} [worker-1]: The error could be from a previous operation. Restart your program to reset. [worker-1]: [[{{node CollectiveGatherV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] I0331 21:55:43.727140 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:43.728208 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:43.727792 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.03s I0331 21:55:43.731175 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.03s [ RUN ] CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:43.732481 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:43.732986 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.906975 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:43.916403 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.935683 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.2s I0331 21:55:43.936352 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.2s [ RUN ] CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:43.937653 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:43.938188 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
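Annotation: the DEADLINE_EXCEEDED records above come from a collective whose peers never joined before the configured timeout expired. That timeout is exposed through the same CommunicationOptions object used for the implementation choice; a hedged sketch (the 30-second figure is an arbitrary illustration, not the value used by this test):

# Hedged sketch: bounding how long collectives wait for their peers. The
# 30-second timeout is illustrative only.
import tensorflow as tf

options = tf.distribute.experimental.CommunicationOptions(
    timeout_seconds=30.0,
    implementation=tf.distribute.experimental.CommunicationImplementation.RING)
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=options)
# If a peer never reaches the collective, ops launched under this strategy
# surface DEADLINE_EXCEEDED after roughly 30s instead of hanging indefinitely.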
I0331 21:55:43.939564 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:43.940527 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.941180 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:43.941624 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:43.942722 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:43.943176 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.944400 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:43.945289 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.945876 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:43.946340 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:43.947409 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:43.947873 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.954040 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:43.954751 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:43.954688 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.02s I0331 21:55:43.966527 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.02s [ RUN ] CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:43.967767 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:43.968447 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.969963 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:43.971206 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.971970 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.01s I0331 21:55:43.972408 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:43.973522 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:43.974149 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.975650 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:43.976884 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:43.977709 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.01s I0331 21:55:43.978132 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:43.979209 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:43.979797 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.981203 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:43.982330 281473209627520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.983105 281473064203136 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:43.983505 281473064203136 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:44.441818 281473064203136 multi_process_runner.py:646] worker-0 exit code: 0 I0331 21:55:44.442031 281473064203136 multi_process_runner.py:646] worker-1 exit code: 0 I0331 21:55:44.444620 281473064203136 multi_process_runner.py:662] Joining log reading threads. I0331 21:55:44.444805 281473064203136 multi_process_runner.py:665] Joined log reading threads. I0331 21:55:45.346870 281473064203136 multi_process_runner.py:646] worker-0 exit code: 0 I0331 21:55:45.352910 281473064203136 multi_process_runner.py:662] Joining log reading threads. I0331 21:55:45.353130 281473064203136 multi_process_runner.py:665] Joined log reading threads. 
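The DEADLINE_EXCEEDED abort above ("Aborting RingGather ... Collective has timed out during execution") is the collective timeout that the testTimeout* cases exercise: a pending collective on worker-1 was aborted once the configured timeout expired. As a rough sketch, not taken from this test file and with an illustrative timeout value, the public API exposes that timeout through communication options:

import tensorflow as tf

# Illustrative timeout; the tests above use their own, much shorter value.
options = tf.distribute.experimental.CommunicationOptions(
    timeout_seconds=10.0,
    implementation=tf.distribute.experimental.CommunicationImplementation.RING)

# Passing the options when the strategy is built applies the timeout to the
# collectives the strategy launches.
strategy = tf.distribute.MultiWorkerMirroredStrategy(communication_options=options)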
======================================================================
ERROR: testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 (__main__.CollectiveOpsTest)
CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0
testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0(implementation=, num_processes=2, prefer_unique_instance_key=False, reduce_op=, required_gpus=0)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/parameterized.py", line 314, in bound_param_test
    return test_method(self, **testcase_params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360, in decorated
    execute_test_method()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343, in execute_test_method
    test_method(**kwargs_to_pass)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/combinations.py", line 559, in decorator
    test_method(self, **kwargs)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 499, in testBatchReduceDense
    self.batch_reduce_and_verify(inputs, expect, options)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 320, in batch_reduce_and_verify
    get_global_mpr(options.num_processes).run(replica_fn)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1003, in run
    six.reraise(*process_status.exc_info)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/six_archive/six.py", line 719, in reraise
    raise value
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained
    return_value = fn(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 285, in replica_fn
    collective, devices, pid = self.make_collective(options.num_processes,
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 185, in make_collective
    collective = cross_device_ops_lib.CollectiveAllReduce(
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops.py", line 1088, in __init__
    self._devices = tuple(device_util.canonicalize(d) for d in devices)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops.py", line 1088, in <genexpr>
    self._devices = tuple(device_util.canonicalize(d) for d in devices)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/device_util.py", line 60, in canonicalize
    config.list_logical_devices("CPU")[0].name)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/config.py", line 480, in list_logical_devices
    return context.context().list_logical_devices(device_type=device_type)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 1617, in list_logical_devices
    self.ensure_initialized()
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 595, in ensure_initialized
    pywrap_tfe.TFE_EnableCollectiveOps(context_handle, server_def_str)
    ^^^^^^^^^^^^^^^^^
tensorflow.python.framework.errors_impl.UnknownError: Could not start gRPC server
======================================================================
ERROR: testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 (__main__.CollectiveOpsTest)
CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0
testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0(implementation=, num_processes=2, prefer_unique_instance_key=False, reduce_op=, required_gpus=0)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/parameterized.py", line 314, in bound_param_test
    return test_method(self, **testcase_params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360, in decorated
    execute_test_method()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343, in execute_test_method
    test_method(**kwargs_to_pass)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/combinations.py", line 559, in decorator
    test_method(self, **kwargs)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 590, in testBatchReduceSparse
    self.batch_reduce_and_verify(inputs, expect, options)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 320, in batch_reduce_and_verify
    get_global_mpr(options.num_processes).run(replica_fn)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1003, in run
    six.reraise(*process_status.exc_info)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/six_archive/six.py", line 719, in reraise
    raise value
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained
    return_value = fn(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 285, in replica_fn
    collective, devices, pid = self.make_collective(options.num_processes,
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 185, in make_collective
    collective = cross_device_ops_lib.CollectiveAllReduce(
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops.py", line 1088, in __init__
    self._devices = tuple(device_util.canonicalize(d) for d in devices)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops.py", line 1088, in <genexpr>
    self._devices = tuple(device_util.canonicalize(d) for d in devices)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/device_util.py", line 60, in canonicalize
    config.list_logical_devices("CPU")[0].name)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/config.py", line 480, in list_logical_devices
    return context.context().list_logical_devices(device_type=device_type)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 1617, in list_logical_devices
    self.ensure_initialized()
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 595, in ensure_initialized
    pywrap_tfe.TFE_EnableCollectiveOps(context_handle, server_def_str)
    ^^^^^^^^^^^^^^^^^
tensorflow.python.framework.errors_impl.UnknownError: Could not start gRPC server
======================================================================
ERROR: testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 (__main__.CollectiveOpsTest)
CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0
testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0(implementation=, num_processes=2, prefer_unique_instance_key=False, reduce_op=, required_gpus=0)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/parameterized.py", line 314, in bound_param_test
    return test_method(self, **testcase_params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360, in decorated
    execute_test_method()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343, in execute_test_method
    test_method(**kwargs_to_pass)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/combinations.py", line 559, in decorator
    test_method(self, **kwargs)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 366, in testReduceDense
    self.reduce_and_verify(inputs, expect, options)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 269, in reduce_and_verify
    get_global_mpr(options.num_processes).run(replica_fn)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1003, in run
    six.reraise(*process_status.exc_info)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/six_archive/six.py", line 719, in reraise
    raise value
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained
    return_value = fn(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 244, in replica_fn
    collective, devices, pid = self.make_collective(options.num_processes,
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 185, in make_collective
    collective = cross_device_ops_lib.CollectiveAllReduce(
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops.py", line 1088, in __init__
    self._devices = tuple(device_util.canonicalize(d) for d in devices)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops.py", line 1088, in <genexpr>
    self._devices = tuple(device_util.canonicalize(d) for d in devices)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/device_util.py", line 60, in canonicalize
    config.list_logical_devices("CPU")[0].name)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/config.py", line 480, in list_logical_devices
    return context.context().list_logical_devices(device_type=device_type)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 1617, in list_logical_devices
    self.ensure_initialized()
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 595, in ensure_initialized
    pywrap_tfe.TFE_EnableCollectiveOps(context_handle, server_def_str)
    ^^^^^^^^^^^^^^^^^
tensorflow.python.framework.errors_impl.UnknownError: Could not start gRPC server
======================================================================
ERROR: testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 (__main__.CollectiveOpsTest)
CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0
testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0(implementation=, num_processes=2, prefer_unique_instance_key=True, reduce_op=, required_gpus=0)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/parameterized.py", line 314, in bound_param_test
    return test_method(self, **testcase_params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360, in decorated
    execute_test_method()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343, in execute_test_method
    test_method(**kwargs_to_pass)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/combinations.py", line 559, in decorator
    test_method(self, **kwargs)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 430, in testReduceSparse
    self.reduce_and_verify(inputs, expect, options)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 269, in reduce_and_verify
    get_global_mpr(options.num_processes).run(replica_fn)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1003, in run
    six.reraise(*process_status.exc_info)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/six_archive/six.py", line 719, in reraise
    raise value
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained
    return_value = fn(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 244, in replica_fn
    collective, devices, pid = self.make_collective(options.num_processes,
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 185, in make_collective
    collective = cross_device_ops_lib.CollectiveAllReduce(
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops.py", line 1088, in __init__
    self._devices = tuple(device_util.canonicalize(d) for d in devices)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops.py", line 1088, in <genexpr>
    self._devices = tuple(device_util.canonicalize(d) for d in devices)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/device_util.py", line 60, in canonicalize
    config.list_logical_devices("CPU")[0].name)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/config.py", line 480, in list_logical_devices
    return context.context().list_logical_devices(device_type=device_type)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 1617, in list_logical_devices
    self.ensure_initialized()
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 595, in ensure_initialized
    pywrap_tfe.TFE_EnableCollectiveOps(context_handle, server_def_str)
    ^^^^^^^^^^^^^^^^^
tensorflow.python.framework.errors_impl.UnknownError: Could not start gRPC server
----------------------------------------------------------------------
Ran 139 tests in 14.858s

FAILED (errors=4, skipped=125)
================================================================================ ==================== Test output for //tensorflow/python/distribute:cross_device_ops_test_cpu (shard 4 of 4): Running tests under Python 3.11.2: /usr/local/bin/python3 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:33.665998 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: I0331 21:55:33.686621 281472859730816 multi_process_runner.py:840] Subprocess with PID 1071301 (worker, 0) is now being started. [worker-0]: I0331 21:55:33.686965 281472859730816 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:18171"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}' I0331 21:55:33.717833 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: I0331 21:55:33.742809 281472859730816 multi_process_runner.py:840] Subprocess with PID 1071381 (worker, 0) is now being started. [worker-0]: I0331 21:55:33.743180 281472859730816 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22145", "localhost:19325"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}' I0331 21:55:33.745930 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: I0331 21:55:33.757425 281472859730816 multi_process_runner.py:840] Subprocess with PID 1071509 (worker, 1) is now being started. [worker-1]: I0331 21:55:33.757799 281472859730816 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22145", "localhost:19325"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}' [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 3.27s I0331 21:55:33.761403 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 3.27s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:33.762762 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.764178 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.764650 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.01s I0331 21:55:33.770483 281473800631168 test_util.py:2462] 
time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:33.771790 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.772971 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.773471 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:33.773883 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:33.774892 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.775959 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.776380 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:33.776987 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:33.777982 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.779011 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.779421 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:33.779844 281473800631168 test_util.py:2462] 
time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:33.780900 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.781880 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.782269 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:33.782690 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:33.783699 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.784654 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.785029 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:33.789454 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: 2023-03-31 21:55:33.835743: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:18171 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.18s I0331 21:55:33.958230 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.18s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:33.959693 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:33.960092 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:33.961549 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.962086 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:33.962523 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:33.963554 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:33.963847 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:33.965017 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.965398 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:33.965871 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:33.966920 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:33.967243 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:33.968445 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.968828 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:33.969381 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:33.970323 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:33.970612 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:33.971737 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.972104 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:33.972462 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:33.973387 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:33.973674 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:33.974769 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.975158 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:33.975510 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:33.976457 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:33.976741 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:33.977863 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.978202 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:33.978715 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:33.979641 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:33.979937 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:33.981008 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.981357 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:33.981706 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:33.982626 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:33.982924 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:33.984030 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.984387 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:33.984746 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:33.985673 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:33.985931 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:33.986999 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.987397 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:33.992558 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: tf.py_func is deprecated in TF V2. Instead, there are two [worker-0]: options available in V2. 
[worker-0]: - tf.py_function takes a python function which manipulates tf eager
[worker-0]: tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
[worker-0]: an ndarray (just call tensor.numpy()) but having access to eager tensors
[worker-0]: means `tf.py_function`s can use accelerators such as GPUs as well as
[worker-0]: being differentiable using a gradient tape.
[worker-0]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func
[worker-0]: (it is not differentiable, and manipulates numpy arrays). It drops the
[worker-0]: stateful argument making all functions stateful.
[worker-0]: 
[worker-0]: W0331 21:55:34.438854 281472859730816 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
[worker-0]: Instructions for updating:
[worker-0]: tf.py_func is deprecated in TF V2. Instead, there are two
[worker-0]: options available in V2.
[worker-0]: - tf.py_function takes a python function which manipulates tf eager
[worker-0]: tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
[worker-0]: an ndarray (just call tensor.numpy()) but having access to eager tensors
[worker-0]: means `tf.py_function`s can use accelerators such as GPUs as well as
[worker-0]: being differentiable using a gradient tape.
[worker-0]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func
[worker-0]: (it is not differentiable, and manipulates numpy arrays). It drops the
[worker-0]: stateful argument making all functions stateful.
[worker-0]: 
INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.62s
I0331 21:55:34.602958 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.62s
[ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0
[ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1
I0331 21:55:34.604315 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0
[worker-0]: W0331 21:55:34.616641 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors.
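The deprecation notice above spells out the two V2 replacements for tf.py_func. Below is a minimal sketch of both, assuming TensorFlow 2.x; the helper names square_eager and square_numpy are illustrative and not taken from the test itself:

```python
import numpy as np
import tensorflow as tf

def square_eager(x):
    # Receives an eager tf.Tensor: can run on accelerators and is
    # differentiable under a tf.GradientTape.
    return x * x

def square_numpy(x):
    # Receives a NumPy ndarray: keeps the old tf.py_func semantics
    # (not differentiable).
    return np.square(x)

x = tf.constant([1.0, 2.0, 3.0])

# tf.py_function wraps a Python function that sees eager tensors.
y_eager = tf.py_function(func=square_eager, inp=[x], Tout=tf.float32)

# tf.numpy_function wraps a Python function that sees numpy arrays.
y_numpy = tf.numpy_function(func=square_numpy, inp=[x], Tout=tf.float32)

print(y_eager.numpy(), y_numpy.numpy())  # [1. 4. 9.] [1. 4. 9.]
```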
I0331 21:55:34.618394 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.619523 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.02s I0331 21:55:34.619963 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.02s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.621051 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.621519 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.622796 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.623759 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.624158 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.625189 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.625637 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.626902 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.627882 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:34.628485 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.629493 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.629958 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.631221 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.632236 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.632640 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.633642 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.634105 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.635366 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.656471 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.02s I0331 21:55:34.657005 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.02s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.658370 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.658913 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.660276 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.661328 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:34.661939 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.662981 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.663443 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.664704 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.665715 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.666127 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.667154 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.667728 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.668974 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.669924 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.670321 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.671312 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.671754 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.673306 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.674251 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:34.687693 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.02s I0331 21:55:34.688034 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.02s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.689314 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.689873 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.691423 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.716253 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.03s I0331 21:55:34.716753 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.03s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.717921 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.718456 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.719834 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.720860 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.721268 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.722317 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.722775 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.724035 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.724994 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:34.725574 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.726587 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.727039 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.728293 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.729246 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.729649 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.730631 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.731065 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.732273 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.733222 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.733614 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.734594 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.735030 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.736261 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.737240 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:34.737818 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.738801 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.739258 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.740486 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.741452 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.741851 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.742859 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.743415 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.745238 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.746355 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.746777 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.747772 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.748233 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.749458 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.750427 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:34.776339 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.04s I0331 21:55:34.785562 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.04s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.786955 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.787595 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.789180 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.790539 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.790968 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.792030 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.792498 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.793847 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.794826 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.795230 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_1_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.796287 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.796770 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.798102 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.799105 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:34.799703 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.800708 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.801866 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.803144 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.804158 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.804561 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.805557 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.805998 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.807287 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.808264 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.808667 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.809674 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.810121 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.811379 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.812350 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:34.812923 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.813915 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.814372 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.815636 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.816662 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.817102 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.818137 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.818608 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.819873 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.820874 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.821283 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.822293 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.822740 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.823974 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.824975 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:34.831504 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.02s I0331 21:55:34.838644 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.02s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.839989 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.840517 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.842026 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.843380 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.843813 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.844891 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.845369 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.846668 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.847767 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.848175 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.849213 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.849679 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.850947 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.851932 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:34.852514 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.853517 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.853968 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.855239 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.856252 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.856666 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.857701 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.858158 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.859391 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.860398 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.860799 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.861803 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.862249 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.863469 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.864435 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:34.865013 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.866011 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.866474 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.867716 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.868709 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.869112 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.870115 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.870572 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.871805 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.872771 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.873176 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:34.874162 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.874598 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.875812 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.876783 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:34.896346 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.04s I0331 21:55:34.912446 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0): 0.04s [ OK ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:34.913777 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.914275 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.915781 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.916851 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:34.917283 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:34.918351 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.918980 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.920281 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.921270 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:34.921679 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_2_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:34.922720 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.923174 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.924424 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.925384 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:34.925954 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:34.926991 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.927434 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.928679 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.929641 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:34.930041 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:34.931037 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.931477 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.932688 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.933630 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:34.934028 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:34.935006 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.935443 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.936689 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.937662 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:34.938452 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:34.939446 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.939909 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.941139 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.942113 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:34.942511 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:34.943499 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.943946 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:34.945170 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.946161 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:34.946572 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:34.947564 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:34.948032 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:34.949265 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:34.950217 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:34.957393 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 1, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: I0331 21:55:35.050766 281472859730816 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 1, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 2 all_reduces, num_devices = 1, group_size = 1, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: I0331 21:55:35.185266 281472859730816 cross_device_ops.py:1151] Collective all_reduce tensors: 2 all_reduces, num_devices = 1, group_size = 1, implementation = CommunicationImplementation.RING, num_packs = 1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 0.29s I0331 21:55:35.233777 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0): 0.29s [ OK ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpSUM_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 I0331 21:55:35.235119 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.235480 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.236829 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.237293 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s I0331 21:55:35.237683 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:35.238707 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.239034 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
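The passing case above (RING implementation, 1 process, ReduceOpSUM) corresponds to a dense all-reduce with group_size = 1. A rough sketch of the same configuration through the public tf.distribute API, assuming a single local worker; this is illustrative and is not the test's own code.

import tensorflow as tf

# Request the RING collective implementation, mirroring CommunicationImplementation.RING
# in the test name and in the cross_device_ops log records above.
options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.RING)
strategy = tf.distribute.MultiWorkerMirroredStrategy(communication_options=options)

def step():
    value = tf.constant([1.0, 2.0])
    ctx = tf.distribute.get_replica_context()
    # SUM all-reduce across replicas, matching ReduceOpSUM in the test name.
    return ctx.all_reduce(tf.distribute.ReduceOp.SUM, value)

result = strategy.run(step)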
I0331 21:55:35.240243 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.240639 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s I0331 21:55:35.241024 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:35.242028 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.242325 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.243435 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.244031 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s I0331 21:55:35.244408 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:35.245362 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.245647 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.246805 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.247193 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s I0331 21:55:35.247739 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:35.248705 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.249002 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:35.250195 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.250596 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:35.250992 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:35.251990 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.252451 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.253648 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.254086 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s I0331 21:55:35.254490 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:35.255487 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.255804 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.257089 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.257514 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s I0331 21:55:35.258085 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:35.259068 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.259407 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:35.260622 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.261033 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:35.261424 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:35.262402 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.262692 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.263883 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.264278 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s I0331 21:55:35.264864 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:35.265838 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:35.266164 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:35.267306 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.267776 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:35.274436 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: 2023-03-31 21:55:35.296147: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:22145 [worker-1]: 2023-03-31 21:55:35.302513: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:19325 [worker-0]: INFO:tensorflow:Collective all_reduce IndexedSlices: 1 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-0]: I0331 21:55:35.478440 281472859730816 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 1 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-1]: INFO:tensorflow:Collective all_reduce IndexedSlices: 1 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-1]: I0331 21:55:35.484882 281472859730816 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 1 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-0]: INFO:tensorflow:Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-0]: I0331 21:55:36.006691 281472859730816 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-1]: INFO:tensorflow:Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-1]: I0331 21:55:36.010775 281472859730816 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING I0331 21:55:36.967122 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0): 1.7s I0331 21:55:36.967530 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0): 1.7s [ OK ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpMEAN_requiredgpus_0 [ RUN ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:36.968971 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.986562 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:36.988107 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:36.988681 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[worker-1]: W0331 21:55:37.006094 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.009126 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1): 0.04s I0331 21:55:37.010525 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_reduceop_ReduceOpSUM_requiredgpus_1): 0.04s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:37.011852 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.012377 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.013874 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:37.014382 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:37.014367 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.015161 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s I0331 21:55:37.015625 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:37.016764 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.017358 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.018709 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.019155 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.019986 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:37.019311 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s I0331 21:55:37.020744 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:37.021853 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.022282 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.023652 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.024254 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:37.024115 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.024976 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:37.025433 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:37.026521 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.027092 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.028556 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.028983 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:37.029204 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:37.029681 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s I0331 21:55:37.030179 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:37.031246 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.031647 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.049181 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.049631 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:37.049679 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.050405 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.02s I0331 21:55:37.051122 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.02s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:37.052308 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.052814 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.054280 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.054721 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:37.055445 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:37.055728 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s I0331 21:55:37.056491 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:37.057649 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.058124 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.059460 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.059874 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:37.060071 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.060833 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s I0331 21:55:37.061280 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:37.062347 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.062765 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.064083 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:37.064463 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:37.064572 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:37.065206 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s I0331 21:55:37.065815 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:37.066894 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:37.088540 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.086721 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:37.089422 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:37.090106 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.091252 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.03s I0331 21:55:37.091821 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.03s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:37.093063 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.102699 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.104743 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:37.105488 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:37.106962 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:37.107846 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.02s I0331 21:55:37.108413 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.02s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:37.109622 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.110255 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.127445 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:37.128219 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:37.129587 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.130345 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.02s I0331 21:55:37.131069 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.02s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:37.132251 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.132888 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.134329 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:37.134899 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:37.136009 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:37.136752 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:37.137209 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:37.138333 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.138887 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.140217 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:37.140757 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:37.141819 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.142502 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s I0331 21:55:37.142932 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:37.143975 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:37.144497 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:37.145795 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:37.146326 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:37.147397 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:37.148090 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:37.152816 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 1.02s I0331 21:55:38.158802 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 1.02s [ OK ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:38.160340 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.160972 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.162711 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.163389 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:38.164603 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.166461 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:38.166977 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:38.168169 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.168940 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.170431 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:38.172253 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:38.171079 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:38.172994 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s I0331 21:55:38.173458 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:38.174577 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.175150 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.176538 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:38.177158 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:38.178276 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:38.178983 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:38.183876 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 2 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: I0331 21:55:39.036827 281472859730816 cross_device_ops.py:1151] Collective all_reduce tensors: 2 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: tf.py_func is deprecated in TF V2. Instead, there are two [worker-0]: options available in V2. [worker-0]: - tf.py_function takes a python function which manipulates tf eager [worker-0]: tensors instead of numpy arrays. It's easy to convert a tf eager tensor to [worker-0]: an ndarray (just call tensor.numpy()) but having access to eager tensors [worker-0]: means `tf.py_function`s can use accelerators such as GPUs as well as [worker-0]: being differentiable using a gradient tape. [worker-0]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func [worker-0]: (it is not differentiable, and manipulates numpy arrays). It drops the [worker-0]: stateful argument making all functions stateful. 
[worker-0]: [worker-0]: W0331 21:55:39.062056 281472859730816 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: tf.py_func is deprecated in TF V2. Instead, there are two [worker-0]: options available in V2. [worker-0]: - tf.py_function takes a python function which manipulates tf eager [worker-0]: tensors instead of numpy arrays. It's easy to convert a tf eager tensor to [worker-0]: an ndarray (just call tensor.numpy()) but having access to eager tensors [worker-0]: means `tf.py_function`s can use accelerators such as GPUs as well as [worker-0]: being differentiable using a gradient tape. [worker-0]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func [worker-0]: (it is not differentiable, and manipulates numpy arrays). It drops the [worker-0]: stateful argument making all functions stateful. [worker-0]: [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 2 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-1]: I0331 21:55:39.091815 281472859730816 cross_device_ops.py:1151] Collective all_reduce tensors: 2 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: tf.py_func is deprecated in TF V2. Instead, there are two [worker-1]: options available in V2. [worker-1]: - tf.py_function takes a python function which manipulates tf eager [worker-1]: tensors instead of numpy arrays. It's easy to convert a tf eager tensor to [worker-1]: an ndarray (just call tensor.numpy()) but having access to eager tensors [worker-1]: means `tf.py_function`s can use accelerators such as GPUs as well as [worker-1]: being differentiable using a gradient tape. [worker-1]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func [worker-1]: (it is not differentiable, and manipulates numpy arrays). It drops the [worker-1]: stateful argument making all functions stateful. [worker-1]: [worker-1]: W0331 21:55:39.115294 281472859730816 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: tf.py_func is deprecated in TF V2. Instead, there are two [worker-1]: options available in V2. [worker-1]: - tf.py_function takes a python function which manipulates tf eager [worker-1]: tensors instead of numpy arrays. 
It's easy to convert a tf eager tensor to [worker-1]: an ndarray (just call tensor.numpy()) but having access to eager tensors [worker-1]: means `tf.py_function`s can use accelerators such as GPUs as well as [worker-1]: being differentiable using a gradient tape. [worker-1]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func [worker-1]: (it is not differentiable, and manipulates numpy arrays). It drops the [worker-1]: stateful argument making all functions stateful. [worker-1]: I0331 21:55:39.204679 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 1.03s I0331 21:55:39.205103 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 1.03s [ OK ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 [ RUN ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:39.206561 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.207243 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.209026 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:39.209466 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:39.209817 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.211850 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:39.212399 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:39.213651 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.214313 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
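The deprecation notice logged by both workers above describes two replacements for tf.py_func. A minimal sketch of the two options it names, with illustrative function names that are not taken from the test:

import numpy as np
import tensorflow as tf

def add_one_numpy(x):
    # tf.numpy_function passes the wrapped function numpy arrays (the old py_func semantics,
    # not differentiable).
    return x + np.float32(1.0)

def add_one_eager(x):
    # tf.py_function passes the wrapped function eager tensors and is differentiable
    # under a gradient tape.
    return x + 1.0

x = tf.constant([1.0, 2.0])
y_numpy = tf.numpy_function(add_one_numpy, [x], tf.float32)
y_eager = tf.py_function(add_one_eager, [x], tf.float32)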
I0331 21:55:39.215910 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:39.217390 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:39.218655 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.219552 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:39.220140 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:39.221433 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.222085 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.223644 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:39.224301 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:39.225497 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.226303 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s I0331 21:55:39.226822 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:39.228025 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.228706 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:39.230232 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:39.230884 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:39.232130 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.232888 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 0.01s I0331 21:55:39.233895 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:39.235077 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.235707 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.237298 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:39.237982 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:39.239273 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.240094 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:39.240647 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:39.241927 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.242579 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:39.244211 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:39.244888 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:39.246210 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.247050 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s I0331 21:55:39.247596 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:39.248866 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.249511 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.251083 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:39.251739 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:39.253022 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.253817 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 0.01s I0331 21:55:39.254532 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:39.255780 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.256504 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:39.277171 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:39.277935 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:39.279244 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.280082 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.03s I0331 21:55:39.280660 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_1): 0.03s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:39.281930 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.282579 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.284152 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.284813 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:39.286027 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.286934 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s I0331 21:55:39.287493 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:39.288790 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.289417 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:39.290934 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:39.291414 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:39.291443 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:39.292153 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:39.297880 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: INFO:tensorflow:Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-1]: I0331 21:55:39.323383 281472859730816 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-0]: INFO:tensorflow:Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-0]: I0331 21:55:39.324549 281472859730816 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-0]: 2023-03-31 21:55:39.967086: E tensorflow/core/distributed_runtime/worker.cc:431] Bad status from CompleteGroupDistributed: FAILED_PRECONDITION: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-0]: 2023-03-31 21:55:39.967244: E tensorflow/core/distributed_runtime/worker.cc:431] Bad status from CompleteGroupDistributed: FAILED_PRECONDITION: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-1]: 2023-03-31 21:55:39.967675: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort FAILED_PRECONDITION: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299739.967583248","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster.","grpc_status":9} [worker-1]: 2023-03-31 21:55:39.967752: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:1/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): FAILED_PRECONDITION: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. 
This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299739.967583248","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster.","grpc_status":9} [worker-1]: The error could be from a previous operation. Restart your program to reset. [worker-1]: [[{{node CollectiveGatherV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-0]: 2023-03-31 21:55:40.425660: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort FAILED_PRECONDITION: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-0]: 2023-03-31 21:55:40.425754: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:0/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): FAILED_PRECONDITION: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-0]: The error could be from a previous operation. Restart your program to reset. [worker-0]: [[{{node CollectiveGatherV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] I0331 21:55:41.095303 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ FAILED ] CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 1.82s I0331 21:55:41.104682 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0): 1.82s [ RUN ] CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_1_requiredgpus_1 I0331 21:55:41.107362 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.108153 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.109905 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.110649 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.117641 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
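For context, the FAILED parameterization above exercises a two-worker collective all-reduce over RING communication, and it aborts because the workers' incarnations no longer agree with the group the leader formed. The following is a rough sketch of that kind of multi-worker setup, not the test's own code; it assumes each worker process has TF_CONFIG exported with the shared cluster spec and its own task index, as logged earlier:

    import tensorflow as tf

    # Each worker runs this with its own TF_CONFIG; the strategy then forms
    # one collective group spanning all workers in the cluster spec.
    options = tf.distribute.experimental.CommunicationOptions(
        implementation=tf.distribute.experimental.CommunicationImplementation.RING)
    strategy = tf.distribute.MultiWorkerMirroredStrategy(
        communication_options=options)

    def replica_fn():
        ctx = tf.distribute.get_replica_context()
        # Every worker contributes 1.0; a SUM all-reduce returns the group size.
        return ctx.all_reduce(tf.distribute.ReduceOp.SUM, tf.constant(1.0))

    print(strategy.run(replica_fn))

As the error text itself notes, the usual recovery is to restart the whole program so all tasks rejoin the group with matching incarnations.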
I0331 21:55:41.122628 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_1_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_1_requiredgpus_1): 0.02s I0331 21:55:41.123263 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_1_requiredgpus_1): 0.02s [ RUN ] CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_2_requiredgpus_2 I0331 21:55:41.124574 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.125255 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.126929 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.127625 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.164826 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.170279 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_2_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_2_requiredgpus_2): 0.05s I0331 21:55:41.170915 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testCollectiveV2ControlFlow_test_implementation_CommunicationImplementationRING_numprocesses_2_requiredgpus_2): 0.05s [ RUN ] CollectiveOpsTest.testInputsAreFunctionArgs_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:41.172233 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.172946 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.174679 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.175375 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.176762 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.177745 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testInputsAreFunctionArgs_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testInputsAreFunctionArgs_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.01s I0331 21:55:41.178308 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testInputsAreFunctionArgs_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testMultiThreadedCollectiveLaunchNoInterleave_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:41.179586 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.180232 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.181832 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.182505 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.183806 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.184613 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testMultiThreadedCollectiveLaunchNoInterleave_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testMultiThreadedCollectiveLaunchNoInterleave_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.01s I0331 21:55:41.185145 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testMultiThreadedCollectiveLaunchNoInterleave_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:41.186448 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.186975 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.191079 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.192011 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.194140 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.195036 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s I0331 21:55:41.195651 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:41.196962 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.197630 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.199302 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.199982 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.201271 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.202065 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.01s I0331 21:55:41.202789 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:41.204009 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.204655 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.206142 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.206805 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.208807 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.218970 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.02s I0331 21:55:41.219614 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.02s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:41.220922 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.221597 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.223311 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.223997 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.225335 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.226171 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s I0331 21:55:41.226705 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:41.228572 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.229238 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.230837 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.231497 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.232799 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.233650 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.01s I0331 21:55:41.234390 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:41.235661 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.236335 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.237947 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.238604 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.239880 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.240665 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:41.241180 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:41.242431 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.243075 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.244586 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.245232 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.246477 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.247310 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s I0331 21:55:41.247857 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:41.249132 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.249856 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.251505 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.252179 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.253486 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.254456 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.01s I0331 21:55:41.255229 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:41.256614 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.257379 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.259075 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.259750 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:41.260983 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.261842 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:41.262358 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:41.263588 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.264229 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.265736 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.266414 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:41.267582 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.268557 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s I0331 21:55:41.269081 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:41.270341 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.271154 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.272786 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.273334 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.273470 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.274265 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.01s I0331 21:55:41.274966 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:41.276279 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.276969 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.278598 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.279255 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.280509 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.281436 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:41.281951 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:41.283178 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.283823 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.285343 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.285973 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:41.287177 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.288160 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s I0331 21:55:41.288685 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:41.289952 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.290701 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.292652 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.293424 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:41.294642 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.296575 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:41.316366 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.19s I0331 21:55:41.478461 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.19s [ OK ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:41.480058 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.480586 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.482485 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.483198 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.485558 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:41.487119 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 I0331 21:55:41.488448 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.489066 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.490820 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:41.492260 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-0]: W0331 21:55:41.491509 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s I0331 21:55:41.492818 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpMEAN_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 I0331 21:55:41.494034 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.494540 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.495992 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.485920 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:41.491359 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.496451 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.497199 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:41.496496 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.502058 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: 2023-03-31 21:55:41.586234: E tensorflow/core/distributed_runtime/worker.cc:431] Bad status from CompleteGroupDistributed: FAILED_PRECONDITION: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: I0331 21:55:41.749749 281472859730816 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: 2023-03-31 21:55:41.797074: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort FAILED_PRECONDITION: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-0]: 2023-03-31 21:55:41.797162: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:0/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): FAILED_PRECONDITION: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-0]: The error could be from a previous operation. Restart your program to reset. [worker-0]: [[{{node CollectiveReduceV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-1]: I0331 21:55:41.799838 281472859730816 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 I0331 21:55:41.803266 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: 2023-03-31 21:55:41.845008: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort FAILED_PRECONDITION: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-1]: The error could be from a previous operation. Restart your program to reset. [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299741.844879566","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. 
This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster.\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":9} [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-1]: 2023-03-31 21:55:41.845082: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:1/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): FAILED_PRECONDITION: Collective ops is aborted by: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. [worker-1]: The error could be from a previous operation. Restart your program to reset. [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299741.844879566","description":"Error received from peer ipv6:[::1]:22145","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster.\nThe error could be from a previous operation. Restart your program to reset.","grpc_status":9} [worker-1]: The error could be from a previous operation. Restart your program to reset. [worker-1]: [[{{node CollectiveReduceV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] [ FAILED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.37s I0331 21:55:41.858341 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0): 0.37s [ RUN ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:41.859794 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.871169 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.872974 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.873760 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:41.873526 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.880345 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.02s I0331 21:55:41.880980 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.02s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:41.882293 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.882952 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.884452 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.884926 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:41.884991 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.885650 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s I0331 21:55:41.886343 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:41.887500 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.887981 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.889354 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.890077 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.906730 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.907662 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.02s I0331 21:55:41.908277 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.02s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:41.909588 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.910544 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.912254 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.912914 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.914207 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.914965 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s I0331 21:55:41.915461 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:41.916707 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.917354 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.918848 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.919491 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.920710 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.921473 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.01s I0331 21:55:41.922169 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.01s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:41.923352 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.923982 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.925424 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.926049 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.927301 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.934004 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:41.934633 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:41.935935 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.936650 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.938249 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.938895 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.940107 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:41.940822 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s I0331 21:55:41.941263 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 I0331 21:55:41.942359 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.942874 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.944121 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:41.944588 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:41.945482 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:41.946052 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:41.951672 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.06s I0331 21:55:41.997816 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0): 0.06s [ OK ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_0 [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 I0331 21:55:41.999368 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:41.999998 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.001815 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.002470 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:42.003612 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:42.004397 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s I0331 21:55:42.004873 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_1): 0.01s [ RUN ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 I0331 21:55:42.006022 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.006646 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.008037 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:42.008596 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:42.009752 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.010460 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s I0331 21:55:42.010927 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_reduceop_ReduceOpSUM_requiredgpus_2): 0.01s [ RUN ] CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:42.012055 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.012659 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.014015 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:42.014611 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:42.015749 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
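The test names above encode parameter combinations (implementation, num_processes, prefer_unique_instance_key, reduce_op, required_gpus), and every combination whose required_gpus exceeds what this CPU-only machine provides is reported as [ SKIPPED ]. A rough sketch of that pattern with absl's parameterized tests; the class, case names and parameter tuples below are illustrative, not the real ones from cross_device_ops_test.py:

from absl.testing import absltest
from absl.testing import parameterized
import tensorflow as tf


class ReduceCombinationsTest(parameterized.TestCase):

  @parameterized.named_parameters(
      ("ring_2proc_0gpu", "RING", 2, 0),
      ("nccl_2proc_2gpu", "NCCL", 2, 2),
  )
  def test_reduce(self, implementation, num_processes, required_gpus):
    # Mirrors the [ SKIPPED ] lines above: bail out when the machine does not
    # have the GPUs this combination needs.
    if len(tf.config.list_physical_devices("GPU")) < required_gpus:
      self.skipTest("Not enough GPUs for this combination")
    self.assertGreaterEqual(num_processes, 1)
    self.assertIn(implementation, ("AUTO", "RING", "NCCL"))


if __name__ == "__main__":
  absltest.main()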
I0331 21:55:42.016479 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 0.01s I0331 21:55:42.017126 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 0.01s [ RUN ] CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:42.018192 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:42.018774 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.020082 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:42.020636 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:42.021724 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:42.022421 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:42.026472 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 2 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: I0331 21:55:42.090590 281472859730816 cross_device_ops.py:1151] Collective all_reduce tensors: 2 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: 2023-03-31 21:55:43.687914: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort DEADLINE_EXCEEDED: Collective has timed out during execution. [worker-0]: 2023-03-31 21:55:43.688239: E tensorflow/core/common_runtime/ring_alg.cc:291] Aborting RingReduce with DEADLINE_EXCEEDED: Collective ops is aborted by: Collective has timed out during execution. [worker-0]: The error could be from a previous operation. Restart your program to reset. [type.googleapis.com/tensorflow.DerivedStatus=''] [worker-0]: 2023-03-31 21:55:43.688292: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:0/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): DEADLINE_EXCEEDED: Collective has timed out during execution. 
[worker-0]: [[{{node CollectiveReduceV2}}]] I0331 21:55:43.711759 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 1.69s I0331 21:55:43.712152 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 1.69s [ OK ] CollectiveOpsTest.testTimeoutBatchReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 [ RUN ] CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:43.713567 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:43.726718 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.728578 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:43.729549 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:43.730812 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.731918 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 0.02s I0331 21:55:43.732658 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 0.02s [ RUN ] CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:43.733840 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:43.734408 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:43.735745 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:43.736316 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:43.737401 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:43.738078 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:43.741712 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: INFO:tensorflow:Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-0]: I0331 21:55:43.830458 281472859730816 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 2 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-0]: 2023-03-31 21:55:45.476492: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort DEADLINE_EXCEEDED: Collective has timed out waiting for other workers. [worker-0]: 2023-03-31 21:55:45.476583: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:0/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): DEADLINE_EXCEEDED: Collective has timed out waiting for other workers. [worker-0]: [[{{node CollectiveGatherV2}}]] I0331 21:55:45.484684 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 1.75s I0331 21:55:45.485093 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 1.75s [ OK ] CollectiveOpsTest.testTimeoutBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 [ RUN ] CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:45.486561 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:45.487089 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:45.488732 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:45.489244 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:45.489264 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
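The testTimeout* cases above hit DEADLINE_EXCEEDED by design: only one worker issues the collective, so it gives up once the collective timeout elapses ("timed out during execution" when the op is already running, "timed out waiting for other workers" when the group never completes). In user code that timeout is normally configured through CommunicationOptions; a short sketch, assuming a MultiWorkerMirroredStrategy cluster as in the earlier sketch (the 5-second value is arbitrary):

import tensorflow as tf

# timeout_seconds bounds how long a collective waits before aborting with
# DEADLINE_EXCEEDED, the status seen in the log above.
options = tf.distribute.experimental.CommunicationOptions(
    timeout_seconds=5,
    implementation=tf.distribute.experimental.CommunicationImplementation.RING,
)
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=options)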
I0331 21:55:45.490400 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 0.01s I0331 21:55:45.491099 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 0.01s [ RUN ] CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:45.492317 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:45.492904 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:45.494679 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:45.495173 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:45.495337 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:45.495944 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:45.500176 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 [worker-0]: I0331 21:55:45.550689 281472859730816 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, num_packs = 1 I0331 21:55:46.595918 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 1.1s I0331 21:55:46.596368 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 1.1s [ OK ] CollectiveOpsTest.testTimeoutReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 [ RUN ] CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:46.597817 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:46.598500 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:46.600087 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:46.600610 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:46.601592 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 0.01s I0331 21:55:46.602281 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 0.01s [ RUN ] CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 I0331 21:55:46.603491 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: W0331 21:55:46.604146 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:46.605882 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: 2023-03-31 21:55:46.590291: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort DEADLINE_EXCEEDED: Collective has timed out waiting for other workers. [worker-0]: 2023-03-31 21:55:46.590375: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:0/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): DEADLINE_EXCEEDED: Collective has timed out waiting for other workers. [worker-0]: [[{{node CollectiveReduceV2}}]] [worker-0]: W0331 21:55:46.600582 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-0]: W0331 21:55:46.606628 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [worker-1]: W0331 21:55:46.607835 281472859730816 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:46.608871 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:46.613168 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: INFO:tensorflow:Collective all_reduce IndexedSlices: 1 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-0]: I0331 21:55:46.946403 281472859730816 cross_device_ops.py:1166] Collective all_reduce IndexedSlices: 1 all_reduces, num_devices =1, group_size = 2, implementation = CommunicationImplementation.RING [worker-0]: 2023-03-31 21:55:48.251773: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort DEADLINE_EXCEEDED: Collective has timed out waiting for other workers. 
[worker-0]: 2023-03-31 21:55:48.251870: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:0/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): DEADLINE_EXCEEDED: Collective has timed out waiting for other workers. [worker-0]: [[{{node CollectiveGatherV2}}]] I0331 21:55:48.260223 281473800631168 multi_process_runner.py:989] Waiting for the result from worker-1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 1.66s I0331 21:55:48.260638 281473800631168 test_util.py:2462] time(__main__.CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0): 1.66s [ OK ] CollectiveOpsTest.testTimeoutReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_False_requiredgpus_0 [ RUN ] CollectiveOpsTest.test_session [ SKIPPED ] CollectiveOpsTest.test_session I0331 21:55:48.791197 281473800631168 multi_process_runner.py:646] worker-0 exit code: 0 I0331 21:55:48.791432 281473800631168 multi_process_runner.py:646] worker-1 exit code: 0 I0331 21:55:48.793790 281473800631168 multi_process_runner.py:662] Joining log reading threads. I0331 21:55:48.793959 281473800631168 multi_process_runner.py:665] Joined log reading threads. I0331 21:55:49.714704 281473800631168 multi_process_runner.py:646] worker-0 exit code: 0 I0331 21:55:49.715956 281473800631168 multi_process_runner.py:662] Joining log reading threads. I0331 21:55:49.716100 281473800631168 multi_process_runner.py:665] Joined log reading threads. ====================================================================== ERROR: testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 (__main__.CollectiveOpsTest) CollectiveOpsTest.testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0 testBatchReduceSparse_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpSUM_requiredgpus_0(implementation=, num_processes=2, prefer_unique_instance_key=True, reduce_op=, required_gpus=0) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/parameterized.py", line 314, in bound_param_test return test_method(self, **testcase_params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360, in decorated execute_test_method() File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343, in execute_test_method 
test_method(**kwargs_to_pass) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/combinations.py", line 559, in decorator test_method(self, **kwargs) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 590, in testBatchReduceSparse self.batch_reduce_and_verify(inputs, expect, options) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 320, in batch_reduce_and_verify get_global_mpr(options.num_processes).run(replica_fn) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1003, in run six.reraise(*process_status.exc_info) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/six_archive/six.py", line 719, in reraise raise value File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained return_value = fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 317, in replica_fn got = def_function.function(batch_reduce_fn)() ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/util/traceback_utils.py", line 141, in error_handler return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 840, in __call__ result = self._call(*args, **kwds) ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 912, in _call return self._concrete_variable_creation_fn._call_flat( # pylint: disable=protected-access ^^^^^^^^^^^^^^^^^ File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/monomorphic_function.py", line 1352, in _call_flat return self._build_call_outputs(self._inference_function.call( ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 176, in call outputs = execute.execute( ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/execute.py", line 53, in quick_execute tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ^^^^^^^^^^^^^^^^^ tensorflow.python.framework.errors_impl.FailedPreconditionError: Graph execution error: Detected at node 'CollectiveGatherV2' defined at (most recent call last): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 1352, in test_util.main(config_logical_devices=False) File "", line 1, in File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274, in main code = _serve_one(child_r, fds, File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 313, in _serve_one code = spawn._main(child_r, parent_sentinel) File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133, in _main return self._bootstrap(parent_sentinel) File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 317, in replica_fn got = def_function.function(batch_reduce_fn)() File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 298, in batch_reduce_fn reduced_values = collective.batch_reduce(options.reduce_op, Node: 'CollectiveGatherV2' Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. The error could be from a previous operation. Restart your program to reset. 
[[{{node CollectiveGatherV2}}]] [Op:__inference_batch_reduce_fn_958] ====================================================================== ERROR: testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 (__main__.CollectiveOpsTest) CollectiveOpsTest.testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0 testReduceDense_test_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_reduceop_ReduceOpMEAN_requiredgpus_0(implementation=, num_processes=2, prefer_unique_instance_key=True, reduce_op=, required_gpus=0) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/parameterized.py", line 314, in bound_param_test return test_method(self, **testcase_params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360, in decorated execute_test_method() File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343, in execute_test_method test_method(**kwargs_to_pass) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/combinations.py", line 559, in decorator test_method(self, **kwargs) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 366, in testReduceDense self.reduce_and_verify(inputs, expect, options) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 269, in reduce_and_verify get_global_mpr(options.num_processes).run(replica_fn) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1003, in run six.reraise(*process_status.exc_info) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/six_archive/six.py", line 719, in reraise raise value File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained return_value = fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 266, in replica_fn got = def_function.function(reduce_fn)() ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/util/traceback_utils.py", line 141, in error_handler return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 840, in __call__ result = self._call(*args, **kwds) ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 912, in _call return self._concrete_variable_creation_fn._call_flat( # pylint: disable=protected-access ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/monomorphic_function.py", line 1352, in _call_flat return self._build_call_outputs(self._inference_function.call( ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 176, in call outputs = execute.execute( ^^^^^^^^^^^^^^^^^ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/execute.py", line 53, in quick_execute tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ^^^^^^^^^^^^^^^^^ tensorflow.python.framework.errors_impl.FailedPreconditionError: Graph execution error: Detected at node 'CollectiveReduceV2' defined at (most recent call last): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 1352, in test_util.main(config_logical_devices=False) File "", line 1, in File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274, in main code = _serve_one(child_r, fds, File 
"/usr/lib/python3.11/multiprocessing/forkserver.py", line 313, in _serve_one code = spawn._main(child_r, parent_sentinel) File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133, in _main return self._bootstrap(parent_sentinel) File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 266, in replica_fn got = def_function.function(reduce_fn)() File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 250, in reduce_fn reduced_values = collective.reduce(options.reduce_op, per_replica_value, Node: 'CollectiveReduceV2' Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 current incarnation doesn't match with one in the group. This usually means this worker has restarted but the collective leader hasn't, or this worker connects to a wrong cluster. The error could be from a previous operation. Restart your program to reset. [[{{node CollectiveReduceV2}}]] [Op:__inference_reduce_fn_1011] ---------------------------------------------------------------------- Ran 139 tests in 19.224s FAILED (errors=2, skipped=121) ================================================================================ ==================== Test output for //tensorflow/python/distribute:cross_device_ops_test_cpu (shard 2 of 4): Running tests under Python 3.11.2: /usr/local/bin/python3 [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:33.587371 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: I0331 21:55:33.606945 281473053979520 multi_process_runner.py:840] Subprocess with PID 1071256 (worker, 0) is now being started. [worker-0]: I0331 21:55:33.607345 281473053979520 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:19249"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}' I0331 21:55:33.641215 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: I0331 21:55:33.644099 281473053979520 multi_process_runner.py:840] Subprocess with PID 1071278 (worker, 0) is now being started. [worker-0]: I0331 21:55:33.644480 281473053979520 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:24185", "localhost:19779"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}' I0331 21:55:33.647220 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: I0331 21:55:33.661909 281473053979520 multi_process_runner.py:840] Subprocess with PID 1071283 (worker, 1) is now being started. 
[worker-1]: I0331 21:55:33.662315 281473053979520 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:24185", "localhost:19779"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}' [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 3.18s I0331 21:55:33.671239 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 3.18s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:33.672635 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.673855 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.674303 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:33.674766 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:33.675828 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.676890 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.677333 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:33.678016 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:33.679057 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.680131 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 
21:55:33.680524 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:33.680929 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:33.681909 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.682955 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.683335 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:33.683753 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:33.684767 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.685799 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.686252 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:33.686849 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:33.687838 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.688873 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.689263 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] 
CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:33.689673 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:33.690684 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.691724 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.692130 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:33.692543 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:33.693516 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.694505 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:33.694872 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 I0331 21:55:33.700262 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-0]: E0331 21:55:33.709252472 1071278 server_chttp2.cc:40] {"created":"@1680299733.709138230","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1680299733.709134049","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":341,"referenced_errors":[{"created":"@1680299733.709100936","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299733.709073853","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1680299733.709133134","description":"Unable to configure 
socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299733.709124603","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]} [worker-0]: 2023-03-31 21:55:33.709356: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:601] UNKNOWN: Could not start gRPC server [worker-0]: 2023-03-31 21:55:33.709598: E tensorflow/core/common_runtime/eager/context_distributed_manager.cc:699] Could not start gRPC server I0331 21:55:33.712326 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: 2023-03-31 21:55:33.727056: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:19779 [worker-1]: INFO:tensorflow:Collective batch_all_gather: 1 all-gathers, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, [worker-1]: I0331 21:55:33.806178 281473053979520 cross_device_ops.py:1316] Collective batch_all_gather: 1 all-gathers, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING, [worker-1]: 2023-03-31 21:55:34.119627: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort INTERNAL: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299734.119233089","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)","grpc_status":13} [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299734.119495836","description":"Error received from peer ipv6:[::1]:24185","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nAdditional GRPC error information from remote target /job:worker/replica:0/task:0:\n:{"created":"@1680299734.119233089","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)","grpc_status":13}","grpc_status":13} [worker-1]: 2023-03-31 21:55:34.119718: I tensorflow/core/common_runtime/executor.cc:1210] [/job:worker/replica:0/task:1/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INTERNAL: Collective ops is aborted by: Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1) [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299734.119233089","description":"Error received from peer 
ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)","grpc_status":13} [worker-1]: Additional GRPC error information from remote target /job:worker/replica:0/task:0: [worker-1]: :{"created":"@1680299734.119495836","description":"Error received from peer ipv6:[::1]:24185","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)\nAdditional GRPC error information from remote target /job:worker/replica:0/task:0:\n:{"created":"@1680299734.119233089","description":"Error received from peer ipv6:[::1]:21634","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Device /job:worker/replica:0/task:1/device:CPU:0 is joining a group with size2, but that group has size 5 (group_key=1)","grpc_status":13}","grpc_status":13} [worker-1]: The error could be from a previous operation. Restart your program to reset. [worker-1]: [[{{node allgather/CollectiveGatherV2}}]] [type.googleapis.com/tensorflow.DerivedStatus=''] [ FAILED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0): 1.41s I0331 21:55:35.100286 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_eager_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0): 1.41s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:35.101562 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.102781 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:35.103249 281473053979520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:35.116793 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 0.02s I0331 21:55:35.117312 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 0.02s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:35.118443 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.119710 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:35.120574 281473053979520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. I0331 21:55:35.121268 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:35.121661 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:35.122636 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.123769 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 [worker-1]: W0331 21:55:35.124093 281473053979520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
I0331 21:55:35.124383 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:35.125123 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationAUTO_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:35.126103 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.127274 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.128533 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:35.129048 281473053979520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:35.129934 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:35.130958 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.132019 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.132410 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:35.132479 281473053979520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:35.133338 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:35.134333 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.135354 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.135705 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:35.135850 281473053979520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0): 0.0s I0331 21:55:35.136861 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationNCCL_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 I0331 21:55:35.137825 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.139760 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.140157 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:35.140210 281473053979520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
[ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 0.0s I0331 21:55:35.141050 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_False_requiredgpus_1): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 I0331 21:55:35.142033 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.143077 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.143451 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:35.143382 281473053979520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. [ SKIPPED ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2 INFO:tensorflow:time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s I0331 21:55:35.144167 281473553494912 test_util.py:2462] time(__main__.CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_1_preferuniqueinstancekey_True_requiredgpus_2): 0.0s [ RUN ] CollectiveOpsTest.testAllGatherSameShape_test_axis_0_funcmode_funcgraph_implementation_CommunicationImplementationRING_numprocesses_2_preferuniqueinstancekey_True_requiredgpus_0 I0331 21:55:35.145117 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.146201 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0 I0331 21:55:35.146794 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1 [worker-1]: W0331 21:55:35.147075 281473053979520 context.py:866] Enabling collective ops after program startup may cause error when accessing previously created tensors. 
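The repeated context.py warning is about ordering: enabling collective ops re-initializes the eager context, so tensors created before that point may no longer be usable afterwards. A minimal sketch of the safe ordering in user code (illustrative only; the variable name is made up):

# Illustrative sketch of the ordering the warning refers to: create the
# strategy (which enables collective ops) before creating tensors/variables,
# and create state under strategy.scope().
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()  # enable collectives first

with strategy.scope():
    # Hypothetical variable for illustration; built with collectives in place.
    weights = tf.Variable(tf.zeros([4, 4]))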
I0331 21:55:35.153187 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-0
[worker-0]: E0331 21:55:35.155076733 1071278 server_chttp2.cc:40] {"created":"@1680299735.154964261","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1680299735.154959766","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":341,"referenced_errors":[{"created":"@1680299735.154926412","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299735.154892494","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1680299735.154958976","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1680299735.154951210","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]}
[worker-0]: 2023-03-31 21:55:35.155166: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:601] UNKNOWN: Could not start gRPC server
[worker-0]: 2023-03-31 21:55:35.155380: E tensorflow/core/common_runtime/eager/context_distributed_manager.cc:699] Could not start gRPC server
I0331 21:55:35.157510 281473553494912 multi_process_runner.py:989] Waiting for the result from worker-1
[worker-1]: INFO:tensorflow:Collective batch_all_gather: 1 all-gathers, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING,
[worker-1]: I0331 21:55:35.519327 281473053979520 cross_device_ops.py:1316] Collective batch_all_gather: 1 all-gathers, num_devices = 1, group_size = 2, implementation = CommunicationImplementation.RING,
[worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
[worker-1]: Instructions for updating:
[worker-1]: tf.py_func is deprecated in TF V2. Instead, there are two
[worker-1]: options available in V2.
[worker-1]: - tf.py_function takes a python function which manipulates tf eager
[worker-1]: tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
[worker-1]: an ndarray (just call tensor.numpy()) but having access to eager tensors
[worker-1]: means `tf.py_function`s can use accelerators such as GPUs as well as
[worker-1]: being differentiable using a gradient tape.
[worker-1]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func
[worker-1]: (it is not differentiable, and manipulates numpy arrays). It drops the
[worker-1]: stateful argument making all functions stateful.
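The deprecation notice printed by worker-1 names the two V2 replacements for tf.py_func. A small illustrative sketch of both follows (the doubling functions and the input values are made up for illustration, not taken from the test):

# Illustrative sketch of the two replacements named in the deprecation notice.
import numpy as np
import tensorflow as tf

def double_eager(x):
  # Receives a tf.Tensor (eager); can run on accelerators and is
  # differentiable when wrapped with tf.py_function under a GradientTape.
  return x * 2.0

def double_numpy(x):
  # Receives a np.ndarray; not differentiable, mirrors old tf.py_func semantics.
  return np.asarray(x) * 2.0

x = tf.constant([1.0, 2.0, 3.0])
y_eager = tf.py_function(func=double_eager, inp=[x], Tout=tf.float32)
y_numpy = tf.numpy_function(func=double_numpy, inp=[x], Tout=tf.float32)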
[worker-1]: [worker-1]: W0331 21:55:35.596148 281473053979520 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py:1552: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: tf.py_func is deprecated in TF V2. Instead, there are two [worker-1]: options available in V2. [worker-1]: - tf.py_function takes a python function which manipulates tf eager [worker-1]: tensors instead of numpy arrays. It's easy to convert a tf eager tensor to [worker-1]: an ndarray (just call tensor.numpy()) but having access to eager tensors [worker-1]: means `tf.py_function`s can use accelerators such as GPUs as well as [worker-1]: being differentiable using a gradient tape. [worker-1]: - tf.numpy_function maintains the semantics of the deprecated tf.py_func [worker-1]: (it is not differentiable, and manipulates numpy arrays). It drops the [worker-1]: stateful argument making all functions stateful. [worker-1]: -- Test timed out at 2023-03-31 22:10:22 UTC -- Thread 0x0000fffefa7cf1e0 (most recent call first): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 260 in _continuously_readline_from_sub File "/usr/lib/python3.11/threading.py", line 975 in run File [worker-1]: Thread 0x0000ffff6aeff1e0 (most recent call first): "/usr/lib/python3.11/threading.py", line 1038 in _bootstrap_inner File "/usr/lib/python3.11/threading.py", line 995 in _bootstrap Thread 0x0000fffefafdf1e0 (most recent call first): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 527 in _process_watchdog File "/usr/lib/python3.11/threading.py", line 975 in run File "/usr/lib/python3.11/threading.py", line 1038 in _bootstrap_inner File "/usr/lib/python3.11/threading.py"[worker-0]: Thread 0x0000ffff6aeff1e0 (most recent call first): , line 995 in _bootstrap Thread 0x0000fffefb7ef1e0 (most recent call first): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 264 in _continuously_readline_from_sub File "/usr/lib/python3.11/threading.py", line 975 in run[worker-0]: Thread 0x0000ffff6aeff1e0 (most recent call first): File "/usr/lib/python3.11/threading.py", line 1038 in _bootstrap_inner File "/usr/lib/python3.11/threading.py", line 995 in _bootstrap Thread 0x0000fffefbfff1e0 (most recent call first): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 527[worker-1]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 789 in _message_checking_func in _process_watchdog File "/usr/lib/python3.11/threading.py", line 975 in [worker-1]: File "/usr/lib/python3.11/threading.py", line 975 in run run File "/usr/lib/python3.11/threading.py[worker-1]: File "/usr/lib/python3.11/threading.py", line 1038 in _bootstrap_inner ", line 1038 in [worker-1]: File "/usr/lib/python3.11/threading.py", line 995 in _bootstrap _bootstrap_inner[worker-1]: File "/usr/lib/python3.11/threading.py[worker-1]: Current thread 0x0000ffff8d657380 (most recent call first): ", line 995 in _bootstrap[worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/execute.py", line 53 in quick_execute Thread 0x0000ffff00a1f1e0 (most recent call first): [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 176 in call File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py"[worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/monomorphic_function.py", line 1352 in _call_flat , line 264 in _continuously_readline_from_sub[worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 912 in _call File "/usr/lib/python3.11/threading.py[worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 840 in __call__ ", line 975 in [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/util/traceback_utils.py", line 141 in error_handler run File "[worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 886 in replica_fn /usr/lib/python3.11/threading.py", line 1038[worker-1]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060 in _run_contained in _bootstrap_inner File [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1033 in _pool_runner_worker "/usr/lib/python3.11/threading.py", line [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060 in _run_contained 995 in _bootstrap [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 854 in __call__ Current thread 0x0000ffffab2b7380 (most recent call first): File "/usr/lib/python3.11/multiprocessing/connection.py", line 378 in _recv File "/usr/lib/python3.11/multiprocessing/connection.py", line 413 in _recv_bytes File "/usr/lib/python3.11/multiprocessing/connection.py", line 249 in recv File [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 789 in _message_checking_func "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 991 in run File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py[worker-0]: File "/usr/lib/python3.11/threading.py", line 975 in run ", line 889 in [worker-0]: File "/usr/lib/python3.11/threading.py", line 1038 in _bootstrap_inner testAllGatherSameShape File "[worker-0]: File "/usr/lib/python3.11/threading.py", line 995 in _bootstrap /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/combinations.py", line 559[worker-0]: in decorator File [worker-0]: Current thread 0x0000ffff8d657380 (most recent call first): "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py"[worker-0]: File "/usr/lib/python3.11/multiprocessing/connection.py", line 378 in _recv , line 343 in [worker-0]: File "/usr/lib/python3.11/multiprocessing/connection.py", line 
413 in _recv_bytes execute_test_method File "[worker-0]: File "/usr/lib/python3.11/multiprocessing/connection.py", line 249 in recv /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1029 in _pool_runner_worker 360 in decorated [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060 in _run_contained File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/parameterized.py"[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 854 in __call__ , line 314 in bound_param_test[worker-0]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108 in run File "/usr/lib/python3.11/unittest/case.py[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54 in ", line 579 in _callTestMethod[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/app.py", line 258 in _run_main File "/usr/lib/python3.11/unittest/case.py"[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/app.py", line 312 in run , line 623 in run[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54 in _run_with_absl File "/usr/lib/python3.11/unittest/case.py[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755 in _run_with_setenv ", line 678 in [worker-0]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314 in _bootstrap __call__ File "[worker-0]: File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133 in _main /usr/lib/python3.11/unittest/suite.py", line [worker-0]: File 
"/usr/lib/python3.11/multiprocessing/forkserver.py", line 313 in _serve_one 122 in run [worker-0]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274 in main File "/usr/lib/python3.11/unittest/suite.py"[worker-0]: File "", line 1 in , line 84 in __call__[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 151 in _if_spawn_run_and_exit File "/usr/lib/python3.11/unittest/suite.py[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 164 in test_main ", line 122 in [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1455 in test_main run File "[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/test_util.py", line 138 in main /usr/lib/python3.11/unittest/suite.py", line 84[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 1352 in in __call__ File "/usr/lib/python3.11/unittest/runner.py", line 217 in run File "/usr/lib/python3.11/unittest/main.py", line 274 in runTests File "/usr/lib/python3.11/unittest/main.py", line 102 in __init__ File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/absltest.py", line 2537 in _run_and_get_tests_result File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/absltest.py", line 2568 in run_tests File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/absltest.py", line 2156 in _run_in_app File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/testing/absltest.py", line 2049 in main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line [worker-0]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 789 in _message_checking_func 51 in g_main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/app.py", line 258 in _run_main File "[worker-0]: File "/usr/lib/python3.11/threading.py", line 975 in run /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/app.py", line 312[worker-0]: File "/usr/lib/python3.11/threading.py", line 1038 in _bootstrap_inner in run File [worker-0]: File "/usr/lib/python3.11/threading.py", line 995 in _bootstrap "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line [worker-0]: 60 in main_wrapper [worker-0]: Current thread 0x0000ffff8d657380 (most recent call first): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/platform/benchmark.py[worker-0]: File "/usr/lib/python3.11/multiprocessing/connection.py", line 378 in _recv ", line 489 in [worker-0]: File "/usr/lib/python3.11/multiprocessing/connection.py", line 413 in _recv_bytes benchmarks_main File "[worker-0]: File "/usr/lib/python3.11/multiprocessing/connection.py", line 249 in recv /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/platform/googletest.py", line 62[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1029 in _pool_runner_worker in main File [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060 in _run_contained "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/platform/test.py", line [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 854 in __call__ 56 in main [worker-0]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108 in run File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/test.py"[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54 in , line 25 in main [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/app.py", line 258 in _run_main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py"[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/app.py", line 312 in run , line 167 in test_main[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54 in _run_with_absl File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py[worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755 in _run_with_setenv ", line 1455 in [worker-0]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314 in _bootstrap test_main File "[worker-0]: File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133 in _main /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/test_util.py", line [worker-0]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 313 in _serve_one 138 in main [worker-0]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274 in main File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py"[worker-0]: File "", line 1 in , line 1352 in [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 151 in _if_spawn_run_and_exit [worker-0]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 164 in test_main [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1455 in test_main [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/test_util.py", line 138 in main [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops_test.py", line 1352 in [worker-1]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108 in run [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54 in [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/app.py", line 258 in _run_main [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/absl_py/absl/app.py", line 312 in run [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54 in _run_with_absl [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/cross_device_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755 in _run_with_setenv [worker-1]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314 in _bootstrap [worker-1]: File "/usr/lib/python3.11/multiprocessing/spawn.py", line 133 in _main [worker-1]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 313 in _serve_one [worker-1]: File "/usr/lib/python3.11/multiprocessing/forkserver.py", line 274 in main ================================================================================ //tensorflow/c:c_api_experimental_test PASSED in 29.7s //tensorflow/c:c_api_function_test PASSED in 28.6s //tensorflow/c:c_api_test_cpu PASSED in 31.1s //tensorflow/c:c_test PASSED in 43.1s //tensorflow/c:env_test_cpu PASSED in 23.6s //tensorflow/c:kernels_test_cpu PASSED in 33.0s //tensorflow/c:ops_test PASSED in 23.5s //tensorflow/c:while_loop_test PASSED in 27.9s //tensorflow/c/eager:c_api_cluster_test_cpu PASSED in 30.8s 
//tensorflow/c/eager:c_api_remote_function_test_cpu PASSED in 31.3s //tensorflow/c/eager:c_api_remote_test_cpu PASSED in 29.6s //tensorflow/c/eager:c_api_test_cpu PASSED in 35.1s //tensorflow/c/eager:custom_device_test PASSED in 30.1s //tensorflow/c/eager/parallel_device:parallel_device_lib_test PASSED in 34.1s //tensorflow/c/eager/parallel_device:parallel_device_remote_test PASSED in 33.3s //tensorflow/c/eager/parallel_device:parallel_device_test PASSED in 31.0s //tensorflow/c/experimental/filesystem/plugins/gcs:expiring_lru_cache_test PASSED in 0.1s //tensorflow/c/experimental/filesystem/plugins/gcs:ram_file_block_cache_test PASSED in 2.5s //tensorflow/c/experimental/grappler:grappler_test PASSED in 27.1s //tensorflow/c/experimental/ops/gen/common:case_format_test PASSED in 0.8s //tensorflow/c/experimental/ops/gen/cpp:cpp_generator_test PASSED in 1.1s //tensorflow/c/experimental/ops/gen/cpp/renderers:renderer_test PASSED in 0.5s //tensorflow/c/experimental/saved_model/core:constant_loading_test PASSED in 23.6s //tensorflow/c/experimental/saved_model/core:object_graph_traversal_test PASSED in 13.8s //tensorflow/c/experimental/saved_model/core:saved_variable_loading_test PASSED in 21.8s //tensorflow/c/experimental/saved_model/core:signature_flattening_test PASSED in 11.9s //tensorflow/c/experimental/saved_model/core:tf_concrete_function_loading_test PASSED in 11.4s //tensorflow/c/experimental/saved_model/core/ops:restore_ops_test PASSED in 10.7s //tensorflow/c/experimental/saved_model/core/ops:variable_ops_test PASSED in 17.8s //tensorflow/c/experimental/saved_model/internal:saved_model_api_test PASSED in 30.8s //tensorflow/c/experimental/stream_executor:stream_executor_test PASSED in 0.1s //tensorflow/c/kernels:bitcast_op_test PASSED in 1.1s //tensorflow/c/kernels:summary_op_benchmark_test PASSED in 0.9s //tensorflow/c/kernels:summary_op_test PASSED in 0.5s //tensorflow/c/kernels:tensor_shape_utils_test PASSED in 0.2s //tensorflow/cc:cc_op_gen_test PASSED in 0.3s //tensorflow/cc:client_client_session_test PASSED in 1.8s //tensorflow/cc:coordinator_test PASSED in 4.9s //tensorflow/cc:framework_cc_ops_test PASSED in 2.1s //tensorflow/cc:framework_gradient_checker_test PASSED in 2.5s //tensorflow/cc:framework_gradients_test PASSED in 17.0s //tensorflow/cc:framework_scope_test PASSED in 0.6s //tensorflow/cc:framework_while_gradients_test PASSED in 3.4s //tensorflow/cc:gradients_array_grad_test PASSED in 7.3s //tensorflow/cc:gradients_data_flow_grad_test PASSED in 3.2s //tensorflow/cc:gradients_functional_grad_test PASSED in 3.4s //tensorflow/cc:gradients_image_grad_test PASSED in 7.2s //tensorflow/cc:gradients_linalg_grad_test PASSED in 3.2s //tensorflow/cc:gradients_manip_grad_test PASSED in 2.8s //tensorflow/cc:gradients_math_grad_test PASSED in 7.7s //tensorflow/cc:gradients_nn_grad_test PASSED in 4.1s //tensorflow/cc:gradients_resource_variable_grad_test PASSED in 1.7s //tensorflow/cc:ops_const_op_test PASSED in 0.8s //tensorflow/cc:ops_while_loop_test PASSED in 1.3s //tensorflow/cc:queue_runner_test PASSED in 12.1s //tensorflow/cc/experimental/base/tests:tensor_test PASSED in 0.2s //tensorflow/cc/experimental/base/tests:tensorhandle_test PASSED in 27.5s //tensorflow/cc/experimental/libexport:load_test PASSED in 0.2s //tensorflow/cc/experimental/libexport:save_test PASSED in 0.2s //tensorflow/cc/experimental/libtf:libtf_module_test PASSED in 24.2s //tensorflow/cc/experimental/libtf:libtf_object_test PASSED in 0.1s //tensorflow/cc/experimental/libtf:libtf_perf_test PASSED in 0.2s 
//tensorflow/cc/experimental/libtf:libtf_runtime_test PASSED in 30.5s //tensorflow/cc/experimental/libtf:libtf_transform_test PASSED in 27.4s //tensorflow/cc/experimental/libtf:libtf_value_test PASSED in 0.2s //tensorflow/cc/experimental/libtf:libtf_visit_test PASSED in 0.2s //tensorflow/cc/experimental/libtf/impl:iostream_test PASSED in 0.1s //tensorflow/cc/experimental/libtf/impl:none_test PASSED in 0.2s //tensorflow/cc/experimental/libtf/impl:scalars_test PASSED in 0.1s //tensorflow/cc/experimental/libtf/impl:string_test PASSED in 0.1s //tensorflow/cc/experimental/libtf/impl:tensor_spec_test PASSED in 0.2s //tensorflow/cc/saved_model:bundle_v2_test PASSED in 0.1s //tensorflow/cc/saved_model:fingerprinting_test PASSED in 1.1s //tensorflow/cc/saved_model:metrics_test PASSED in 0.2s //tensorflow/cc/saved_model:reader_test PASSED in 0.2s //tensorflow/cc/saved_model:saved_model_bundle_lite_test PASSED in 6.7s //tensorflow/cc/saved_model:saved_model_bundle_test PASSED in 21.3s //tensorflow/cc/saved_model:util_test PASSED in 0.4s //tensorflow/cc/saved_model/experimental/tests:saved_model_api_test PASSED in 29.3s //tensorflow/cc/tools:freeze_saved_model_test PASSED in 1.8s //tensorflow/compiler/aot:codegen_test PASSED in 29.1s //tensorflow/compiler/jit:compilability_check_util_test PASSED in 29.2s //tensorflow/compiler/jit:deadness_analysis_test PASSED in 9.5s //tensorflow/compiler/jit:device_compilation_cache_test PASSED in 4.9s //tensorflow/compiler/jit:device_compilation_cluster_signature_test PASSED in 6.6s //tensorflow/compiler/jit:device_compilation_profiler_test PASSED in 25.0s //tensorflow/compiler/jit:device_compiler_client_test PASSED in 18.6s //tensorflow/compiler/jit:device_compiler_disable_test PASSED in 19.3s //tensorflow/compiler/jit:device_executable_persistor_test PASSED in 21.9s //tensorflow/compiler/jit:device_util_test PASSED in 5.2s //tensorflow/compiler/jit:encapsulate_util_test PASSED in 0.7s //tensorflow/compiler/jit:node_matchers_test PASSED in 0.5s //tensorflow/compiler/jit:resource_operation_safety_analysis_test PASSED in 11.5s //tensorflow/compiler/jit:shape_inference_test PASSED in 1.0s //tensorflow/compiler/jit:xla_activity_listener_test PASSED in 23.3s //tensorflow/compiler/jit:xla_cluster_util_test PASSED in 10.4s //tensorflow/compiler/jit:xla_compile_util_test PASSED in 5.2s //tensorflow/compiler/jit:xla_kernel_creator_test PASSED in 9.1s //tensorflow/compiler/jit/tests:auto_clustering_test PASSED in 24.2s //tensorflow/compiler/mlir:mlir_graph_optimization_pass_test PASSED in 11.6s //tensorflow/compiler/mlir:register_common_dialects_test PASSED in 17.1s //tensorflow/compiler/mlir/lite:lstm_utils_test PASSED in 0.7s //tensorflow/compiler/mlir/lite:perception_ops_utils_test PASSED in 1.0s //tensorflow/compiler/mlir/lite:size_utils_test PASSED in 0.1s //tensorflow/compiler/mlir/lite:tftext_utils_test PASSED in 0.7s //tensorflow/compiler/mlir/lite/experimental/remat:rematerializer_test PASSED in 1.1s //tensorflow/compiler/mlir/lite/experimental/tac:execution_metadata_exporter_test PASSED in 3.8s //tensorflow/compiler/mlir/lite/experimental/tac/tests:compute-cost.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/lite/experimental/tac/tests:device-transform-gpu.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/experimental/tac/tests:device-transform-nnapi.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/experimental/tac/tests:fold-constants-to-subgraph.mlir.test PASSED in 1.4s 
//tensorflow/compiler/mlir/lite/experimental/tac/tests:get-alternative-subgraph.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:get-op-cost.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:pick-subgraphs.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:raise-target-subgraphs.mlir.test PASSED in 13.6s
//tensorflow/compiler/mlir/lite/experimental/tac/tests:target-annotation.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/experimental/tac/tests/e2e:device-transform-nnapi.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/experimental/tac/tests/e2e:simple-graph.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/metrics:error_collector_inst_test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/quantization:numerical_utils_test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/quantization/lite:quantize_model_test PASSED in 11.0s
//tensorflow/compiler/mlir/lite/quantization/lite:quantize_weights_test PASSED in 10.9s
//tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:fallback_to_flex_ops_default.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:fallback_to_flex_ops_legacy.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:tf_to_quant.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:tf_to_quant_4bit.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/quantization/tests:import_quant_stats.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/sparsity:sparsify_model_test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/stablehlo/tests:fold_broadcast.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/stablehlo/tests:fuse_mhlo_convolution.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-inplaceupdate.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-skip-quantization-ops.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tf-fb-tf.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-add.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-broadcast_in_dim.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-clamp.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-compare.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-concat.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-constant.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-conv.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-dot.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-gather.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-max.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-mul.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-pad.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-reshape.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-rsqrt.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-scatter.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-sub.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl.mlir.test PASSED in 6.2s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-add.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-broadcast.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-clamp.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-concat.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-constant.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-conv.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-max.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-mul.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-pad.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-reshape.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-rsqrt.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-sub.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/stablehlo/tests:odml-to-stablehlo-allow-tf.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/stablehlo/tests:odml-to-stablehlo-smuggle-resize.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:optimize.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-clamp.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-concat.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-conv.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-division.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-logistic.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-multiply.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-reduce-window.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-resize-bilinear.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-subtract.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-tf-quantize.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/stablehlo/tests:unfuse_mhlo_batch_norm.mlir.test PASSED in 6.3s
//tensorflow/compiler/mlir/lite/tests:analyze-variables.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:canonicalize.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:const-fold.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:decompose-hybrid-quantization.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:default_quant_params.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:dilated-conv.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:fuse-tftext.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:get-arithmetic-count.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:guarantee_func_has_one_use.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:inlining.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests:insert_call_once_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-assert.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-hashtables.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-no-runtime-verification.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-variables.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-while.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests:legalize-tf.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests:legalize_jax_random.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests:lift_tflite_flex_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list-default-to-single-batch.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list-enable-dynamic-update-slice.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:modify_io_nodes.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:ops.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests:optimize-after-quantization.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:optimize.mlir.test PASSED in 5.7s
//tensorflow/compiler/mlir/lite/tests:optimize_functional_ops.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:optimize_no_verify.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:optimize_op_order.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:partitioned-topological-sort.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:pin-ops-with-side-effects.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests:post-quantize-dynamic-range.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests:post-quantize.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests:prepare-composite-functions-tf.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-dynamic-range.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-post-training-16bits.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-post-training.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-signed.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:prepare-tf-fake-quant-4bit.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests:prepare-tf-fake-quant.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests:prepare-tf-with-allowing-bf16-and-f16-type-legalization.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:prepare-tf.mlir.test PASSED in 2.9s
//tensorflow/compiler/mlir/lite/tests:quantize-dynamic-range.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests:quantize-numeric-verify.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:quantize-variables.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests:quantize.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests:raise-custom-ops.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:reduce_while_operands.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:shape-inference.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:split-merged-operands.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests:tfl_while_op_licm.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:tfl_while_outline.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests:trim-functions-tf.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests:unfold-large-splat-constant.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/debuginfo:v1_1.0_224_frozen.wrong_attr.line.part.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/debuginfo:v1_1.0_224_frozen.wrong_attr.stack.part.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/end2end:add.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/end2end:back2back_fake_quant.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/end2end:control_flow_v1.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/end2end:conv_2d.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/end2end:conv_2d_nchw.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/end2end:custom_opdef.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/end2end:disallow_stateful_partitioned_call.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_per_channel.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_per_channel_4bit.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_without_identity.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_without_identity_4bit.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests/end2end:graph-input-node.pbtxt.test PASSED in 2.1s
//tensorflow/compiler/mlir/lite/tests/end2end:graph_with_placeholder_with_default.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/end2end:if_op.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/end2end:quant_stats.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/end2end:unroll_batch_matmul.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/end2end:unroll_batch_matmul_disabled.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:basic_lstm.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:bucketize.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:constants.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:control_edges.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:custom_op.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:dynamic_shape.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:external_constant.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:if_op.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:import_json.json.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:importer_test_min_max.cc.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:importer_test_min_max.cc.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:input_arrays.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:input_output_names_attr.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:legacy_reshape.json.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:lstm.json.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:lstm.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:many_attribute_op.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:math.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:matmul.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:multi_output_op.json.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:optional.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:optional_input.json.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:output_arrays.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:pruning.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:pruning_function_input_as_output.mlir.test PASSED in 2.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:quant_stats.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:quantization.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:reshape.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:signature.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:signature_with_multiple_entry_points.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:simple.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:tf_variant_type.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:unranked_function_output.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:unranked_tensor.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:while_op.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2exec:tfl_while_op.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:basic_lstm.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:bucketize.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:custom_op_with_tflite_op.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:depthwise_conv2d.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:depthwise_conv2d_v2.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_builtin.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_custom.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_flex.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_flex_enable_builtin.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:dynamic_shape_constant.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fake_quant.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_exclusively.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_complex128.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_f64.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_tflite_op.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fully_connected.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fully_connected_v2.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:hashtable_resource.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:if_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:logical.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:low_bit_packing.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm_asym_attr.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm_quantized.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:math.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:metadata.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:mul_v2.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:mul_v3.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:nn.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:numeric_verify.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:optional.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:quantization.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:reshape.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_output_override.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_with_multiple_entry_points.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_with_no_inputs.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple_with_connected_control_nodes.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple_with_unconnected_control_nodes.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:svdf.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:svdf_v2.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:tf_entry_function.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:tfl_while_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:transpose_conv_optional.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:type_attr.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unidirectional_sequence_lstm.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unidirectional_sequence_rnn.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unranked_tensor.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unsorted_segment_prod.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:variant_type_on_func.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:variant_type_on_op.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:while_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/calibrator:calibrator_singleton_test PASSED in 0.2s
//tensorflow/compiler/mlir/quantization/tensorflow/calibrator:custom_aggregator_op_test PASSED in 13.9s
//tensorflow/compiler/mlir/quantization/tensorflow/cc:const_op_size_test PASSED in 0.3s
//tensorflow/compiler/mlir/quantization/tensorflow/cc:convert_asset_args_test PASSED in 5.0s
//tensorflow/compiler/mlir/quantization/tensorflow/cc:save_variables_test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/cc:status_macro_test PASSED in 0.3s
//tensorflow/compiler/mlir/quantization/tensorflow/debugging:mlir_dump_test PASSED in 0.4s
//tensorflow/compiler/mlir/quantization/tensorflow/python:concurrency_test PASSED in 34.8s
//tensorflow/compiler/mlir/quantization/tensorflow/python:pywrap_quantize_model_test PASSED in 17.9s
//tensorflow/compiler/mlir/quantization/tensorflow/python:representative_dataset_test PASSED in 9.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:cast_bf16_ops_to_f32.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_custom_aggregation_op_to_quant_stats.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_fake_quant_to_qdq.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_tf_quant_ops_to_mhlo.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_tpu_model_to_cpu.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:duplicate_shape_determining_constants.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:fake_quant_e2e_flow.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:fake_quant_e2e_xla.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_custom_aggregation_ops.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_main_function.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions_drq.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions_weight_only.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_restore_op.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_save_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:issue_ids_of_custom_aggregation_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_drq.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_drq_min_elements.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_xla.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:mark_functions_noinline.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:merge_initializer_function_ops_to_main.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:merge_save_function_ops_to_main.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:optimize.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_lifting.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_drq.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_drq_per_channel.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_ptq.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_ptq_per_channel.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:preprocess_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_drq.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_weight_only.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_xla.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_drq.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_xla.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:remove_var_init_by_const.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:replace_cast_hacks_with_tf_xla_ops.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:replace_cast_hacks_with_tf_xla_ops_large_constants.mlir.test PASSED in 17.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:unfreeze_constants.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/utils:tf_to_xla_attribute_utils_test PASSED in 26.7s
//tensorflow/compiler/mlir/tensorflow:bridge_logger_test PASSED in 4.5s
//tensorflow/compiler/mlir/tensorflow:cluster_util_test PASSED in 0.3s
//tensorflow/compiler/mlir/tensorflow:convert_tensor_test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow:convert_type_test PASSED in 0.3s
//tensorflow/compiler/mlir/tensorflow:device_util_test PASSED in 0.4s
//tensorflow/compiler/mlir/tensorflow:dump_graph_test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow:dump_mlir_util_test PASSED in 9.5s
//tensorflow/compiler/mlir/tensorflow:error_util_test PASSED in 0.3s
//tensorflow/compiler/mlir/tensorflow:tf_saved_model_test PASSED in 0.3s
//tensorflow/compiler/mlir/tensorflow:tpu_rewrite_device_util_test PASSED in 0.3s
//tensorflow/compiler/mlir/tensorflow/tests:add_functions_for_exported_names.mlir.test PASSED in 14.5s
//tensorflow/compiler/mlir/tensorflow/tests:annotate-parameter-replication.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:batchmatmul_to_einsum.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:breakup-islands.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests:cannonicalize_ops_outside_compilation.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:canonicalize.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:canonicalize_compile_and_replicate_attributes.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:check_control_dependencies.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:cluster_formation.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:cluster_ops_by_policy.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:cluster_outlining.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:cluster_tf_ops_pass.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests:constant-fold.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:constant_op_device_assignment.mlir.test PASSED in 6.0s
//tensorflow/compiler/mlir/tensorflow/tests:convert-tf-control-flow-to-scf.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:convert_control_to_data_outputs.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:convert_launch_func_to_tf_call.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:convert_session_initializer_to_function.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:convert_to_legacy_compile_and_replicate_attributes.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:decompose_reduce_dataset.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:decompose_resource_ops.mlir.test PASSED in 6.2s
//tensorflow/compiler/mlir/tensorflow/tests:device_assignment.mlir.test PASSED in 6.1s
//tensorflow/compiler/mlir/tensorflow/tests:device_assignment_by_func_attr.mlir.test PASSED in 6.3s
//tensorflow/compiler/mlir/tensorflow/tests:device_attribute_to_launch.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests:device_canonicalize.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:device_copy.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:drop_while_shape_invariant.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:einsum.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:empty-main.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:end-to-end-tpu-reshard-variables.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:executor_canonicalize.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:executor_island_coarsening.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:executor_island_materialize_const.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:extract_head_tail_outside_compilation.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:extract_outside_compilation.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:extract_tpu_copy_with_dynamic_shape_op.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:fold-broadcast.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:freeze_variables.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:func-attr-invalid.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:func-attr.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests:functional-control-flow-to-cfg.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests:functional-control-flow-to-regions.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:functionalize-if-fail.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:functionalize-if.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:fused_kernel_matcher.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:gpu_fusion.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:graph_pruning.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:graph_pruning_preserve_ops.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:group_by_dialect.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:guarantee-all-funcs-one-use.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:hoist_loop_invariant.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:hoist_replicate_invariant_resource_writes.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:host_launch_to_outside_compiled.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import_invalid.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import_saved_model.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:inlining.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:isolate-placer.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:launch_outlining.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:launch_to_device_attribute.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:launch_to_device_attribute_legacy.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_gpu_cc_60.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_gpu_cc_70.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_to_nchw.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_to_nhwc.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_move_transposes_begin.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_move_transposes_end.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_to_nchw.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_to_nhwc.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:legalize_hlo.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg_arg_control_dep.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg_with_control_flow.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests:localize_var_handles.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests:lower_globals_to_ml_program.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:lower_globals_to_ml_program_invalid.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:lower_quantized.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:lower_tf.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:lower_variable_ops_to_ml_program.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:mark_input_output_aliases.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:mark_ops_for_outside_compilation.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:materialize_passthrough_op.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:merge_control_flow.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/tensorflow/tests:mlprogram.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:name_anonymous_iterators.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:optimize-arg-operand-constraint.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:optimize.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:order_by_dialect.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tensorflow/tests:outside_compiled_to_host_launch.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:parallel_execute_to_islands.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests:parallel_execute_to_islands_legacy.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:prepare_tpu_computation_for_tf_export.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:promote_resources_to_args.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:promote_resources_to_args_functions.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:promote_var_handles_to_args.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:readonly_references_to_resources.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:region-control-flow-to-functional.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:remove_unused_arguments.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests:remove_unused_while_results.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:replica_id_to_device_ordinal.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:replicate_invariant_op_hoisting.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:replicate_tensor_list_init_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:replicate_to_island.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:replicate_to_island_legacy.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:resource-alias-analysis-test.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:resource-device-inference.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:resource_analyzer.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests:resource_inlining.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests:resource_op_lifting.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:rewrite_tpu_embedding_ops.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:roundtrip-tf-executor.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:shape_inference.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:side-effect-analysis-test.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:sink_constant.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:split_into_island_per_op.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:stack_ops_decomposition.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:strip_noinline.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:strip_saved_module_metadata.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:strip_tf_attributes.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tensor_array_ops_decomposition.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:tensor_list_ops_decomposition.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf-executor-to-functional.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf-functional-to-executor.mlir.test PASSED in 6.3s
//tensorflow/compiler/mlir/tensorflow/tests:tf-ops.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf-reduce-identity.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf_data_fuse_map_and_batch.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tf_data_fuse_pmap_and_batch.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:tf_device_index_selector.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_device_ops.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:tf_device_ops_invalid.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_invalid.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_location_roundtrip.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_printer.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_side_effect.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_optimize.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_deduplicate_bound_input_bindings.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_assets.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_global_tensors.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_global_tensors_mutable_tensors.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_initialize_variables_in_session_init.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_initialize_variables_in_session_init_fail.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_lift_variables.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_lift_variables_invalid_session.mlir.test PASSED in 2.8s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_mark_initialized_variables.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_ops.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_ops_invalid.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_optimize_global_tensors.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_optimize_global_tensors_interprocedural.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_remove_vars_in_session_initializer.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_side_effect.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:tf_trait_folds.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-annotate-dynamic-shape-inputs.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-cluster-cleanup-attributes.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-dynamic-layout-pass.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-merge-variables-with-execute.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-multiple-while-body-func.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-resource-read-for-write.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-variable-runtime-reformatting.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_cluster_formation.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_colocate_composite_resource_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_device_propagation.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_host_computation_expansion.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_identity_pruning.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_parallel_execute_sink_resource_write.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_partitioned_op_conversion.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_reorder_replicate_and_partitioned_inputs.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_resource_partitioning.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_rewrite.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_sharding_identification.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_space_to_depth_pass.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_tail_with_tobool_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_update_embedding_enqueue_op_inputs.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_validate_inputs.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:transpose-op.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:unroll-batch-matmul.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests:update_control_dependencies.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:warn_when_using_deprecated_dumps.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:while_licm.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:xla_cluster_formation.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests:xla_inline_device_ops.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:xla_rewrite.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:add.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:argument-sharding-invalid.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:argument-sharding.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:constant-folding-hook.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:constant-folding.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph-resource.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph-resource.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:mlir-module-serialized-str-attr.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:replicate-tensor-list-init-ops.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:result-sharding.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:serialized-mlir-module-str-attr-invalid.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:serialized-mlir-module-str-attr.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:shape-inference-after-legalization.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:shape-inference.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:stablehlo_add.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_coarsening:executor_tpuv1_island_coarsening.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_coarsening:while_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_inlining:executor_tpuv1_inline_tpu_island.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_inlining:while_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:case_op.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:executor_tpuv1_outline_tpu_island.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:while_op.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:add.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-as-fetch.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-control-dep.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-data-type-with-subtype.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-data-type.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-multi-data-type-with-subtype.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-retval-attrs.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:case_op.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:const-values.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:device-arg-retval-attr.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:empty-input-shapes.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:empty-value-attr.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:feed-as-fetch.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:feed-control-dep.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:force_shared_name_for_resource_ops.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:function-func-attr.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:functional-if-ops.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:functional-while-ops.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function-control-ret.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function-retval-of-arg.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-custom-operation.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-default-attr.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-device-retval.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-empty-tensor-content.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-func-attr.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-call.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-control-ret-diff-island.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-control-ret-same-island.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-defs.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-input-shapes.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-name-bug.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-resource-args.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-gradient-def.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-input-func-arg-name-collision.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-library.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-malformed.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-scalar-input.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-uint8-return.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-undefined-output.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-version-info.pbtxt.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-while-loop.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:invalid-output-index.pbtxt.test PASSED in 2.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:legacy-fed-input-without-inputs.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:merge_node_with_function.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:mlir_passthrough_op.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:multi-output-feeds.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:multiple-use-next-iteration.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:node-locations.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:output-shapes-attr.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:output-shapes.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:parse_example.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:parse_example_v2.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:partial-device-name.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:prune_unused_nodes.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:quint8-const.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:shape-attrs.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:stateful-attribute.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:string-attr.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:switch_n.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:target.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:tensor-list.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:tf-data-pipeline.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:unregistered_kernel.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir/batch_use_same_function:saved_model.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:aliasing_arg_attr.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:case.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:convert_tensor.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:derived_shape_attr.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:derived_size_attr.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:device-arg-retval-attr.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:export_main_to_flib.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:fetch_feed_names.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:func_attr.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:func_list_attr.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-control-ret.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-order.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-resource-args-handle-info.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-resource-args.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:functional-if-ops.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:functional-while-ops.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:graph-as-function.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:infer_derived_attribute.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:invalid_input.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:legalized_name.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:missing-main.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:noop.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:optional_symbol_ref.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:output-shapes-attr.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:parse_example.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:parse_example_v2.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:preserve-entry-func-names.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:ref-type-attr.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:ref-while-loop.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:shape_list_attr.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:simple.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:simple_tf_dialect_op.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:stringescape.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:switchn.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf-gradient-attr.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf-legacy-call.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_add.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_identity_n.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_tpu_embedding_ops.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:type_attr.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:type_list_attr.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:unique_name.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:unique_output_name.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:while-loop.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/tf_to_hlo_pipeline:sccp-post-shape-inference.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/tpu_bridge_v1:end_to_end.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tf2xla/api/v0:compile_mlir_util_test PASSED in 6.7s
//tensorflow/compiler/mlir/tf2xla/api/v1:legalize_tf_test PASSED in 1.1s
//tensorflow/compiler/mlir/tf2xla/tests:adjust-layout.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tf2xla/tests:convert-mhlo-quant-to-int.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tf2xla/tests:hlo_xla_runtime_pipeline.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tf2xla/tests:hlo_xla_sparsification.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-BatchMatMulV2.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-binary-elementwise.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-collective.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-communication.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-include-tf2xla-fallback.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-no-tf2xla-fallback.mlir.test PASSED in 5.3s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-prefer-tf2xla.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-types.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-with-tf2xla-hlo-importer.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-with-tf2xla.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf.mlir.test PASSED in 9.7s
//tensorflow/compiler/mlir/tf2xla/tests:tfxla_device_specific_transformations_cpu.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tf2xla/tests:tfxla_device_specific_transformations_gpu.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tf2xla/tests:verify-tfxla-legalization-no-chlo.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tf2xla/tests:verify-tfxla-legalization.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tf2xla/transforms:tf2xla_rewriter_test PASSED in 16.0s
//tensorflow/compiler/mlir/tf2xla/transforms:verify_tfxla_legalization_test PASSED in 15.0s
//tensorflow/compiler/mlir/tf2xla/transforms:xla_legalize_targets_test PASSED in 0.6s
//tensorflow/compiler/mlir/tf2xla/transforms:xla_legalize_tf_test PASSED in 2.8s
//tensorflow/compiler/mlir/tfr:graph_decompose_test PASSED in 10.0s
//tensorflow/compiler/mlir/tfr:node_expansion_test PASSED in 7.6s
//tensorflow/compiler/mlir/tfr:op_reg_gen_test PASSED in 15.3s
//tensorflow/compiler/mlir/tfr:tfr_decompose_ctx_test PASSED in 5.5s
//tensorflow/compiler/mlir/tfr:tfr_gen_test PASSED in 26.4s
//tensorflow/compiler/mlir/tfr/examples/customization:test_ops_test PASSED in 17.5s
//tensorflow/compiler/mlir/tfr/examples/mnist:mnist_ops_test PASSED in 21.1s
//tensorflow/compiler/mlir/tfr/examples/pad:pad_ops_test PASSED in 20.9s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:buffer_deallocation.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:buffer_reuse.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:bufferize.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:copy_cleanup.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:embed_tf_framework.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:invalid.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:isinf.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:ops.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:parallel_loops_to_sequential.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:rewrite_tf_framework_assert.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tanh.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tf-legalize-to-lmhlo.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_abi_knowledge.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_framework_legalize_to_llvm.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_kernel_gpu_launch_to_llvm.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_to_jit_invocations.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:convert-tfl-uint8.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:fuse-bias-tf.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/tosa/tests:lower-complex-types.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tosa/tests:strip-quant-types.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tosa/tests:tf-tfl-to-tosa-pipeline.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/tosa/tests:tf-to-tosa-pipeline.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-dequantize_softmax.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-pipeline-filtered.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-pipeline.mlir.test PASSED in 5.7s //tensorflow/compiler/tests:adadelta_test_cpu PASSED in 11.7s //tensorflow/compiler/tests:adagrad_da_test_cpu PASSED in 10.6s //tensorflow/compiler/tests:adagrad_test_cpu PASSED in 10.3s //tensorflow/compiler/tests:adam_test_cpu PASSED in 12.7s //tensorflow/compiler/tests:add_n_test_cpu PASSED in 7.5s //tensorflow/compiler/tests:argminmax_test_cpu PASSED in 11.8s //tensorflow/compiler/tests:argminmax_test_cpu_mlir_bridge_test PASSED in 15.1s //tensorflow/compiler/tests:bucketize_op_test_cpu PASSED in 8.0s //tensorflow/compiler/tests:bucketize_op_test_cpu_mlir_bridge_test PASSED in 8.1s //tensorflow/compiler/tests:case_test_cpu PASSED in 7.5s //tensorflow/compiler/tests:cast_ops_test_cpu PASSED in 12.6s //tensorflow/compiler/tests:cast_ops_test_cpu_mlir_bridge_test PASSED in 8.9s //tensorflow/compiler/tests:categorical_op_test_cpu PASSED in 10.3s //tensorflow/compiler/tests:categorical_op_test_cpu_mlir_bridge_test PASSED in 12.6s //tensorflow/compiler/tests:cholesky_op_test_cpu PASSED in 38.0s //tensorflow/compiler/tests:cholesky_op_test_cpu_mlir_bridge_test PASSED in 25.4s //tensorflow/compiler/tests:clustering_test_cpu PASSED in 7.1s //tensorflow/compiler/tests:clustering_test_cpu_mlir_bridge_test PASSED in 13.3s //tensorflow/compiler/tests:concat_ops_test_cpu PASSED in 8.3s //tensorflow/compiler/tests:concat_ops_test_cpu_mlir_bridge_test PASSED in 11.8s //tensorflow/compiler/tests:cond_test_cpu PASSED in 11.2s //tensorflow/compiler/tests:const_arg_test_cpu PASSED in 7.0s //tensorflow/compiler/tests:const_test_cpu PASSED in 7.9s //tensorflow/compiler/tests:data_format_ops_test_cpu PASSED in 21.9s //tensorflow/compiler/tests:data_format_ops_test_cpu_mlir_bridge_test PASSED in 16.2s //tensorflow/compiler/tests:dense_layer_test_cpu PASSED in 11.3s //tensorflow/compiler/tests:dynamic_slice_ops_test_cpu PASSED in 10.5s //tensorflow/compiler/tests:dynamic_slice_ops_test_cpu_mlir_bridge_test PASSED in 10.9s //tensorflow/compiler/tests:dynamic_stitch_test_cpu PASSED in 12.6s //tensorflow/compiler/tests:dynamic_stitch_test_cpu_mlir_bridge_test PASSED in 7.9s //tensorflow/compiler/tests:eager_test_cpu PASSED in 14.3s //tensorflow/compiler/tests:einsum_op_test_cpu PASSED in 9.5s //tensorflow/compiler/tests:einsum_op_test_cpu_mlir_bridge_test PASSED in 7.7s //tensorflow/compiler/tests:ensure_shape_op_test_cpu PASSED in 7.7s 
//tensorflow/compiler/tests:extract_image_patches_op_test_cpu PASSED in 8.0s //tensorflow/compiler/tests:extract_image_patches_op_test_cpu_mlir_bridge_test PASSED in 8.5s //tensorflow/compiler/tests:fake_quant_ops_test_cpu PASSED in 12.6s //tensorflow/compiler/tests:fake_quant_ops_test_cpu_mlir_bridge_test PASSED in 14.9s //tensorflow/compiler/tests:fifo_queue_test_cpu PASSED in 7.8s //tensorflow/compiler/tests:fifo_queue_test_cpu_mlir_bridge_test PASSED in 8.6s //tensorflow/compiler/tests:ftrl_ops_test_cpu PASSED in 18.4s //tensorflow/compiler/tests:ftrl_ops_test_cpu_mlir_bridge_test PASSED in 18.2s //tensorflow/compiler/tests:ftrl_test_cpu PASSED in 14.5s //tensorflow/compiler/tests:function_test_cpu PASSED in 39.0s //tensorflow/compiler/tests:function_test_cpu_mlir_bridge_test PASSED in 7.6s //tensorflow/compiler/tests:gather_nd_op_test_cpu PASSED in 11.5s //tensorflow/compiler/tests:gather_nd_op_test_cpu_mlir_bridge_test PASSED in 9.8s //tensorflow/compiler/tests:gather_test_cpu PASSED in 36.3s //tensorflow/compiler/tests:gather_test_cpu_mlir_bridge_test PASSED in 44.6s //tensorflow/compiler/tests:jit_test_cpu PASSED in 70.3s //tensorflow/compiler/tests:listdiff_op_test_cpu PASSED in 11.2s //tensorflow/compiler/tests:listdiff_op_test_cpu_mlir_bridge_test PASSED in 12.0s //tensorflow/compiler/tests:lrn_ops_test_cpu PASSED in 8.2s //tensorflow/compiler/tests:lrn_ops_test_cpu_mlir_bridge_test PASSED in 7.8s //tensorflow/compiler/tests:lstm_test_cpu PASSED in 20.1s //tensorflow/compiler/tests:manip_ops_test_cpu PASSED in 13.4s //tensorflow/compiler/tests:manip_ops_test_cpu_mlir_bridge_test PASSED in 11.9s //tensorflow/compiler/tests:matrix_band_part_test_cpu PASSED in 25.2s //tensorflow/compiler/tests:matrix_band_part_test_cpu_mlir_bridge_test PASSED in 41.3s //tensorflow/compiler/tests:matrix_inverse_op_test_cpu PASSED in 18.4s //tensorflow/compiler/tests:matrix_inverse_op_test_cpu_mlir_bridge_test PASSED in 30.1s //tensorflow/compiler/tests:matrix_solve_op_test_cpu PASSED in 10.9s //tensorflow/compiler/tests:matrix_solve_op_test_cpu_mlir_bridge_test PASSED in 18.7s //tensorflow/compiler/tests:matrix_triangular_solve_op_test_cpu PASSED in 46.1s //tensorflow/compiler/tests:matrix_triangular_solve_op_test_cpu_mlir_bridge_test PASSED in 33.0s //tensorflow/compiler/tests:momentum_test_cpu PASSED in 9.6s //tensorflow/compiler/tests:nary_ops_test_cpu PASSED in 10.9s //tensorflow/compiler/tests:nary_ops_test_cpu_mlir_bridge_test PASSED in 11.0s //tensorflow/compiler/tests:nullary_ops_test_cpu PASSED in 8.5s //tensorflow/compiler/tests:nullary_ops_test_cpu_mlir_bridge_test PASSED in 7.9s //tensorflow/compiler/tests:placeholder_test_cpu PASSED in 6.2s //tensorflow/compiler/tests:placeholder_test_cpu_mlir_bridge_test PASSED in 7.7s //tensorflow/compiler/tests:proximal_adagrad_test_cpu PASSED in 11.0s //tensorflow/compiler/tests:proximal_gradient_descent_test_cpu PASSED in 9.4s //tensorflow/compiler/tests:quantized_ops_test_cpu PASSED in 14.1s //tensorflow/compiler/tests:reduce_window_test_cpu PASSED in 8.5s //tensorflow/compiler/tests:reduce_window_test_cpu_mlir_bridge_test PASSED in 7.6s //tensorflow/compiler/tests:reshape_op_test_cpu PASSED in 7.5s //tensorflow/compiler/tests:reshape_op_test_cpu_mlir_bridge_test PASSED in 9.1s //tensorflow/compiler/tests:reverse_ops_test_cpu PASSED in 13.3s //tensorflow/compiler/tests:reverse_ops_test_cpu_mlir_bridge_test PASSED in 10.8s //tensorflow/compiler/tests:reverse_sequence_op_test_cpu PASSED in 8.6s 
//tensorflow/compiler/tests:reverse_sequence_op_test_cpu_mlir_bridge_test PASSED in 11.5s //tensorflow/compiler/tests:risc_ops_test_cpu_mlir_bridge_test PASSED in 6.5s //tensorflow/compiler/tests:rmsprop_test_cpu PASSED in 10.3s //tensorflow/compiler/tests:scatter_nd_op_test_cpu PASSED in 25.0s //tensorflow/compiler/tests:scatter_nd_op_test_cpu_mlir_bridge_test PASSED in 29.8s //tensorflow/compiler/tests:searchsorted_op_test_cpu PASSED in 8.3s //tensorflow/compiler/tests:searchsorted_op_test_cpu_mlir_bridge_test PASSED in 10.0s //tensorflow/compiler/tests:segment_reduction_ops_test_cpu PASSED in 24.9s //tensorflow/compiler/tests:segment_reduction_ops_test_cpu_mlir_bridge_test PASSED in 53.1s //tensorflow/compiler/tests:self_adjoint_eig_op_test_cpu PASSED in 16.9s //tensorflow/compiler/tests:self_adjoint_eig_op_test_cpu_mlir_bridge_test PASSED in 15.9s //tensorflow/compiler/tests:slice_ops_test_cpu PASSED in 18.6s //tensorflow/compiler/tests:slice_ops_test_cpu_mlir_bridge_test PASSED in 16.4s //tensorflow/compiler/tests:sparse_to_dense_op_test_cpu PASSED in 12.0s //tensorflow/compiler/tests:sparse_to_dense_op_test_cpu_mlir_bridge_test PASSED in 8.7s //tensorflow/compiler/tests:stack_ops_test_cpu PASSED in 7.2s //tensorflow/compiler/tests:tensor_list_ops_test_cpu PASSED in 9.7s //tensorflow/compiler/tests:tridiagonal_matmul_ops_test_cpu PASSED in 12.7s //tensorflow/compiler/tests:tridiagonal_matmul_ops_test_cpu_mlir_bridge_test PASSED in 14.4s //tensorflow/compiler/tests:tridiagonal_solve_ops_test_cpu PASSED in 12.8s //tensorflow/compiler/tests:tridiagonal_solve_ops_test_cpu_mlir_bridge_test PASSED in 17.3s //tensorflow/compiler/tests:unique_ops_test_cpu PASSED in 7.2s //tensorflow/compiler/tests:variable_ops_test_cpu PASSED in 29.3s //tensorflow/compiler/tests:variable_ops_test_cpu_mlir_bridge_test PASSED in 14.0s //tensorflow/compiler/tests:where_op_test_cpu PASSED in 15.3s //tensorflow/compiler/tests:while_test_cpu PASSED in 10.4s //tensorflow/compiler/tests:xla_call_module_test_cpu PASSED in 9.4s //tensorflow/compiler/tests:xla_custom_call_ops_test_cpu PASSED in 6.4s //tensorflow/compiler/tests:xla_device_gpu_test_cpu PASSED in 7.8s //tensorflow/compiler/tests:xla_device_test_cpu PASSED in 10.5s //tensorflow/compiler/tests:xla_device_test_cpu_mlir_bridge_test PASSED in 25.8s //tensorflow/compiler/tests:xla_ops_test_cpu PASSED in 34.7s //tensorflow/compiler/tests:xla_ops_test_cpu_mlir_bridge_test PASSED in 36.6s //tensorflow/compiler/tests:xla_test_test PASSED in 8.0s //tensorflow/compiler/tf2xla:const_analysis_test PASSED in 6.5s //tensorflow/compiler/tf2xla:cpu_function_runtime_test PASSED in 0.3s //tensorflow/compiler/tf2xla:functionalize_cond_test PASSED in 0.6s //tensorflow/compiler/tf2xla:functionalize_control_flow_test PASSED in 1.2s //tensorflow/compiler/tf2xla:fused_batchnorm_reserve_space_test_cpu PASSED in 22.0s //tensorflow/compiler/tf2xla:graph_compiler_test PASSED in 5.7s //tensorflow/compiler/tf2xla:literal_util_test PASSED in 0.6s //tensorflow/compiler/tf2xla:resource_operation_table_test PASSED in 8.0s //tensorflow/compiler/tf2xla:resource_util_test_cpu PASSED in 1.9s //tensorflow/compiler/tf2xla:sharding_util_test PASSED in 1.0s //tensorflow/compiler/tf2xla:tf2xla_test PASSED in 15.9s //tensorflow/compiler/tf2xla:tf2xla_util_test PASSED in 0.7s //tensorflow/compiler/tf2xla:xla_compiler_test PASSED in 15.6s //tensorflow/compiler/tf2xla:xla_jit_compiled_cpu_function_test PASSED in 16.8s //tensorflow/compiler/tf2xla:xla_op_registry_test PASSED in 5.3s 
//tensorflow/compiler/tf2xla/kernels:rng_converter_utils_test PASSED in 1.8s //tensorflow/compiler/xla:array2d_test PASSED in 0.8s //tensorflow/compiler/xla:array3d_test PASSED in 0.2s //tensorflow/compiler/xla:array4d_test PASSED in 0.2s //tensorflow/compiler/xla:array_test PASSED in 0.2s //tensorflow/compiler/xla:bit_cast_test PASSED in 0.1s //tensorflow/compiler/xla:comparison_util_test PASSED in 0.4s //tensorflow/compiler/xla:debug_options_parsers_test PASSED in 0.2s //tensorflow/compiler/xla:index_util_test PASSED in 0.6s //tensorflow/compiler/xla:iterator_util_test PASSED in 1.3s //tensorflow/compiler/xla:layout_test PASSED in 0.1s //tensorflow/compiler/xla:layout_util_test PASSED in 8.2s //tensorflow/compiler/xla:literal_test PASSED in 0.3s //tensorflow/compiler/xla:parse_flags_from_env_test PASSED in 0.5s //tensorflow/compiler/xla:permutation_util_test PASSED in 0.1s //tensorflow/compiler/xla:primitive_util_test PASSED in 0.2s //tensorflow/compiler/xla:refcounting_hash_map_test PASSED in 0.3s //tensorflow/compiler/xla:reference_util_test PASSED in 0.3s //tensorflow/compiler/xla:shape_test PASSED in 0.3s //tensorflow/compiler/xla:shape_tree_test PASSED in 0.1s //tensorflow/compiler/xla:shape_util_test PASSED in 1.9s //tensorflow/compiler/xla:status_macros_test PASSED in 0.2s //tensorflow/compiler/xla:text_literal_reader_test PASSED in 0.3s //tensorflow/compiler/xla:text_literal_writer_test PASSED in 0.2s //tensorflow/compiler/xla:types_test PASSED in 0.4s //tensorflow/compiler/xla:util_test PASSED in 0.2s //tensorflow/compiler/xla:window_util_test PASSED in 0.1s //tensorflow/compiler/xla/client:padding_test PASSED in 0.2s //tensorflow/compiler/xla/client:xla_builder_test PASSED in 0.3s //tensorflow/compiler/xla/client/lib:arithmetic_test_cpu PASSED in 9.5s //tensorflow/compiler/xla/client/lib:comparators_test_cpu PASSED in 9.2s //tensorflow/compiler/xla/client/lib:constants_test_cpu PASSED in 7.0s //tensorflow/compiler/xla/client/lib:logdet_test_cpu PASSED in 9.6s //tensorflow/compiler/xla/client/lib:math_test_cpu PASSED in 13.9s //tensorflow/compiler/xla/client/lib:matrix_test_cpu PASSED in 10.8s //tensorflow/compiler/xla/client/lib:pooling_test_cpu PASSED in 9.5s //tensorflow/compiler/xla/client/lib:qr_test_cpu PASSED in 10.4s //tensorflow/compiler/xla/client/lib:slicing_test_cpu PASSED in 8.2s //tensorflow/compiler/xla/client/lib:sorting_test_cpu PASSED in 8.7s //tensorflow/compiler/xla/examples/axpy:stablehlo_compile_test PASSED in 8.0s //tensorflow/compiler/xla/experimental/conv_emitter:conv_emitter_test PASSED in 3.1s //tensorflow/compiler/xla/hlo/evaluator:hlo_evaluator_test PASSED in 6.0s //tensorflow/compiler/xla/hlo/transforms:hlo_constant_splitter_test PASSED in 1.2s //tensorflow/compiler/xla/hlo/utils:hlo_live_range_test PASSED in 1.4s //tensorflow/compiler/xla/hlo/utils:hlo_matchers_test PASSED in 1.0s //tensorflow/compiler/xla/hlo/utils:hlo_sharding_util_test PASSED in 0.2s //tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:collective_ops.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:collective_ops_to_cpu_runtime.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:fft.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:legalize_i1_vector_transfers.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:lmhlo_custom_call.mlir.test PASSED in 0.9s 
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:lmhlo_infeed.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:remove_copies_to_out_params.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:rng_bit_generator.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:xla_abi_legalization.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:xla_cpu_memref_element_cast_to_llvm.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:xla_cpu_outfeed.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:add_hlo_trace.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:gpu_launch.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:gpu_memcpy.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:gpu_memset.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_case.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_custom_call.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_fft.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_gpu_cholesky.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_gpu_conv.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_gpu_cublas_lt_matmul.mlir.test PASSED in 1.0s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_gpu_gemm.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_infeed.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_outfeed.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_send_recv.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_while.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:memref_get_global_to_arg.mlir.test PASSED in 6.0s //tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:outline_cuda_graphs.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir/framework/tests:legalize-xla-framework.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/framework/tests:outline-with-xla-framework.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir/framework/tests:xla-framework.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir/math/transforms/tests:math_optimization.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir/memref/transforms/tests:aligned_allocations.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir/runtime/ir/tests:ops.mlir.test PASSED in 1.1s //tensorflow/compiler/xla/mlir/runtime/ir/tests:ops_verify.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir/runtime/ir/tests:testlib.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/runtime/transforms:calling_convention_test PASSED in 0.4s //tensorflow/compiler/xla/mlir/runtime/transforms:type_converter_test PASSED in 0.5s //tensorflow/compiler/xla/mlir/runtime/transforms/tests:compilation_pipeline.mlir.test PASSED in 1.0s //tensorflow/compiler/xla/mlir/runtime/transforms/tests:convert_asserts.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir/runtime/transforms/tests:convert_custom_calls.mlir.test PASSED in 1.5s
//tensorflow/compiler/xla/mlir/runtime/transforms/tests:export_functions.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir/runtime/transforms/tests:ordinal_assignment.mlir.test PASSED in 1.1s //tensorflow/compiler/xla/mlir/runtime/transforms/tests:rt_to_llvm.mlir.test PASSED in 1.0s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:erase-op-without-results.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:inline-scf-while.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:reduce-scf-forall-bounds.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:replace-op-with-constant.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:replace-op-with-value.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:replace-operand-with-constant.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:return-operands-of-terminator-operands.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:truncate-function.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/tests:bisect.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/tests:no-bug.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir/tools/mlir_bisect/tests:snapshot.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir/tools/mlir_replay/public:execution_trace_utils_test PASSED in 1.3s //tensorflow/compiler/xla/mlir/utils:error_util_test PASSED in 0.2s //tensorflow/compiler/xla/mlir/xla_cpu/tests:bufferize.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir/xla_cpu/tests:invalid.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir/xla_cpu/tests:ops.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/bufferization/hlo_one_shot_bufferize.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/chlo/chlo_legalize_to_hlo_broadcasts.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/chlo/chlo_legalize_to_hlo_no_broadcasts.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/chlo/chlo_legalize_to_mhlo.mlir.test PASSED in 1.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/chlo/sparse_chlo_legalize_to_linalg.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/buffer_reuse.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/convert_deallocation_ops_to_llvm.mlir.test PASSED in 1.2s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/deallocate.mlir.test PASSED in 1.2s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/deallocate_invalid.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/deallocation_ops.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/deallocation_simplification.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/deallocation_to_scf.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/split_alloc_tensors.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/add_debug_info.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/bufferization.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/collapse-shape.mlir.test PASSED in 1.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/collect_stats.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/compose_extract_insert_slice.mlir.test PASSED in 1.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/conv_2d_nhwc_hwcf.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/dot.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/duplicate_fusions.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/fibonacci.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/fusion_outlining.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/fusion_planning_for_cpu.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/inline_fusion_clusters.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/map_bcast_map.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/map_matmul.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/map_reduce.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/map_reduce_map.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/map_reshape_map.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/matmul.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/reduce_1d.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/reduce_2d.mlir.test PASSED in 1.8s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/reduce_window.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/reverse.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/scatter.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/sort.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/transpose.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/greedy_fusion.mlir.test PASSED in 1.3s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/invalid.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/lower_vectors.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/nested_tiling_softmax.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/ops.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/rewrite_forall_to_for.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/simplify_dead_copy.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/tile_by_one.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/tiling_softmax.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/vectorize_copy.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/vectorize_for_cpu.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/lhlo-legalize-select-and-scatter.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/lhlo-legalize-to-affine.mlir.test PASSED in 1.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/lhlo-legalize-to-gpu.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/lhlo-legalize-to-parallel-loops.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/lhlo-legalize-to-tensor-op.mlir.test PASSED in 1.0s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/ops.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo_gpu/lhlo_gpu_ops.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/attrs.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/broadcast_propagation.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/bitcast.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/canonicalize.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/concatenate.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/convert.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/convolution.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/custom_call.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/folder_limit.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/reduce.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/reshape.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/reverse.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/scatter.mlir.test PASSED in 1.0s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/transpose.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/tuple.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/while.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/constraint_fusion.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/convert_to_signless.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/expand_hlo_tuples.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/expand_ops_simplifier.mlir.test PASSED in 0.3s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/group_reduction_dimensions.mlir.test PASSED in 1.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-collapse-elementwise-map.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-einsum-to-dot-general.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-gather-to-torch-index-select.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-rng-to-linalg.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-shape-ops-to-standard.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-sort.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-arithmetic.mlir.test PASSED in 0.5s 
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-lhlo-only-dynamic.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-lhlo-unranked.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-lhlo.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-linalg.mlir.test PASSED in 2.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-memref-unranked.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-memref.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-stablehlo-experimental.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-stablehlo.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/inlining.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/invalid.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/legalize-control-flow.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/legalize-hlo-shape-computations.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/legalize-mhlo-to-thlo.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/legalize-to-std.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/lower-complex.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/lower-general-dot.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/materialize-broadcasts.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/merge_assuming_ops.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_bytecode_customizations.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_canonicalize_dot.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_canonicalize_gather.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_canonicalize_reduction.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_canonicalize_scatter.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_flatten_tuple.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_infer_shape_type_methods.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_ops_prettyprint.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_reduce_pretty_print.mlir.test PASSED in 6.2s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/ops.mlir.test PASSED in 1.1s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/optimize-hlo.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/prepare-for-export.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/reify-result-types.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/restrict_max_rank.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/shape_legalize_to_hlo.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/shape_reification.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sink-constants-to-control-flow.mlir.test PASSED in 6.0s 
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sparse_gendot_lower.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sparse_lower.mlir.test PASSED in 1.1s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sparse_ops.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sparse_rewriting.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sparse_transpose.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/stablehlo-legalize-to-hlo.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/symbolic-shape-optimization.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/unfuse_batch_norm.mlir.test PASSED in 1.0s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_bounds.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_conv_op.mlir.test PASSED in 1.2s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_reduce_op.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_reduce_window_op.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_scatter_op.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_select_and_scatter_op.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_while_op.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/while_prettyprint.mlir.test PASSED in 1.0s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/bufferize.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/canonicalize.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/invalid.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/legalize_sort.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/ops.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/tiling.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:alloc_to_arg.mlir.test PASSED in 1.4s //tensorflow/compiler/xla/mlir_hlo/tests:assuming-structural-propagation.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:buffer_packing.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:bufferize.mlir.test PASSED in 1.0s //tensorflow/compiler/xla/mlir_hlo/tests:bufferize_one_shot.mlir.test PASSED in 1.2s //tensorflow/compiler/xla/mlir_hlo/tests:collapse_parallel_loops_to_1d_pass.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:detensorize_scf_ops.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:index_type_llvm_lowering.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:legalize-trigonometric-to-approximation.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tests:lower_index_cast.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:propagate_static_shapes.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:rank-specialization.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:scalarization.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/mlir_hlo/tests:shape-component-analysis.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:shape_simplification.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/mlir_hlo/tests:test_userange.mlir.test PASSED in 0.8s 
//tensorflow/compiler/xla/mlir_hlo/tests:tile_loops.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tests:unbufferize.mlir.test PASSED in 1.7s //tensorflow/compiler/xla/mlir_hlo/tests:unroll-loops.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tools/mlir_interpreter/framework/tests:interpreter_value_test PASSED in 0.1s //tensorflow/compiler/xla/mlir_hlo/tools/mlir_interpreter/framework/tests:tensor_or_memref_test PASSED in 0.1s //tensorflow/compiler/xla/mlir_hlo/tosa/tests:binary.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/mlir_hlo/tosa/tests:nullary.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/mlir_hlo/tosa/tests:prepare-mhlo.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/mlir_hlo/tosa/tests:ternary.mlir.test PASSED in 1.2s //tensorflow/compiler/xla/mlir_hlo/tosa/tests:unary.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/pjrt:host_callback_test PASSED in 0.2s //tensorflow/compiler/xla/pjrt:lru_cache_test PASSED in 0.1s //tensorflow/compiler/xla/pjrt:pjrt_api_test PASSED in 0.6s //tensorflow/compiler/xla/pjrt:pjrt_client_test_cpu PASSED in 7.4s //tensorflow/compiler/xla/pjrt:pjrt_compiler_test PASSED in 0.3s //tensorflow/compiler/xla/pjrt:pjrt_executable_test PASSED in 0.1s //tensorflow/compiler/xla/pjrt:pjrt_stream_executor_client_test PASSED in 8.4s //tensorflow/compiler/xla/pjrt:semaphore_test PASSED in 0.2s //tensorflow/compiler/xla/pjrt:tf_pjrt_client_test PASSED in 5.4s //tensorflow/compiler/xla/pjrt:tfrt_cpu_pjrt_client_test PASSED in 6.6s //tensorflow/compiler/xla/pjrt:tracked_device_buffer_test PASSED in 6.9s //tensorflow/compiler/xla/pjrt:tracked_tfrt_cpu_device_buffer_test PASSED in 0.3s //tensorflow/compiler/xla/pjrt:transpose_test PASSED in 52.6s //tensorflow/compiler/xla/pjrt/c:pjrt_c_api_cpu_test PASSED in 10.8s //tensorflow/compiler/xla/pjrt/c:pjrt_c_api_helpers_test PASSED in 0.3s //tensorflow/compiler/xla/pjrt/distributed:client_server_test PASSED in 41.4s //tensorflow/compiler/xla/pjrt/distributed:service_test PASSED in 7.0s //tensorflow/compiler/xla/pjrt/gpu:se_gpu_pjrt_client_test PASSED in 1.9s //tensorflow/compiler/xla/python:outfeed_receiver_test_cpu PASSED in 8.7s //tensorflow/compiler/xla/python/ifrt:array_test PASSED in 0.2s //tensorflow/compiler/xla/python/ifrt:array_test_no_impl PASSED in 0.2s //tensorflow/compiler/xla/python/ifrt:client_test_no_impl PASSED in 0.3s //tensorflow/compiler/xla/python/ifrt:executable_test_no_impl PASSED in 1.0s //tensorflow/compiler/xla/python/ifrt:future_test PASSED in 0.5s //tensorflow/compiler/xla/python/ifrt:index_domain_test PASSED in 0.5s //tensorflow/compiler/xla/python/ifrt:index_test PASSED in 0.5s //tensorflow/compiler/xla/python/ifrt:shape_test PASSED in 0.3s //tensorflow/compiler/xla/python/ifrt:sharding_test PASSED in 0.2s //tensorflow/compiler/xla/python/ifrt:tuple_test_no_impl PASSED in 0.4s //tensorflow/compiler/xla/python/pjrt_ifrt:pjrt_array_impl_test_tfrt_cpu PASSED in 13.9s //tensorflow/compiler/xla/python/pjrt_ifrt:pjrt_client_impl_test_tfrt_cpu PASSED in 5.0s //tensorflow/compiler/xla/python/pjrt_ifrt:pjrt_executable_impl_test_tfrt_cpu PASSED in 7.1s //tensorflow/compiler/xla/python/pjrt_ifrt:pjrt_tuple_impl_test_tfrt_cpu PASSED in 7.7s //tensorflow/compiler/xla/python_api:xla_literal_test PASSED in 0.9s //tensorflow/compiler/xla/python_api:xla_shape_test PASSED in 0.8s //tensorflow/compiler/xla/rpc:grpc_client_test PASSED in 2.5s //tensorflow/compiler/xla/runtime:arguments_test PASSED in 0.2s //tensorflow/compiler/xla/runtime:async_runtime_test PASSED in 0.4s 
//tensorflow/compiler/xla/runtime:custom_call_test PASSED in 1.5s //tensorflow/compiler/xla/runtime:diagnostics_test PASSED in 0.1s //tensorflow/compiler/xla/runtime:executable_test PASSED in 2.2s //tensorflow/compiler/xla/runtime:ffi_test PASSED in 1.1s //tensorflow/compiler/xla/runtime:map_by_type_test PASSED in 0.5s //tensorflow/compiler/xla/runtime:module_test PASSED in 0.2s //tensorflow/compiler/xla/runtime:results_test PASSED in 0.3s //tensorflow/compiler/xla/runtime:state_test PASSED in 0.1s //tensorflow/compiler/xla/runtime:symbolic_shape_test PASSED in 0.8s //tensorflow/compiler/xla/runtime:type_id_test PASSED in 0.1s //tensorflow/compiler/xla/service:algebraic_simplifier_overflow_test_cpu PASSED in 11.1s //tensorflow/compiler/xla/service:algebraic_simplifier_test PASSED in 4.1s //tensorflow/compiler/xla/service:all_gather_broadcast_reorder_test PASSED in 1.4s //tensorflow/compiler/xla/service:all_gather_combiner_test PASSED in 1.2s //tensorflow/compiler/xla/service:all_gather_decomposer_test PASSED in 5.9s //tensorflow/compiler/xla/service:all_reduce_combiner_test PASSED in 0.9s //tensorflow/compiler/xla/service:all_reduce_contiguous_test PASSED in 1.4s //tensorflow/compiler/xla/service:all_reduce_folder_test PASSED in 1.5s //tensorflow/compiler/xla/service:all_reduce_promotion_test PASSED in 1.6s //tensorflow/compiler/xla/service:all_reduce_reassociate_test PASSED in 1.6s //tensorflow/compiler/xla/service:all_reduce_simplifier_test PASSED in 1.3s //tensorflow/compiler/xla/service:ar_crs_combiner_test PASSED in 2.1s //tensorflow/compiler/xla/service:async_collective_creator_test PASSED in 1.6s //tensorflow/compiler/xla/service:async_op_canonicalizer_test PASSED in 1.4s //tensorflow/compiler/xla/service:batch_dot_simplification_test PASSED in 0.8s //tensorflow/compiler/xla/service:batchnorm_expander_test_cpu PASSED in 8.9s //tensorflow/compiler/xla/service:bfloat16_conversion_folding_test PASSED in 0.9s //tensorflow/compiler/xla/service:bfloat16_propagation_test PASSED in 1.3s //tensorflow/compiler/xla/service:bitcast_dtypes_expander_test PASSED in 1.0s //tensorflow/compiler/xla/service:broadcast_canonicalizer_test PASSED in 1.2s //tensorflow/compiler/xla/service:buffer_assignment_test PASSED in 12.0s //tensorflow/compiler/xla/service:call_graph_test PASSED in 1.1s //tensorflow/compiler/xla/service:call_inliner_test PASSED in 1.3s //tensorflow/compiler/xla/service:change_op_data_type_test PASSED in 1.5s //tensorflow/compiler/xla/service:collective_ops_utils_test PASSED in 0.3s //tensorflow/compiler/xla/service:collectives_schedule_linearizer_test PASSED in 1.5s //tensorflow/compiler/xla/service:compilation_environments_test PASSED in 0.2s //tensorflow/compiler/xla/service:conditional_canonicalizer_test PASSED in 1.0s //tensorflow/compiler/xla/service:conditional_code_motion_test PASSED in 1.5s //tensorflow/compiler/xla/service:conditional_simplifier_test PASSED in 0.9s //tensorflow/compiler/xla/service:conditional_to_select_test PASSED in 1.2s //tensorflow/compiler/xla/service:convert_async_collectives_to_sync_test PASSED in 1.1s //tensorflow/compiler/xla/service:convert_mover_test PASSED in 1.0s //tensorflow/compiler/xla/service:convert_operand_folding_test PASSED in 1.3s //tensorflow/compiler/xla/service:convolution_4d_expander_test PASSED in 1.5s //tensorflow/compiler/xla/service:convolution_group_converter_test PASSED in 0.9s //tensorflow/compiler/xla/service:convolution_pred_expander_test PASSED in 0.9s //tensorflow/compiler/xla/service:copy_insertion_test PASSED in 2.4s 
//tensorflow/compiler/xla/service:custom_call_status_test PASSED in 0.5s //tensorflow/compiler/xla/service:defuser_test PASSED in 0.9s //tensorflow/compiler/xla/service:despecializer_test PASSED in 1.7s //tensorflow/compiler/xla/service:dfs_hlo_visitor_with_default_test PASSED in 1.1s //tensorflow/compiler/xla/service:dot_decomposer_test PASSED in 0.9s //tensorflow/compiler/xla/service:dot_merger_test PASSED in 1.6s //tensorflow/compiler/xla/service:dynamic_dimension_inference_test PASSED in 2.6s //tensorflow/compiler/xla/service:dynamic_dimension_simplifier_test PASSED in 2.5s //tensorflow/compiler/xla/service:dynamic_index_splitter_test PASSED in 1.5s //tensorflow/compiler/xla/service:dynamic_padder_test_cpu PASSED in 17.1s //tensorflow/compiler/xla/service:dynamic_parameter_binding_test PASSED in 1.1s //tensorflow/compiler/xla/service:dynamic_update_slice_test_cpu PASSED in 16.8s //tensorflow/compiler/xla/service:elemental_ir_emitter_test_cpu PASSED in 23.2s //tensorflow/compiler/xla/service:flatten_call_graph_test PASSED in 0.8s //tensorflow/compiler/xla/service:float_normalization_test PASSED in 1.2s //tensorflow/compiler/xla/service:fusion_node_indexing_evaluation_test PASSED in 0.9s //tensorflow/compiler/xla/service:gather_expander_test PASSED in 1.6s //tensorflow/compiler/xla/service:gather_simplifier_test PASSED in 1.0s //tensorflow/compiler/xla/service:heap_simulator_test PASSED in 1.6s //tensorflow/compiler/xla/service:hlo_alias_analysis_test PASSED in 1.0s //tensorflow/compiler/xla/service:hlo_casting_utils_test PASSED in 7.4s //tensorflow/compiler/xla/service:hlo_computation_deduplicator_test PASSED in 1.0s //tensorflow/compiler/xla/service:hlo_computation_test PASSED in 2.6s //tensorflow/compiler/xla/service:hlo_constant_folding_test PASSED in 6.6s //tensorflow/compiler/xla/service:hlo_cost_analysis_test PASSED in 8.2s //tensorflow/compiler/xla/service:hlo_creation_utils_test PASSED in 3.3s //tensorflow/compiler/xla/service:hlo_cse_test PASSED in 9.3s //tensorflow/compiler/xla/service:hlo_dataflow_analysis_test PASSED in 1.3s //tensorflow/compiler/xla/service:hlo_dce_test PASSED in 1.1s //tensorflow/compiler/xla/service:hlo_domain_test PASSED in 1.8s //tensorflow/compiler/xla/service:hlo_element_type_converter_test PASSED in 0.9s //tensorflow/compiler/xla/service:hlo_execution_profile_test PASSED in 7.6s //tensorflow/compiler/xla/service:hlo_graph_dumper_test PASSED in 1.1s //tensorflow/compiler/xla/service:hlo_input_output_alias_config_test PASSED in 0.9s //tensorflow/compiler/xla/service:hlo_instruction_test PASSED in 1.8s //tensorflow/compiler/xla/service:hlo_liveness_analysis_test PASSED in 1.1s //tensorflow/compiler/xla/service:hlo_memory_scheduler_test PASSED in 1.2s //tensorflow/compiler/xla/service:hlo_module_dce_test PASSED in 0.9s //tensorflow/compiler/xla/service:hlo_module_metadata_test PASSED in 0.2s //tensorflow/compiler/xla/service:hlo_module_test PASSED in 1.0s //tensorflow/compiler/xla/service:hlo_opcode_test PASSED in 0.2s //tensorflow/compiler/xla/service:hlo_ordering_test PASSED in 1.7s //tensorflow/compiler/xla/service:hlo_parser_test PASSED in 0.4s //tensorflow/compiler/xla/service:hlo_pass_pipeline_test PASSED in 1.0s //tensorflow/compiler/xla/service:hlo_phi_graph_test PASSED in 0.7s //tensorflow/compiler/xla/service:hlo_proto_util_test PASSED in 0.9s //tensorflow/compiler/xla/service:hlo_reachability_test PASSED in 1.1s //tensorflow/compiler/xla/service:hlo_rematerialization_test PASSED in 1.5s 
//tensorflow/compiler/xla/service:hlo_rematerialization_test_utils_test PASSED in 1.1s //tensorflow/compiler/xla/service:hlo_replication_analysis_test PASSED in 0.9s //tensorflow/compiler/xla/service:hlo_schedule_test PASSED in 1.2s //tensorflow/compiler/xla/service:hlo_sharding_test PASSED in 0.9s //tensorflow/compiler/xla/service:hlo_value_semantics_analysis_test PASSED in 1.2s //tensorflow/compiler/xla/service:hlo_verifier_test PASSED in 1.9s //tensorflow/compiler/xla/service:indexed_array_analysis_test PASSED in 14.6s //tensorflow/compiler/xla/service:instruction_fusion_test PASSED in 1.7s //tensorflow/compiler/xla/service:latency_hiding_scheduler_test PASSED in 1.5s //tensorflow/compiler/xla/service:layout_assignment_test PASSED in 10.2s //tensorflow/compiler/xla/service:layout_normalization_test PASSED in 2.5s //tensorflow/compiler/xla/service:logistic_expander_test PASSED in 0.8s //tensorflow/compiler/xla/service:loop_schedule_linearizer_test PASSED in 1.1s //tensorflow/compiler/xla/service:map_inliner_test PASSED in 1.2s //tensorflow/compiler/xla/service:mapped_ptr_container_sorter_test PASSED in 0.1s //tensorflow/compiler/xla/service:memory_space_assignment_best_fit_repacker_test PASSED in 0.3s //tensorflow/compiler/xla/service:memory_space_assignment_test PASSED in 6.7s //tensorflow/compiler/xla/service:memory_space_propagation_test PASSED in 1.3s //tensorflow/compiler/xla/service:name_uniquer_test PASSED in 0.1s //tensorflow/compiler/xla/service:operand_upcaster_test PASSED in 1.7s //tensorflow/compiler/xla/service:optimize_input_output_buffer_alias_test PASSED in 0.9s //tensorflow/compiler/xla/service:pattern_matcher_gmock_test PASSED in 0.6s //tensorflow/compiler/xla/service:pattern_matcher_test PASSED in 0.8s //tensorflow/compiler/xla/service:profile_guided_latency_estimator_test PASSED in 1.0s //tensorflow/compiler/xla/service:real_imag_expander_test PASSED in 1.8s //tensorflow/compiler/xla/service:reduce_decomposer_test PASSED in 1.2s //tensorflow/compiler/xla/service:reduce_scatter_combiner_test PASSED in 1.2s //tensorflow/compiler/xla/service:reduce_scatter_decomposer_test PASSED in 1.4s //tensorflow/compiler/xla/service:reduce_scatter_reassociate_test PASSED in 1.3s //tensorflow/compiler/xla/service:reshape_decomposer_test PASSED in 1.2s //tensorflow/compiler/xla/service:reshape_mover_test PASSED in 0.8s //tensorflow/compiler/xla/service:result_caster_test PASSED in 1.1s //tensorflow/compiler/xla/service:root_instruction_sinker_test PASSED in 1.0s //tensorflow/compiler/xla/service:scatter_expander_test PASSED in 1.0s //tensorflow/compiler/xla/service:scatter_simplifier_test PASSED in 1.3s //tensorflow/compiler/xla/service:select_and_scatter_expander_test PASSED in 1.7s //tensorflow/compiler/xla/service:shape_inference_test PASSED in 0.2s //tensorflow/compiler/xla/service:shaped_buffer_test PASSED in 7.3s //tensorflow/compiler/xla/service:sharding_propagation_test PASSED in 2.9s //tensorflow/compiler/xla/service:sharding_remover_test PASSED in 1.5s //tensorflow/compiler/xla/service:simplify_fp_conversions_test PASSED in 1.1s //tensorflow/compiler/xla/service:slice_sinker_test PASSED in 0.8s //tensorflow/compiler/xla/service:sort_simplifier_test PASSED in 1.0s //tensorflow/compiler/xla/service:space_to_batch_converter_test PASSED in 1.7s //tensorflow/compiler/xla/service:stable_sort_expander_test PASSED in 1.2s //tensorflow/compiler/xla/service:stochastic_convert_decomposer_test PASSED in 0.8s //tensorflow/compiler/xla/service:stream_pool_test PASSED in 0.2s 
//tensorflow/compiler/xla/service:topk_rewriter_test PASSED in 4.0s //tensorflow/compiler/xla/service:transpose_folding_test PASSED in 1.4s //tensorflow/compiler/xla/service:tuple_points_to_analysis_test PASSED in 0.7s //tensorflow/compiler/xla/service:tuple_simplifier_test PASSED in 1.6s //tensorflow/compiler/xla/service:tuple_util_test PASSED in 1.1s //tensorflow/compiler/xla/service:while_loop_all_reduce_code_motion_test PASSED in 1.4s //tensorflow/compiler/xla/service:while_loop_analysis_test PASSED in 0.9s //tensorflow/compiler/xla/service:while_loop_concat_code_motion_test PASSED in 1.4s //tensorflow/compiler/xla/service:while_loop_constant_sinking_test PASSED in 1.1s //tensorflow/compiler/xla/service:while_loop_expensive_invariant_code_motion_test PASSED in 1.4s //tensorflow/compiler/xla/service:while_loop_invariant_code_motion_test PASSED in 2.4s //tensorflow/compiler/xla/service:while_loop_simplifier_test PASSED in 1.0s //tensorflow/compiler/xla/service:while_loop_trip_count_annotator_test PASSED in 1.3s //tensorflow/compiler/xla/service:while_util_test PASSED in 1.1s //tensorflow/compiler/xla/service:xla_aot_compile_stablehlo_cpu_test PASSED in 8.5s //tensorflow/compiler/xla/service:xla_debug_info_manager_test PASSED in 1.3s //tensorflow/compiler/xla/service:zero_sized_hlo_elimination_test PASSED in 1.2s //tensorflow/compiler/xla/service/cpu:conv_canonicalization_test PASSED in 1.4s //tensorflow/compiler/xla/service/cpu:cpu_eigen_tensor_alignment_test PASSED in 1.4s //tensorflow/compiler/xla/service/cpu:cpu_instruction_fusion_test PASSED in 1.5s //tensorflow/compiler/xla/service/cpu:cpu_layout_assignment_test PASSED in 2.2s //tensorflow/compiler/xla/service/cpu:ir_emission_utils_test PASSED in 1.1s //tensorflow/compiler/xla/service/cpu:parallel_task_assignment_test PASSED in 3.0s //tensorflow/compiler/xla/service/cpu:runtime_fft_test PASSED in 0.2s //tensorflow/compiler/xla/service/cpu:shape_partition_test PASSED in 2.6s //tensorflow/compiler/xla/service/cpu:xfeed_manager_test PASSED in 0.7s //tensorflow/compiler/xla/service/cpu/tests:cpu_bytesizeof_test PASSED in 0.5s //tensorflow/compiler/xla/service/cpu/tests:cpu_dyn_shape_test PASSED in 10.3s //tensorflow/compiler/xla/service/cpu/tests:cpu_eigen_dot_operation_test PASSED in 10.5s //tensorflow/compiler/xla/service/cpu/tests:cpu_external_constants_test PASSED in 25.0s //tensorflow/compiler/xla/service/cpu/tests:cpu_fusion_test PASSED in 8.1s //tensorflow/compiler/xla/service/cpu/tests:cpu_infeed_test PASSED in 7.1s //tensorflow/compiler/xla/service/cpu/tests:cpu_intrinsic_test PASSED in 9.6s //tensorflow/compiler/xla/service/cpu/tests:cpu_key_value_sort_test PASSED in 6.6s //tensorflow/compiler/xla/service/cpu/tests:cpu_literal_caching_test PASSED in 9.2s //tensorflow/compiler/xla/service/cpu/tests:cpu_noalias_test PASSED in 8.6s //tensorflow/compiler/xla/service/cpu/tests:cpu_outfeed_test PASSED in 10.6s //tensorflow/compiler/xla/service/cpu/tests:cpu_profiling_test PASSED in 10.6s //tensorflow/compiler/xla/service/cpu/tests:cpu_spmd_compile_test PASSED in 6.2s //tensorflow/compiler/xla/service/cpu/tests:cpu_topk_test PASSED in 8.1s //tensorflow/compiler/xla/service/cpu/tests:cpu_vectorization_test PASSED in 9.0s //tensorflow/compiler/xla/service/cpu/tests:cpu_while_test PASSED in 8.2s //tensorflow/compiler/xla/service/cpu/tests:tree_reduction_rewriter_test PASSED in 9.0s //tensorflow/compiler/xla/service/gpu:alias_passthrough_params_test PASSED in 2.0s //tensorflow/compiler/xla/service/gpu:all_reduce_blueconnect_test PASSED in 0.9s
//tensorflow/compiler/xla/service/gpu:cublas_pad_for_gemms_test PASSED in 1.6s //tensorflow/compiler/xla/service/gpu:cudnn_pad_for_convolutions_test PASSED in 1.7s //tensorflow/compiler/xla/service/gpu:cudnn_simplify_padding_test PASSED in 1.7s //tensorflow/compiler/xla/service/gpu:cudnn_support_utils_test PASSED in 1.0s //tensorflow/compiler/xla/service/gpu:cudnn_vectorize_convolutions_test PASSED in 2.9s //tensorflow/compiler/xla/service/gpu:fusion_merger_test PASSED in 2.9s //tensorflow/compiler/xla/service/gpu:gemm_rewriter_triton_test PASSED in 1.6s //tensorflow/compiler/xla/service/gpu:gpu_conv_padding_legalization_test PASSED in 0.9s //tensorflow/compiler/xla/service/gpu:gpu_conv_rewriter_test PASSED in 1.1s //tensorflow/compiler/xla/service/gpu:gpu_fusible_test PASSED in 2.1s //tensorflow/compiler/xla/service/gpu:gpu_hlo_cost_analysis_test PASSED in 2.9s //tensorflow/compiler/xla/service/gpu:gpu_performance_model_test PASSED in 1.5s //tensorflow/compiler/xla/service/gpu:gpu_sanitize_constant_names_test PASSED in 2.8s //tensorflow/compiler/xla/service/gpu:hlo_algorithm_denylist_test PASSED in 0.2s //tensorflow/compiler/xla/service/gpu:hlo_fusion_stats_test PASSED in 0.8s //tensorflow/compiler/xla/service/gpu:instruction_fusion_test PASSED in 2.2s //tensorflow/compiler/xla/service/gpu:ir_emission_utils_test PASSED in 2.2s //tensorflow/compiler/xla/service/gpu:matmul_utils_test PASSED in 0.7s //tensorflow/compiler/xla/service/gpu:move_copy_to_users_test PASSED in 2.8s //tensorflow/compiler/xla/service/gpu:multi_output_fusion_test PASSED in 3.2s //tensorflow/compiler/xla/service/gpu:non_atomically_upgradeable_rw_lock_test PASSED in 0.7s //tensorflow/compiler/xla/service/gpu:reduction_splitter_test PASSED in 2.1s //tensorflow/compiler/xla/service/gpu:scatter_slice_simplifier_test PASSED in 1.2s //tensorflow/compiler/xla/service/gpu:target_util_test PASSED in 0.6s //tensorflow/compiler/xla/service/gpu:variadic_op_splitter_test PASSED in 2.0s //tensorflow/compiler/xla/service/gpu:while_transformer_test PASSED in 2.1s //tensorflow/compiler/xla/service/gpu/llvm_gpu_backend:utils_test PASSED in 0.3s //tensorflow/compiler/xla/service/gpu/tests:gpu_reduce_scatter_creator_test PASSED in 1.4s //tensorflow/compiler/xla/service/gpu/tests:reduction_degenerate_dim_remover_test PASSED in 2.1s //tensorflow/compiler/xla/service/gpu/tests:reduction_dimension_grouper_test PASSED in 1.4s //tensorflow/compiler/xla/service/gpu/tests:tree_reduction_rewriter_test PASSED in 1.9s //tensorflow/compiler/xla/service/graphcycles:graphcycles_test PASSED in 0.9s //tensorflow/compiler/xla/service/graphcycles:ordered_set_test PASSED in 0.2s //tensorflow/compiler/xla/service/llvm_ir:alias_analysis_test PASSED in 7.9s //tensorflow/compiler/xla/service/llvm_ir:ir_array_test PASSED in 0.8s //tensorflow/compiler/xla/service/spmd:canonicalize_all_gather_for_cse_test PASSED in 1.4s //tensorflow/compiler/xla/service/spmd:collective_permute_motion_test PASSED in 1.2s //tensorflow/compiler/xla/service/spmd:partition_assignment_test PASSED in 1.1s //tensorflow/compiler/xla/service/spmd:schedule_aware_collective_ops_cse_test PASSED in 1.3s //tensorflow/compiler/xla/service/spmd:spmd_partitioner_test PASSED in 2.5s //tensorflow/compiler/xla/service/spmd:stateful_rng_spmd_partitioner_test PASSED in 1.0s //tensorflow/compiler/xla/stream_executor:dnn_test PASSED in 0.3s //tensorflow/compiler/xla/stream_executor:stream_test PASSED in 0.2s //tensorflow/compiler/xla/stream_executor/host:host_stream_test PASSED in 0.3s
//tensorflow/compiler/xla/tests:all_reduce_test_cpu PASSED in 8.7s //tensorflow/compiler/xla/tests:axpy_simple_test_cpu PASSED in 7.4s //tensorflow/compiler/xla/tests:bad_rng_shape_validation_test_cpu PASSED in 8.9s //tensorflow/compiler/xla/tests:binop_scaling_test_cpu PASSED in 7.5s //tensorflow/compiler/xla/tests:bitcast_convert_test_cpu PASSED in 8.2s //tensorflow/compiler/xla/tests:broadcast_simple_test_cpu PASSED in 9.2s //tensorflow/compiler/xla/tests:broadcast_test_cpu PASSED in 9.1s //tensorflow/compiler/xla/tests:buffer_donation_test_cpu PASSED in 11.6s //tensorflow/compiler/xla/tests:call_test_cpu PASSED in 7.3s //tensorflow/compiler/xla/tests:check_execution_arity_test_cpu PASSED in 7.2s //tensorflow/compiler/xla/tests:cholesky_test_cpu PASSED in 16.1s //tensorflow/compiler/xla/tests:client_test_cpu PASSED in 8.3s //tensorflow/compiler/xla/tests:collective_ops_test_cpu PASSED in 49.7s //tensorflow/compiler/xla/tests:compilation_cache_test_cpu PASSED in 6.6s //tensorflow/compiler/xla/tests:compute_constant_test_cpu PASSED in 8.4s //tensorflow/compiler/xla/tests:concat_test_cpu PASSED in 9.0s //tensorflow/compiler/xla/tests:constant_reduction_function_test_cpu PASSED in 8.1s //tensorflow/compiler/xla/tests:constants_test_cpu PASSED in 9.9s //tensorflow/compiler/xla/tests:convert_test_cpu PASSED in 10.2s //tensorflow/compiler/xla/tests:copy_test_cpu PASSED in 9.5s //tensorflow/compiler/xla/tests:cpu_gpu_fusion_test_cpu PASSED in 12.9s //tensorflow/compiler/xla/tests:custom_call_test_cpu PASSED in 7.7s //tensorflow/compiler/xla/tests:deallocation_test_cpu PASSED in 6.8s //tensorflow/compiler/xla/tests:deconstruct_tuple_test_cpu PASSED in 9.1s //tensorflow/compiler/xla/tests:deep_graph_test_cpu PASSED in 6.8s //tensorflow/compiler/xla/tests:execution_profile_test_cpu PASSED in 7.2s //tensorflow/compiler/xla/tests:fft_test_cpu PASSED in 7.6s //tensorflow/compiler/xla/tests:float8_test_cpu PASSED in 8.2s //tensorflow/compiler/xla/tests:floor_ceil_test_cpu PASSED in 9.8s //tensorflow/compiler/xla/tests:fmax_fmin_test_cpu PASSED in 6.4s //tensorflow/compiler/xla/tests:gather_operation_test_cpu PASSED in 11.2s //tensorflow/compiler/xla/tests:get_dimension_size_test_cpu PASSED in 9.1s //tensorflow/compiler/xla/tests:half_test_cpu PASSED in 9.2s //tensorflow/compiler/xla/tests:hlo_metadata_test PASSED in 8.4s //tensorflow/compiler/xla/tests:literal_test_util_test PASSED in 5.4s //tensorflow/compiler/xla/tests:local_client_allocation_test_cpu PASSED in 9.6s //tensorflow/compiler/xla/tests:local_client_aot_test PASSED in 0.2s //tensorflow/compiler/xla/tests:log_test_cpu PASSED in 9.3s //tensorflow/compiler/xla/tests:map_test_cpu PASSED in 8.3s //tensorflow/compiler/xla/tests:matrix_ops_simple_test_cpu PASSED in 14.2s //tensorflow/compiler/xla/tests:multidimensional_slice_test_cpu PASSED in 9.4s //tensorflow/compiler/xla/tests:multiple_devices_on_host_test PASSED in 7.7s //tensorflow/compiler/xla/tests:multithreaded_compilation_test_cpu PASSED in 9.4s //tensorflow/compiler/xla/tests:outfeed_in_nested_computation_test_cpu PASSED in 7.5s //tensorflow/compiler/xla/tests:pad_test_cpu PASSED in 10.9s //tensorflow/compiler/xla/tests:pred_test_cpu PASSED in 7.0s //tensorflow/compiler/xla/tests:query_inferred_shape_test_cpu PASSED in 8.0s //tensorflow/compiler/xla/tests:reduce_hlo_test_cpu PASSED in 9.0s //tensorflow/compiler/xla/tests:reduce_precision_test_cpu PASSED in 9.1s //tensorflow/compiler/xla/tests:replay_test_cpu PASSED in 7.2s //tensorflow/compiler/xla/tests:reshape_motion_test_cpu PASSED in 7.6s
//tensorflow/compiler/xla/tests:reverse_test_cpu PASSED in 7.7s //tensorflow/compiler/xla/tests:round_trip_packed_literal_test_cpu PASSED in 8.2s //tensorflow/compiler/xla/tests:round_trip_transfer_test_cpu PASSED in 8.2s //tensorflow/compiler/xla/tests:sample_text_test_cpu PASSED in 11.8s //tensorflow/compiler/xla/tests:scatter_test_cpu PASSED in 10.8s //tensorflow/compiler/xla/tests:select_test_cpu PASSED in 7.9s //tensorflow/compiler/xla/tests:test_utils_test_cpu PASSED in 5.9s //tensorflow/compiler/xla/tests:token_hlo_test_cpu PASSED in 9.1s //tensorflow/compiler/xla/tests:transfer_manager_test_cpu PASSED in 18.5s //tensorflow/compiler/xla/tests:transpose_test_cpu PASSED in 9.9s //tensorflow/compiler/xla/tests:tuple_test_cpu PASSED in 9.0s //tensorflow/compiler/xla/tests:unary_op_test_cpu PASSED in 7.2s //tensorflow/compiler/xla/tests:value_inference_test_cpu PASSED in 8.5s //tensorflow/compiler/xla/tests:vector_ops_reduce_test_cpu PASSED in 8.3s //tensorflow/compiler/xla/tests:vector_ops_simple_test_cpu PASSED in 9.3s //tensorflow/compiler/xla/tests:while_test_cpu PASSED in 9.0s //tensorflow/compiler/xla/tools:hlo_control_flow_flattening_test PASSED in 1.2s //tensorflow/compiler/xla/tools:hlo_extractor_test PASSED in 2.0s //tensorflow/compiler/xla/tools:hlo_module_loader_test PASSED in 1.4s //tensorflow/compiler/xla/tools:interactive_graphviz_bin_test PASSED in 0.3s //tensorflow/compiler/xla/tools:run_hlo_module_bin_test PASSED in 0.5s //tensorflow/compiler/xla/tools/hlo_bisect:hlo_bisect_state_test PASSED in 1.3s //tensorflow/compiler/xla/translate/hlo_to_mhlo:hlo_utils_test PASSED in 0.9s //tensorflow/compiler/xla/translate/hlo_to_mhlo:mlir_hlo_builder_test PASSED in 0.7s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:bool_compare.hlotxt.test PASSED in 0.7s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:case_conditional.hlotxt.test PASSED in 1.2s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:dynamic_param.hlo.test PASSED in 0.8s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:entry_computation_layout.hlotxt.test PASSED in 1.8s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:frontend_attributes.hlotxt.test PASSED in 0.4s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:fully_connected_reference_model.hlotxt.test PASSED in 0.5s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:fusion.hlotxt.test PASSED in 0.5s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:if_conditional.hlotxt.test PASSED in 0.4s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:import.hlotxt.test PASSED in 0.5s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:import_async.hlotxt.test PASSED in 0.8s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:layouts_and_names.hlotxt.test PASSED in 0.5s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:location.hlotxt.test PASSED in 0.6s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:module_attributes.hlo.test PASSED in 0.5s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:simple.hlo.test PASSED in 0.5s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:spmd_module_sharding.hlo.test PASSED in 0.5s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:types.hlotxt.test PASSED in 0.5s //tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:while.hlotxt.test PASSED in 0.5s //tensorflow/compiler/xla/translate/mhlo_to_hlo:type_to_shape_test PASSED in 0.9s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:add.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:case.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:dynamic.mlir.test PASSED in 1.5s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:export-with-layouts.mlir.test PASSED in 0.9s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:export.mlir.test PASSED in 1.3s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:export_and_check_layouts.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:export_large_constants.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:export_replicas.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:frontend_attributes.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:fusion.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:if.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:input_output_aliasing.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:layouts_and_names.mlir.test PASSED in 0.5s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:location_to_op_metadata.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:missing_main.mlir.test PASSED in 1.4s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:module_attributes.mlir.test PASSED in 1.4s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:multiple_return_tuple.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:opaque_elements_attr.mlir.test PASSED in 1.2s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:rng_get_and_update_state.mlir.test PASSED in 0.8s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:sharding.mlir.test PASSED in 0.7s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:simple.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:unsupported_type.mlir.test PASSED in 0.4s //tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:while.mlir.test PASSED in 0.6s //tensorflow/compiler/xla/translate/mhlo_to_lhlo_with_xla/tests:hlo_text_to_lhlo_no_opt.hlotxt.test PASSED in 2.0s //tensorflow/compiler/xla/translate/mhlo_to_lhlo_with_xla/tests:no_opt_ops.hlotxt.test PASSED in 0.5s //tensorflow/compiler/xla/translate/mhlo_to_lhlo_with_xla/tests:non_identity_layouts.hlotxt.test PASSED in 9.3s //tensorflow/compiler/xla/translate/mhlo_to_lhlo_with_xla/tests:ops.mlir.test PASSED in 4.1s //tensorflow/compiler/xla/translate/mhlo_to_lhlo_with_xla/tests:passthrough.mlir.test PASSED in 0.7s //tensorflow/core:__tensorflow_core_lib_core_legacy_lib_core_all_tests PASSED in 8.9s //tensorflow/core:__tensorflow_core_lib_gtl_legacy_lib_gtl_tests PASSED in 0.2s //tensorflow/core:__tensorflow_core_lib_monitoring_cell_reader_test PASSED in 44.7s //tensorflow/core:__tensorflow_core_lib_monitoring_collection_registry_test PASSED in 0.2s //tensorflow/core:__tensorflow_core_lib_monitoring_counter_test PASSED in 0.2s //tensorflow/core:__tensorflow_core_lib_monitoring_gauge_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_monitoring_metric_def_test PASSED in 0.2s //tensorflow/core:__tensorflow_core_lib_monitoring_percentile_sampler_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_monitoring_sampler_test PASSED in 0.1s //tensorflow/core:__tensorflow_core_lib_monitoring_test_utils_test PASSED in 0.7s //tensorflow/core:__tensorflow_core_lib_strings_legacy_low_level_library_tests PASSED in 0.3s 
//tensorflow/core:__tensorflow_core_lib_wav_wav_io_test PASSED in 0.2s //tensorflow/core:__tensorflow_core_util_mkl_util_test_srcs PASSED in 0.1s //tensorflow/core:__tensorflow_tsl_lib_core_legacy_lib_core_all_tests PASSED in 0.4s //tensorflow/core:lib_strings_ordered_code_test PASSED in 1.6s //tensorflow/core:lib_strings_proto_serialization_test PASSED in 0.1s //tensorflow/core/api_def:api_test PASSED in 3.9s //tensorflow/core/api_def:update_api_def_test PASSED in 0.3s //tensorflow/core/common_runtime:all_to_all_test_cpu PASSED in 0.7s //tensorflow/core/common_runtime:arg_ret_placement_test PASSED in 0.9s //tensorflow/core/common_runtime:buf_rendezvous_test PASSED in 1.2s //tensorflow/core/common_runtime:collective_executor_mgr_test PASSED in 1.3s //tensorflow/core/common_runtime:collective_param_resolver_local_test PASSED in 5.2s //tensorflow/core/common_runtime:collective_rma_local_test PASSED in 1.0s //tensorflow/core/common_runtime:composite_device_test PASSED in 0.6s //tensorflow/core/common_runtime:cost_measurement_registry_test PASSED in 2.2s //tensorflow/core/common_runtime:cost_util_test PASSED in 0.4s //tensorflow/core/common_runtime:device_mgr_test PASSED in 0.6s //tensorflow/core/common_runtime:device_propagation_test PASSED in 0.9s //tensorflow/core/common_runtime:device_resolver_local_test PASSED in 0.6s //tensorflow/core/common_runtime:device_set_test PASSED in 1.3s //tensorflow/core/common_runtime:direct_session_test_cpu PASSED in 2.9s //tensorflow/core/common_runtime:direct_session_with_debug_test PASSED in 2.3s //tensorflow/core/common_runtime:direct_session_with_tracking_alloc_test PASSED in 1.4s //tensorflow/core/common_runtime:dynamic_device_mgr_test PASSED in 1.2s //tensorflow/core/common_runtime:eval_const_tensor_test PASSED in 0.5s //tensorflow/core/common_runtime:executor_test PASSED in 1.4s //tensorflow/core/common_runtime:function_optimization_registration_test PASSED in 0.9s //tensorflow/core/common_runtime:function_optimization_registry_no_pass_test PASSED in 0.6s //tensorflow/core/common_runtime:function_optimization_registry_pass_failure_test PASSED in 0.9s //tensorflow/core/common_runtime:function_optimization_registry_test PASSED in 1.6s //tensorflow/core/common_runtime:function_threadpool_test PASSED in 3.6s //tensorflow/core/common_runtime:graph_constructor_test PASSED in 1.6s //tensorflow/core/common_runtime:graph_runner_test PASSED in 0.8s //tensorflow/core/common_runtime:hierarchical_tree_broadcaster_test_cpu PASSED in 3.2s //tensorflow/core/common_runtime:inline_function_utils_test PASSED in 0.5s //tensorflow/core/common_runtime:input_colocation_exemption_registry_test PASSED in 0.8s //tensorflow/core/common_runtime:int32_fulltype_test PASSED in 0.5s //tensorflow/core/common_runtime:isolate_placer_inspection_required_ops_pass_test PASSED in 0.8s //tensorflow/core/common_runtime:lower_case_op_test PASSED in 2.4s //tensorflow/core/common_runtime:lower_function_call_test PASSED in 1.7s //tensorflow/core/common_runtime:lower_functional_ops_test PASSED in 1.8s //tensorflow/core/common_runtime:lower_if_op_test PASSED in 2.0s //tensorflow/core/common_runtime:lower_while_op_test PASSED in 4.2s //tensorflow/core/common_runtime:mkl_cpu_allocator_test PASSED in 0.4s //tensorflow/core/common_runtime:mkl_threadpool_device_test PASSED in 0.3s //tensorflow/core/common_runtime:no_op_cost_measurement_test PASSED in 3.3s //tensorflow/core/common_runtime:null_request_cost_accessor_test PASSED in 0.3s //tensorflow/core/common_runtime:optimization_registry_test PASSED in 1.6s
//tensorflow/core/common_runtime:optimize_cross_host_control_deps_test PASSED in 13.1s //tensorflow/core/common_runtime:optimize_function_graph_utils_test PASSED in 1.3s //tensorflow/core/common_runtime:partitioning_utils_test PASSED in 1.2s //tensorflow/core/common_runtime:pending_counts_test PASSED in 1.6s //tensorflow/core/common_runtime:permuter_test_cpu PASSED in 3.8s //tensorflow/core/common_runtime:placer_inspection_required_ops_utils_test PASSED in 0.7s //tensorflow/core/common_runtime:placer_test PASSED in 1.2s //tensorflow/core/common_runtime:process_function_library_runtime_test_cpu PASSED in 1.5s //tensorflow/core/common_runtime:process_util_test PASSED in 0.1s //tensorflow/core/common_runtime:quantize_training_test PASSED in 1.9s //tensorflow/core/common_runtime:rendezvous_util_test PASSED in 0.3s //tensorflow/core/common_runtime:replicate_per_replica_nodes_test PASSED in 0.8s //tensorflow/core/common_runtime:request_cost_accessor_registry_test PASSED in 2.5s //tensorflow/core/common_runtime:request_cost_test PASSED in 0.1s //tensorflow/core/common_runtime:ring_gatherer_test_cpu PASSED in 2.8s //tensorflow/core/common_runtime:ring_reducer_test_cpu PASSED in 20.5s //tensorflow/core/common_runtime:scoped_allocator_mgr_test PASSED in 4.4s //tensorflow/core/common_runtime:session_test PASSED in 0.8s //tensorflow/core/common_runtime:shape_refiner_test PASSED in 0.7s //tensorflow/core/common_runtime:single_threaded_executor_test PASSED in 0.8s //tensorflow/core/common_runtime:threadpool_device_test PASSED in 0.7s //tensorflow/core/common_runtime:type_inference_test PASSED in 3.4s //tensorflow/core/common_runtime/eager:attr_builder_test PASSED in 26.3s //tensorflow/core/common_runtime/eager:context_test PASSED in 11.9s //tensorflow/core/common_runtime/eager:custom_device_test PASSED in 12.5s //tensorflow/core/common_runtime/eager:eager_executor_test PASSED in 11.8s //tensorflow/core/common_runtime/eager:eager_op_rewrite_registry_test PASSED in 0.9s //tensorflow/core/common_runtime/eager:eager_operation_test PASSED in 11.4s //tensorflow/core/common_runtime/eager:execute_node_test PASSED in 9.1s //tensorflow/core/common_runtime/eager:execute_test PASSED in 26.7s //tensorflow/core/common_runtime/eager:kernel_and_device_test PASSED in 0.7s //tensorflow/core/common_runtime/eager:mkl_eager_op_rewrite_test PASSED in 13.0s //tensorflow/core/common_runtime/eager:placement_test PASSED in 11.0s //tensorflow/core/common_runtime/eager:placement_utils_test PASSED in 12.7s //tensorflow/core/common_runtime/eager:tensor_handle_data_test PASSED in 11.2s //tensorflow/core/common_runtime/eager:tensor_handle_test PASSED in 12.0s //tensorflow/core/common_runtime/gpu:gpu_device_on_non_gpu_machine_test PASSED in 0.8s //tensorflow/core/common_runtime/next_pluggable_device/c:plugin_c_api_test PASSED in 27.0s //tensorflow/core/config:flags_py_test PASSED in 6.2s //tensorflow/core/config:flags_test PASSED in 0.1s //tensorflow/core/data:compression_utils_test PASSED in 3.0s //tensorflow/core/data:dataset_utils_test PASSED in 1.1s //tensorflow/core/data:hash_utils_test PASSED in 1.0s //tensorflow/core/data:metric_utils_test PASSED in 5.7s //tensorflow/core/data:name_utils_test PASSED in 0.3s //tensorflow/core/data:rewrite_utils_test PASSED in 0.7s //tensorflow/core/data:serialization_utils_test PASSED in 0.6s //tensorflow/core/data:snapshot_utils_test PASSED in 0.6s //tensorflow/core/data:split_utils_test PASSED in 0.8s //tensorflow/core/data:standalone_save_restore_test PASSED in 4.7s
//tensorflow/core/data:standalone_test PASSED in 1.6s //tensorflow/core/data:tfdataz_metrics_test PASSED in 1.8s //tensorflow/core/data:unbounded_thread_pool_test PASSED in 0.7s //tensorflow/core/data/service:auto_shard_rewriter_test PASSED in 1.2s //tensorflow/core/data/service:common_test PASSED in 0.2s //tensorflow/core/data/service:credentials_factory_test PASSED in 0.8s //tensorflow/core/data/service:cross_trainer_cache_test PASSED in 1.3s //tensorflow/core/data/service:data_service_test PASSED in 13.1s //tensorflow/core/data/service:data_transfer_test PASSED in 0.6s //tensorflow/core/data/service:dataset_store_test PASSED in 0.9s //tensorflow/core/data/service:dispatcher_client_test PASSED in 4.3s //tensorflow/core/data/service:dispatcher_state_test PASSED in 0.7s //tensorflow/core/data/service:grpc_dispatcher_impl_test PASSED in 2.5s //tensorflow/core/data/service:grpc_util_test PASSED in 1.0s //tensorflow/core/data/service:grpc_worker_impl_test PASSED in 2.1s //tensorflow/core/data/service:journal_test PASSED in 0.7s //tensorflow/core/data/service:logging_utils_test PASSED in 0.1s //tensorflow/core/data/service:task_runner_test PASSED in 3.8s //tensorflow/core/data/service:test_util_test PASSED in 2.5s //tensorflow/core/data/service:url_test PASSED in 0.1s //tensorflow/core/data/service:utils_test PASSED in 0.9s //tensorflow/core/data/service:validate_utils_test PASSED in 0.1s //tensorflow/core/data/service:worker_client_test PASSED in 3.4s //tensorflow/core/data/service:worker_impl_test PASSED in 3.8s //tensorflow/core/data/service/client:data_service_client_test PASSED in 4.7s //tensorflow/core/data/service/client:utils_test PASSED in 3.0s //tensorflow/core/data/service/client:validate_utils_test PASSED in 1.7s //tensorflow/core/data/service/snapshot:distributed_snapshot_test PASSED in 27.1s //tensorflow/core/data/service/snapshot:file_utils_test PASSED in 0.8s //tensorflow/core/data/service/snapshot:path_utils_test PASSED in 0.1s //tensorflow/core/data/service/snapshot:snapshot_manager_test PASSED in 11.1s //tensorflow/core/data/service/snapshot:snapshot_split_provider_test PASSED in 1.1s //tensorflow/core/data/service/snapshot:snapshot_stream_writer_checkpoint_test PASSED in 3.7s //tensorflow/core/data/service/snapshot:snapshot_stream_writer_test PASSED in 2.7s //tensorflow/core/data/service/snapshot:utils_test PASSED in 0.9s //tensorflow/core/debug:debug_graph_utils_test PASSED in 1.0s //tensorflow/core/distributed_runtime:call_options_test PASSED in 0.8s //tensorflow/core/distributed_runtime:cluster_function_library_runtime_test PASSED in 3.1s //tensorflow/core/distributed_runtime:collective_param_resolver_distributed_test PASSED in 1.3s //tensorflow/core/distributed_runtime:collective_rma_distributed_test PASSED in 0.5s //tensorflow/core/distributed_runtime:device_resolver_distributed_test PASSED in 0.7s //tensorflow/core/distributed_runtime:message_wrappers_test PASSED in 0.1s //tensorflow/core/distributed_runtime:partial_run_mgr_test PASSED in 0.5s //tensorflow/core/distributed_runtime:recent_request_ids_test PASSED in 1.2s //tensorflow/core/distributed_runtime:request_id_test PASSED in 0.1s //tensorflow/core/distributed_runtime:rpc_collective_executor_mgr_test PASSED in 0.6s //tensorflow/core/distributed_runtime:server_lib_test PASSED in 0.1s //tensorflow/core/distributed_runtime:session_mgr_test PASSED in 0.8s //tensorflow/core/distributed_runtime:tensor_coding_test PASSED in 0.2s //tensorflow/core/distributed_runtime/coordination:coordination_service_barrier_proxy_test PASSED in 2.5s
//tensorflow/core/distributed_runtime/eager:eager_service_impl_test PASSED in 20.0s //tensorflow/core/distributed_runtime/eager:remote_mgr_test PASSED in 11.6s //tensorflow/core/distributed_runtime/integration_test:c_api_coordination_test_cpu PASSED in 40.2s //tensorflow/core/distributed_runtime/integration_test:c_api_multi_client_test_cpu PASSED in 34.4s //tensorflow/core/distributed_runtime/integration_test:c_api_recoverable_jobs_test_cpu PASSED in 54.2s //tensorflow/core/distributed_runtime/integration_test:c_api_session_coordination_test_cpu PASSED in 28.3s //tensorflow/core/distributed_runtime/rpc:grpc_tensor_coding_test PASSED in 2.5s //tensorflow/core/distributed_runtime/rpc:grpc_worker_cache_test PASSED in 1.1s //tensorflow/core/distributed_runtime/rpc/eager:grpc_eager_client_test PASSED in 0.6s //tensorflow/core/example:example_parser_configuration_test PASSED in 0.8s //tensorflow/core/example:feature_util_test PASSED in 0.1s //tensorflow/core/framework:allocator_test PASSED in 11.5s //tensorflow/core/framework:attr_value_util_test PASSED in 1.0s //tensorflow/core/framework:batch_util_test PASSED in 0.9s //tensorflow/core/framework:bfloat16_test PASSED in 0.8s //tensorflow/core/framework:common_shape_fns_test PASSED in 0.8s //tensorflow/core/framework:dataset_test PASSED in 0.8s //tensorflow/core/framework:device_base_test PASSED in 1.0s //tensorflow/core/framework:disable_jit_test PASSED in 1.3s //tensorflow/core/framework:framework_op_gen_lib_test PASSED in 0.1s //tensorflow/core/framework:framework_op_segment_test PASSED in 1.1s //tensorflow/core/framework:framework_resource_var_test PASSED in 0.2s //tensorflow/core/framework:framework_run_handler_test PASSED in 2.8s //tensorflow/core/framework:framework_run_handler_util_test PASSED in 3.1s //tensorflow/core/framework:full_type_inference_util_test PASSED in 1.2s //tensorflow/core/framework:full_type_util_test PASSED in 0.8s //tensorflow/core/framework:function_test PASSED in 1.6s //tensorflow/core/framework:graph_def_util_test PASSED in 1.6s //tensorflow/core/framework:graph_to_functiondef_test PASSED in 1.1s //tensorflow/core/framework:kernel_def_builder_test PASSED in 1.1s //tensorflow/core/framework:kernel_def_util_test PASSED in 0.7s //tensorflow/core/framework:memory_types_test PASSED in 0.7s //tensorflow/core/framework:model_test PASSED in 1.5s //tensorflow/core/framework:node_def_builder_test PASSED in 1.5s //tensorflow/core/framework:node_def_util_test PASSED in 0.8s //tensorflow/core/framework:node_properties_test PASSED in 1.8s //tensorflow/core/framework:op_compatibility_test PASSED in 1.7s //tensorflow/core/framework:op_def_builder_test PASSED in 0.9s //tensorflow/core/framework:op_def_util_test PASSED in 1.7s //tensorflow/core/framework:op_kernel_test PASSED in 1.1s //tensorflow/core/framework:op_registration_test PASSED in 0.8s //tensorflow/core/framework:partial_tensor_shape_test PASSED in 1.1s //tensorflow/core/framework:rendezvous_test PASSED in 3.7s //tensorflow/core/framework:resource_handle_test PASSED in 0.1s //tensorflow/core/framework:resource_mgr_test PASSED in 2.1s //tensorflow/core/framework:resource_op_kernel_test PASSED in 1.4s //tensorflow/core/framework:shape_inference_test PASSED in 1.2s //tensorflow/core/framework:shape_inference_testutil_test PASSED in 1.2s //tensorflow/core/framework:tensor_shape_test PASSED in 6.6s //tensorflow/core/framework:tensor_slice_test PASSED in 1.3s //tensorflow/core/framework:tensor_test PASSED in 35.0s //tensorflow/core/framework:tensor_testutil_test PASSED in 1.4s
//tensorflow/core/framework:tensor_util_test PASSED in 1.4s //tensorflow/core/framework:tracking_allocator_test PASSED in 1.1s //tensorflow/core/framework:types_test PASSED in 1.5s //tensorflow/core/framework:variant_op_registry_test PASSED in 21.4s //tensorflow/core/framework:variant_test PASSED in 1.2s //tensorflow/core/framework/registration:registration_test PASSED in 1.0s //tensorflow/core/function/capture:by_ref_capture_test PASSED in 7.2s //tensorflow/core/function/capture:capture_container_test PASSED in 7.8s //tensorflow/core/function/integration_test:side_inputs_manual_api_test PASSED in 13.8s //tensorflow/core/function/integration_test:side_inputs_test PASSED in 17.2s //tensorflow/core/function/polymorphism:function_cache_test PASSED in 6.8s //tensorflow/core/function/polymorphism:function_type_test PASSED in 7.6s //tensorflow/core/function/polymorphism:type_dispatch_test PASSED in 7.2s //tensorflow/core/function/runtime_client:runtime_client_cc_test PASSED in 32.6s //tensorflow/core/function/trace_type:default_types_test PASSED in 6.4s //tensorflow/core/function/trace_type:serialization_test PASSED in 6.5s //tensorflow/core/function/trace_type:trace_type_test PASSED in 10.4s //tensorflow/core/graph:algorithm_test PASSED in 1.2s //tensorflow/core/graph:collective_order_test PASSED in 0.8s //tensorflow/core/graph:control_flow_test PASSED in 1.0s //tensorflow/core/graph:costmodel_test PASSED in 1.0s //tensorflow/core/graph:edgeset_test PASSED in 1.0s //tensorflow/core/graph:graph_def_builder_test PASSED in 0.9s //tensorflow/core/graph:graph_partition_test PASSED in 1.1s //tensorflow/core/graph:graph_test PASSED in 0.8s //tensorflow/core/graph:node_builder_test PASSED in 1.0s //tensorflow/core/graph:optimizer_cse_test PASSED in 1.1s //tensorflow/core/graph:subgraph_test PASSED in 0.8s //tensorflow/core/graph:tensor_id_test PASSED in 0.9s //tensorflow/core/graph:validate_test PASSED in 0.9s //tensorflow/core/graph/regularization:simple_delete_test PASSED in 0.2s //tensorflow/core/graph/regularization:util_test PASSED in 0.1s //tensorflow/core/grappler:graph_topology_view_test PASSED in 0.1s //tensorflow/core/grappler:graph_view_test PASSED in 1.6s //tensorflow/core/grappler:grappler_item_builder_test PASSED in 1.7s //tensorflow/core/grappler:grappler_item_test PASSED in 2.2s //tensorflow/core/grappler:mutable_graph_view_test PASSED in 1.1s //tensorflow/core/grappler:utils_test PASSED in 3.1s //tensorflow/core/grappler/clusters:single_machine_test PASSED in 24.2s //tensorflow/core/grappler/clusters:virtual_cluster_test PASSED in 1.5s //tensorflow/core/grappler/costs:analytical_cost_estimator_test PASSED in 1.7s //tensorflow/core/grappler/costs:cost_estimator_test PASSED in 0.1s //tensorflow/core/grappler/costs:graph_memory_test PASSED in 1.5s //tensorflow/core/grappler/costs:graph_properties_test PASSED in 4.3s //tensorflow/core/grappler/costs:robust_stats_test PASSED in 0.1s //tensorflow/core/grappler/costs:utils_test PASSED in 1.1s //tensorflow/core/grappler/costs:virtual_placer_test PASSED in 0.4s //tensorflow/core/grappler/costs:virtual_scheduler_test PASSED in 2.6s //tensorflow/core/grappler/graph_analyzer:gen_node_test PASSED in 2.1s //tensorflow/core/grappler/graph_analyzer:graph_analyzer_test PASSED in 1.7s //tensorflow/core/grappler/graph_analyzer:hash_tools_test PASSED in 2.3s //tensorflow/core/grappler/graph_analyzer:sig_node_test PASSED in 3.0s //tensorflow/core/grappler/graph_analyzer:subgraph_test PASSED in 1.8s
//tensorflow/core/grappler/inputs:utils_test PASSED in 0.3s //tensorflow/core/grappler/optimizers:arithmetic_optimizer_test_cpu PASSED in 3.0s //tensorflow/core/grappler/optimizers:auto_mixed_precision_test_cpu PASSED in 2.1s //tensorflow/core/grappler/optimizers:auto_parallel_test_cpu PASSED in 1.6s //tensorflow/core/grappler/optimizers:common_subgraph_elimination_test_cpu PASSED in 1.5s //tensorflow/core/grappler/optimizers:custom_graph_optimizer_registry_test_cpu PASSED in 4.0s //tensorflow/core/grappler/optimizers:debug_stripper_test_cpu PASSED in 2.1s //tensorflow/core/grappler/optimizers:dependency_optimizer_test_cpu PASSED in 3.3s //tensorflow/core/grappler/optimizers:evaluation_utils_test PASSED in 0.6s //tensorflow/core/grappler/optimizers:function_api_info_test PASSED in 0.2s //tensorflow/core/grappler/optimizers:function_optimizer_test_cpu PASSED in 2.7s //tensorflow/core/grappler/optimizers:generic_layout_optimizer_test_cpu PASSED in 1.8s //tensorflow/core/grappler/optimizers:generic_layout_optimizer_transposer_factory_test PASSED in 0.2s //tensorflow/core/grappler/optimizers:generic_layout_optimizer_transposer_test_cpu PASSED in 1.9s //tensorflow/core/grappler/optimizers:graph_optimizer_stage_test_cpu PASSED in 2.0s //tensorflow/core/grappler/optimizers:implementation_selector_test PASSED in 1.7s //tensorflow/core/grappler/optimizers:loop_optimizer_test_cpu PASSED in 1.4s //tensorflow/core/grappler/optimizers:memory_optimizer_test_cpu PASSED in 1.7s //tensorflow/core/grappler/optimizers:meta_optimizer_test_cpu PASSED in 7.3s //tensorflow/core/grappler/optimizers:mkl_remapper_test PASSED in 1.8s //tensorflow/core/grappler/optimizers:model_pruner_test_cpu PASSED in 2.9s //tensorflow/core/grappler/optimizers:pin_to_host_optimizer_test_cpu PASSED in 2.8s //tensorflow/core/grappler/optimizers:remapper_test_cpu PASSED in 2.0s //tensorflow/core/grappler/optimizers:scoped_allocator_optimizer_test PASSED in 1.7s //tensorflow/core/grappler/optimizers:shape_optimizer_test_cpu PASSED in 2.2s //tensorflow/core/grappler/optimizers:static_schedule_test_cpu PASSED in 2.1s //tensorflow/core/grappler/optimizers:tfg_optimizer_hook_test PASSED in 0.6s //tensorflow/core/grappler/optimizers/data:auto_shard_test PASSED in 0.5s //tensorflow/core/grappler/optimizers/data:autotune_buffer_sizes_test PASSED in 0.5s //tensorflow/core/grappler/optimizers/data:batch_parallelization_test PASSED in 0.5s //tensorflow/core/grappler/optimizers/data:disable_intra_op_parallelism_test PASSED in 0.6s //tensorflow/core/grappler/optimizers/data:disable_prefetch_legacy_autotune_test PASSED in 1.3s //tensorflow/core/grappler/optimizers/data:enable_gradient_descent_test PASSED in 1.4s //tensorflow/core/grappler/optimizers/data:filter_fusion_test PASSED in 0.8s //tensorflow/core/grappler/optimizers/data:filter_parallelization_test PASSED in 0.7s //tensorflow/core/grappler/optimizers/data:function_utils_test PASSED in 0.7s //tensorflow/core/grappler/optimizers/data:fusion_utils_test PASSED in 0.5s //tensorflow/core/grappler/optimizers/data:graph_utils_test PASSED in 1.0s //tensorflow/core/grappler/optimizers/data:inject_prefetch_test PASSED in 0.5s //tensorflow/core/grappler/optimizers/data:make_deterministic_test PASSED in 1.0s //tensorflow/core/grappler/optimizers/data:make_sloppy_test PASSED in 0.7s //tensorflow/core/grappler/optimizers/data:map_and_batch_fusion_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:map_and_filter_fusion_test PASSED in 0.6s 
//tensorflow/core/grappler/optimizers/data:map_fusion_test PASSED in 0.8s //tensorflow/core/grappler/optimizers/data:map_parallelization_test PASSED in 0.6s //tensorflow/core/grappler/optimizers/data:noop_elimination_test PASSED in 0.3s //tensorflow/core/grappler/optimizers/data:parallel_batch_test PASSED in 0.9s //tensorflow/core/grappler/optimizers/data:replicate_on_split_test PASSED in 0.5s //tensorflow/core/grappler/optimizers/data:shuffle_and_repeat_fusion_test PASSED in 0.4s //tensorflow/core/grappler/optimizers/data:slack_test PASSED in 1.2s //tensorflow/core/grappler/optimizers/data:split_utils_test PASSED in 1.3s //tensorflow/core/grappler/optimizers/data:use_private_thread_pool_test PASSED in 0.5s //tensorflow/core/grappler/optimizers/inference:batch_op_rewriter_test PASSED in 0.1s //tensorflow/core/grappler/utils:canonicalizer_test PASSED in 3.4s //tensorflow/core/grappler/utils:colocation_test PASSED in 0.8s //tensorflow/core/grappler/utils:frame_test PASSED in 0.2s //tensorflow/core/grappler/utils:functions_test PASSED in 2.2s //tensorflow/core/grappler/utils:graph_view_internal_test PASSED in 0.5s //tensorflow/core/grappler/utils:graph_view_test PASSED in 2.1s //tensorflow/core/grappler/utils:grappler_test_test PASSED in 8.2s //tensorflow/core/grappler/utils:pattern_utils_test PASSED in 0.7s //tensorflow/core/grappler/utils:scc_test PASSED in 1.7s //tensorflow/core/grappler/utils:symbolic_shapes_test PASSED in 0.1s //tensorflow/core/grappler/utils:topological_sort_test PASSED in 0.8s //tensorflow/core/grappler/utils:tpu_test PASSED in 0.1s //tensorflow/core/grappler/utils:transitive_fanin_test PASSED in 0.9s //tensorflow/core/grappler/utils:traversal_test PASSED in 0.7s //tensorflow/core/grappler/verifiers:structure_verifier_test PASSED in 1.0s //tensorflow/core/ir:interfaces_test PASSED in 0.1s //tensorflow/core/ir:ops_test PASSED in 0.2s //tensorflow/core/ir:shape_inference_utils_test PASSED in 0.3s //tensorflow/core/ir:tf_op_registry_test PASSED in 0.6s //tensorflow/core/ir:tf_op_wrapper_test PASSED in 0.1s //tensorflow/core/ir:utility_test PASSED in 0.2s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:arg_as_control_ret.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:backedge_segment.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:empty.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:error_during_backedge.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_case_with_attr_inference.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_if_with_attr_inference.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_iterator_get_next_attr_inference.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_underscore_output_shapes.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_while_with_attr_inference.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infeed_dequeue.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infer_arg_handle_type.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infer_with_output_shapes.pbtxt.test PASSED in 0.4s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_arg_name.pbtxt.test PASSED in 0.7s 
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_backedge_input_size.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_duplicated_node_name.pbtxt.test PASSED in 1.1s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_edge_index.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_edge_name.pbtxt.test PASSED in 0.4s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_attr_key.pbtxt.test PASSED in 0.4s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_func_attr_key.pbtxt.test PASSED in 1.4s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_func_attr_name.pbtxt.test PASSED in 1.0s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_op_type.pbtxt.test PASSED in 1.2s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_func_with_empty_name.pbtxt.test PASSED in 1.1s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_function_import.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_control_result.pbtxt.test PASSED in 1.2s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_input.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_name.pbtxt.test PASSED in 1.3s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_result.pbtxt.test PASSED in 0.8s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_function_attr_name.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_function_named_edge_index.pbtxt.test PASSED in 1.2s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_handle_data.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_input.pbtxt.test PASSED in 2.1s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_result.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_result_value.pbtxt.test PASSED in 1.1s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_data_result.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_data_result_value.pbtxt.test PASSED in 1.1s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_input.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_two_inputs.pbtxt.test PASSED in 1.1s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_named_edge_index.pbtxt.test PASSED in 1.1s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_op_name.pbtxt.test PASSED in 1.1s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_type_list.pbtxt.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:legacy_call.pbtxt.test PASSED in 1.3s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:negative_shape.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:negative_zero_constant.pbtxt.test PASSED in 0.7s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:three_nodes_with_attrs.pbtxt.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/graphdef_to_mlir:version.pbtxt.test PASSED in 0.4s 
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:empty.mlir.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:fulltype.mlir.test PASSED in 0.5s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:func_with_no_args_or_results.mlir.test PASSED in 1.2s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:negative_zero_constant.mlir.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:nested_legacy_call.mlir.test PASSED in 1.2s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:three_nodes_with_attrs.mlir.test PASSED in 0.6s //tensorflow/core/ir/importexport/tests/mlir_to_graphdef:version.mlir.test PASSED in 1.1s //tensorflow/core/ir/importexport/tests/saved_model:saved_model_roundtrip_test PASSED in 0.4s //tensorflow/core/ir/tests:attributes.mlir.test PASSED in 0.6s //tensorflow/core/ir/tests:canonicalize.mlir.test PASSED in 0.5s //tensorflow/core/ir/tests:compatible_types.mlir.test PASSED in 0.9s //tensorflow/core/ir/tests:concrete-ops.mlir.test PASSED in 1.0s //tensorflow/core/ir/tests:generic_concrete_ops.mlir.test PASSED in 1.1s //tensorflow/core/ir/tests:invalid-concrete-ops.mlir.test PASSED in 0.4s //tensorflow/core/ir/tests:invalid-preserved-attrs.mlir.test PASSED in 0.7s //tensorflow/core/ir/tests:invalid.mlir.test PASSED in 0.5s //tensorflow/core/ir/tests:invalid_types.mlir.test PASSED in 1.1s //tensorflow/core/ir/tests:ops.mlir.test PASSED in 0.4s //tensorflow/core/ir/tests:region-invalid-ops.mlir.test PASSED in 0.8s //tensorflow/core/ir/tests:region-ops-graph.mlir.test PASSED in 0.5s //tensorflow/core/ir/tests:region-ops.mlir.test PASSED in 0.6s //tensorflow/core/ir/tests:types.mlir.test PASSED in 0.6s //tensorflow/core/ir/types:dialect_test PASSED in 0.6s //tensorflow/core/kernels:as_string_op_test PASSED in 0.7s //tensorflow/core/kernels:basic_ops_benchmark_test PASSED in 0.4s //tensorflow/core/kernels:batch_kernels_env_test PASSED in 0.8s //tensorflow/core/kernels:batch_kernels_test PASSED in 6.3s //tensorflow/core/kernels:bias_op_test PASSED in 0.8s //tensorflow/core/kernels:bincount_op_test_cpu PASSED in 0.6s //tensorflow/core/kernels:broadcast_to_op_test_cpu PASSED in 0.6s //tensorflow/core/kernels:cast_op_test_cpu PASSED in 1.0s //tensorflow/core/kernels:checkpoint_callback_manager_test PASSED in 1.6s //tensorflow/core/kernels:clustering_ops_test PASSED in 0.6s //tensorflow/core/kernels:composite_tensor_variant_test PASSED in 0.5s //tensorflow/core/kernels:concat_op_test PASSED in 0.9s //tensorflow/core/kernels:constant_op_test_cpu PASSED in 0.7s //tensorflow/core/kernels:control_flow_ops_test PASSED in 6.2s //tensorflow/core/kernels:conv_grad_filter_ops_benchmark_test_cpu PASSED in 0.7s //tensorflow/core/kernels:conv_grad_input_ops_benchmark_test_cpu PASSED in 0.8s //tensorflow/core/kernels:conv_ops_benchmark_test_cpu PASSED in 0.9s //tensorflow/core/kernels:conv_ops_test_cpu PASSED in 7.2s //tensorflow/core/kernels:count_ops_test PASSED in 0.5s //tensorflow/core/kernels:cross_op_test PASSED in 0.7s //tensorflow/core/kernels:cwise_ops_test_cpu PASSED in 0.6s //tensorflow/core/kernels:debug_ops_test PASSED in 1.1s //tensorflow/core/kernels:decode_wav_op_test PASSED in 2.4s //tensorflow/core/kernels:deep_conv2d_test PASSED in 0.6s //tensorflow/core/kernels:dequantize_op_test PASSED in 0.6s //tensorflow/core/kernels:diag_op_test_cpu PASSED in 0.7s //tensorflow/core/kernels:dynamic_partition_op_test_cpu PASSED in 0.5s //tensorflow/core/kernels:dynamic_stitch_op_test_cpu PASSED in 0.6s 
//tensorflow/core/kernels:eigen_activations_test PASSED in 0.1s //tensorflow/core/kernels:eigen_attention_test PASSED in 0.1s //tensorflow/core/kernels:eigen_backward_cuboid_convolutions_test PASSED in 0.6s //tensorflow/core/kernels:eigen_backward_spatial_convolutions_test PASSED in 0.2s //tensorflow/core/kernels:eigen_benchmark_cpu_test PASSED in 0.1s //tensorflow/core/kernels:eigen_mkldnn_contraction_kernel_test PASSED in 0.1s //tensorflow/core/kernels:eigen_pooling_test PASSED in 0.5s //tensorflow/core/kernels:encode_wav_op_test PASSED in 1.6s //tensorflow/core/kernels:fingerprint_op_test PASSED in 1.3s //tensorflow/core/kernels:fused_batch_norm_ex_op_test_cpu PASSED in 0.8s //tensorflow/core/kernels:fused_batch_norm_op_test_cpu PASSED in 0.9s //tensorflow/core/kernels:gather_nd_op_test_cpu PASSED in 0.7s //tensorflow/core/kernels:gather_op_test_cpu PASSED in 1.1s //tensorflow/core/kernels:guarantee_const_op_test PASSED in 0.8s //tensorflow/core/kernels:identity_n_op_test PASSED in 0.5s //tensorflow/core/kernels:identity_op_test PASSED in 0.9s //tensorflow/core/kernels:immutable_constant_op_test PASSED in 0.9s //tensorflow/core/kernels:in_topk_op_test PASSED in 0.6s //tensorflow/core/kernels:isotonic_regression_op_test PASSED in 0.8s //tensorflow/core/kernels:logging_ops_test PASSED in 1.8s //tensorflow/core/kernels:lookup_ops_test PASSED in 1.7s //tensorflow/core/kernels:loss_test PASSED in 0.3s //tensorflow/core/kernels:lrn_op_test_cpu PASSED in 0.7s //tensorflow/core/kernels:matmul_op_test_cpu PASSED in 3.1s //tensorflow/core/kernels:merge_v2_checkpoints_op_test PASSED in 1.5s //tensorflow/core/kernels:mfcc_dct_test PASSED in 0.2s //tensorflow/core/kernels:mfcc_mel_filterbank_test PASSED in 0.1s //tensorflow/core/kernels:mfcc_op_test_cpu PASSED in 3.8s //tensorflow/core/kernels:mfcc_test PASSED in 0.3s //tensorflow/core/kernels:multinomial_op_test_cpu PASSED in 0.6s //tensorflow/core/kernels:nn_ops_test_cpu PASSED in 1.1s //tensorflow/core/kernels:one_hot_op_test PASSED in 1.2s //tensorflow/core/kernels:ops_testutil_test PASSED in 0.7s //tensorflow/core/kernels:ops_util_test PASSED in 0.2s //tensorflow/core/kernels:parameterized_truncated_normal_op_test_cpu PASSED in 0.5s //tensorflow/core/kernels:parse_tensor_test PASSED in 0.9s //tensorflow/core/kernels:quantization_utils_test PASSED in 0.7s //tensorflow/core/kernels:quantize_and_dequantize_op_test_cpu PASSED in 0.6s //tensorflow/core/kernels:quantize_down_and_shrink_range_op_test PASSED in 0.6s //tensorflow/core/kernels:quantize_op_test PASSED in 0.6s //tensorflow/core/kernels:quantized_activation_ops_test PASSED in 0.6s //tensorflow/core/kernels:quantized_add_op_test PASSED in 15.0s //tensorflow/core/kernels:quantized_batch_norm_op_test PASSED in 1.0s //tensorflow/core/kernels:quantized_bias_add_op_test PASSED in 1.1s //tensorflow/core/kernels:quantized_concat_op_test PASSED in 0.7s //tensorflow/core/kernels:quantized_conv_ops_test PASSED in 0.7s //tensorflow/core/kernels:quantized_instance_norm_test PASSED in 1.1s //tensorflow/core/kernels:quantized_matmul_op_test PASSED in 0.6s //tensorflow/core/kernels:quantized_mul_op_test PASSED in 1.3s //tensorflow/core/kernels:quantized_pooling_ops_test PASSED in 1.0s //tensorflow/core/kernels:quantized_reshape_op_test PASSED in 0.5s //tensorflow/core/kernels:quantized_resize_bilinear_op_test PASSED in 2.0s //tensorflow/core/kernels:ragged_fill_empty_rows_op_test PASSED in 1.5s //tensorflow/core/kernels:ragged_gather_op_test PASSED in 0.8s //tensorflow/core/kernels:ragged_range_op_test PASSED in 0.8s
//tensorflow/core/kernels:ragged_tensor_from_variant_op_test PASSED in 0.6s //tensorflow/core/kernels:ragged_tensor_to_sparse_kernel_test PASSED in 0.6s //tensorflow/core/kernels:ragged_tensor_to_tensor_op_test PASSED in 0.7s //tensorflow/core/kernels:ragged_tensor_to_variant_op_test PASSED in 0.8s //tensorflow/core/kernels:random_binomial_op_test_cpu PASSED in 0.5s //tensorflow/core/kernels:random_index_shuffle_test PASSED in 0.4s //tensorflow/core/kernels:random_op_test_cpu PASSED in 0.4s //tensorflow/core/kernels:random_poisson_op_test_cpu PASSED in 0.5s //tensorflow/core/kernels:range_sampler_test PASSED in 0.2s //tensorflow/core/kernels:reduction_ops_test_cpu PASSED in 0.6s //tensorflow/core/kernels:regex_replace_op_test PASSED in 0.5s //tensorflow/core/kernels:requantization_range_op_test PASSED in 0.8s //tensorflow/core/kernels:requantize_op_test PASSED in 0.5s //tensorflow/core/kernels:resource_ops_test PASSED in 0.5s //tensorflow/core/kernels:restore_op_test PASSED in 0.9s //tensorflow/core/kernels:restore_v2_op_test PASSED in 1.0s //tensorflow/core/kernels:reverse_op_test PASSED in 1.3s //tensorflow/core/kernels:roll_op_test PASSED in 1.8s //tensorflow/core/kernels:save_op_test PASSED in 0.5s //tensorflow/core/kernels:save_v2_op_test PASSED in 0.6s //tensorflow/core/kernels:scan_ops_test_cpu PASSED in 0.5s //tensorflow/core/kernels:scatter_nd_op_test_cpu PASSED in 0.9s //tensorflow/core/kernels:scatter_op_test PASSED in 0.7s //tensorflow/core/kernels:scoped_allocator_ops_test_cpu PASSED in 13.9s //tensorflow/core/kernels:sdca_ops_test PASSED in 1.2s //tensorflow/core/kernels:segment_reduction_ops_test PASSED in 1.0s //tensorflow/core/kernels:sendrecv_ops_test PASSED in 0.6s //tensorflow/core/kernels:sequence_ops_test PASSED in 0.5s //tensorflow/core/kernels:shape_ops_test PASSED in 0.8s //tensorflow/core/kernels:slice_op_test PASSED in 1.2s //tensorflow/core/kernels:spacetobatch_benchmark_test_cpu PASSED in 0.5s //tensorflow/core/kernels:sparse_add_op_test PASSED in 0.7s //tensorflow/core/kernels:sparse_dense_binary_op_shared_test PASSED in 0.6s //tensorflow/core/kernels:sparse_fill_empty_rows_op_test_cpu PASSED in 0.6s //tensorflow/core/kernels:sparse_matmul_op_test_cpu PASSED in 1.4s //tensorflow/core/kernels:sparse_reduce_sum_op_test PASSED in 0.9s //tensorflow/core/kernels:sparse_tensor_dense_matmul_op_test_cpu PASSED in 16.0s //tensorflow/core/kernels:sparse_to_dense_op_test_cpu PASSED in 0.9s //tensorflow/core/kernels:sparse_utils_test PASSED in 0.4s //tensorflow/core/kernels:sparse_xent_op_test_cpu PASSED in 0.5s //tensorflow/core/kernels:spectrogram_op_test_cpu PASSED in 1.8s //tensorflow/core/kernels:spectrogram_test PASSED in 0.5s //tensorflow/core/kernels:split_op_test_cpu PASSED in 0.6s //tensorflow/core/kernels:split_v_op_test_cpu PASSED in 3.9s //tensorflow/core/kernels:strided_slice_op_test PASSED in 0.5s //tensorflow/core/kernels:string_format_op_test PASSED in 0.6s //tensorflow/core/kernels:string_ngrams_op_test PASSED in 0.7s //tensorflow/core/kernels:string_split_op_test PASSED in 1.4s //tensorflow/core/kernels:substr_op_test PASSED in 0.8s //tensorflow/core/kernels:summary_audio_op_test PASSED in 0.6s //tensorflow/core/kernels:summary_image_op_test PASSED in 0.6s //tensorflow/core/kernels:summary_op_test PASSED in 0.8s //tensorflow/core/kernels:summary_tensor_op_test PASSED in 0.6s //tensorflow/core/kernels:tensor_cord_test PASSED in 0.2s //tensorflow/core/kernels:tensor_flag_utils_test PASSED in 0.2s //tensorflow/core/kernels:tensor_map_test PASSED in 0.3s
//tensorflow/core/kernels:training_ops_test PASSED in 0.5s //tensorflow/core/kernels:transpose_util_test PASSED in 0.5s //tensorflow/core/kernels:unary_ops_composition_test_cpu PASSED in 1.8s //tensorflow/core/kernels:unique_op_test PASSED in 0.5s //tensorflow/core/kernels:variable_ops_test PASSED in 1.5s //tensorflow/core/kernels:while_op_test PASSED in 1.1s //tensorflow/core/kernels:xent_op_test_cpu PASSED in 0.5s //tensorflow/core/kernels/batching_util:basic_batch_scheduler_test PASSED in 0.2s //tensorflow/core/kernels/batching_util:batch_input_task_test PASSED in 0.7s //tensorflow/core/kernels/batching_util:batch_resource_base_test PASSED in 0.8s //tensorflow/core/kernels/batching_util:batch_scheduler_test PASSED in 0.3s //tensorflow/core/kernels/batching_util:bounded_executor_test PASSED in 32.9s //tensorflow/core/kernels/batching_util:input_split_metadata_test PASSED in 0.6s //tensorflow/core/kernels/batching_util:periodic_function_test PASSED in 1.5s //tensorflow/core/kernels/batching_util:serial_device_batch_scheduler_test PASSED in 1.5s //tensorflow/core/kernels/batching_util:shared_batch_scheduler_test PASSED in 6.6s //tensorflow/core/kernels/batching_util:threadsafe_status_test PASSED in 0.1s //tensorflow/core/kernels/data:batch_dataset_op_test PASSED in 2.3s //tensorflow/core/kernels/data:cache_dataset_ops_test PASSED in 1.1s //tensorflow/core/kernels/data:concatenate_dataset_op_test PASSED in 1.6s //tensorflow/core/kernels/data:filter_dataset_op_test PASSED in 0.9s //tensorflow/core/kernels/data:finalize_dataset_op_test PASSED in 1.3s //tensorflow/core/kernels/data:fixed_length_record_dataset_op_test PASSED in 1.6s //tensorflow/core/kernels/data:flat_map_dataset_op_test PASSED in 0.9s //tensorflow/core/kernels/data:get_options_op_test PASSED in 2.0s //tensorflow/core/kernels/data:interleave_dataset_op_test PASSED in 1.5s //tensorflow/core/kernels/data:iterator_ops_test PASSED in 1.4s //tensorflow/core/kernels/data:map_dataset_op_test PASSED in 1.4s //tensorflow/core/kernels/data:map_defun_op_test PASSED in 0.6s //tensorflow/core/kernels/data:optimize_dataset_op_test PASSED in 1.0s //tensorflow/core/kernels/data:options_dataset_op_test PASSED in 1.6s //tensorflow/core/kernels/data:padded_batch_dataset_op_test PASSED in 4.3s //tensorflow/core/kernels/data:parallel_batch_dataset_op_test PASSED in 1.1s //tensorflow/core/kernels/data:parallel_filter_dataset_op_test PASSED in 1.2s //tensorflow/core/kernels/data:parallel_interleave_dataset_op_test PASSED in 2.5s //tensorflow/core/kernels/data:parallel_map_dataset_op_test PASSED in 1.6s //tensorflow/core/kernels/data:prefetch_autotuner_test PASSED in 0.2s //tensorflow/core/kernels/data:prefetch_dataset_op_test PASSED in 1.0s //tensorflow/core/kernels/data:range_dataset_op_test PASSED in 1.4s //tensorflow/core/kernels/data:reduce_dataset_op_test PASSED in 4.2s //tensorflow/core/kernels/data:repeat_dataset_op_test PASSED in 0.9s //tensorflow/core/kernels/data:rewrite_dataset_op_test PASSED in 2.0s //tensorflow/core/kernels/data:shard_dataset_op_test PASSED in 1.4s //tensorflow/core/kernels/data:shuffle_dataset_op_test PASSED in 1.2s //tensorflow/core/kernels/data:skip_dataset_op_test PASSED in 0.8s //tensorflow/core/kernels/data:sparse_tensor_slice_dataset_op_test PASSED in 0.7s //tensorflow/core/kernels/data:take_dataset_op_test PASSED in 0.8s //tensorflow/core/kernels/data:tensor_dataset_op_test PASSED in 0.6s //tensorflow/core/kernels/data:tensor_slice_dataset_op_test PASSED in 0.9s
//tensorflow/core/kernels/data:text_line_dataset_op_test PASSED in 0.9s //tensorflow/core/kernels/data:tf_record_dataset_op_test PASSED in 3.1s //tensorflow/core/kernels/data:window_dataset_op_test PASSED in 1.3s //tensorflow/core/kernels/data:zip_dataset_op_test PASSED in 1.4s //tensorflow/core/kernels/data/experimental:assert_next_dataset_op_test PASSED in 1.0s //tensorflow/core/kernels/data/experimental:assert_prev_dataset_op_test PASSED in 1.2s //tensorflow/core/kernels/data/experimental:auto_shard_dataset_op_test PASSED in 0.7s //tensorflow/core/kernels/data/experimental:directed_interleave_dataset_op_test PASSED in 0.6s //tensorflow/core/kernels/data/experimental:list_dataset_op_test PASSED in 1.6s //tensorflow/core/kernels/data/experimental:map_and_batch_dataset_op_test PASSED in 3.1s //tensorflow/core/kernels/data/experimental:parallel_interleave_dataset_op_test PASSED in 1.4s //tensorflow/core/kernels/data/experimental:random_dataset_op_test PASSED in 0.8s //tensorflow/core/kernels/data/experimental:sampling_dataset_op_test PASSED in 2.8s //tensorflow/core/kernels/data/experimental:save_dataset_op_test PASSED in 1.2s //tensorflow/core/kernels/data/experimental:unique_dataset_op_test PASSED in 0.6s //tensorflow/core/kernels/image:adjust_contrast_op_benchmark_test_cpu PASSED in 0.6s //tensorflow/core/kernels/image:adjust_contrast_op_test PASSED in 1.4s //tensorflow/core/kernels/image:colorspace_op_test PASSED in 0.6s //tensorflow/core/kernels/image:crop_and_resize_op_benchmark_test_cpu PASSED in 1.1s //tensorflow/core/kernels/image:crop_and_resize_op_test PASSED in 0.6s //tensorflow/core/kernels/image:encode_jpeg_op_test PASSED in 0.8s //tensorflow/core/kernels/image:mirror_pad_op_benchmark_test_cpu PASSED in 1.0s //tensorflow/core/kernels/image:mirror_pad_op_test PASSED in 0.6s //tensorflow/core/kernels/image:non_max_suppression_op_benchmark_test PASSED in 0.6s //tensorflow/core/kernels/image:non_max_suppression_op_test PASSED in 1.3s //tensorflow/core/kernels/image:resize_area_op_test PASSED in 1.5s //tensorflow/core/kernels/image:resize_benchmark_test_cpu PASSED in 1.2s //tensorflow/core/kernels/image:resize_bicubic_op_test PASSED in 4.0s //tensorflow/core/kernels/image:resize_ops_test_cpu PASSED in 4.2s //tensorflow/core/kernels/image:sampling_kernels_test PASSED in 0.5s //tensorflow/core/kernels/image:scale_and_translate_op_test PASSED in 1.8s //tensorflow/core/kernels/linalg:banded_triangular_solve_op_test_cpu PASSED in 0.7s //tensorflow/core/kernels/linalg:matrix_triangular_solve_op_test_cpu PASSED in 0.9s //tensorflow/core/kernels/mkl:mkl_conv_ops_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_dequantize_op_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_fused_batch_norm_op_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_fused_ops_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_matmul_op_benchmark PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_qmatmul_op_test PASSED in 0.2s //tensorflow/core/kernels/mkl:mkl_quantize_op_test PASSED in 0.2s //tensorflow/core/kernels/mkl:mkl_quantized_concat_op_test PASSED in 0.5s //tensorflow/core/kernels/mkl:mkl_quantized_conv_ops_perchannel_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_quantized_conv_ops_test PASSED in 0.2s //tensorflow/core/kernels/mkl:mkl_quantized_pooling_ops_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_relu_op_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_requantize_ops_test PASSED in 0.1s //tensorflow/core/kernels/mkl:mkl_swish_op_test PASSED in 0.1s 
//tensorflow/core/kernels/mkl:onednn_nn_ops_benchmark PASSED in 0.2s //tensorflow/core/kernels/sparse:kernels_test PASSED in 0.6s //tensorflow/core/kernels/uniform_quant_ops:math_utils_test PASSED in 0.2s //tensorflow/core/kernels/uniform_quant_ops:tensor_utils_test PASSED in 0.3s //tensorflow/core/kernels/uniform_quant_ops:uniform_dequantize_op_test PASSED in 3.6s //tensorflow/core/kernels/uniform_quant_ops:uniform_quantize_op_test PASSED in 0.7s //tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_add_op_test PASSED in 0.6s //tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_clip_by_value_op_test PASSED in 1.0s //tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_convolution_ops_test PASSED in 0.6s //tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_dot_ops_test PASSED in 0.5s //tensorflow/core/kernels/uniform_quant_ops:uniform_requantize_op_test PASSED in 0.7s //tensorflow/core/lib/db:sqlite_test PASSED in 0.1s //tensorflow/core/lib/gif:lib_gif_io_test PASSED in 1.2s //tensorflow/core/lib/jpeg:lib_jpeg_jpeg_mem_unittest PASSED in 0.7s //tensorflow/core/ops:cudnn_rnn_ops_test_cc PASSED in 1.2s //tensorflow/core/ops:ops_array_grad_test PASSED in 0.9s //tensorflow/core/ops:ops_math_grad_test PASSED in 3.5s //tensorflow/core/ops:ops_tests PASSED in 1.5s //tensorflow/core/ops/compat:backwards_compatibility_test PASSED in 0.7s //tensorflow/core/platform:__tensorflow_tsl_platform_profile_utils_cpu_utils_test PASSED in 0.1s //tensorflow/core/platform:enable_tf2_utils_test PASSED in 0.1s //tensorflow/core/platform:env_test PASSED in 2.5s //tensorflow/core/platform:fake_python_env_test PASSED in 0.3s //tensorflow/core/platform:file_system_test PASSED in 0.2s //tensorflow/core/platform:platform_strings_test PASSED in 0.2s //tensorflow/core/platform:ram_file_system_test PASSED in 39.2s //tensorflow/core/platform:resource_loader_test PASSED in 0.1s //tensorflow/core/platform:vmodule_benchmark_test PASSED in 0.2s //tensorflow/core/platform:vmodule_test PASSED in 0.2s //tensorflow/core/profiler/backends/cpu:host_tracer_test PASSED in 0.4s //tensorflow/core/profiler/convert:hlo_proto_to_graph_view_test PASSED in 0.1s //tensorflow/core/profiler/convert:hlo_proto_to_memory_visualization_utils_test PASSED in 0.2s //tensorflow/core/profiler/convert:op_stats_to_pod_stats_test PASSED in 0.2s //tensorflow/core/profiler/convert:op_stats_to_pod_viewer_test PASSED in 0.2s //tensorflow/core/profiler/convert:op_stats_to_tf_stats_test PASSED in 0.2s //tensorflow/core/profiler/convert:xplane_to_kernel_stats_db_test PASSED in 0.2s //tensorflow/core/profiler/convert:xplane_to_memory_profile_test PASSED in 0.3s //tensorflow/core/profiler/convert:xplane_to_op_metrics_db_test PASSED in 0.2s //tensorflow/core/profiler/convert:xplane_to_op_stats_test PASSED in 0.2s //tensorflow/core/profiler/convert:xplane_to_step_events_test PASSED in 0.2s //tensorflow/core/profiler/convert:xplane_to_tf_functions_test PASSED in 0.4s //tensorflow/core/profiler/convert:xplane_to_tool_names_test PASSED in 0.4s //tensorflow/core/profiler/internal:tfprof_show_test PASSED in 0.7s //tensorflow/core/profiler/internal:tfprof_stats_test PASSED in 0.8s //tensorflow/core/profiler/internal:tfprof_tensor_test PASSED in 0.5s //tensorflow/core/profiler/internal:tfprof_timeline_test PASSED in 0.9s //tensorflow/core/profiler/internal/advisor:tfprof_advisor_test PASSED in 0.5s //tensorflow/core/profiler/lib:profiler_disabled_test PASSED in 0.3s //tensorflow/core/profiler/utils:derived_timeline_test PASSED in 0.3s 
//tensorflow/core/profiler/utils:kernel_stats_utils_test PASSED in 0.1s //tensorflow/core/profiler/utils:op_metrics_db_utils_test PASSED in 0.1s //tensorflow/core/profiler/utils:step_intersection_test PASSED in 0.1s //tensorflow/core/summary:schema_test PASSED in 0.3s //tensorflow/core/summary:summary_db_writer_test PASSED in 0.3s //tensorflow/core/summary:summary_file_writer_test PASSED in 0.4s //tensorflow/core/tfrt/common:pjrt_state_test PASSED in 5.0s //tensorflow/core/tfrt/common:pjrt_util_test PASSED in 7.8s //tensorflow/core/tfrt/fallback:cost_recorder_test PASSED in 0.4s //tensorflow/core/tfrt/fallback:fallback_state_test PASSED in 0.5s //tensorflow/core/transforms:eval_utils_test PASSED in 1.4s //tensorflow/core/transforms:graph_transform_wrapper_test PASSED in 0.2s //tensorflow/core/util:bcast_test PASSED in 0.7s //tensorflow/core/util:command_line_flags_test PASSED in 1.5s //tensorflow/core/util:debug_data_dumper_test PASSED in 1.4s //tensorflow/core/util:debug_events_writer_test PASSED in 0.5s //tensorflow/core/util:dump_graph_test PASSED in 0.7s //tensorflow/core/util:equal_graph_def_test PASSED in 0.7s //tensorflow/core/util:events_writer_test PASSED in 2.8s //tensorflow/core/util:example_proto_fast_parsing_test PASSED in 1.0s //tensorflow/core/util:example_proto_helper_test PASSED in 1.0s //tensorflow/core/util:exec_on_stall_test PASSED in 2.5s //tensorflow/core/util:fake_clock_env_test PASSED in 1.2s //tensorflow/core/util:incremental_barrier_test PASSED in 0.2s //tensorflow/core/util:matmul_bcast_test PASSED in 0.6s //tensorflow/core/util:memmapped_file_system_test PASSED in 1.5s //tensorflow/core/util:overflow_test PASSED in 0.1s //tensorflow/core/util:presized_cuckoo_map_test PASSED in 4.4s //tensorflow/core/util:ragged_to_dense_util_test PASSED in 0.6s //tensorflow/core/util:reffed_status_callback_test PASSED in 1.0s //tensorflow/core/util:reporter_test PASSED in 0.6s //tensorflow/core/util:saved_tensor_slice_util_test PASSED in 1.8s //tensorflow/core/util:semver_test PASSED in 1.1s //tensorflow/core/util:stat_summarizer_test PASSED in 0.7s //tensorflow/core/util:strided_slice_op_test PASSED in 1.0s //tensorflow/core/util:tensor_format_test PASSED in 1.1s //tensorflow/core/util:tensor_slice_reader_test PASSED in 2.0s //tensorflow/core/util:tensor_slice_set_test PASSED in 0.8s //tensorflow/core/util:tensor_slice_util_test PASSED in 1.1s //tensorflow/core/util:tensor_slice_writer_test PASSED in 1.9s //tensorflow/core/util:work_sharder_test PASSED in 1.0s //tensorflow/core/util/ctc:ctc_beam_search_test PASSED in 0.1s //tensorflow/core/util/proto:descriptor_pool_registry_test PASSED in 0.5s //tensorflow/core/util/proto:proto_utils_test PASSED in 0.6s //tensorflow/core/util/quantization:uniform_quant_ops_params_test PASSED in 0.6s //tensorflow/core/util/sparse:sparse_tensor_test PASSED in 0.1s //tensorflow/core/util/tensor_bundle:tensor_bundle_test PASSED in 31.3s //tensorflow/dtensor/mlir:dtensor_location_test PASSED in 0.2s //tensorflow/dtensor/mlir:group_assignment_test PASSED in 0.2s //tensorflow/dtensor/mlir/tests:annotate_global_shape.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:cluster_function_conversion.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:constant_folding.mlir.test PASSED in 0.4s //tensorflow/dtensor/mlir/tests:designate_resource_handle_mesh.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:device_mesh_cluster_coarsening.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:dtensor_all_gather.mlir.test PASSED in 0.8s 
//tensorflow/dtensor/mlir/tests:dtensor_all_scatter.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_allreduce_combine_optimization.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_allreduce_lowering.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_allreduce_scatter_optimization.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_allreduce_sum_optimization.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_layout_must_execute.mlir.test PASSED in 0.8s //tensorflow/dtensor/mlir/tests:dtensor_layout_to_xla_sharding_op.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_mixed_precision_reduce.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:dtensor_reduce_scatter_lowering.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_remove_dtensorlayout.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_replace_auxiliary_layout_op.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_replace_relayout_with_identity.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:dtensor_set_hlo_sharding.mlir.test PASSED in 0.4s //tensorflow/dtensor/mlir/tests:dtensor_set_hlo_sharding_default.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:dtensor_xla_spmd_integration.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:elide_identity_before_copy_to_mesh.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:function_renaming.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:handle_cross_cluster_dependencies.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:handle_sparsetensors.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:layout_propagation_v2.mlir.test PASSED in 0.8s //tensorflow/dtensor/mlir/tests:lower_send_recv.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:merge_clusters.mlir.test PASSED in 0.9s //tensorflow/dtensor/mlir/tests:mesh_propagation.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:op_to_device_cluster.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:propagate_default_layout.mlir.test PASSED in 3.8s //tensorflow/dtensor/mlir/tests:propagate_device_id_to_function.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:restore_and_assign.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:restore_shape_inference.mlir.test PASSED in 1.3s //tensorflow/dtensor/mlir/tests:set_default_sharding.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:sparse_expansion.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:spmd_batchparallel.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:spmd_concat.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:spmd_conv.mlir.test PASSED in 1.2s //tensorflow/dtensor/mlir/tests:spmd_einsum.mlir.test PASSED in 0.9s //tensorflow/dtensor/mlir/tests:spmd_expansion.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:spmd_io_ops.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:spmd_iterator.mlir.test PASSED in 0.9s //tensorflow/dtensor/mlir/tests:spmd_matmul.mlir.test PASSED in 0.7s //tensorflow/dtensor/mlir/tests:spmd_random.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:spmd_save_restore.mlir.test PASSED in 0.8s //tensorflow/dtensor/mlir/tests:spmd_segment_sum.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:spmd_slice.mlir.test PASSED in 1.3s //tensorflow/dtensor/mlir/tests:spmd_softmax_loss.mlir.test PASSED in 0.8s //tensorflow/dtensor/mlir/tests:spmd_squeeze.mlir.test PASSED in 0.6s 
//tensorflow/dtensor/mlir/tests:spmd_var_handle.mlir.test PASSED in 0.4s //tensorflow/dtensor/mlir/tests:tf_dtensor_ops.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:tpu_add_resource_device_attribute.mlir.test PASSED in 0.8s //tensorflow/dtensor/mlir/tests:tpu_integration.mlir.test PASSED in 0.6s //tensorflow/dtensor/mlir/tests:undo_merge_const_across_mesh.mlir.test PASSED in 0.5s //tensorflow/dtensor/mlir/tests:update_tpu_metadata.mlir.test PASSED in 0.6s //tensorflow/dtensor/python/tests:collective_combine_all_reduce_test_cpu PASSED in 14.2s //tensorflow/dtensor/python/tests:collective_test_cpu PASSED in 16.9s //tensorflow/dtensor/python/tests:config_test_cpu PASSED in 7.0s //tensorflow/dtensor/python/tests:device_test_cpu PASSED in 32.4s //tensorflow/dtensor/python/tests:layout_test_cpu PASSED in 8.2s //tensorflow/dtensor/python/tests:multi_client_test_cpu PASSED in 13.5s //tensorflow/dtensor/python/tests:numpy_util_test_cpu PASSED in 10.4s //tensorflow/dtensor/tests:executable_manager_test PASSED in 29.6s //tensorflow/dtensor/tests:layout_to_xla_sharding_test PASSED in 0.4s //tensorflow/dtensor/tests:tensor_layout_test PASSED in 0.2s //tensorflow/examples/adding_an_op:fact_test PASSED in 14.6s //tensorflow/examples/adding_an_op:zero_out_1_test PASSED in 14.2s //tensorflow/examples/adding_an_op:zero_out_2_test PASSED in 15.0s //tensorflow/examples/adding_an_op:zero_out_3_test PASSED in 20.3s //tensorflow/examples/custom_ops_doc/multiplex_1:multiplex_1_test PASSED in 14.1s //tensorflow/examples/custom_ops_doc/multiplex_2:multiplex_2_test_cpu PASSED in 16.5s //tensorflow/examples/custom_ops_doc/multiplex_3:multiplex_3_test PASSED in 23.8s //tensorflow/examples/custom_ops_doc/multiplex_4:multiplex_4_test PASSED in 20.6s //tensorflow/examples/custom_ops_doc/simple_hash_table:simple_hash_table_test PASSED in 15.6s //tensorflow/examples/custom_ops_doc/sleep:sleep_test PASSED in 15.2s //tensorflow/examples/speech_commands:accuracy_utils_test PASSED in 2.2s //tensorflow/examples/speech_commands:models_test PASSED in 22.5s //tensorflow/examples/speech_commands:recognize_commands_test PASSED in 1.9s //tensorflow/examples/wav_to_spectrogram:wav_to_spectrogram_test PASSED in 2.1s //tensorflow/js:ts_op_gen_test PASSED in 0.2s //tensorflow/python:array_grad_test_cpu PASSED in 9.9s //tensorflow/python:autograph_ops_test PASSED in 7.4s //tensorflow/python:batch_norm_benchmark_cpu PASSED in 22.6s //tensorflow/python:bincount_ops_test PASSED in 9.6s //tensorflow/python:bitwise_ops_test_cpu PASSED in 8.1s //tensorflow/python:clip_ops_test PASSED in 7.2s //tensorflow/python:clustering_ops_test PASSED in 23.9s //tensorflow/python:collective_ops_benchmark_cpu PASSED in 36.3s //tensorflow/python:collective_ops_gpu_test_2gpu PASSED in 9.1s //tensorflow/python:collective_ops_gpu_test_cpu PASSED in 7.4s //tensorflow/python:collective_ops_test PASSED in 19.4s //tensorflow/python:collective_ops_xla_test PASSED in 7.8s //tensorflow/python:compiled_collective_ops_gpu_test_2gpu PASSED in 7.9s //tensorflow/python:compiled_collective_ops_gpu_test_cpu PASSED in 7.3s //tensorflow/python:concat_benchmark_cpu PASSED in 7.7s //tensorflow/python:control_flow_ops_benchmark_cpu PASSED in 7.0s //tensorflow/python:control_flow_v2_enable_test PASSED in 6.4s //tensorflow/python:control_flow_v2_toggles_test PASSED in 7.6s //tensorflow/python:dequantize_op_test PASSED in 12.3s //tensorflow/python:embedding_ops_test_cpu PASSED in 8.0s //tensorflow/python:factory_ops_test_cpu PASSED in 7.3s 
//tensorflow/python:functional_ops_test PASSED in 7.4s //tensorflow/python:gradient_checker_v2_test_cpu PASSED in 25.4s //tensorflow/python:gradients_test_cpu PASSED in 14.4s //tensorflow/python:init_ops_test_cpu PASSED in 8.3s //tensorflow/python:init_ops_v2_test_cpu PASSED in 10.0s //tensorflow/python:math_grad_test_cpu PASSED in 18.2s //tensorflow/python:math_ops_linspace_test_cpu PASSED in 7.5s //tensorflow/python:math_ops_test_cpu PASSED in 31.9s //tensorflow/python:matmul_benchmark_cpu PASSED in 6.9s //tensorflow/python:nn_grad_test_cpu PASSED in 9.1s //tensorflow/python:nn_loss_scaling_utilities_test PASSED in 9.8s //tensorflow/python:nn_test_cpu PASSED in 84.5s //tensorflow/python:nn_xent_test_cpu PASSED in 7.7s //tensorflow/python:op_selector_test PASSED in 6.6s //tensorflow/python:ops/array_ops_test PASSED in 7.6s //tensorflow/python:quantized_conv_ops_test PASSED in 7.0s //tensorflow/python:quantized_ops_test PASSED in 8.8s //tensorflow/python:raw_ops_test_cpu PASSED in 10.7s //tensorflow/python:rnn_grad_test_cpu PASSED in 7.7s //tensorflow/python:script_ops_test PASSED in 7.4s //tensorflow/python:sort_ops_test PASSED in 9.3s //tensorflow/python:sparse_ops_test PASSED in 30.7s //tensorflow/python:split_benchmark_cpu PASSED in 19.5s //tensorflow/python:tensor_array_ops_test PASSED in 8.1s //tensorflow/python:transpose_benchmark_cpu PASSED in 7.0s //tensorflow/python:variable_spec_test PASSED in 8.4s //tensorflow/python/autograph/converters:asserts_test PASSED in 6.9s //tensorflow/python/autograph/converters:break_statements_test PASSED in 6.7s //tensorflow/python/autograph/converters:call_trees_test PASSED in 7.2s //tensorflow/python/autograph/converters:conditional_expressions_test PASSED in 6.6s //tensorflow/python/autograph/converters:continue_statements_test PASSED in 7.9s //tensorflow/python/autograph/converters:control_flow_test PASSED in 13.6s //tensorflow/python/autograph/converters:directives_test PASSED in 7.0s //tensorflow/python/autograph/converters:functions_test PASSED in 6.6s //tensorflow/python/autograph/converters:list_comprehensions_test PASSED in 6.8s //tensorflow/python/autograph/converters:lists_test PASSED in 7.6s //tensorflow/python/autograph/converters:logical_expressions_test PASSED in 7.0s //tensorflow/python/autograph/converters:return_statements_test PASSED in 8.2s //tensorflow/python/autograph/converters:slices_test PASSED in 6.8s //tensorflow/python/autograph/converters:variables_test PASSED in 7.7s //tensorflow/python/autograph/core:converter_test PASSED in 6.7s //tensorflow/python/autograph/core:function_wrappers_test PASSED in 7.3s //tensorflow/python/autograph/impl:api_test PASSED in 13.8s //tensorflow/python/autograph/impl:conversion_test PASSED in 6.5s //tensorflow/python/autograph/lang:special_functions_test PASSED in 7.9s //tensorflow/python/autograph/operators:conditional_expressions_test PASSED in 7.0s //tensorflow/python/autograph/operators:control_flow_test PASSED in 16.7s //tensorflow/python/autograph/operators:data_structures_test PASSED in 7.3s //tensorflow/python/autograph/operators:exceptions_test PASSED in 7.8s //tensorflow/python/autograph/operators:logical_test PASSED in 6.9s //tensorflow/python/autograph/operators:py_builtins_test PASSED in 13.9s //tensorflow/python/autograph/operators:slices_test PASSED in 11.2s //tensorflow/python/autograph/operators:variables_test PASSED in 7.1s //tensorflow/python/autograph/pyct:anno_test PASSED in 6.5s //tensorflow/python/autograph/pyct:ast_util_test PASSED in 6.6s 
//tensorflow/python/autograph/pyct:cache_test PASSED in 5.8s //tensorflow/python/autograph/pyct:cfg_test PASSED in 7.7s //tensorflow/python/autograph/pyct:error_utils_test PASSED in 7.1s //tensorflow/python/autograph/pyct:inspect_utils_test PASSED in 7.0s //tensorflow/python/autograph/pyct:loader_test PASSED in 7.3s //tensorflow/python/autograph/pyct:naming_test PASSED in 38.9s //tensorflow/python/autograph/pyct:origin_info_test PASSED in 9.7s //tensorflow/python/autograph/pyct:parser_test PASSED in 6.7s //tensorflow/python/autograph/pyct:pretty_printer_test PASSED in 8.3s //tensorflow/python/autograph/pyct:qual_names_test PASSED in 8.2s //tensorflow/python/autograph/pyct:templates_test PASSED in 6.9s //tensorflow/python/autograph/pyct:transformer_test PASSED in 7.4s //tensorflow/python/autograph/pyct:transpiler_test PASSED in 7.6s //tensorflow/python/autograph/pyct/static_analysis:activity_test PASSED in 9.9s //tensorflow/python/autograph/pyct/static_analysis:liveness_test PASSED in 6.8s //tensorflow/python/autograph/pyct/static_analysis:reaching_definitions_test PASSED in 6.7s //tensorflow/python/autograph/pyct/static_analysis:reaching_fndefs_test PASSED in 6.7s //tensorflow/python/autograph/pyct/static_analysis:type_inference_test PASSED in 7.2s //tensorflow/python/autograph/tests:assertion_test PASSED in 15.8s //tensorflow/python/autograph/tests:basic_ifexp_test PASSED in 31.4s //tensorflow/python/autograph/tests:call_to_builtin_function_test PASSED in 23.6s //tensorflow/python/autograph/tests:call_to_lambda_function_test PASSED in 15.8s //tensorflow/python/autograph/tests:call_to_named_tuple_test PASSED in 14.3s //tensorflow/python/autograph/tests:call_to_numpy_function_test PASSED in 15.8s //tensorflow/python/autograph/tests:call_to_print_function_test PASSED in 16.1s //tensorflow/python/autograph/tests:call_to_tf_api_test PASSED in 13.9s //tensorflow/python/autograph/tests:call_to_user_function_test PASSED in 14.9s //tensorflow/python/autograph/tests:composite_names_in_control_flow_test PASSED in 22.8s //tensorflow/python/autograph/tests:cond_basic_test PASSED in 22.4s //tensorflow/python/autograph/tests:datasets_test PASSED in 29.5s //tensorflow/python/autograph/tests:early_return_test PASSED in 19.7s //tensorflow/python/autograph/tests:ext_slice_test PASSED in 14.3s //tensorflow/python/autograph/tests:generator_test PASSED in 15.4s //tensorflow/python/autograph/tests:logical_expression_test PASSED in 16.8s //tensorflow/python/autograph/tests:loop_basic_test PASSED in 83.0s //tensorflow/python/autograph/tests:loop_control_flow_illegal_cases_test PASSED in 31.1s //tensorflow/python/autograph/tests:loop_created_variables_test PASSED in 21.5s //tensorflow/python/autograph/tests:loop_scoping_test PASSED in 19.9s //tensorflow/python/autograph/tests:loop_with_function_call_test PASSED in 41.2s //tensorflow/python/autograph/tests:loop_with_variable_type_illegal_cases_test PASSED in 20.2s //tensorflow/python/autograph/tests:loop_with_variable_type_test PASSED in 44.0s //tensorflow/python/autograph/tests:nested_control_flow_test PASSED in 58.0s //tensorflow/python/autograph/tests:type_annotations_test PASSED in 14.0s //tensorflow/python/autograph/utils:context_managers_test PASSED in 6.8s //tensorflow/python/autograph/utils:misc_test PASSED in 7.7s //tensorflow/python/autograph/utils:tensor_list_test PASSED in 8.2s //tensorflow/python/autograph/utils:tensors_test PASSED in 8.8s //tensorflow/python/checkpoint:benchmarks_test PASSED in 28.6s 
//tensorflow/python/checkpoint:checkpoint_management_test_cpu PASSED in 13.7s //tensorflow/python/checkpoint:checkpoint_metrics_test PASSED in 14.5s //tensorflow/python/checkpoint:checkpoint_test PASSED in 25.7s //tensorflow/python/checkpoint:checkpoint_view_test PASSED in 7.9s //tensorflow/python/checkpoint:checkpoint_with_v1_optimizers_test PASSED in 11.2s //tensorflow/python/checkpoint:functional_saver_test_cpu PASSED in 9.9s //tensorflow/python/checkpoint:restore_test PASSED in 7.6s //tensorflow/python/checkpoint:save_util_v1_test PASSED in 23.0s //tensorflow/python/checkpoint:saveable_compat_test PASSED in 23.2s //tensorflow/python/checkpoint:tensor_callable_test PASSED in 8.3s //tensorflow/python/checkpoint:trackable_view_test PASSED in 6.5s //tensorflow/python/client:device_lib_test_cpu PASSED in 6.9s //tensorflow/python/client:events_writer_test PASSED in 7.3s //tensorflow/python/client:session_benchmark_cpu PASSED in 9.0s //tensorflow/python/client:session_list_devices_test PASSED in 7.4s //tensorflow/python/client:session_partial_run_test PASSED in 11.9s //tensorflow/python/client:timeline_test_cpu PASSED in 8.0s //tensorflow/python/client:virtual_gpu_test_cpu PASSED in 8.5s //tensorflow/python/compat:compat_test PASSED in 6.4s //tensorflow/python/compat:disable_v2_behavior_test PASSED in 7.3s //tensorflow/python/compiler/mlir:mlir_test PASSED in 7.6s //tensorflow/python/compiler/tensorrt:trt_convert_test_cpu PASSED in 13.9s //tensorflow/python/compiler/tensorrt/test:batch_matmul_test_cpu PASSED in 9.4s //tensorflow/python/compiler/tensorrt/test:biasadd_matmul_test_cpu PASSED in 7.4s //tensorflow/python/compiler/tensorrt/test:binary_tensor_weight_broadcast_test_cpu PASSED in 9.9s //tensorflow/python/compiler/tensorrt/test:bool_test_cpu PASSED in 8.0s //tensorflow/python/compiler/tensorrt/test:cast_test_cpu PASSED in 7.7s //tensorflow/python/compiler/tensorrt/test:concatenation_test_cpu PASSED in 7.6s //tensorflow/python/compiler/tensorrt/test:const_broadcast_test_cpu PASSED in 8.1s //tensorflow/python/compiler/tensorrt/test:data_dependent_shape_test_cpu PASSED in 7.8s //tensorflow/python/compiler/tensorrt/test:dynamic_input_shapes_test_cpu PASSED in 27.7s //tensorflow/python/compiler/tensorrt/test:identity_output_test_cpu PASSED in 7.2s //tensorflow/python/compiler/tensorrt/test:int32_test_cpu PASSED in 7.8s //tensorflow/python/compiler/tensorrt/test:lru_cache_test_cpu PASSED in 7.8s //tensorflow/python/compiler/tensorrt/test:memory_alignment_test_cpu PASSED in 16.5s //tensorflow/python/compiler/tensorrt/test:multi_connection_neighbor_engine_test_cpu PASSED in 8.9s //tensorflow/python/compiler/tensorrt/test:neighboring_engine_test_cpu PASSED in 6.9s //tensorflow/python/compiler/tensorrt/test:quantization_test_cpu PASSED in 16.3s //tensorflow/python/compiler/tensorrt/test:rank_two_test_cpu PASSED in 6.9s //tensorflow/python/compiler/tensorrt/test:reshape_transpose_test_cpu PASSED in 7.3s //tensorflow/python/compiler/tensorrt/test:topk_test_cpu PASSED in 8.6s //tensorflow/python/compiler/tensorrt/test:trt_engine_op_shape_test_cpu PASSED in 8.2s //tensorflow/python/compiler/tensorrt/test:trt_mode_test_cpu PASSED in 15.3s //tensorflow/python/compiler/tensorrt/test:unary_test_cpu PASSED in 15.6s //tensorflow/python/compiler/tensorrt/test:vgg_block_nchw_test_cpu PASSED in 9.4s //tensorflow/python/compiler/tensorrt/test:vgg_block_test_cpu PASSED in 8.7s //tensorflow/python/compiler/xla:jit_compile_test_cpu PASSED in 16.9s //tensorflow/python/compiler/xla:jit_test_cpu PASSED in 11.4s 
//tensorflow/python/compiler/xla:xla_test_cpu PASSED in 16.2s //tensorflow/python/compiler/xla/experimental:xla_sharding_test PASSED in 14.9s //tensorflow/python/data/benchmarks:batch_benchmark PASSED in 7.6s //tensorflow/python/data/benchmarks:filter_benchmark PASSED in 6.8s //tensorflow/python/data/benchmarks:from_tensor_slices_benchmark PASSED in 7.2s //tensorflow/python/data/benchmarks:interleave_benchmark PASSED in 8.3s //tensorflow/python/data/benchmarks:list_files_benchmark PASSED in 8.0s //tensorflow/python/data/benchmarks:map_benchmark PASSED in 7.8s //tensorflow/python/data/benchmarks:meta_benchmark PASSED in 6.9s //tensorflow/python/data/benchmarks:prefetch_benchmark PASSED in 7.1s //tensorflow/python/data/benchmarks:range_benchmark PASSED in 8.2s //tensorflow/python/data/experimental/benchmarks:autotune_benchmark PASSED in 6.9s //tensorflow/python/data/experimental/benchmarks:csv_dataset_benchmark PASSED in 7.3s //tensorflow/python/data/experimental/benchmarks:map_and_batch_benchmark PASSED in 9.0s //tensorflow/python/data/experimental/benchmarks:map_defun_benchmark PASSED in 7.7s //tensorflow/python/data/experimental/benchmarks:matching_files_benchmark PASSED in 8.4s //tensorflow/python/data/experimental/benchmarks:optimize_benchmark PASSED in 9.0s //tensorflow/python/data/experimental/benchmarks:parameter_value_benchmark PASSED in 7.5s //tensorflow/python/data/experimental/benchmarks:rejection_resample_benchmark PASSED in 9.0s //tensorflow/python/data/experimental/benchmarks:snapshot_dataset_benchmark PASSED in 7.8s //tensorflow/python/data/experimental/benchmarks:unbatch_benchmark PASSED in 6.6s //tensorflow/python/data/experimental/kernel_tests:assert_cardinality_test PASSED in 30.9s //tensorflow/python/data/experimental/kernel_tests:assert_next_test PASSED in 9.9s //tensorflow/python/data/experimental/kernel_tests:assert_prev_test PASSED in 10.3s //tensorflow/python/data/experimental/kernel_tests:checkpoint_input_pipeline_hook_test PASSED in 15.4s //tensorflow/python/data/experimental/kernel_tests:compression_ops_test PASSED in 12.6s //tensorflow/python/data/experimental/kernel_tests:copy_to_device_test_cpu PASSED in 15.7s //tensorflow/python/data/experimental/kernel_tests:dense_to_sparse_batch_test PASSED in 29.1s //tensorflow/python/data/experimental/kernel_tests:from_list_test PASSED in 37.4s //tensorflow/python/data/experimental/kernel_tests:io_test PASSED in 36.3s //tensorflow/python/data/experimental/kernel_tests:lookup_ops_test PASSED in 9.8s //tensorflow/python/data/experimental/kernel_tests:make_csv_dataset_test PASSED in 22.7s //tensorflow/python/data/experimental/kernel_tests:make_saveable_from_iterator_test PASSED in 8.8s //tensorflow/python/data/experimental/kernel_tests:make_tf_record_dataset_test PASSED in 55.2s //tensorflow/python/data/experimental/kernel_tests:map_defun_op_test PASSED in 7.1s //tensorflow/python/data/experimental/kernel_tests:matching_files_dataset_test PASSED in 18.0s //tensorflow/python/data/experimental/kernel_tests:model_dataset_test PASSED in 8.4s //tensorflow/python/data/experimental/kernel_tests:non_serializable_test PASSED in 8.3s //tensorflow/python/data/experimental/kernel_tests:prefetch_to_device_test_cpu PASSED in 16.5s //tensorflow/python/data/experimental/kernel_tests:prefetch_with_slack_test PASSED in 14.6s //tensorflow/python/data/experimental/kernel_tests:shuffle_and_repeat_test PASSED in 32.2s //tensorflow/python/data/experimental/kernel_tests:sleep_test PASSED in 15.2s 
//tensorflow/python/data/experimental/kernel_tests:tf_record_writer_test PASSED in 12.2s //tensorflow/python/data/experimental/kernel_tests:variant_test PASSED in 8.5s //tensorflow/python/data/experimental/kernel_tests:wrap_unwrap_test_cpu PASSED in 7.8s //tensorflow/python/data/experimental/kernel_tests/optimization:filter_fusion_test PASSED in 37.5s //tensorflow/python/data/experimental/kernel_tests/optimization:filter_parallelization_test PASSED in 54.8s //tensorflow/python/data/experimental/kernel_tests/optimization:grappler_test_cpu PASSED in 8.2s //tensorflow/python/data/experimental/kernel_tests/optimization:make_deterministic_test PASSED in 50.3s //tensorflow/python/data/experimental/kernel_tests/optimization:map_and_batch_fusion_test PASSED in 8.8s //tensorflow/python/data/experimental/kernel_tests/optimization:map_and_filter_fusion_test PASSED in 18.3s //tensorflow/python/data/experimental/kernel_tests/optimization:map_fusion_test PASSED in 34.3s //tensorflow/python/data/experimental/kernel_tests/optimization:map_parallelization_test PASSED in 10.8s //tensorflow/python/data/experimental/kernel_tests/optimization:noop_elimination_test PASSED in 11.4s //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_test PASSED in 20.3s //tensorflow/python/data/experimental/kernel_tests/service:multi_device_test PASSED in 23.1s //tensorflow/python/data/experimental/service:server_lib_test PASSED in 11.0s //tensorflow/python/data/kernel_tests:as_numpy_iterator_test PASSED in 9.1s //tensorflow/python/data/kernel_tests:bucket_by_sequence_length_test PASSED in 17.0s //tensorflow/python/data/kernel_tests:cache_test PASSED in 55.5s //tensorflow/python/data/kernel_tests:cardinality_test PASSED in 11.1s //tensorflow/python/data/kernel_tests:checkpoint_test PASSED in 17.4s //tensorflow/python/data/kernel_tests:concatenate_test PASSED in 42.1s //tensorflow/python/data/kernel_tests:counter_test PASSED in 28.8s //tensorflow/python/data/kernel_tests:dataset_spec_test PASSED in 6.6s //tensorflow/python/data/kernel_tests:dataset_test PASSED in 25.3s //tensorflow/python/data/kernel_tests:enumerate_test PASSED in 27.2s //tensorflow/python/data/kernel_tests:from_sparse_tensor_slices_test PASSED in 7.5s //tensorflow/python/data/kernel_tests:from_tensor_slices_test PASSED in 26.0s //tensorflow/python/data/kernel_tests:from_tensors_test PASSED in 18.9s //tensorflow/python/data/kernel_tests:get_single_element_test PASSED in 16.9s //tensorflow/python/data/kernel_tests:ignore_errors_test PASSED in 17.0s //tensorflow/python/data/kernel_tests:io_test PASSED in 60.8s //tensorflow/python/data/kernel_tests:iterator_test_cpu PASSED in 21.8s //tensorflow/python/data/kernel_tests:len_test PASSED in 8.1s //tensorflow/python/data/kernel_tests:list_files_test PASSED in 15.6s //tensorflow/python/data/kernel_tests:optional_test_cpu PASSED in 10.8s //tensorflow/python/data/kernel_tests:options_test PASSED in 10.9s //tensorflow/python/data/kernel_tests:placement_test_cpu PASSED in 8.6s //tensorflow/python/data/kernel_tests:prefetch_test PASSED in 51.5s //tensorflow/python/data/kernel_tests:random_test PASSED in 25.6s //tensorflow/python/data/kernel_tests:range_test PASSED in 52.4s //tensorflow/python/data/kernel_tests:rebatch_test PASSED in 7.4s //tensorflow/python/data/kernel_tests:reduce_test_cpu PASSED in 25.1s //tensorflow/python/data/kernel_tests:scan_test_cpu PASSED in 52.0s //tensorflow/python/data/kernel_tests:sparse_batch_test PASSED in 21.0s //tensorflow/python/data/kernel_tests:unbatch_test PASSED 
in 30.6s //tensorflow/python/data/util:convert_test PASSED in 7.6s //tensorflow/python/data/util:nest_test PASSED in 8.4s //tensorflow/python/data/util:options_test PASSED in 7.5s //tensorflow/python/data/util:random_seed_test PASSED in 8.0s //tensorflow/python/data/util:sparse_test PASSED in 8.4s //tensorflow/python/data/util:structure_test PASSED in 8.9s //tensorflow/python/data/util:traverse_test PASSED in 7.0s //tensorflow/python/debug/cli:analyzer_cli_test_cpu PASSED in 8.4s //tensorflow/python/debug/cli:cli_config_test PASSED in 7.4s //tensorflow/python/debug/cli:cli_shared_test PASSED in 15.5s //tensorflow/python/debug/cli:command_parser_test PASSED in 7.2s //tensorflow/python/debug/cli:curses_ui_test PASSED in 7.5s //tensorflow/python/debug/cli:debugger_cli_common_test PASSED in 7.3s //tensorflow/python/debug/cli:evaluator_test PASSED in 7.8s //tensorflow/python/debug/cli:profile_analyzer_cli_test PASSED in 16.0s //tensorflow/python/debug/cli:readline_ui_test PASSED in 6.6s //tensorflow/python/debug/cli:tensor_format_test PASSED in 10.4s //tensorflow/python/debug/lib:check_numerics_callback_test_cpu PASSED in 10.5s //tensorflow/python/debug/lib:common_test PASSED in 8.4s //tensorflow/python/debug/lib:debug_data_test PASSED in 6.2s //tensorflow/python/debug/lib:debug_events_monitors_test PASSED in 9.8s //tensorflow/python/debug/lib:debug_events_writer_test PASSED in 19.2s //tensorflow/python/debug/lib:debug_gradients_test_cpu PASSED in 7.3s //tensorflow/python/debug/lib:debug_graph_reconstruction_test_cpu PASSED in 7.7s //tensorflow/python/debug/lib:debug_graphs_test PASSED in 8.4s //tensorflow/python/debug/lib:debug_grappler_test_cpu PASSED in 15.7s //tensorflow/python/debug/lib:debug_utils_test PASSED in 7.4s //tensorflow/python/debug/lib:debug_v2_ops_test_cpu PASSED in 16.7s //tensorflow/python/debug/lib:profiling_test PASSED in 7.5s //tensorflow/python/debug/lib:session_debug_file_test_cpu PASSED in 13.8s //tensorflow/python/debug/lib:session_debug_multi_gpu_test_cpu PASSED in 7.7s //tensorflow/python/debug/lib:source_utils_test PASSED in 12.7s //tensorflow/python/debug/wrappers:disk_usage_test PASSED in 7.3s //tensorflow/python/debug/wrappers:dumping_wrapper_test PASSED in 6.7s //tensorflow/python/debug/wrappers:framework_test PASSED in 6.4s //tensorflow/python/debug/wrappers:local_cli_wrapper_test PASSED in 7.0s //tensorflow/python/distribute:checkpoint_utils_test_2gpu PASSED in 9.3s //tensorflow/python/distribute:checkpoint_utils_test_cpu PASSED in 12.9s //tensorflow/python/distribute:checkpointing_test_2gpu PASSED in 8.4s //tensorflow/python/distribute:checkpointing_test_cpu PASSED in 9.1s //tensorflow/python/distribute:collective_all_reduce_strategy_test_2gpu PASSED in 57.9s //tensorflow/python/distribute:collective_all_reduce_strategy_test_cpu PASSED in 51.9s //tensorflow/python/distribute:collective_all_reduce_strategy_test_xla_2gpu PASSED in 22.1s //tensorflow/python/distribute:collective_util_test PASSED in 7.1s //tensorflow/python/distribute:combinations_test_2gpu PASSED in 21.0s //tensorflow/python/distribute:combinations_test_cpu PASSED in 19.4s //tensorflow/python/distribute:cross_device_utils_test_cpu PASSED in 7.8s //tensorflow/python/distribute:custom_training_loop_gradient_test_2gpu PASSED in 13.7s //tensorflow/python/distribute:custom_training_loop_gradient_test_cpu PASSED in 11.9s //tensorflow/python/distribute:device_util_test_cpu PASSED in 9.8s //tensorflow/python/distribute:distribute_coordinator_test PASSED in 17.6s 
//tensorflow/python/distribute:distribute_lib_test PASSED in 11.2s //tensorflow/python/distribute:distribute_utils_test_2gpu PASSED in 16.6s //tensorflow/python/distribute:distribute_utils_test_cpu PASSED in 8.5s //tensorflow/python/distribute:input_ops_test_cpu PASSED in 16.9s //tensorflow/python/distribute:metrics_v1_test_2gpu PASSED in 27.5s //tensorflow/python/distribute:metrics_v1_test_cpu PASSED in 29.0s //tensorflow/python/distribute:mirrored_values_test_2gpu PASSED in 9.9s //tensorflow/python/distribute:mirrored_values_test_cpu PASSED in 9.2s //tensorflow/python/distribute:mirrored_variable_test_2gpu PASSED in 20.1s //tensorflow/python/distribute:mirrored_variable_test_cpu PASSED in 23.9s //tensorflow/python/distribute:multi_process_runner_no_init_test PASSED in 9.4s //tensorflow/python/distribute:multi_worker_util_test PASSED in 6.7s //tensorflow/python/distribute:numpy_dataset_test PASSED in 6.7s //tensorflow/python/distribute:one_device_strategy_test_cpu PASSED in 17.8s //tensorflow/python/distribute:packed_distributed_variable_test PASSED in 7.1s //tensorflow/python/distribute:parameter_server_strategy_test_2gpu PASSED in 39.5s //tensorflow/python/distribute:parameter_server_strategy_test_cpu PASSED in 27.6s //tensorflow/python/distribute:parameter_server_strategy_v2_test_2gpu PASSED in 29.8s //tensorflow/python/distribute:parameter_server_strategy_v2_test_cpu PASSED in 26.8s //tensorflow/python/distribute:per_replica_test_2gpu PASSED in 9.0s //tensorflow/python/distribute:per_replica_test_cpu PASSED in 8.3s //tensorflow/python/distribute:ps_values_test_2gpu PASSED in 8.5s //tensorflow/python/distribute:ps_values_test_cpu PASSED in 8.4s //tensorflow/python/distribute:remote_mirrored_strategy_eager_test_cpu PASSED in 8.5s //tensorflow/python/distribute:sharded_variable_test PASSED in 18.9s //tensorflow/python/distribute:shared_variable_creator_test PASSED in 10.1s //tensorflow/python/distribute:strategy_combinations_test_cpu PASSED in 44.3s //tensorflow/python/distribute:template_mirrored_strategy_test_cpu PASSED in 7.4s //tensorflow/python/distribute:test_util_test_2gpu PASSED in 17.2s //tensorflow/python/distribute:test_util_test_cpu PASSED in 15.2s //tensorflow/python/distribute:tf_function_test_2gpu PASSED in 9.8s //tensorflow/python/distribute:tf_function_test_cpu PASSED in 8.5s //tensorflow/python/distribute:values_v2_test_cpu PASSED in 14.3s //tensorflow/python/distribute:warm_starting_util_test_2gpu PASSED in 10.3s //tensorflow/python/distribute:warm_starting_util_test_cpu PASSED in 18.0s //tensorflow/python/distribute/cluster_resolver:base_cluster_resolver_py_test PASSED in 8.1s //tensorflow/python/distribute/cluster_resolver:gce_cluster_resolver_py_test PASSED in 8.2s //tensorflow/python/distribute/cluster_resolver:kubernetes_cluster_resolver_py_test PASSED in 7.3s //tensorflow/python/distribute/cluster_resolver:sagemaker_cluster_resolver_py_test PASSED in 8.3s //tensorflow/python/distribute/cluster_resolver:slurm_cluster_resolver_py_test PASSED in 7.2s //tensorflow/python/distribute/cluster_resolver:tfconfig_cluster_resolver_py_test PASSED in 8.3s //tensorflow/python/distribute/cluster_resolver/tpu:tpu_cluster_resolver_py_test PASSED in 7.6s //tensorflow/python/distribute/coordinator:metric_utils_test PASSED in 13.5s //tensorflow/python/distribute/coordinator:watchdog_test PASSED in 61.7s //tensorflow/python/distribute/experimental:dtensor_util_test_cpu PASSED in 23.0s //tensorflow/python/distribute/experimental:mirrored_strategy_test_cpu PASSED in 30.4s 
//tensorflow/python/distribute/integration_test:saved_model_test_cpu PASSED in 35.2s //tensorflow/python/distribute/parallel_device:parallel_device_test_cpu PASSED in 26.5s //tensorflow/python/distribute/v1:all_reduce_test PASSED in 56.1s //tensorflow/python/distribute/v1:cross_device_ops_test_2gpu PASSED in 57.6s //tensorflow/python/distribute/v1:cross_device_ops_test_cpu PASSED in 53.4s //tensorflow/python/dlpack:dlpack_test_cpu PASSED in 8.8s //tensorflow/python/eager:backprop_test_cpu PASSED in 110.2s //tensorflow/python/eager:benchmarks_test_cpu PASSED in 8.1s //tensorflow/python/eager:cancellation_test_cpu PASSED in 7.1s //tensorflow/python/eager:context_test_cpu PASSED in 8.6s //tensorflow/python/eager:core_test_cpu PASSED in 18.4s //tensorflow/python/eager:gradient_input_output_exclusions_test PASSED in 29.5s //tensorflow/python/eager:graph_only_ops_test_cpu PASSED in 7.0s //tensorflow/python/eager:lift_to_graph_test PASSED in 23.4s //tensorflow/python/eager:monitoring_test_cpu PASSED in 9.9s //tensorflow/python/eager:ops_test_cpu PASSED in 10.7s //tensorflow/python/eager:profiler_client_test PASSED in 6.3s //tensorflow/python/eager:profiler_test_cpu PASSED in 6.7s //tensorflow/python/eager:pywrap_tfe_test PASSED in 20.0s //tensorflow/python/eager:remote_benchmarks_test_cpu PASSED in 8.0s //tensorflow/python/eager:run_eager_op_as_function_test_cpu PASSED in 9.6s //tensorflow/python/eager:run_eager_op_as_function_xla_test_cpu PASSED in 8.2s //tensorflow/python/eager:tape_test PASSED in 8.2s //tensorflow/python/eager:tensor_test_cpu PASSED in 12.7s //tensorflow/python/eager:wrap_function_device_test_cpu PASSED in 9.1s //tensorflow/python/eager:wrap_function_test PASSED in 15.0s //tensorflow/python/eager/benchmarks:kpi_benchmark_test_cpu PASSED in 13.4s //tensorflow/python/eager/memory_tests:remote_memory_test_cpu PASSED in 8.4s //tensorflow/python/eager/polymorphic_function:argument_naming_test_cpu PASSED in 9.3s //tensorflow/python/eager/polymorphic_function:collection_test_cpu PASSED in 7.2s //tensorflow/python/eager/polymorphic_function:compiler_ir_test_cpu PASSED in 23.6s //tensorflow/python/eager/polymorphic_function:compiler_ir_test_cpu_mlir_bridge_test PASSED in 7.7s //tensorflow/python/eager/polymorphic_function:function_spec_test PASSED in 7.1s //tensorflow/python/eager/polymorphic_function:polymorphic_function_xla_jit_test_cpu PASSED in 24.5s //tensorflow/python/eager/polymorphic_function:polymorphic_function_xla_jit_test_cpu_mlir_bridge_test PASSED in 22.9s //tensorflow/python/eager/polymorphic_function:polymorphic_function_xla_test_cpu PASSED in 8.0s //tensorflow/python/eager/polymorphic_function:quarantine_test PASSED in 20.8s //tensorflow/python/feature_column:sequence_feature_column_integration_test PASSED in 8.9s //tensorflow/python/feature_column:serialization_test PASSED in 10.7s //tensorflow/python/framework:auto_control_deps_test PASSED in 21.4s //tensorflow/python/framework:c_api_util_test PASSED in 9.2s //tensorflow/python/framework:common_shapes_test PASSED in 7.5s //tensorflow/python/framework:composite_tensor_test PASSED in 8.8s //tensorflow/python/framework:config_test_2gpu PASSED in 10.5s //tensorflow/python/framework:config_test_cpu PASSED in 12.6s //tensorflow/python/framework:constant_op_test PASSED in 7.5s //tensorflow/python/framework:device_spec_test PASSED in 7.1s //tensorflow/python/framework:device_test PASSED in 6.8s //tensorflow/python/framework:dtypes_test PASSED in 15.4s //tensorflow/python/framework:error_interpolation_test PASSED in 8.1s 
//tensorflow/python/framework:errors_test PASSED in 8.9s //tensorflow/python/framework:extension_type_field_test PASSED in 7.5s //tensorflow/python/framework:extension_type_test PASSED in 17.7s //tensorflow/python/framework:file_system_test PASSED in 8.1s //tensorflow/python/framework:function_def_to_graph_test PASSED in 25.5s //tensorflow/python/framework:graph_building_benchmark_cpu PASSED in 8.5s //tensorflow/python/framework:graph_util_test PASSED in 7.2s //tensorflow/python/framework:immutable_dict_test PASSED in 7.3s //tensorflow/python/framework:importer_test PASSED in 9.0s //tensorflow/python/framework:indexed_slices_test PASSED in 7.1s //tensorflow/python/framework:kernels_test PASSED in 7.5s //tensorflow/python/framework:meta_graph_test PASSED in 9.1s //tensorflow/python/framework:node_file_writer_test_cpu PASSED in 8.5s //tensorflow/python/framework:offset_counter_helper_test PASSED in 0.1s //tensorflow/python/framework:op_allowlist_namespace_test PASSED in 1.8s //tensorflow/python/framework:op_callbacks_test_cpu PASSED in 9.5s //tensorflow/python/framework:op_def_library_test PASSED in 9.7s //tensorflow/python/framework:op_def_util_test PASSED in 7.8s //tensorflow/python/framework:ops_enable_eager_test PASSED in 1.6s //tensorflow/python/framework:ops_test PASSED in 22.7s //tensorflow/python/framework:proto_test PASSED in 7.6s //tensorflow/python/framework:py_context_manager_test PASSED in 7.1s //tensorflow/python/framework:python_api_dispatcher_test PASSED in 8.2s //tensorflow/python/framework:python_api_info_test PASSED in 8.5s //tensorflow/python/framework:python_api_parameter_converter_test PASSED in 7.8s //tensorflow/python/framework:python_op_gen_annotation_test PASSED in 3.5s //tensorflow/python/framework:python_op_gen_annotator_test PASSED in 0.5s //tensorflow/python/framework:python_tensor_converter_test PASSED in 6.9s //tensorflow/python/framework:random_seed_test PASSED in 11.5s //tensorflow/python/framework:registry_test PASSED in 7.0s //tensorflow/python/framework:smart_cond_test PASSED in 7.2s //tensorflow/python/framework:sparse_tensor_test PASSED in 8.3s //tensorflow/python/framework:subscribe_test PASSED in 27.0s //tensorflow/python/framework:tensor_shape_test PASSED in 6.7s //tensorflow/python/framework:tensor_test PASSED in 7.8s //tensorflow/python/framework:tensor_util_test PASSED in 19.6s //tensorflow/python/framework:test_combinations_test PASSED in 7.6s //tensorflow/python/framework:test_util_test_cpu PASSED in 15.6s //tensorflow/python/framework:tf2_test PASSED in 6.6s //tensorflow/python/framework:traceable_stack_test PASSED in 26.1s //tensorflow/python/framework:type_spec_test PASSED in 7.5s //tensorflow/python/framework:versions_test PASSED in 6.3s //tensorflow/python/framework/experimental:graph_building_test_cpu PASSED in 7.5s //tensorflow/python/framework/experimental:unified_api_test_cpu PASSED in 13.4s //tensorflow/python/grappler:arithmetic_optimizer_test_cpu PASSED in 7.6s //tensorflow/python/grappler:auto_mixed_precision_test_cpu PASSED in 12.8s //tensorflow/python/grappler:constant_folding_test_cpu PASSED in 8.9s //tensorflow/python/grappler:cost_analyzer_test PASSED in 8.9s //tensorflow/python/grappler:datasets_test PASSED in 44.9s //tensorflow/python/grappler:item_test PASSED in 6.2s //tensorflow/python/grappler:memory_optimizer_test PASSED in 33.1s //tensorflow/python/grappler:model_analyzer_test PASSED in 7.0s //tensorflow/python/grappler:remapper_test_cpu PASSED in 7.4s //tensorflow/python/grappler:tf_optimizer_test PASSED in 7.9s 
//tensorflow/python/kernel_tests:benchmark_test_cpu PASSED in 9.3s //tensorflow/python/kernel_tests:check_ops_test_cpu PASSED in 17.3s //tensorflow/python/kernel_tests:collective_ops_multi_worker_test PASSED in 29.2s //tensorflow/python/kernel_tests:composite_tensor_ops_test PASSED in 8.6s //tensorflow/python/kernel_tests:critical_section_test_cpu PASSED in 16.7s //tensorflow/python/kernel_tests:garbage_collection_test PASSED in 7.0s //tensorflow/python/kernel_tests:gradient_correctness_test_cpu PASSED in 8.4s //tensorflow/python/kernel_tests:histogram_ops_test_cpu PASSED in 7.2s //tensorflow/python/kernel_tests:logging_ops_test_cpu PASSED in 10.1s //tensorflow/python/kernel_tests:numerics_test_cpu PASSED in 8.3s //tensorflow/python/kernel_tests:template_test PASSED in 10.2s //tensorflow/python/kernel_tests:trace_op_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/array_ops:batch_gather_op_test_cpu PASSED in 8.4s //tensorflow/python/kernel_tests/array_ops:batch_scatter_ops_test PASSED in 7.8s //tensorflow/python/kernel_tests/array_ops:batchtospace_op_test_cpu PASSED in 12.3s //tensorflow/python/kernel_tests/array_ops:bcast_ops_test PASSED in 7.3s //tensorflow/python/kernel_tests/array_ops:bitcast_op_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/array_ops:broadcast_to_ops_test_cpu PASSED in 45.8s //tensorflow/python/kernel_tests/array_ops:cast_op_test_cpu PASSED in 8.8s //tensorflow/python/kernel_tests/array_ops:constant_op_eager_test_cpu PASSED in 7.4s //tensorflow/python/kernel_tests/array_ops:constant_op_test_cpu PASSED in 19.7s //tensorflow/python/kernel_tests/array_ops:denormal_test_cpu PASSED in 8.3s //tensorflow/python/kernel_tests/array_ops:depthtospace_op_test_cpu PASSED in 18.8s //tensorflow/python/kernel_tests/array_ops:edit_distance_op_test PASSED in 8.0s //tensorflow/python/kernel_tests/array_ops:fingerprint_op_test PASSED in 28.2s //tensorflow/python/kernel_tests/array_ops:gather_nd_op_test_cpu PASSED in 7.8s //tensorflow/python/kernel_tests/array_ops:identity_n_op_py_test PASSED in 7.1s //tensorflow/python/kernel_tests/array_ops:identity_op_py_test PASSED in 7.9s //tensorflow/python/kernel_tests/array_ops:large_concat_op_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/array_ops:manip_ops_test_cpu PASSED in 9.2s //tensorflow/python/kernel_tests/array_ops:one_hot_op_test_cpu PASSED in 8.7s //tensorflow/python/kernel_tests/array_ops:pad_op_test_cpu PASSED in 26.9s //tensorflow/python/kernel_tests/array_ops:reshape_op_test_cpu PASSED in 8.2s //tensorflow/python/kernel_tests/array_ops:reverse_sequence_op_test_cpu PASSED in 8.2s //tensorflow/python/kernel_tests/array_ops:scalar_test_cpu PASSED in 17.9s //tensorflow/python/kernel_tests/array_ops:shape_ops_test_cpu PASSED in 13.5s //tensorflow/python/kernel_tests/array_ops:slice_op_test_cpu PASSED in 8.5s //tensorflow/python/kernel_tests/array_ops:spacetobatch_op_test_cpu PASSED in 15.6s //tensorflow/python/kernel_tests/array_ops:spacetodepth_op_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/array_ops:stack_op_test_cpu PASSED in 15.0s //tensorflow/python/kernel_tests/array_ops:unique_op_test_cpu PASSED in 39.4s //tensorflow/python/kernel_tests/array_ops:unstack_op_test_cpu PASSED in 22.6s //tensorflow/python/kernel_tests/array_ops:where_op_test_cpu PASSED in 15.2s //tensorflow/python/kernel_tests/control_flow:cond_v2_test_cpu PASSED in 66.9s //tensorflow/python/kernel_tests/control_flow:control_flow_util_test PASSED in 7.4s //tensorflow/python/kernel_tests/control_flow:control_flow_util_v2_test 
PASSED in 7.6s //tensorflow/python/kernel_tests/control_flow:py_func_test_cpu PASSED in 41.2s //tensorflow/python/kernel_tests/control_flow:scan_ops_test_cpu PASSED in 70.1s //tensorflow/python/kernel_tests/control_flow:while_v2_test_cpu PASSED in 62.3s //tensorflow/python/kernel_tests/custom_ops:ackermann_test PASSED in 8.1s //tensorflow/python/kernel_tests/custom_ops:duplicate_op_test PASSED in 7.9s //tensorflow/python/kernel_tests/custom_ops:invalid_op_test PASSED in 6.7s //tensorflow/python/kernel_tests/data_structures:conditional_accumulator_test PASSED in 8.1s //tensorflow/python/kernel_tests/data_structures:dynamic_partition_op_test_2gpu PASSED in 12.2s //tensorflow/python/kernel_tests/data_structures:dynamic_partition_op_test_cpu PASSED in 33.9s //tensorflow/python/kernel_tests/data_structures:dynamic_stitch_op_test_cpu PASSED in 7.4s //tensorflow/python/kernel_tests/data_structures:fifo_queue_test PASSED in 9.9s //tensorflow/python/kernel_tests/data_structures:list_ops_test_cpu PASSED in 40.7s //tensorflow/python/kernel_tests/data_structures:listdiff_op_test PASSED in 8.0s //tensorflow/python/kernel_tests/data_structures:lookup_ops_test PASSED in 20.0s //tensorflow/python/kernel_tests/data_structures:map_ops_test PASSED in 30.8s //tensorflow/python/kernel_tests/data_structures:padding_fifo_queue_test_cpu PASSED in 27.5s //tensorflow/python/kernel_tests/data_structures:priority_queue_test PASSED in 7.1s //tensorflow/python/kernel_tests/data_structures:stack_ops_test_cpu PASSED in 8.6s //tensorflow/python/kernel_tests/data_structures:stage_op_test_cpu PASSED in 23.2s //tensorflow/python/kernel_tests/distributions:bernoulli_test_cpu PASSED in 16.4s //tensorflow/python/kernel_tests/distributions:bijector_test_cpu PASSED in 10.6s //tensorflow/python/kernel_tests/distributions:categorical_test_cpu PASSED in 10.9s //tensorflow/python/kernel_tests/distributions:dirichlet_multinomial_test_cpu PASSED in 12.0s //tensorflow/python/kernel_tests/distributions:dirichlet_test_cpu PASSED in 11.1s //tensorflow/python/kernel_tests/distributions:exponential_test_cpu PASSED in 14.3s //tensorflow/python/kernel_tests/distributions:gamma_test_cpu PASSED in 52.1s //tensorflow/python/kernel_tests/distributions:identity_bijector_test_cpu PASSED in 9.4s //tensorflow/python/kernel_tests/distributions:kullback_leibler_test_cpu PASSED in 8.5s //tensorflow/python/kernel_tests/distributions:laplace_test_cpu PASSED in 27.6s //tensorflow/python/kernel_tests/distributions:multinomial_test_cpu PASSED in 8.7s //tensorflow/python/kernel_tests/distributions:normal_test_cpu PASSED in 24.8s //tensorflow/python/kernel_tests/distributions:special_math_test_cpu PASSED in 18.0s //tensorflow/python/kernel_tests/distributions:uniform_test_cpu PASSED in 9.7s //tensorflow/python/kernel_tests/image_ops:attention_ops_test PASSED in 8.0s //tensorflow/python/kernel_tests/image_ops:decode_bmp_op_test PASSED in 7.6s //tensorflow/python/kernel_tests/image_ops:decode_compressed_op_test PASSED in 7.9s //tensorflow/python/kernel_tests/image_ops:decode_image_op_test PASSED in 7.5s //tensorflow/python/kernel_tests/image_ops:decode_jpeg_op_test PASSED in 7.3s //tensorflow/python/kernel_tests/image_ops:decode_png_op_test PASSED in 8.8s //tensorflow/python/kernel_tests/image_ops:decode_raw_op_test PASSED in 7.7s //tensorflow/python/kernel_tests/image_ops:draw_bounding_box_op_test_cpu PASSED in 7.5s //tensorflow/python/kernel_tests/image_ops:extract_image_patches_op_test_cpu PASSED in 7.2s 
//tensorflow/python/kernel_tests/image_ops:extract_volume_patches_op_test_cpu PASSED in 6.8s //tensorflow/python/kernel_tests/io_ops:checkpoint_ops_test PASSED in 8.2s //tensorflow/python/kernel_tests/io_ops:decode_csv_op_test PASSED in 8.5s //tensorflow/python/kernel_tests/io_ops:io_ops_test PASSED in 7.5s //tensorflow/python/kernel_tests/io_ops:parse_single_example_op_test PASSED in 8.2s //tensorflow/python/kernel_tests/io_ops:parsing_ops_test PASSED in 24.7s //tensorflow/python/kernel_tests/io_ops:reader_ops_test PASSED in 24.0s //tensorflow/python/kernel_tests/io_ops:record_input_test PASSED in 25.1s //tensorflow/python/kernel_tests/io_ops:save_restore_ops_test PASSED in 7.1s //tensorflow/python/kernel_tests/linalg:determinant_op_test_cpu PASSED in 7.8s //tensorflow/python/kernel_tests/linalg:linear_operator_addition_test_cpu PASSED in 7.5s //tensorflow/python/kernel_tests/linalg:linear_operator_algebra_test_cpu PASSED in 7.5s //tensorflow/python/kernel_tests/linalg:linear_operator_test_cpu PASSED in 9.2s //tensorflow/python/kernel_tests/linalg:lu_op_test_cpu PASSED in 10.4s //tensorflow/python/kernel_tests/linalg:matrix_inverse_op_test_cpu PASSED in 31.8s //tensorflow/python/kernel_tests/linalg:matrix_logarithm_op_test PASSED in 53.4s //tensorflow/python/kernel_tests/linalg:matrix_solve_ls_op_test_cpu PASSED in 32.6s //tensorflow/python/kernel_tests/linalg:matrix_solve_op_test_cpu PASSED in 38.8s //tensorflow/python/kernel_tests/linalg:matrix_square_root_op_test_cpu PASSED in 6.5s //tensorflow/python/kernel_tests/linalg:slicing_test_cpu PASSED in 31.5s //tensorflow/python/kernel_tests/linalg/sparse:conjugate_gradient_test_cpu PASSED in 17.9s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/math_ops:aggregate_ops_test_cpu PASSED in 11.4s //tensorflow/python/kernel_tests/math_ops:argmax_op_test_cpu PASSED in 24.8s //tensorflow/python/kernel_tests/math_ops:banded_triangular_solve_op_test_cpu PASSED in 10.3s //tensorflow/python/kernel_tests/math_ops:basic_gpu_test_cpu PASSED in 8.6s //tensorflow/python/kernel_tests/math_ops:bincount_op_test_cpu PASSED in 23.3s //tensorflow/python/kernel_tests/math_ops:bucketize_op_test_cpu PASSED in 7.5s //tensorflow/python/kernel_tests/math_ops:clip_ops_test PASSED in 8.6s //tensorflow/python/kernel_tests/math_ops:confusion_matrix_test PASSED in 9.9s //tensorflow/python/kernel_tests/math_ops:cross_grad_test_cpu PASSED in 8.6s //tensorflow/python/kernel_tests/math_ops:cumulative_logsumexp_test_cpu PASSED in 8.8s //tensorflow/python/kernel_tests/math_ops:in_topk_op_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/math_ops:reduce_benchmark_test_cpu PASSED in 15.4s //tensorflow/python/kernel_tests/math_ops:segment_reduction_ops_d9m_test_cpu PASSED in 11.7s //tensorflow/python/kernel_tests/math_ops:sets_test PASSED in 30.4s //tensorflow/python/kernel_tests/math_ops:topk_op_test_cpu PASSED in 8.7s //tensorflow/python/kernel_tests/math_ops:zero_division_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/nn_ops:betainc_op_test_cpu PASSED in 11.5s //tensorflow/python/kernel_tests/nn_ops:bias_op_test_cpu PASSED in 142.6s //tensorflow/python/kernel_tests/nn_ops:conv1d_test_cpu PASSED in 11.7s //tensorflow/python/kernel_tests/nn_ops:conv1d_transpose_test_cpu PASSED in 6.5s //tensorflow/python/kernel_tests/nn_ops:conv2d_transpose_test_cpu PASSED in 15.5s //tensorflow/python/kernel_tests/nn_ops:conv3d_backprop_filter_v2_grad_test_cpu PASSED in 12.8s 
//tensorflow/python/kernel_tests/nn_ops:conv3d_transpose_test_cpu PASSED in 9.3s //tensorflow/python/kernel_tests/nn_ops:ctc_decoder_ops_test PASSED in 8.7s //tensorflow/python/kernel_tests/nn_ops:ctc_loss_op_test_cpu PASSED in 65.4s //tensorflow/python/kernel_tests/nn_ops:cudnn_d9m_test_cpu PASSED in 15.5s //tensorflow/python/kernel_tests/nn_ops:cudnn_deterministic_ops_test_cpu PASSED in 7.7s //tensorflow/python/kernel_tests/nn_ops:losses_test PASSED in 30.3s //tensorflow/python/kernel_tests/nn_ops:lrn_op_test_cpu PASSED in 27.0s //tensorflow/python/kernel_tests/nn_ops:morphological_ops_test_cpu PASSED in 11.7s //tensorflow/python/kernel_tests/nn_ops:nth_element_op_test_cpu PASSED in 9.7s //tensorflow/python/kernel_tests/nn_ops:pool_test_cpu PASSED in 26.5s //tensorflow/python/kernel_tests/nn_ops:pooling_ops_3d_test_cpu PASSED in 18.4s //tensorflow/python/kernel_tests/nn_ops:relu_op_test_cpu PASSED in 9.0s //tensorflow/python/kernel_tests/nn_ops:softmax_op_test_cpu PASSED in 7.8s //tensorflow/python/kernel_tests/nn_ops:softplus_op_test_cpu PASSED in 8.0s //tensorflow/python/kernel_tests/nn_ops:softsign_op_test_cpu PASSED in 8.2s //tensorflow/python/kernel_tests/nn_ops:xent_op_d9m_test_cpu PASSED in 189.8s //tensorflow/python/kernel_tests/nn_ops:xent_op_test_cpu PASSED in 9.2s //tensorflow/python/kernel_tests/proto:descriptor_source_test PASSED in 8.0s //tensorflow/python/kernel_tests/proto:encode_proto_op_test PASSED in 8.0s //tensorflow/python/kernel_tests/quantization_ops:quantization_ops_test PASSED in 22.6s //tensorflow/python/kernel_tests/random:candidate_sampler_ops_test PASSED in 7.1s //tensorflow/python/kernel_tests/random:multinomial_op_test_cpu PASSED in 11.6s //tensorflow/python/kernel_tests/random:parameterized_truncated_normal_op_test_cpu PASSED in 13.9s //tensorflow/python/kernel_tests/random:random_crop_test_cpu PASSED in 8.3s //tensorflow/python/kernel_tests/random:random_grad_test_cpu PASSED in 18.7s //tensorflow/python/kernel_tests/random:random_ops_test_cpu PASSED in 15.6s //tensorflow/python/kernel_tests/random:random_poisson_test_cpu PASSED in 11.6s //tensorflow/python/kernel_tests/random:random_shuffle_queue_test PASSED in 7.2s //tensorflow/python/kernel_tests/random:stateful_random_ops_test_cpu PASSED in 20.7s //tensorflow/python/kernel_tests/signal:mel_ops_test_cpu PASSED in 13.0s //tensorflow/python/kernel_tests/signal:mfcc_ops_test_cpu PASSED in 7.3s //tensorflow/python/kernel_tests/signal:reconstruction_ops_test_cpu PASSED in 11.6s //tensorflow/python/kernel_tests/signal:shape_ops_test_cpu PASSED in 18.5s //tensorflow/python/kernel_tests/sparse_ops:sparse_add_op_test PASSED in 21.9s //tensorflow/python/kernel_tests/sparse_ops:sparse_concat_op_test PASSED in 19.0s //tensorflow/python/kernel_tests/sparse_ops:sparse_conditional_accumulator_test PASSED in 9.5s //tensorflow/python/kernel_tests/sparse_ops:sparse_cross_op_test PASSED in 28.0s //tensorflow/python/kernel_tests/sparse_ops:sparse_matmul_op_test_cpu PASSED in 52.0s //tensorflow/python/kernel_tests/sparse_ops:sparse_reorder_op_test PASSED in 7.2s //tensorflow/python/kernel_tests/sparse_ops:sparse_reshape_op_test PASSED in 10.2s //tensorflow/python/kernel_tests/sparse_ops:sparse_serialization_ops_test PASSED in 9.1s //tensorflow/python/kernel_tests/sparse_ops:sparse_slice_op_test PASSED in 9.4s //tensorflow/python/kernel_tests/sparse_ops:sparse_split_op_test_cpu PASSED in 7.7s //tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_grad_test_cpu PASSED in 18.3s 
//tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_op_d9m_test_cpu PASSED in 35.6s //tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_op_test_cpu PASSED in 54.3s //tensorflow/python/kernel_tests/sparse_ops:sparse_tensors_map_ops_test PASSED in 22.0s //tensorflow/python/kernel_tests/sparse_ops:sparse_to_dense_op_py_test_cpu PASSED in 9.7s //tensorflow/python/kernel_tests/sparse_ops:sparse_xent_op_d9m_test_cpu PASSED in 82.0s //tensorflow/python/kernel_tests/sparse_ops:sparse_xent_op_test_cpu PASSED in 9.3s //tensorflow/python/kernel_tests/sparse_ops:sparsemask_op_test PASSED in 10.5s //tensorflow/python/kernel_tests/strings_ops:as_string_op_test PASSED in 9.7s //tensorflow/python/kernel_tests/strings_ops:base64_ops_test PASSED in 13.8s //tensorflow/python/kernel_tests/strings_ops:reduce_join_op_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/strings_ops:regex_full_match_op_test PASSED in 9.6s //tensorflow/python/kernel_tests/strings_ops:regex_replace_op_test PASSED in 7.5s //tensorflow/python/kernel_tests/strings_ops:string_bytes_split_op_test PASSED in 10.9s //tensorflow/python/kernel_tests/strings_ops:string_format_op_test PASSED in 7.5s //tensorflow/python/kernel_tests/strings_ops:string_join_op_test PASSED in 7.0s //tensorflow/python/kernel_tests/strings_ops:string_length_op_test PASSED in 14.5s //tensorflow/python/kernel_tests/strings_ops:string_lower_op_test PASSED in 8.3s //tensorflow/python/kernel_tests/strings_ops:string_split_op_test PASSED in 11.2s //tensorflow/python/kernel_tests/strings_ops:string_strip_op_test PASSED in 26.7s //tensorflow/python/kernel_tests/strings_ops:string_to_hash_bucket_op_test_cpu PASSED in 13.6s //tensorflow/python/kernel_tests/strings_ops:string_to_number_op_test_cpu PASSED in 21.8s //tensorflow/python/kernel_tests/strings_ops:string_upper_op_test PASSED in 7.3s //tensorflow/python/kernel_tests/strings_ops:substr_op_test PASSED in 8.6s //tensorflow/python/kernel_tests/strings_ops:unicode_decode_op_test PASSED in 31.5s //tensorflow/python/kernel_tests/strings_ops:unicode_encode_op_test PASSED in 6.8s //tensorflow/python/kernel_tests/strings_ops:unicode_script_op_test PASSED in 6.6s //tensorflow/python/kernel_tests/strings_ops:unicode_transcode_op_test PASSED in 8.5s //tensorflow/python/kernel_tests/strings_ops:unsorted_segment_join_op_test_cpu PASSED in 10.6s //tensorflow/python/kernel_tests/summary_ops:summary_ops_test_cpu PASSED in 29.9s //tensorflow/python/kernel_tests/summary_ops:summary_v1_audio_op_test_cpu PASSED in 9.4s //tensorflow/python/kernel_tests/summary_ops:summary_v1_image_op_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/summary_ops:summary_v1_ops_test PASSED in 9.6s //tensorflow/python/kernel_tests/summary_ops:summary_v1_tensor_op_test PASSED in 7.7s //tensorflow/python/kernel_tests/v1_compat_tests:array_ops_test_cpu PASSED in 7.7s //tensorflow/python/kernel_tests/v1_compat_tests:dense_update_ops_test_cpu PASSED in 7.1s //tensorflow/python/kernel_tests/v1_compat_tests:identity_op_py_test PASSED in 7.7s //tensorflow/python/kernel_tests/v1_compat_tests:scatter_nd_ops_test_cpu PASSED in 14.6s //tensorflow/python/kernel_tests/v1_compat_tests:session_ops_test_cpu PASSED in 15.0s //tensorflow/python/kernel_tests/v1_compat_tests:stack_op_test_cpu PASSED in 8.2s //tensorflow/python/kernel_tests/variables:dense_update_ops_no_tsan_test_cpu PASSED in 8.8s //tensorflow/python/kernel_tests/variables:dense_update_ops_test_cpu PASSED in 16.4s 
//tensorflow/python/kernel_tests/variables:partitioned_variables_test PASSED in 30.0s //tensorflow/python/kernel_tests/variables:resource_variable_ops_test_cpu PASSED in 50.9s //tensorflow/python/kernel_tests/variables:variable_ops_test_cpu PASSED in 8.3s //tensorflow/python/kernel_tests/variables:variable_scope_test PASSED in 35.7s //tensorflow/python/kernel_tests/variables:variables_test PASSED in 10.0s //tensorflow/python/lib/core:custom_float_test PASSED in 8.2s //tensorflow/python/lib/io:file_io_test PASSED in 15.0s //tensorflow/python/lib/io:tf_record_test PASSED in 18.3s //tensorflow/python/module:module_test PASSED in 7.2s //tensorflow/python/ops/losses:util_test PASSED in 7.1s //tensorflow/python/ops/memory_tests:custom_gradient_memory_test_cpu PASSED in 10.2s //tensorflow/python/ops/numpy_ops:np_array_ops_test_cpu PASSED in 71.8s //tensorflow/python/ops/numpy_ops:np_arrays_test_cpu PASSED in 8.4s //tensorflow/python/ops/numpy_ops:np_dtypes_test_cpu PASSED in 7.2s //tensorflow/python/ops/numpy_ops:np_interop_test_cpu PASSED in 39.9s //tensorflow/python/ops/numpy_ops:np_logic_test_cpu PASSED in 10.5s //tensorflow/python/ops/numpy_ops:np_math_ops_test_cpu PASSED in 23.9s //tensorflow/python/ops/numpy_ops:np_random_test_cpu PASSED in 75.1s //tensorflow/python/ops/numpy_ops:np_utils_test_cpu PASSED in 7.7s //tensorflow/python/ops/numpy_ops/integration_test:np_config_test_cpu PASSED in 16.2s //tensorflow/python/ops/numpy_ops/integration_test:public_symbol_test PASSED in 14.9s //tensorflow/python/ops/parallel_for:array_test_cpu PASSED in 51.0s //tensorflow/python/ops/parallel_for:gradients_test_cpu PASSED in 10.6s //tensorflow/python/ops/parallel_for:xla_control_flow_ops_test_cpu PASSED in 48.0s //tensorflow/python/ops/ragged:convert_to_tensor_or_ragged_tensor_op_test PASSED in 8.8s //tensorflow/python/ops/ragged:ragged_batch_gather_op_test PASSED in 41.7s //tensorflow/python/ops/ragged:ragged_bitcast_op_test PASSED in 26.7s //tensorflow/python/ops/ragged:ragged_boolean_mask_op_test PASSED in 14.2s //tensorflow/python/ops/ragged:ragged_concat_op_test PASSED in 11.1s //tensorflow/python/ops/ragged:ragged_const_op_test PASSED in 8.1s //tensorflow/python/ops/ragged:ragged_constant_value_op_test PASSED in 7.6s //tensorflow/python/ops/ragged:ragged_cross_op_test PASSED in 19.6s //tensorflow/python/ops/ragged:ragged_dispatch_test PASSED in 133.2s //tensorflow/python/ops/ragged:ragged_dynamic_partition_op_test_cpu PASSED in 19.1s //tensorflow/python/ops/ragged:ragged_eager_test PASSED in 6.8s //tensorflow/python/ops/ragged:ragged_expand_dims_op_test PASSED in 8.6s //tensorflow/python/ops/ragged:ragged_factory_ops_test_cpu PASSED in 15.3s //tensorflow/python/ops/ragged:ragged_from_sparse_op_test PASSED in 31.0s //tensorflow/python/ops/ragged:ragged_from_tensor_op_test PASSED in 20.9s //tensorflow/python/ops/ragged:ragged_gather_nd_op_test PASSED in 17.9s //tensorflow/python/ops/ragged:ragged_map_flat_values_op_test PASSED in 9.5s //tensorflow/python/ops/ragged:ragged_map_fn_op_test PASSED in 14.3s //tensorflow/python/ops/ragged:ragged_math_ops_test PASSED in 13.4s //tensorflow/python/ops/ragged:ragged_matmul_op_test PASSED in 39.6s //tensorflow/python/ops/ragged:ragged_merge_dims_op_test PASSED in 39.7s //tensorflow/python/ops/ragged:ragged_one_hot_op_test PASSED in 24.3s //tensorflow/python/ops/ragged:ragged_operators_test PASSED in 17.7s //tensorflow/python/ops/ragged:ragged_placeholder_op_test PASSED in 6.5s //tensorflow/python/ops/ragged:ragged_print_op_test PASSED in 12.0s 
//tensorflow/python/ops/ragged:ragged_range_op_test PASSED in 8.2s //tensorflow/python/ops/ragged:ragged_rank_op_test PASSED in 7.9s //tensorflow/python/ops/ragged:ragged_reduce_op_test PASSED in 56.7s //tensorflow/python/ops/ragged:ragged_resize_image_op_test PASSED in 16.2s //tensorflow/python/ops/ragged:ragged_reverse_op_test PASSED in 7.7s //tensorflow/python/ops/ragged:ragged_row_lengths_op_test PASSED in 7.5s //tensorflow/python/ops/ragged:ragged_row_splits_to_segment_ids_op_test PASSED in 8.0s //tensorflow/python/ops/ragged:ragged_segment_ids_to_row_splits_op_test PASSED in 7.6s //tensorflow/python/ops/ragged:ragged_segment_op_test PASSED in 29.4s //tensorflow/python/ops/ragged:ragged_size_op_test PASSED in 6.5s //tensorflow/python/ops/ragged:ragged_split_op_test PASSED in 40.0s //tensorflow/python/ops/ragged:ragged_squeeze_op_test PASSED in 15.6s //tensorflow/python/ops/ragged:ragged_stack_op_test PASSED in 27.2s //tensorflow/python/ops/ragged:ragged_tensor_bounding_shape_op_test PASSED in 9.3s //tensorflow/python/ops/ragged:ragged_tensor_shape_test PASSED in 79.9s //tensorflow/python/ops/ragged:ragged_tile_op_test PASSED in 39.9s //tensorflow/python/ops/ragged:ragged_to_sparse_op_test PASSED in 8.4s //tensorflow/python/ops/ragged:ragged_to_tensor_op_test PASSED in 70.4s //tensorflow/python/ops/ragged:ragged_util_test PASSED in 21.0s //tensorflow/python/ops/ragged:ragged_where_op_test PASSED in 26.4s //tensorflow/python/ops/ragged:row_partition_test PASSED in 25.4s //tensorflow/python/ops/ragged:string_ngrams_op_test PASSED in 7.3s //tensorflow/python/ops/ragged:strings_reduce_join_op_test PASSED in 17.4s //tensorflow/python/ops/structured:structured_array_ops_test PASSED in 40.2s //tensorflow/python/ops/structured:structured_tensor_slice_test PASSED in 68.1s //tensorflow/python/ops/structured:structured_tensor_spec_test PASSED in 25.3s //tensorflow/python/ops/structured:structured_tensor_test PASSED in 40.9s //tensorflow/python/ops/v1_compat_tests:gradient_checker_test_cpu PASSED in 9.4s //tensorflow/python/platform:benchmark_test PASSED in 7.6s //tensorflow/python/platform:build_info_test PASSED in 6.9s //tensorflow/python/platform:resource_loader_test PASSED in 2.1s //tensorflow/python/profiler:pprof_profiler_test PASSED in 7.1s //tensorflow/python/profiler:profile_context_test_cpu PASSED in 23.4s //tensorflow/python/profiler:profiler_client_test_cpu PASSED in 7.6s //tensorflow/python/profiler:profiler_test_cpu PASSED in 17.7s //tensorflow/python/profiler:profiler_v2_test_cpu PASSED in 7.1s //tensorflow/python/profiler:profiler_wrapper_test PASSED in 14.9s //tensorflow/python/profiler:tfprof_logger_test PASSED in 7.8s //tensorflow/python/profiler/integration_test:profiler_api_test_cpu PASSED in 23.8s //tensorflow/python/profiler/internal:flops_registry_test PASSED in 7.1s //tensorflow/python/profiler/internal:print_model_analysis_test PASSED in 6.3s //tensorflow/python/profiler/internal:run_metadata_test_cpu PASSED in 12.5s //tensorflow/python/saved_model:fingerprinting_test PASSED in 8.7s //tensorflow/python/saved_model:keras_injection_test PASSED in 15.5s //tensorflow/python/saved_model:load_v1_in_v2_test PASSED in 28.7s //tensorflow/python/saved_model:loader_test PASSED in 18.8s //tensorflow/python/saved_model:method_name_updater_test PASSED in 6.4s //tensorflow/python/saved_model:metrics_test PASSED in 8.4s //tensorflow/python/saved_model:nested_structure_coder_test PASSED in 7.7s //tensorflow/python/saved_model:pywrap_saved_model_fingerprinting_test PASSED in 7.1s 
//tensorflow/python/saved_model:pywrap_saved_model_metrics_test PASSED in 27.4s //tensorflow/python/saved_model:revived_types_test PASSED in 7.7s //tensorflow/python/saved_model:save_context_test PASSED in 7.3s //tensorflow/python/saved_model:save_test PASSED in 40.0s //tensorflow/python/saved_model:saved_model_test PASSED in 35.3s //tensorflow/python/saved_model:signature_def_utils_test PASSED in 7.5s //tensorflow/python/saved_model:simple_save_test PASSED in 7.9s //tensorflow/python/saved_model:tracing_utils_test PASSED in 7.6s //tensorflow/python/saved_model:utils_test PASSED in 7.3s //tensorflow/python/saved_model/model_utils:export_output_test PASSED in 8.2s //tensorflow/python/saved_model/model_utils:export_test PASSED in 11.0s //tensorflow/python/saved_model/model_utils:mode_keys_test PASSED in 6.7s //tensorflow/python/saved_model/registration:registration_saving_test PASSED in 13.6s //tensorflow/python/saved_model/registration:registration_test PASSED in 8.4s //tensorflow/python/saved_model/registration:tf_registration_test PASSED in 22.8s //tensorflow/python/summary:plugin_asset_test PASSED in 7.9s //tensorflow/python/summary:summary_iterator_test PASSED in 7.6s //tensorflow/python/summary:summary_test PASSED in 7.0s //tensorflow/python/summary:summary_v2_test PASSED in 7.8s //tensorflow/python/summary/writer:writer_test PASSED in 35.9s //tensorflow/python/tools:aot_compiled_test PASSED in 21.7s //tensorflow/python/tools:freeze_graph_test PASSED in 16.0s //tensorflow/python/tools:optimize_for_inference_test PASSED in 8.4s //tensorflow/python/tools:print_selective_registration_header_test PASSED in 23.5s //tensorflow/python/tools:saved_model_cli_test PASSED in 21.8s //tensorflow/python/tools:saved_model_utils_test PASSED in 8.8s //tensorflow/python/tools:strip_unused_test PASSED in 7.1s //tensorflow/python/tools/api/generator:create_python_api_test PASSED in 22.6s //tensorflow/python/tools/api/generator:output_init_files_test PASSED in 14.2s //tensorflow/python/tools/api/generator:tensorflow_doc_srcs_test PASSED in 13.9s //tensorflow/python/tpu:bfloat16_test PASSED in 16.8s //tensorflow/python/tpu:feature_column_test PASSED in 11.2s //tensorflow/python/tpu:topology_test PASSED in 7.2s //tensorflow/python/tpu:tpu_embedding_for_serving_test PASSED in 10.3s //tensorflow/python/tpu:tpu_embedding_v2_utils_test PASSED in 8.2s //tensorflow/python/tpu:tpu_infeed_test PASSED in 29.0s //tensorflow/python/tpu:tpu_sharding_test PASSED in 15.4s //tensorflow/python/tpu:tpu_test_wrapper_test PASSED in 7.5s //tensorflow/python/tpu/client:client_py_test PASSED in 8.1s //tensorflow/python/trackable:autotrackable_test PASSED in 7.3s //tensorflow/python/trackable:base_delegate_test PASSED in 17.8s //tensorflow/python/trackable:base_test PASSED in 16.7s //tensorflow/python/trackable:data_structures_test PASSED in 20.3s //tensorflow/python/trackable:python_state_test PASSED in 16.6s //tensorflow/python/trackable:resource_test PASSED in 9.1s //tensorflow/python/trackable:trackable_utils_test PASSED in 7.7s //tensorflow/python/training:adadelta_test_cpu PASSED in 16.1s //tensorflow/python/training:adagrad_da_test_cpu PASSED in 8.8s //tensorflow/python/training:adagrad_test_cpu PASSED in 14.3s //tensorflow/python/training:adam_test_cpu PASSED in 14.4s //tensorflow/python/training:basic_loops_test_cpu PASSED in 9.9s //tensorflow/python/training:basic_session_run_hooks_test PASSED in 21.2s //tensorflow/python/training:checkpoint_ops_test PASSED in 7.4s //tensorflow/python/training:coordinator_test_cpu PASSED 
in 14.4s //tensorflow/python/training:device_setter_test_cpu PASSED in 7.0s //tensorflow/python/training:ftrl_test_cpu PASSED in 12.5s //tensorflow/python/training:gradient_descent_test_cpu PASSED in 10.5s //tensorflow/python/training:input_test PASSED in 41.4s //tensorflow/python/training:momentum_test_cpu PASSED in 12.1s //tensorflow/python/training:monitored_session_test PASSED in 25.3s //tensorflow/python/training:moving_averages_test_cpu PASSED in 13.1s //tensorflow/python/training:optimizer_test_cpu PASSED in 11.8s //tensorflow/python/training:proximal_adagrad_test_cpu PASSED in 9.9s //tensorflow/python/training:proximal_gradient_descent_test_cpu PASSED in 28.9s //tensorflow/python/training:quantize_training_test_cpu PASSED in 27.0s //tensorflow/python/training:queue_runner_test_cpu PASSED in 24.6s //tensorflow/python/training:rmsprop_test_cpu PASSED in 23.6s //tensorflow/python/training:saver_large_partitioned_variable_test PASSED in 14.9s //tensorflow/python/training:saver_test_2gpu PASSED in 37.0s //tensorflow/python/training:saver_test_cpu PASSED in 30.5s //tensorflow/python/training:server_lib_multiple_containers_test PASSED in 27.9s //tensorflow/python/training:server_lib_same_variables_clear_container_test PASSED in 28.7s //tensorflow/python/training:server_lib_same_variables_clear_test PASSED in 9.1s //tensorflow/python/training:server_lib_same_variables_no_clear_test PASSED in 9.6s //tensorflow/python/training:server_lib_sparse_job_test PASSED in 8.1s //tensorflow/python/training:server_lib_test PASSED in 16.8s //tensorflow/python/training:session_manager_test_cpu PASSED in 85.3s //tensorflow/python/training:slot_creator_test_cpu PASSED in 9.8s //tensorflow/python/training:supervisor_test PASSED in 13.4s //tensorflow/python/training:training_ops_mlir_test_cpu PASSED in 9.2s //tensorflow/python/training:training_ops_test_cpu PASSED in 11.5s //tensorflow/python/training:training_util_test PASSED in 7.6s //tensorflow/python/training:warm_starting_util_test PASSED in 42.2s //tensorflow/python/training/experimental:loss_scale_optimizer_test PASSED in 13.4s //tensorflow/python/training/experimental:loss_scale_test PASSED in 27.1s //tensorflow/python/training/experimental:mixed_precision_test_cpu PASSED in 7.7s //tensorflow/python/training/saving:saveable_object_util_test PASSED in 8.8s //tensorflow/python/util:compat_test PASSED in 9.4s //tensorflow/python/util:decorator_utils_test PASSED in 8.8s //tensorflow/python/util:deprecation_test PASSED in 7.7s //tensorflow/python/util:dispatch_test PASSED in 9.8s //tensorflow/python/util:example_parser_configuration_test PASSED in 8.2s //tensorflow/python/util:fast_module_type_test PASSED in 9.1s //tensorflow/python/util:function_parameter_canonicalizer_test PASSED in 7.6s //tensorflow/python/util:function_utils_test PASSED in 9.7s //tensorflow/python/util:keyword_args_test PASSED in 8.0s //tensorflow/python/util:lock_util_test PASSED in 27.6s //tensorflow/python/util:module_wrapper_test PASSED in 9.6s //tensorflow/python/util:nest_test PASSED in 16.3s //tensorflow/python/util:object_identity_test PASSED in 6.8s //tensorflow/python/util:serialization_test PASSED in 6.7s //tensorflow/python/util:tf_contextlib_test PASSED in 9.5s //tensorflow/python/util:tf_decorator_test PASSED in 8.6s //tensorflow/python/util:tf_export_test PASSED in 6.8s //tensorflow/python/util:tf_inspect_test PASSED in 8.2s //tensorflow/python/util:tf_should_use_test PASSED in 8.7s //tensorflow/python/util:tf_stack_test PASSED in 7.1s 
//tensorflow/python/util:traceback_utils_test PASSED in 7.1s //tensorflow/python/util:type_annotations_test PASSED in 7.1s //tensorflow/python/util:variable_utils_test PASSED in 8.3s //tensorflow/python/util:vlog_test PASSED in 18.4s //tensorflow/tools/api/tests:module_test PASSED in 17.9s //tensorflow/tools/benchmark:benchmark_model_test PASSED in 2.0s //tensorflow/tools/common:public_api_test PASSED in 2.5s //tensorflow/tools/common:traverse_test PASSED in 2.0s //tensorflow/tools/compatibility:all_renames_v2_test PASSED in 8.6s //tensorflow/tools/compatibility:ast_edits_test PASSED in 7.0s //tensorflow/tools/compatibility:test_file_v1_0 PASSED in 28.8s //tensorflow/tools/compatibility:test_file_v2_0 PASSED in 18.4s //tensorflow/tools/compatibility:tf_upgrade_test PASSED in 12.0s //tensorflow/tools/compatibility:tf_upgrade_v2_safety_test PASSED in 7.6s //tensorflow/tools/docs:tf_doctest_test PASSED in 1.6s //tensorflow/tools/graph_transforms:file_utils_test PASSED in 0.8s //tensorflow/tools/graph_transforms:transform_graph_test PASSED in 1.5s //tensorflow/tools/graph_transforms:transform_utils_test PASSED in 2.5s //tensorflow/tools/graph_transforms:transforms_test PASSED in 3.3s //tensorflow/tools/proto_text:gen_proto_text_functions_lib_test PASSED in 0.3s //tensorflow/tools/tensorflow_builder/compat_checker:compat_checker_test PASSED in 0.4s //tensorflow/tsl/c:tsl_status_helper_test PASSED in 0.1s //tensorflow/tsl/c:tsl_status_test PASSED in 0.3s //tensorflow/tsl/concurrency:async_value_ref_test PASSED in 0.1s //tensorflow/tsl/concurrency:async_value_test PASSED in 0.2s //tensorflow/tsl/concurrency:concurrent_vector_test PASSED in 0.3s //tensorflow/tsl/cuda:cudnn_version_test PASSED in 0.1s //tensorflow/tsl/distributed_runtime/coordination:coordination_service_agent_test PASSED in 13.1s //tensorflow/tsl/distributed_runtime/coordination:coordination_service_error_util_test PASSED in 0.2s //tensorflow/tsl/distributed_runtime/coordination:coordination_service_recoverable_job_test PASSED in 0.3s //tensorflow/tsl/distributed_runtime/preemption:preemption_notifier_test PASSED in 8.2s //tensorflow/tsl/distributed_runtime/preemption:preemption_sync_manager_test PASSED in 10.3s //tensorflow/tsl/distributed_runtime/rpc:grpc_channel_test PASSED in 0.2s //tensorflow/tsl/distributed_runtime/rpc:grpc_util_test PASSED in 0.6s //tensorflow/tsl/framework:cancellation_test PASSED in 1.3s //tensorflow/tsl/framework/convolution:spatial_convolutions_test PASSED in 0.8s //tensorflow/tsl/lib/gtl:tsl_lib_gtl_tests PASSED in 0.4s //tensorflow/tsl/lib/hash:crc32c_test PASSED in 0.1s //tensorflow/tsl/lib/histogram:histogram_test PASSED in 0.2s //tensorflow/tsl/lib/io:buffered_inputstream_test PASSED in 0.4s //tensorflow/tsl/lib/io:cache_test PASSED in 0.3s //tensorflow/tsl/lib/io:inputbuffer_test PASSED in 1.7s //tensorflow/tsl/lib/io:inputstream_interface_test PASSED in 0.6s //tensorflow/tsl/lib/io:random_inputstream_test PASSED in 0.3s //tensorflow/tsl/lib/io:record_reader_writer_test PASSED in 3.5s //tensorflow/tsl/lib/io:recordio_test PASSED in 0.3s //tensorflow/tsl/lib/io:table_test PASSED in 4.9s //tensorflow/tsl/lib/io:zlib_buffers_test PASSED in 19.4s //tensorflow/tsl/lib/io/snappy:snappy_test PASSED in 0.4s //tensorflow/tsl/lib/math:math_util_test PASSED in 0.1s //tensorflow/tsl/lib/random:distribution_sampler_test PASSED in 0.2s //tensorflow/tsl/lib/random:philox_random_test PASSED in 0.2s //tensorflow/tsl/lib/random:random_distributions_test PASSED in 19.3s //tensorflow/tsl/lib/random:simple_philox_test 
PASSED in 0.2s //tensorflow/tsl/lib/random:weighted_picker_test PASSED in 14.9s //tensorflow/tsl/platform:ctstring_test PASSED in 0.1s //tensorflow/tsl/platform:denormal_test PASSED in 0.7s //tensorflow/tsl/platform:errors_test PASSED in 0.1s //tensorflow/tsl/platform:fingerprint_test PASSED in 0.2s //tensorflow/tsl/platform:float8_test PASSED in 1.2s //tensorflow/tsl/platform:hash_test PASSED in 0.1s //tensorflow/tsl/platform:integral_types_test PASSED in 0.1s //tensorflow/tsl/platform:intrusive_ptr_test PASSED in 0.3s //tensorflow/tsl/platform:logging_test PASSED in 20.3s //tensorflow/tsl/platform:mutex_test PASSED in 0.2s //tensorflow/tsl/platform:net_test PASSED in 0.6s //tensorflow/tsl/platform:numbers_test PASSED in 0.3s //tensorflow/tsl/platform:path_test PASSED in 0.1s //tensorflow/tsl/platform:port_test PASSED in 8.4s //tensorflow/tsl/platform:random_test PASSED in 2.1s //tensorflow/tsl/platform:refcount_test PASSED in 1.9s //tensorflow/tsl/platform:retrying_file_system_test PASSED in 0.1s //tensorflow/tsl/platform:retrying_utils_test PASSED in 0.2s //tensorflow/tsl/platform:scanner_test PASSED in 0.1s //tensorflow/tsl/platform:setround_test PASSED in 0.1s //tensorflow/tsl/platform:stacktrace_handler_test PASSED in 1.9s //tensorflow/tsl/platform:stacktrace_test PASSED in 0.4s //tensorflow/tsl/platform:status_matchers_test PASSED in 0.2s //tensorflow/tsl/platform:status_test PASSED in 0.4s //tensorflow/tsl/platform:statusor_test PASSED in 15.8s //tensorflow/tsl/platform:str_util_test PASSED in 0.1s //tensorflow/tsl/platform:strcat_test PASSED in 0.1s //tensorflow/tsl/platform:stringpiece_test PASSED in 0.3s //tensorflow/tsl/platform:stringprintf_test PASSED in 0.1s //tensorflow/tsl/platform:subprocess_test PASSED in 0.6s //tensorflow/tsl/platform:tstring_test PASSED in 0.1s //tensorflow/tsl/platform:unbounded_work_queue_test PASSED in 1.0s //tensorflow/tsl/platform/cloud:compute_engine_metadata_client_test PASSED in 0.4s //tensorflow/tsl/platform/cloud:compute_engine_zone_provider_test PASSED in 0.5s //tensorflow/tsl/platform/cloud:curl_http_request_test PASSED in 7.1s //tensorflow/tsl/platform/cloud:expiring_lru_cache_test PASSED in 0.1s //tensorflow/tsl/platform/cloud:gcs_dns_cache_test PASSED in 0.1s //tensorflow/tsl/platform/cloud:gcs_file_system_test PASSED in 5.6s //tensorflow/tsl/platform/cloud:gcs_throttle_test PASSED in 0.2s //tensorflow/tsl/platform/cloud:google_auth_provider_test PASSED in 0.1s //tensorflow/tsl/platform/cloud:oauth_client_test PASSED in 0.1s //tensorflow/tsl/platform/cloud:ram_file_block_cache_test PASSED in 2.2s //tensorflow/tsl/platform/cloud:time_util_test PASSED in 0.1s //tensorflow/tsl/profiler/backends/cpu:traceme_recorder_test PASSED in 0.3s //tensorflow/tsl/profiler/convert:trace_container_test PASSED in 0.1s //tensorflow/tsl/profiler/convert:trace_events_to_json_test PASSED in 0.1s //tensorflow/tsl/profiler/convert:xla_op_utils_test PASSED in 0.1s //tensorflow/tsl/profiler/convert:xplane_to_trace_events_test PASSED in 0.1s //tensorflow/tsl/profiler/lib:profiler_factory_test PASSED in 0.2s //tensorflow/tsl/profiler/lib:profiler_lock_test PASSED in 0.3s //tensorflow/tsl/profiler/lib:scoped_annotation_test PASSED in 0.4s //tensorflow/tsl/profiler/lib:traceme_encode_test PASSED in 0.1s //tensorflow/tsl/profiler/rpc/client:profiler_client_test PASSED in 3.4s //tensorflow/tsl/profiler/rpc/client:remote_profiler_session_manager_test PASSED in 3.4s //tensorflow/tsl/profiler/utils:buffer_pool_test PASSED in 0.1s 
//tensorflow/tsl/profiler/utils:group_events_test PASSED in 0.2s //tensorflow/tsl/profiler/utils:parse_annotation_test PASSED in 0.1s //tensorflow/tsl/profiler/utils:preprocess_xplane_test PASSED in 0.1s //tensorflow/tsl/profiler/utils:tf_op_utils_test PASSED in 0.3s //tensorflow/tsl/profiler/utils:timespan_test PASSED in 0.1s //tensorflow/tsl/profiler/utils:tpu_xplane_utils_test PASSED in 0.1s //tensorflow/tsl/profiler/utils:xplane_builder_test PASSED in 0.2s //tensorflow/tsl/profiler/utils:xplane_utils_test PASSED in 0.1s //tensorflow/tsl/util:device_name_utils_test PASSED in 0.1s //tensorflow/tsl/util:stats_calculator_test PASSED in 0.1s //tensorflow/compiler/tests:complex_div_test_cpu PASSED in 7.2s Stats over 2 runs: max = 7.2s, min = 6.4s, avg = 6.8s, dev = 0.4s //tensorflow/compiler/tests:complex_div_test_cpu_mlir_bridge_test PASSED in 7.2s Stats over 2 runs: max = 7.2s, min = 6.5s, avg = 6.8s, dev = 0.4s //tensorflow/compiler/xla/tests:conditional_test_cpu PASSED in 10.9s Stats over 2 runs: max = 10.9s, min = 8.8s, avg = 9.8s, dev = 1.1s //tensorflow/python:control_flow_ops_test_cpu PASSED in 26.8s Stats over 2 runs: max = 26.8s, min = 22.2s, avg = 24.5s, dev = 2.3s //tensorflow/python/data/experimental/kernel_tests/optimization:optimization_test PASSED in 36.7s Stats over 2 runs: max = 36.7s, min = 24.9s, avg = 30.8s, dev = 5.9s //tensorflow/python/data/experimental/kernel_tests/service:metadata_test PASSED in 16.2s Stats over 2 runs: max = 16.2s, min = 15.0s, avg = 15.6s, dev = 0.6s //tensorflow/python/data/kernel_tests:padded_batch_test PASSED in 41.1s Stats over 2 runs: max = 41.1s, min = 39.7s, avg = 40.4s, dev = 0.7s //tensorflow/python/data/kernel_tests:repeat_test PASSED in 43.1s Stats over 2 runs: max = 43.1s, min = 41.9s, avg = 42.5s, dev = 0.6s //tensorflow/python/data/kernel_tests:window_test PASSED in 41.5s Stats over 2 runs: max = 41.5s, min = 31.9s, avg = 36.7s, dev = 4.8s //tensorflow/python/distribute:strategy_common_test_2gpu PASSED in 23.5s Stats over 2 runs: max = 23.5s, min = 18.3s, avg = 20.9s, dev = 2.6s //tensorflow/python/distribute:strategy_common_test_cpu PASSED in 24.7s Stats over 2 runs: max = 24.7s, min = 18.8s, avg = 21.8s, dev = 3.0s //tensorflow/python/distribute:strategy_common_test_xla_2gpu PASSED in 23.8s Stats over 2 runs: max = 23.8s, min = 23.8s, avg = 23.8s, dev = 0.0s //tensorflow/python/kernel_tests/array_ops:scatter_nd_ops_test_cpu PASSED in 13.4s Stats over 2 runs: max = 13.4s, min = 12.3s, avg = 12.8s, dev = 0.6s //tensorflow/python/kernel_tests/array_ops:scatter_ops_test_cpu PASSED in 21.2s Stats over 2 runs: max = 21.2s, min = 19.5s, avg = 20.3s, dev = 0.8s //tensorflow/python/kernel_tests/control_flow:functional_ops_test_cpu PASSED in 30.8s Stats over 2 runs: max = 30.8s, min = 30.3s, avg = 30.5s, dev = 0.2s //tensorflow/python/kernel_tests/control_flow:map_fn_test_cpu PASSED in 30.6s Stats over 2 runs: max = 30.6s, min = 29.7s, avg = 30.1s, dev = 0.4s //tensorflow/python/kernel_tests/nn_ops:atrous_conv2d_test_cpu PASSED in 25.4s Stats over 2 runs: max = 25.4s, min = 15.7s, avg = 20.5s, dev = 4.8s //tensorflow/python/kernel_tests/nn_ops:bias_op_d9m_test_cpu PASSED in 127.5s Stats over 2 runs: max = 127.5s, min = 37.3s, avg = 82.4s, dev = 45.1s //tensorflow/python/kernel_tests/nn_ops:conv2d_backprop_filter_grad_test_cpu PASSED in 53.8s Stats over 2 runs: max = 53.8s, min = 8.3s, avg = 31.1s, dev = 22.7s //tensorflow/python/distribute:multi_worker_continuous_run_test_cpu FLAKY, failed in 1 out of 2 in 24.0s Stats over 2 runs: max = 
24.0s, min = 15.1s, avg = 19.6s, dev = 4.4s /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/distribute/multi_worker_continuous_run_test_cpu/test_attempts/attempt_1.log //tensorflow/compiler/tests:spacetobatch_op_test_cpu PASSED in 9.6s Stats over 3 runs: max = 9.6s, min = 8.3s, avg = 9.1s, dev = 0.5s //tensorflow/compiler/tests:spacetobatch_op_test_cpu_mlir_bridge_test PASSED in 11.7s Stats over 3 runs: max = 11.7s, min = 11.0s, avg = 11.4s, dev = 0.3s //tensorflow/compiler/xla/tests:triangular_solve_test_cpu PASSED in 60.4s Stats over 3 runs: max = 60.4s, min = 60.1s, avg = 60.2s, dev = 0.1s //tensorflow/core/data/service:thread_safe_buffer_test PASSED in 0.3s Stats over 3 runs: max = 0.3s, min = 0.2s, avg = 0.2s, dev = 0.1s //tensorflow/python/data/experimental/kernel_tests/service:multi_process_cluster_test PASSED in 20.0s Stats over 3 runs: max = 20.0s, min = 12.3s, avg = 16.5s, dev = 3.2s //tensorflow/python/data/kernel_tests:unique_test PASSED in 15.6s Stats over 3 runs: max = 15.6s, min = 12.4s, avg = 13.7s, dev = 1.4s //tensorflow/python/kernel_tests/array_ops:gather_op_test_cpu PASSED in 42.6s Stats over 3 runs: max = 42.6s, min = 24.6s, avg = 30.6s, dev = 8.5s //tensorflow/python/kernel_tests/array_ops:weights_broadcast_test PASSED in 8.9s Stats over 3 runs: max = 8.9s, min = 8.0s, avg = 8.3s, dev = 0.4s //tensorflow/python/kernel_tests/distributions:util_test_cpu PASSED in 12.3s Stats over 3 runs: max = 12.3s, min = 10.6s, avg = 11.6s, dev = 0.7s //tensorflow/python/kernel_tests/linalg:matrix_triangular_solve_op_test_cpu PASSED in 28.6s Stats over 3 runs: max = 28.6s, min = 9.1s, avg = 15.8s, dev = 9.0s //tensorflow/python/kernel_tests/random:multinomial_op_big_test_cpu PASSED in 12.8s Stats over 3 runs: max = 12.8s, min = 10.1s, avg = 11.0s, dev = 1.3s //tensorflow/compiler/tests:ternary_ops_test_cpu PASSED in 13.3s Stats over 4 runs: max = 13.3s, min = 9.1s, avg = 10.7s, dev = 1.6s //tensorflow/compiler/tests:ternary_ops_test_cpu_mlir_bridge_test PASSED in 17.2s Stats over 4 runs: max = 17.2s, min = 10.7s, avg = 13.2s, dev = 2.5s //tensorflow/compiler/tests:unary_ops_test_cpu PASSED in 31.2s Stats over 4 runs: max = 31.2s, min = 7.9s, avg = 20.4s, dev = 9.4s //tensorflow/compiler/tests:unary_ops_test_cpu_mlir_bridge_test PASSED in 45.7s Stats over 4 runs: max = 45.7s, min = 9.8s, avg = 29.2s, dev = 14.9s //tensorflow/compiler/xla/tests:dynamic_ops_test_cpu PASSED in 9.8s Stats over 4 runs: max = 9.8s, min = 8.6s, avg = 9.2s, dev = 0.4s //tensorflow/core/kernels:example_parsing_ops_test PASSED in 0.9s Stats over 4 runs: max = 0.9s, min = 0.5s, avg = 0.7s, dev = 0.2s //tensorflow/python:nn_batchnorm_test_cpu PASSED in 20.1s Stats over 4 runs: max = 20.1s, min = 8.2s, avg = 15.3s, dev = 4.5s //tensorflow/python:nn_fused_batchnorm_d9m_test_cpu PASSED in 17.8s Stats over 4 runs: max = 17.8s, min = 13.4s, avg = 15.8s, dev = 1.6s //tensorflow/python/data/experimental/kernel_tests:auto_shard_dataset_test PASSED in 29.4s Stats over 4 runs: max = 29.4s, min = 19.7s, avg = 24.9s, dev = 3.8s //tensorflow/python/data/experimental/kernel_tests:map_and_batch_test PASSED in 48.9s Stats over 4 runs: max = 48.9s, min = 21.4s, avg = 29.1s, dev = 11.5s //tensorflow/python/data/experimental/kernel_tests:parse_example_dataset_test PASSED in 47.5s Stats over 4 runs: max = 47.5s, min = 30.2s, avg = 39.1s, dev = 7.7s //tensorflow/python/data/experimental/kernel_tests:rebatch_dataset_test PASSED in 
19.2s Stats over 4 runs: max = 19.2s, min = 7.6s, avg = 12.2s, dev = 4.5s //tensorflow/python/data/experimental/kernel_tests:sql_dataset_test PASSED in 25.1s Stats over 4 runs: max = 25.1s, min = 22.8s, avg = 23.9s, dev = 1.0s //tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_ft_test PASSED in 9.7s Stats over 4 runs: max = 9.7s, min = 7.2s, avg = 8.4s, dev = 0.9s //tensorflow/python/data/kernel_tests:batch_test PASSED in 27.9s Stats over 4 runs: max = 27.9s, min = 21.5s, avg = 24.1s, dev = 2.4s //tensorflow/python/data/kernel_tests:fixed_length_record_dataset_test PASSED in 16.1s Stats over 4 runs: max = 16.1s, min = 8.5s, avg = 12.2s, dev = 3.4s //tensorflow/python/data/kernel_tests:from_generator_test PASSED in 26.0s Stats over 4 runs: max = 26.0s, min = 16.2s, avg = 20.9s, dev = 3.6s //tensorflow/python/data/kernel_tests:group_by_window_test PASSED in 14.4s Stats over 4 runs: max = 14.4s, min = 6.5s, avg = 9.8s, dev = 3.3s //tensorflow/python/data/kernel_tests:ragged_batch_test PASSED in 15.5s Stats over 4 runs: max = 15.5s, min = 14.6s, avg = 14.9s, dev = 0.3s //tensorflow/python/data/kernel_tests:skip_test PASSED in 32.0s Stats over 4 runs: max = 32.0s, min = 27.1s, avg = 29.4s, dev = 1.9s //tensorflow/python/data/kernel_tests:take_test PASSED in 20.4s Stats over 4 runs: max = 20.4s, min = 19.3s, avg = 19.8s, dev = 0.4s //tensorflow/python/data/kernel_tests:take_while_test PASSED in 27.6s Stats over 4 runs: max = 27.6s, min = 25.2s, avg = 26.1s, dev = 0.9s //tensorflow/python/data/kernel_tests:text_line_dataset_test PASSED in 17.9s Stats over 4 runs: max = 17.9s, min = 13.7s, avg = 15.7s, dev = 2.0s //tensorflow/python/data/kernel_tests:zip_test PASSED in 13.6s Stats over 4 runs: max = 13.6s, min = 12.7s, avg = 13.1s, dev = 0.4s //tensorflow/python/debug/lib:dumping_callback_test_cpu PASSED in 13.1s Stats over 4 runs: max = 13.1s, min = 11.5s, avg = 12.1s, dev = 0.7s //tensorflow/python/distribute:cross_device_ops_test_2gpu PASSED in 38.2s Stats over 4 runs: max = 38.2s, min = 29.6s, avg = 34.2s, dev = 3.7s //tensorflow/python/distribute:strategy_gather_test_2gpu PASSED in 26.2s Stats over 4 runs: max = 26.2s, min = 17.5s, avg = 21.8s, dev = 3.7s //tensorflow/python/distribute:strategy_gather_test_cpu PASSED in 22.8s Stats over 4 runs: max = 22.8s, min = 12.8s, avg = 17.4s, dev = 3.8s //tensorflow/python/distribute:strategy_gather_test_xla_2gpu PASSED in 21.3s Stats over 4 runs: max = 21.3s, min = 11.3s, avg = 16.5s, dev = 4.8s //tensorflow/python/framework:convert_to_constants_test PASSED in 19.1s Stats over 4 runs: max = 19.1s, min = 13.9s, avg = 16.0s, dev = 1.9s //tensorflow/python/kernel_tests:collective_ops_test_2gpu PASSED in 32.1s Stats over 4 runs: max = 32.1s, min = 30.2s, avg = 31.5s, dev = 0.8s //tensorflow/python/kernel_tests:collective_ops_test_cpu PASSED in 32.6s Stats over 4 runs: max = 32.6s, min = 30.8s, avg = 31.6s, dev = 0.8s //tensorflow/python/kernel_tests/array_ops:concat_op_test_cpu PASSED in 15.0s Stats over 4 runs: max = 15.0s, min = 12.4s, avg = 13.6s, dev = 0.9s //tensorflow/python/kernel_tests/array_ops:init_ops_test_cpu PASSED in 56.2s Stats over 4 runs: max = 56.2s, min = 19.4s, avg = 36.9s, dev = 15.4s //tensorflow/python/kernel_tests/array_ops:split_op_test_cpu PASSED in 26.1s Stats over 4 runs: max = 26.1s, min = 7.3s, avg = 14.3s, dev = 7.5s //tensorflow/python/kernel_tests/linalg:einsum_op_test_cpu PASSED in 91.9s Stats over 4 runs: max = 91.9s, min = 14.4s, avg = 45.6s, dev = 31.2s 
//tensorflow/python/kernel_tests/linalg:linear_operator_lower_triangular_test_cpu PASSED in 27.7s Stats over 4 runs: max = 27.7s, min = 25.1s, avg = 26.4s, dev = 1.1s //tensorflow/python/kernel_tests/nn_ops:conv_ops_test_cpu PASSED in 39.2s Stats over 4 runs: max = 39.2s, min = 29.4s, avg = 33.1s, dev = 4.0s //tensorflow/python/kernel_tests/random:random_gamma_test_cpu PASSED in 79.4s Stats over 4 runs: max = 79.4s, min = 8.8s, avg = 38.9s, dev = 30.7s //tensorflow/python/kernel_tests/signal:window_ops_test_cpu PASSED in 17.5s Stats over 4 runs: max = 17.5s, min = 15.9s, avg = 16.7s, dev = 0.6s //tensorflow/python/ops/ragged:ragged_gather_op_test PASSED in 62.0s Stats over 4 runs: max = 62.0s, min = 17.4s, avg = 37.5s, dev = 16.0s //tensorflow/python/ops/ragged:ragged_getitem_test PASSED in 42.8s Stats over 4 runs: max = 42.8s, min = 39.3s, avg = 41.1s, dev = 1.2s //tensorflow/compiler/tests:async_comp_test_cpu PASSED in 6.9s Stats over 5 runs: max = 6.9s, min = 5.7s, avg = 6.4s, dev = 0.5s //tensorflow/compiler/tests:conv3d_test_cpu PASSED in 13.8s Stats over 5 runs: max = 13.8s, min = 7.2s, avg = 10.0s, dev = 2.8s //tensorflow/compiler/tests:conv3d_test_cpu_mlir_bridge_test PASSED in 58.0s Stats over 5 runs: max = 58.0s, min = 49.9s, avg = 53.5s, dev = 3.6s //tensorflow/compiler/tests:depthwise_conv_op_test_cpu PASSED in 66.2s Stats over 5 runs: max = 66.2s, min = 61.2s, avg = 63.1s, dev = 2.0s //tensorflow/compiler/tests:depthwise_conv_op_test_cpu_mlir_bridge_test PASSED in 13.3s Stats over 5 runs: max = 13.3s, min = 9.1s, avg = 10.8s, dev = 1.6s //tensorflow/compiler/tests:fused_batchnorm_test_cpu PASSED in 8.5s Stats over 5 runs: max = 8.5s, min = 7.5s, avg = 8.0s, dev = 0.3s //tensorflow/compiler/tests:fused_batchnorm_test_cpu_mlir_bridge_test PASSED in 8.4s Stats over 5 runs: max = 8.4s, min = 7.3s, avg = 7.9s, dev = 0.3s //tensorflow/compiler/tests:image_ops_jit_compile_test_cpu PASSED in 7.8s Stats over 5 runs: max = 7.8s, min = 6.8s, avg = 7.2s, dev = 0.4s //tensorflow/compiler/tests:reduce_ops_test_cpu PASSED in 9.9s Stats over 5 runs: max = 9.9s, min = 9.6s, avg = 9.7s, dev = 0.1s //tensorflow/compiler/tests:reduce_ops_test_cpu_mlir_bridge_test PASSED in 20.9s Stats over 5 runs: max = 20.9s, min = 16.3s, avg = 18.1s, dev = 1.9s //tensorflow/compiler/tests:repeat_op_test_cpu PASSED in 7.9s Stats over 5 runs: max = 7.9s, min = 6.6s, avg = 7.2s, dev = 0.5s //tensorflow/compiler/tests:repeat_op_test_cpu_mlir_bridge_test PASSED in 8.0s Stats over 5 runs: max = 8.0s, min = 7.2s, avg = 7.4s, dev = 0.3s //tensorflow/compiler/tests:special_math_test_cpu PASSED in 119.7s Stats over 5 runs: max = 119.7s, min = 13.1s, avg = 51.0s, dev = 37.1s //tensorflow/compiler/tests:special_math_test_cpu_mlir_bridge_test PASSED in 119.4s Stats over 5 runs: max = 119.4s, min = 14.5s, avg = 50.5s, dev = 36.2s //tensorflow/compiler/xla/client/lib:self_adjoint_eig_test_cpu PASSED in 27.5s Stats over 5 runs: max = 27.5s, min = 11.5s, avg = 20.4s, dev = 6.4s //tensorflow/core/grappler/optimizers:constant_folding_test PASSED in 3.7s Stats over 5 runs: max = 3.7s, min = 2.7s, avg = 3.0s, dev = 0.4s //tensorflow/dtensor/python/tests:layout_propagation_test_cpu PASSED in 12.5s Stats over 5 runs: max = 12.5s, min = 9.1s, avg = 10.6s, dev = 1.2s //tensorflow/python/distribute:mirrored_strategy_test_2gpu PASSED in 10.4s Stats over 5 runs: max = 10.4s, min = 8.8s, avg = 9.4s, dev = 0.6s //tensorflow/python/distribute:mirrored_strategy_test_cpu PASSED in 19.5s Stats over 5 runs: max = 19.5s, min = 9.8s, avg = 16.2s, 
dev = 3.9s //tensorflow/python/distribute:moving_averages_test_2gpu PASSED in 13.5s Stats over 5 runs: max = 13.5s, min = 11.7s, avg = 12.7s, dev = 0.7s //tensorflow/python/distribute:moving_averages_test_cpu PASSED in 18.7s Stats over 5 runs: max = 18.7s, min = 14.4s, avg = 16.7s, dev = 1.4s //tensorflow/python/distribute:vars_test_2gpu PASSED in 29.4s Stats over 5 runs: max = 29.4s, min = 13.3s, avg = 17.8s, dev = 6.0s //tensorflow/python/distribute:vars_test_cpu PASSED in 15.1s Stats over 5 runs: max = 15.1s, min = 13.0s, avg = 13.8s, dev = 0.8s //tensorflow/python/eager:device_placement_test_cpu PASSED in 9.0s Stats over 5 runs: max = 9.0s, min = 7.7s, avg = 8.4s, dev = 0.4s //tensorflow/python/eager:forwardprop_test_cpu PASSED in 90.5s Stats over 5 runs: max = 90.5s, min = 15.3s, avg = 45.1s, dev = 25.2s //tensorflow/python/eager/polymorphic_function:gradients_test_cpu PASSED in 14.5s Stats over 5 runs: max = 14.5s, min = 10.4s, avg = 12.0s, dev = 1.8s //tensorflow/python/kernel_tests/linalg:cholesky_op_test_cpu PASSED in 50.6s Stats over 5 runs: max = 50.6s, min = 31.4s, avg = 40.2s, dev = 6.6s //tensorflow/python/kernel_tests/linalg:linear_operator_adjoint_test_cpu PASSED in 21.0s Stats over 5 runs: max = 21.0s, min = 19.3s, avg = 20.2s, dev = 0.6s //tensorflow/python/kernel_tests/linalg:linear_operator_composition_test_cpu PASSED in 35.9s Stats over 5 runs: max = 35.9s, min = 32.8s, avg = 34.0s, dev = 1.2s //tensorflow/python/kernel_tests/linalg:linear_operator_diag_test_cpu PASSED in 20.0s Stats over 5 runs: max = 20.0s, min = 15.5s, avg = 17.7s, dev = 1.5s //tensorflow/python/kernel_tests/linalg:linear_operator_full_matrix_test_cpu PASSED in 23.1s Stats over 5 runs: max = 23.1s, min = 21.7s, avg = 22.3s, dev = 0.5s //tensorflow/python/kernel_tests/linalg:linear_operator_householder_test_cpu PASSED in 30.7s Stats over 5 runs: max = 30.7s, min = 28.7s, avg = 30.1s, dev = 0.7s //tensorflow/python/kernel_tests/linalg:linear_operator_identity_test_cpu PASSED in 31.3s Stats over 5 runs: max = 31.3s, min = 28.7s, avg = 30.0s, dev = 1.1s //tensorflow/python/kernel_tests/linalg:linear_operator_inversion_test_cpu PASSED in 22.9s Stats over 5 runs: max = 22.9s, min = 21.7s, avg = 22.2s, dev = 0.4s //tensorflow/python/kernel_tests/linalg:linear_operator_permutation_test_cpu PASSED in 21.4s Stats over 5 runs: max = 21.4s, min = 17.5s, avg = 18.8s, dev = 1.3s //tensorflow/python/kernel_tests/linalg:linear_operator_toeplitz_test_cpu PASSED in 14.3s Stats over 5 runs: max = 14.3s, min = 11.6s, avg = 12.8s, dev = 1.0s //tensorflow/python/kernel_tests/linalg:linear_operator_tridiag_test_cpu PASSED in 90.3s Stats over 5 runs: max = 90.3s, min = 87.9s, avg = 89.4s, dev = 0.9s //tensorflow/python/kernel_tests/linalg:linear_operator_util_test_cpu PASSED in 23.0s Stats over 5 runs: max = 23.0s, min = 22.5s, avg = 22.7s, dev = 0.2s //tensorflow/python/kernel_tests/linalg:linear_operator_zeros_test_cpu PASSED in 13.1s Stats over 5 runs: max = 13.1s, min = 12.3s, avg = 12.8s, dev = 0.3s //tensorflow/python/kernel_tests/nn_ops:fractional_avg_pool_op_test PASSED in 12.6s Stats over 5 runs: max = 12.6s, min = 5.8s, avg = 7.7s, dev = 2.6s //tensorflow/python/kernel_tests/nn_ops:fractional_max_pool_op_test PASSED in 14.1s Stats over 5 runs: max = 14.1s, min = 5.8s, avg = 8.3s, dev = 3.0s //tensorflow/python/kernel_tests/sparse_ops:sparse_ops_test_cpu PASSED in 26.7s Stats over 5 runs: max = 26.7s, min = 3.5s, avg = 9.9s, dev = 8.5s //tensorflow/python/ops/parallel_for:math_test_cpu PASSED in 94.0s Stats over 5 
runs: max = 94.0s, min = 23.8s, avg = 53.8s, dev = 26.0s //tensorflow/compiler/tests:scan_ops_test_cpu PASSED in 13.6s Stats over 6 runs: max = 13.6s, min = 10.0s, avg = 11.6s, dev = 1.2s //tensorflow/compiler/tests:scan_ops_test_cpu_mlir_bridge_test PASSED in 16.4s Stats over 6 runs: max = 16.4s, min = 10.5s, avg = 13.7s, dev = 1.7s //tensorflow/python:accumulate_n_benchmark_cpu PASSED in 6.6s Stats over 6 runs: max = 6.6s, min = 4.9s, avg = 6.1s, dev = 0.6s //tensorflow/python/data/experimental/kernel_tests:make_batched_features_dataset_test PASSED in 25.0s Stats over 6 runs: max = 25.0s, min = 7.1s, avg = 14.3s, dev = 7.2s //tensorflow/python/kernel_tests/array_ops:diag_op_test_cpu PASSED in 68.9s Stats over 6 runs: max = 68.9s, min = 9.5s, avg = 21.6s, dev = 21.2s //tensorflow/python/kernel_tests/math_ops:reduction_ops_test_cpu PASSED in 31.3s Stats over 6 runs: max = 31.3s, min = 19.9s, avg = 27.2s, dev = 3.8s //tensorflow/python/distribute/experimental/rpc:rpc_ops_test PASSED in 11.9s Stats over 7 runs: max = 11.9s, min = 8.8s, avg = 9.6s, dev = 1.2s //tensorflow/python/distribute:cross_device_ops_test_cpu FLAKY, failed in 3 out of 7 in 900.1s Stats over 7 runs: max = 900.1s, min = 14.0s, avg = 146.9s, dev = 307.5s /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/distribute/cross_device_ops_test_cpu/shard_1_of_4/test_attempts/attempt_1.log /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/distribute/cross_device_ops_test_cpu/shard_4_of_4/test_attempts/attempt_1.log /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/distribute/cross_device_ops_test_cpu/shard_2_of_4/test_attempts/attempt_1.log //tensorflow/compiler/tests:matrix_diag_ops_test_cpu PASSED in 78.7s Stats over 8 runs: max = 78.7s, min = 2.4s, avg = 24.3s, dev = 25.1s //tensorflow/compiler/tests:matrix_diag_ops_test_cpu_mlir_bridge_test PASSED in 62.8s Stats over 8 runs: max = 62.8s, min = 5.4s, avg = 24.5s, dev = 19.4s //tensorflow/dtensor/python/tests:input_util_test PASSED in 23.1s Stats over 8 runs: max = 23.1s, min = 13.1s, avg = 18.9s, dev = 3.4s //tensorflow/python/data/experimental/kernel_tests:csv_dataset_test PASSED in 37.4s Stats over 8 runs: max = 37.4s, min = 8.5s, avg = 21.5s, dev = 11.4s //tensorflow/python/data/experimental/kernel_tests:parallel_interleave_test PASSED in 25.8s Stats over 8 runs: max = 25.8s, min = 13.0s, avg = 18.5s, dev = 4.5s //tensorflow/python/data/experimental/kernel_tests/service:coordinated_read_ft_test PASSED in 44.1s Stats over 8 runs: max = 44.1s, min = 12.0s, avg = 27.4s, dev = 13.3s //tensorflow/python/data/experimental/kernel_tests/service:coordinated_read_test PASSED in 50.5s Stats over 8 runs: max = 50.5s, min = 21.5s, avg = 29.5s, dev = 9.1s //tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_test PASSED in 24.8s Stats over 8 runs: max = 24.8s, min = 6.5s, avg = 13.6s, dev = 6.4s //tensorflow/python/data/experimental/kernel_tests/service:fault_tolerance_test PASSED in 21.1s Stats over 8 runs: max = 21.1s, min = 14.0s, avg = 15.2s, dev = 2.2s //tensorflow/python/data/kernel_tests:filter_test PASSED in 17.7s Stats over 8 runs: max = 17.7s, min = 14.2s, avg = 16.1s, dev = 1.1s //tensorflow/python/data/kernel_tests:flat_map_test PASSED in 40.2s 
Stats over 8 runs: max = 40.2s, min = 21.8s, avg = 31.8s, dev = 6.8s //tensorflow/python/data/kernel_tests:shard_test PASSED in 33.5s Stats over 8 runs: max = 33.5s, min = 24.9s, avg = 29.9s, dev = 2.5s //tensorflow/python/data/kernel_tests:shuffle_test PASSED in 55.5s Stats over 8 runs: max = 55.5s, min = 20.4s, avg = 26.7s, dev = 11.1s //tensorflow/python/data/kernel_tests:tf_record_dataset_test PASSED in 34.7s Stats over 8 runs: max = 34.7s, min = 17.3s, avg = 26.8s, dev = 6.3s //tensorflow/python/distribute/failure_handling:failure_handler_test PASSED in 76.7s Stats over 8 runs: max = 76.7s, min = 39.2s, avg = 61.0s, dev = 10.8s //tensorflow/python/distribute/failure_handling:gce_failure_handler_test PASSED in 65.7s Stats over 8 runs: max = 65.7s, min = 7.6s, avg = 26.2s, dev = 22.7s //tensorflow/python/kernel_tests/linalg:linalg_ops_test_cpu PASSED in 45.9s Stats over 8 runs: max = 45.9s, min = 25.8s, avg = 36.9s, dev = 7.1s //tensorflow/python/kernel_tests/linalg:linear_operator_block_diag_test_cpu PASSED in 58.9s Stats over 8 runs: max = 58.9s, min = 41.3s, avg = 49.3s, dev = 5.7s //tensorflow/python/kernel_tests/linalg:linear_operator_block_lower_triangular_test_cpu PASSED in 46.1s Stats over 8 runs: max = 46.1s, min = 31.4s, avg = 38.7s, dev = 5.2s //tensorflow/python/kernel_tests/nn_ops:depthwise_conv_op_d9m_test_cpu PASSED in 67.8s Stats over 8 runs: max = 67.8s, min = 5.3s, avg = 15.7s, dev = 20.4s //tensorflow/python/kernel_tests/nn_ops:depthwise_conv_op_test_cpu PASSED in 7.6s Stats over 8 runs: max = 7.6s, min = 6.6s, avg = 7.0s, dev = 0.3s //tensorflow/python/kernel_tests/signal:fft_ops_test_cpu PASSED in 29.3s Stats over 8 runs: max = 29.3s, min = 8.6s, avg = 15.6s, dev = 8.1s //tensorflow/python/ops/ragged:dynamic_ragged_shape_test PASSED in 43.2s Stats over 8 runs: max = 43.2s, min = 26.6s, avg = 33.3s, dev = 5.3s //tensorflow/python/ops/ragged:ragged_tensor_test PASSED in 25.4s Stats over 8 runs: max = 25.4s, min = 12.7s, avg = 16.5s, dev = 3.7s //tensorflow/compiler/tests:bincount_op_test_cpu PASSED in 7.2s Stats over 10 runs: max = 7.2s, min = 4.4s, avg = 6.0s, dev = 0.8s //tensorflow/compiler/tests:conv2d_test_cpu PASSED in 15.4s Stats over 10 runs: max = 15.4s, min = 12.5s, avg = 14.0s, dev = 0.9s //tensorflow/compiler/tests:conv2d_test_cpu_mlir_bridge_test PASSED in 9.5s Stats over 10 runs: max = 9.5s, min = 8.4s, avg = 9.0s, dev = 0.3s //tensorflow/compiler/tests:image_ops_test_cpu PASSED in 18.0s Stats over 10 runs: max = 18.0s, min = 11.1s, avg = 14.5s, dev = 2.2s //tensorflow/compiler/tests:random_ops_test_cpu PASSED in 21.3s Stats over 10 runs: max = 21.3s, min = 15.2s, avg = 18.0s, dev = 1.9s //tensorflow/compiler/tests:random_ops_test_cpu_mlir_bridge_test PASSED in 21.1s Stats over 10 runs: max = 21.1s, min = 14.7s, avg = 17.9s, dev = 1.8s //tensorflow/compiler/tests:stateless_random_ops_test_cpu PASSED in 87.9s Stats over 10 runs: max = 87.9s, min = 33.1s, avg = 58.3s, dev = 18.0s //tensorflow/compiler/tests:stateless_random_ops_test_cpu_mlir_bridge_test PASSED in 85.8s Stats over 10 runs: max = 85.8s, min = 37.2s, avg = 62.9s, dev = 18.2s //tensorflow/compiler/xla/client/lib:svd_test_cpu PASSED in 74.7s Stats over 10 runs: max = 74.7s, min = 7.0s, avg = 26.4s, dev = 24.1s //tensorflow/compiler/xla/client/lib:tridiagonal_test_cpu PASSED in 8.3s Stats over 10 runs: max = 8.3s, min = 6.8s, avg = 7.6s, dev = 0.4s //tensorflow/compiler/xla/service/cpu:cpu_runtime_test PASSED in 15.2s Stats over 10 runs: max = 15.2s, min = 1.0s, avg = 10.0s, dev = 4.6s 
//tensorflow/python:special_math_ops_test_cpu PASSED in 57.8s Stats over 10 runs: max = 57.8s, min = 7.1s, avg = 15.2s, dev = 14.4s //tensorflow/python/data/kernel_tests:rejection_resample_test PASSED in 25.1s Stats over 10 runs: max = 25.1s, min = 14.6s, avg = 18.7s, dev = 3.2s //tensorflow/python/distribute:input_lib_test_2gpu PASSED in 35.6s Stats over 10 runs: max = 35.6s, min = 25.9s, avg = 28.8s, dev = 2.8s //tensorflow/python/distribute:input_lib_test_cpu PASSED in 31.4s Stats over 10 runs: max = 31.4s, min = 22.2s, avg = 25.8s, dev = 2.6s //tensorflow/python/distribute:input_lib_type_spec_test_2gpu PASSED in 16.7s Stats over 10 runs: max = 16.7s, min = 5.1s, avg = 11.3s, dev = 4.0s //tensorflow/python/distribute:input_lib_type_spec_test_cpu PASSED in 18.1s Stats over 10 runs: max = 18.1s, min = 8.0s, avg = 12.8s, dev = 3.4s //tensorflow/python/framework:config_vgpu_test_2gpu PASSED in 6.9s Stats over 10 runs: max = 6.9s, min = 3.9s, avg = 5.1s, dev = 0.9s //tensorflow/python/framework:config_vgpu_test_cpu PASSED in 7.1s Stats over 10 runs: max = 7.1s, min = 4.6s, avg = 5.6s, dev = 0.8s //tensorflow/python/framework:function_test_cpu PASSED in 60.6s Stats over 10 runs: max = 60.6s, min = 6.9s, avg = 13.5s, dev = 15.8s //tensorflow/python/grappler:cluster_test_cpu PASSED in 21.6s Stats over 10 runs: max = 21.6s, min = 18.3s, avg = 19.9s, dev = 1.1s //tensorflow/python/kernel_tests/array_ops:array_ops_test_cpu PASSED in 13.6s Stats over 10 runs: max = 13.6s, min = 7.9s, avg = 10.3s, dev = 1.7s //tensorflow/python/kernel_tests/array_ops:inplace_ops_test_cpu PASSED in 7.3s Stats over 10 runs: max = 7.3s, min = 6.9s, avg = 7.1s, dev = 0.1s //tensorflow/python/kernel_tests/data_structures:tensor_array_ops_test_cpu PASSED in 11.5s Stats over 10 runs: max = 11.5s, min = 6.7s, avg = 8.7s, dev = 1.6s //tensorflow/python/kernel_tests/linalg:linear_operator_kronecker_test_cpu PASSED in 31.8s Stats over 10 runs: max = 31.8s, min = 28.9s, avg = 30.1s, dev = 0.9s //tensorflow/python/kernel_tests/linalg:linear_operator_low_rank_update_test_cpu PASSED in 66.5s Stats over 10 runs: max = 66.5s, min = 62.6s, avg = 64.7s, dev = 1.4s //tensorflow/python/kernel_tests/linalg:tridiagonal_matmul_op_test_cpu PASSED in 118.4s Stats over 10 runs: max = 118.4s, min = 3.6s, avg = 17.2s, dev = 33.8s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_ops_test_cpu PASSED in 39.8s Stats over 10 runs: max = 39.8s, min = 12.2s, avg = 24.5s, dev = 8.6s //tensorflow/python/kernel_tests/math_ops:segment_reduction_ops_test_cpu PASSED in 23.3s Stats over 10 runs: max = 23.3s, min = 3.9s, avg = 12.4s, dev = 7.6s //tensorflow/python/kernel_tests/nn_ops:pooling_ops_test_cpu PASSED in 17.8s Stats over 10 runs: max = 17.8s, min = 5.5s, avg = 9.3s, dev = 4.2s //tensorflow/python/kernel_tests/nn_ops:rnn_test_cpu PASSED in 13.3s Stats over 10 runs: max = 13.3s, min = 11.3s, avg = 12.2s, dev = 0.6s //tensorflow/python/kernel_tests/random:random_index_shuffle_test PASSED in 8.8s Stats over 10 runs: max = 8.8s, min = 7.7s, avg = 8.2s, dev = 0.4s //tensorflow/python/kernel_tests/random:stateless_random_ops_test_cpu PASSED in 91.0s Stats over 10 runs: max = 91.0s, min = 17.8s, avg = 53.9s, dev = 35.4s //tensorflow/python/ops/ragged:ragged_tensor_supported_values_test PASSED in 34.1s Stats over 10 runs: max = 34.1s, min = 27.5s, avg = 30.7s, dev = 1.5s //tensorflow/python/saved_model:load_test_cpu PASSED in 50.7s Stats over 10 runs: max = 50.7s, min = 24.2s, avg = 29.8s, dev = 7.3s //tensorflow/compiler/tests:fft_test_cpu 
PASSED in 28.2s Stats over 12 runs: max = 28.2s, min = 7.2s, avg = 18.1s, dev = 9.8s //tensorflow/compiler/xla/service:triangular_solve_expander_test PASSED in 5.2s Stats over 12 runs: max = 5.2s, min = 2.9s, avg = 3.6s, dev = 0.6s //tensorflow/python/data/experimental/kernel_tests:group_by_reducer_test PASSED in 13.5s Stats over 12 runs: max = 13.5s, min = 2.8s, avg = 7.9s, dev = 3.5s //tensorflow/python/data/kernel_tests:choose_from_datasets_test PASSED in 10.9s Stats over 12 runs: max = 10.9s, min = 7.1s, avg = 9.0s, dev = 1.2s //tensorflow/python/data/kernel_tests:memory_cleanup_test_cpu PASSED in 12.0s Stats over 12 runs: max = 12.0s, min = 5.7s, avg = 8.0s, dev = 1.7s //tensorflow/python/distribute:multi_process_runner_test_2gpu PASSED in 222.8s Stats over 12 runs: max = 222.8s, min = 13.1s, avg = 50.8s, dev = 57.8s //tensorflow/python/distribute:multi_process_runner_test_cpu PASSED in 223.2s Stats over 12 runs: max = 223.2s, min = 14.2s, avg = 50.8s, dev = 57.9s //tensorflow/python/eager/polymorphic_function:polymorphic_function_test_cpu PASSED in 75.3s Stats over 15 runs: max = 75.3s, min = 11.3s, avg = 18.0s, dev = 15.3s //tensorflow/python/kernel_tests/linalg:linear_operator_circulant_test_cpu PASSED in 75.1s Stats over 15 runs: max = 75.1s, min = 66.4s, avg = 70.6s, dev = 2.4s //tensorflow/python/kernel_tests/nn_ops:rnn_cell_test_cpu PASSED in 48.3s Stats over 15 runs: max = 48.3s, min = 10.3s, avg = 15.8s, dev = 9.5s //tensorflow/python:image_ops_test_cpu PASSED in 16.8s Stats over 16 runs: max = 16.8s, min = 9.1s, avg = 12.5s, dev = 2.4s //tensorflow/python/data/experimental/kernel_tests/service:dynamic_sharding_test PASSED in 28.0s Stats over 16 runs: max = 28.0s, min = 21.0s, avg = 24.0s, dev = 2.2s //tensorflow/python/data/experimental/kernel_tests/service:worker_tags_test PASSED in 30.3s Stats over 16 runs: max = 30.3s, min = 8.3s, avg = 15.0s, dev = 5.6s //tensorflow/python/data/kernel_tests:snapshot_test PASSED in 25.2s Stats over 16 runs: max = 25.2s, min = 10.4s, avg = 16.4s, dev = 3.7s //tensorflow/python/kernel_tests/control_flow:control_flow_ops_py_test_cpu PASSED in 28.6s Stats over 16 runs: max = 28.6s, min = 7.0s, avg = 10.2s, dev = 4.9s //tensorflow/python/kernel_tests/linalg:matrix_exponential_op_test PASSED in 18.3s Stats over 16 runs: max = 18.3s, min = 4.8s, avg = 7.5s, dev = 3.1s //tensorflow/python/kernel_tests/signal:dct_ops_test_cpu PASSED in 11.0s Stats over 16 runs: max = 11.0s, min = 9.2s, avg = 10.1s, dev = 0.6s //tensorflow/python/ops/parallel_for:control_flow_ops_test_cpu PASSED in 51.8s Stats over 16 runs: max = 51.8s, min = 12.4s, avg = 20.3s, dev = 8.9s //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_ft_test PASSED in 7.8s Stats over 17 runs: max = 7.8s, min = 3.5s, avg = 5.2s, dev = 1.6s //tensorflow/python/data/kernel_tests:map_test PASSED in 39.2s Stats over 19 runs: max = 39.2s, min = 9.7s, avg = 17.6s, dev = 6.8s //tensorflow/compiler/tests:pooling_ops_3d_test_cpu PASSED in 7.8s Stats over 20 runs: max = 7.8s, min = 3.4s, avg = 5.5s, dev = 1.1s //tensorflow/compiler/tests:pooling_ops_3d_test_cpu_mlir_bridge_test PASSED in 6.9s Stats over 20 runs: max = 6.9s, min = 3.3s, avg = 4.8s, dev = 0.9s //tensorflow/compiler/tests:pooling_ops_test_cpu PASSED in 9.2s Stats over 20 runs: max = 9.2s, min = 3.7s, avg = 5.5s, dev = 1.5s //tensorflow/compiler/tests:pooling_ops_test_cpu_mlir_bridge_test PASSED in 13.3s Stats over 20 runs: max = 13.3s, min = 4.6s, avg = 6.7s, dev = 1.8s 
//tensorflow/compiler/xla/tests:convolution_dimension_numbers_test_cpu PASSED in 7.5s Stats over 20 runs: max = 7.5s, min = 5.8s, avg = 6.3s, dev = 0.4s //tensorflow/compiler/xla/tests:dot_operation_single_threaded_runtime_test_cpu PASSED in 13.0s Stats over 20 runs: max = 13.0s, min = 9.3s, avg = 11.0s, dev = 0.9s //tensorflow/compiler/xla/tests:dot_operation_test_cpu PASSED in 15.3s Stats over 20 runs: max = 15.3s, min = 12.7s, avg = 13.7s, dev = 0.6s //tensorflow/compiler/xla/tests:prng_test_cpu PASSED in 8.0s Stats over 20 runs: max = 8.0s, min = 6.7s, avg = 7.3s, dev = 0.4s //tensorflow/compiler/xla/tests:reduce_window_test_cpu PASSED in 44.4s Stats over 20 runs: max = 44.4s, min = 7.0s, avg = 16.9s, dev = 11.6s //tensorflow/python/autograph/tests:loop_control_flow_test PASSED in 21.0s Stats over 20 runs: max = 21.0s, min = 14.9s, avg = 18.8s, dev = 1.6s //tensorflow/python/kernel_tests:metrics_test PASSED in 38.4s Stats over 20 runs: max = 38.4s, min = 9.8s, avg = 18.5s, dev = 8.3s //tensorflow/python/kernel_tests/array_ops:matrix_band_part_op_test_cpu PASSED in 7.7s Stats over 20 runs: max = 7.7s, min = 3.0s, avg = 4.8s, dev = 1.5s //tensorflow/python/kernel_tests/data_structures:barrier_ops_test PASSED in 11.3s Stats over 20 runs: max = 11.3s, min = 3.3s, avg = 5.8s, dev = 2.2s //tensorflow/python/kernel_tests/linalg:eig_op_test PASSED in 40.4s Stats over 20 runs: max = 40.4s, min = 3.3s, avg = 14.1s, dev = 11.7s //tensorflow/python/kernel_tests/linalg:linalg_grad_test_cpu PASSED in 91.9s Stats over 20 runs: max = 91.9s, min = 24.0s, avg = 44.8s, dev = 19.3s //tensorflow/python/kernel_tests/linalg:norm_op_test_cpu PASSED in 7.7s Stats over 20 runs: max = 7.7s, min = 4.9s, avg = 6.3s, dev = 1.0s //tensorflow/python/kernel_tests/linalg:normalize_op_test_cpu PASSED in 12.1s Stats over 20 runs: max = 12.1s, min = 5.9s, avg = 9.3s, dev = 1.9s //tensorflow/python/kernel_tests/linalg:qr_op_test_cpu PASSED in 126.3s Stats over 20 runs: max = 126.3s, min = 30.5s, avg = 82.7s, dev = 31.0s //tensorflow/python/kernel_tests/linalg:self_adjoint_eig_op_test_cpu PASSED in 23.9s Stats over 20 runs: max = 23.9s, min = 3.5s, avg = 10.5s, dev = 6.0s //tensorflow/python/kernel_tests/math_ops:batch_matmul_op_test_cpu PASSED in 36.5s Stats over 20 runs: max = 36.5s, min = 19.2s, avg = 26.6s, dev = 6.5s //tensorflow/python/kernel_tests/math_ops:matmul_op_test_cpu PASSED in 16.7s Stats over 20 runs: max = 16.7s, min = 11.7s, avg = 14.1s, dev = 1.7s //tensorflow/python/kernel_tests/math_ops:tensordot_op_test_cpu PASSED in 77.2s Stats over 20 runs: max = 77.2s, min = 4.6s, avg = 28.5s, dev = 21.9s //tensorflow/python/kernel_tests/nn_ops:embedding_ops_test_cpu PASSED in 21.9s Stats over 20 runs: max = 21.9s, min = 11.8s, avg = 14.3s, dev = 2.2s //tensorflow/python/data/experimental/kernel_tests/service:local_workers_test PASSED in 20.5s Stats over 24 runs: max = 20.5s, min = 9.1s, avg = 14.5s, dev = 3.4s //tensorflow/python/data/kernel_tests:interleave_test PASSED in 25.7s Stats over 24 runs: max = 25.7s, min = 9.6s, avg = 15.7s, dev = 4.3s //tensorflow/python/data/kernel_tests:sample_from_datasets_test PASSED in 17.1s Stats over 24 runs: max = 17.1s, min = 3.4s, avg = 8.9s, dev = 4.0s //tensorflow/compiler/xla/tests:array_elementwise_ops_test_cpu PASSED in 9.9s Stats over 25 runs: max = 9.9s, min = 6.3s, avg = 7.8s, dev = 0.9s //tensorflow/compiler/xla/tests:select_and_scatter_test_cpu PASSED in 37.5s Stats over 25 runs: max = 37.5s, min = 7.6s, avg = 12.2s, dev = 7.7s 
//tensorflow/compiler/xla/tests:convolution_variants_test_cpu PASSED in 8.4s Stats over 30 runs: max = 8.4s, min = 5.9s, avg = 7.2s, dev = 0.6s //tensorflow/compiler/xla/tests:iota_test_cpu PASSED in 13.1s Stats over 30 runs: max = 13.1s, min = 11.5s, avg = 12.3s, dev = 0.4s //tensorflow/compiler/xla/tests:params_test_cpu PASSED in 11.0s Stats over 30 runs: max = 11.0s, min = 9.6s, avg = 10.3s, dev = 0.4s //tensorflow/compiler/xla/tests:reshape_test_cpu PASSED in 9.6s Stats over 30 runs: max = 9.6s, min = 6.0s, avg = 7.3s, dev = 0.8s //tensorflow/python/kernel_tests/nn_ops:conv_ops_3d_test_cpu PASSED in 15.7s Stats over 30 runs: max = 15.7s, min = 2.9s, avg = 7.6s, dev = 2.7s //tensorflow/compiler/xla/tests:reduce_test_cpu PASSED in 8.2s Stats over 31 runs: max = 8.2s, min = 6.5s, avg = 7.2s, dev = 0.5s //tensorflow/compiler/xla/tests:scalar_computations_test_cpu PASSED in 15.0s Stats over 32 runs: max = 15.0s, min = 7.8s, avg = 11.1s, dev = 2.2s //tensorflow/python/data/experimental/kernel_tests/service:auto_shard_test PASSED in 22.2s Stats over 32 runs: max = 22.2s, min = 5.2s, avg = 11.7s, dev = 3.6s //tensorflow/python/data/experimental/kernel_tests/service:data_service_ops_test PASSED in 27.9s Stats over 32 runs: max = 27.9s, min = 8.1s, avg = 16.3s, dev = 5.2s //tensorflow/compiler/xla/tests:batch_normalization_test_cpu PASSED in 14.5s Stats over 40 runs: max = 14.5s, min = 7.3s, avg = 9.5s, dev = 1.7s //tensorflow/compiler/xla/tests:bfloat16_test_cpu PASSED in 11.5s Stats over 40 runs: max = 11.5s, min = 6.9s, avg = 9.0s, dev = 1.3s //tensorflow/compiler/xla/tests:conv_depthwise_backprop_filter_test_cpu PASSED in 10.2s Stats over 40 runs: max = 10.2s, min = 7.4s, avg = 8.9s, dev = 0.7s //tensorflow/compiler/xla/tests:slice_test_cpu PASSED in 13.8s Stats over 40 runs: max = 13.8s, min = 9.9s, avg = 11.0s, dev = 0.9s //tensorflow/compiler/mlir/quantization/tensorflow/python:quantize_model_test PASSED in 43.7s Stats over 50 runs: max = 43.7s, min = 18.7s, avg = 30.2s, dev = 7.7s //tensorflow/compiler/tests:sort_ops_test_cpu PASSED in 41.4s Stats over 50 runs: max = 41.4s, min = 3.3s, avg = 11.2s, dev = 8.3s //tensorflow/compiler/tests:sort_ops_test_cpu_mlir_bridge_test PASSED in 42.5s Stats over 50 runs: max = 42.5s, min = 2.8s, avg = 10.3s, dev = 8.7s //tensorflow/compiler/xla/tests:conv_depthwise_test_cpu PASSED in 10.7s Stats over 50 runs: max = 10.7s, min = 7.7s, avg = 9.0s, dev = 0.7s //tensorflow/compiler/xla/tests:convolution_test_1d_no_vmodule_cpu PASSED in 13.5s Stats over 50 runs: max = 13.5s, min = 10.6s, avg = 12.2s, dev = 0.6s //tensorflow/compiler/xla/tests:convolution_test_cpu PASSED in 16.1s Stats over 50 runs: max = 16.1s, min = 8.3s, avg = 11.5s, dev = 1.7s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_dense_mat_mul_grad_test_cpu PASSED in 12.7s Stats over 50 runs: max = 12.7s, min = 4.1s, avg = 7.5s, dev = 2.3s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_grad_test_cpu PASSED in 6.6s Stats over 50 runs: max = 6.6s, min = 3.0s, avg = 4.1s, dev = 0.8s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_sparse_mat_mul_grad_test_cpu PASSED in 8.0s Stats over 50 runs: max = 8.0s, min = 3.4s, avg = 4.5s, dev = 1.3s //tensorflow/python/kernel_tests/math_ops:cwise_ops_binary_test_cpu PASSED in 33.9s Stats over 50 runs: max = 33.9s, min = 6.4s, avg = 17.1s, dev = 7.7s //tensorflow/python/kernel_tests/math_ops:cwise_ops_test_cpu PASSED in 14.3s Stats over 50 runs: max = 14.3s, min = 3.6s, avg = 6.2s, dev = 2.4s 
//tensorflow/python/kernel_tests/math_ops:cwise_ops_unary_test_cpu PASSED in 12.8s Stats over 50 runs: max = 12.8s, min = 2.9s, avg = 4.5s, dev = 2.1s
Executed 3652 out of 3652 tests: 3652 tests pass.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
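The run above reports two FLAKY targets (//tensorflow/python/distribute:multi_worker_continuous_run_test_cpu and //tensorflow/python/distribute:cross_device_ops_test_cpu) and points at the --test_verbose_timeout_warnings option for the test-size warning. A minimal sketch of re-running just those two targets with that flag, assuming a TensorFlow source checkout with bazel on PATH; the target labels and the timeout flag come from the log, while the helper script itself and the --runs_per_test count are illustrative choices, not part of the original invocation:

# Hypothetical re-run helper (not from the log): exercises the two targets
# reported FLAKY above and enables the timeout/size warning the summary mentions.
import subprocess

FLAKY_TARGETS = [
    "//tensorflow/python/distribute:multi_worker_continuous_run_test_cpu",
    "//tensorflow/python/distribute:cross_device_ops_test_cpu",
]

cmd = [
    "bazel", "test",
    "--test_verbose_timeout_warnings",  # list tests whose declared size is too big
    "--runs_per_test=3",                # repeat each test to surface flakiness (arbitrary count)
    *FLAKY_TARGETS,
]

# check=False: a non-zero exit code simply means at least one attempt failed.
result = subprocess.run(cmd, check=False)
print("bazel exited with", result.returncode)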