==================== Test output for //tensorflow/dtensor/python/tests:input_util_test (shard 1 of 8):
2023-03-28 05:50:29.384434: I tensorflow/core/util/port.cc:116] Experimental oneDNN custom operations are on. If you experience issues, please turn them off by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Running tests under Python 3.11.2: /usr/local/bin/python3
[ RUN      ] DTensorDatasetTest.testForInIterationEager
I0328 05:50:34.311458 281473542091648 mesh_util.py:35] This is client 0 of 1 clients
I0328 05:50:34.311716 281473542091648 mesh_util.py:36] Number of global CPU devices: 16
I0328 05:50:34.311910 281473542091648 mesh_util.py:39] Global device IDs:
[[[ 0  1]
  [ 2  3]]
 [[ 4  5]
  [ 6  7]]
 [[ 8  9]
  [10 11]]
 [[12 13]
  [14 15]]]
I0328 05:50:34.312552 281473542091648 mesh_util.py:40] Local device IDs: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
I0328 05:50:34.312797 281473542091648 mesh_util.py:41] Local devices: ['/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1', '/job:localhost/replica:0/task:0/device:CPU:2', '/job:localhost/replica:0/task:0/device:CPU:3', '/job:localhost/replica:0/task:0/device:CPU:4', '/job:localhost/replica:0/task:0/device:CPU:5', '/job:localhost/replica:0/task:0/device:CPU:6', '/job:localhost/replica:0/task:0/device:CPU:7', '/job:localhost/replica:0/task:0/device:CPU:8', '/job:localhost/replica:0/task:0/device:CPU:9', '/job:localhost/replica:0/task:0/device:CPU:10', '/job:localhost/replica:0/task:0/device:CPU:11', '/job:localhost/replica:0/task:0/device:CPU:12', '/job:localhost/replica:0/task:0/device:CPU:13', '/job:localhost/replica:0/task:0/device:CPU:14', '/job:localhost/replica:0/task:0/device:CPU:15']
2023-03-28 05:50:34.646796: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype float and shape [32,8,8,3] [[{{node Placeholder/_0}}]]
2023-03-28 05:50:34.775601: I tensorflow/core/grappler/optimizers/data/replicate_on_split.cc:32] Running replicate on split optimization
2023-03-28 05:50:34.996785: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_14' with dtype int32 and shape [3,4] [[{{node Placeholder/_14}}]]
2023-03-28 05:50:35.074482: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [32,8,8,3] [[{{node Placeholder/_4}}]]
2023-03-28 05:50:36.844863: I tensorflow/core/common_runtime/executor.cc:1210] [/job:localhost/replica:0/task:0/device:CPU:9] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): CANCELLED: Operation was cancelled [[{{node tf.StatefulPartitionedCall/eager_operation}}]]
2023-03-28 05:50:36.847812: I tensorflow/core/common_runtime/executor.cc:1210] [/job:localhost/replica:0/task:0/device:CPU:7] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): CANCELLED: Operation was cancelled [[{{node tf.StatefulPartitionedCall/eager_operation}}]]
2023-03-28 05:50:36.849976: E tensorflow/dtensor/cc/dtensor_device.cc:2000] Encountered error while executing function: CopyToMesh__func_5333616190392767472_5950845106898307211_6858324592425701659_1 for mesh : |batch=4,height=2,width=2|0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15|0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15|/job:localhost/replica:0/task:0/device:CPU:0,/job:localhost/replica:0/task:0/device:CPU:1,/job:localhost/replica:0/task:0/device:CPU:2,/job:localhost/replica:0/task:0/device:CPU:3,/job:localhost/replica:0/task:0/device:CPU:4,/job:localhost/replica:0/task:0/device:CPU:5,/job:localhost/replica:0/task:0/device:CPU:6,/job:localhost/replica:0/task:0/device:CPU:7,/job:localhost/replica:0/task:0/device:CPU:8,/job:localhost/replica:0/task:0/device:CPU:9,/job:localhost/replica:0/task:0/device:CPU:10,/job:localhost/replica:0/task:0/device:CPU:11,/job:localhost/replica:0/task:0/device:CPU:12,/job:localhost/replica:0/task:0/device:CPU:13,/job:localhost/replica:0/task:0/device:CPU:14,/job:localhost/replica:0/task:0/device:CPU:15 / error : {{function_node IteratorGetNext__func_5602927597924918942_12989052755378955192_1628789561242421788_0}} End of sequence [[{{node tf.StatefulPartitionedCall/eager_operation}}]] Encountered when executing an operation using EagerExecutor. This error cancels all future operations and poisons their output tensors.
2023-03-28 05:50:36.853729: I tensorflow/core/common_runtime/executor.cc:1210] [/job:localhost/replica:0/task:0/device:CPU:3] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): CANCELLED: Operation was cancelled [[{{node tf.StatefulPartitionedCall/eager_operation}}]]
2023-03-28 05:50:36.854083: E tensorflow/dtensor/cc/dtensor_device.cc:2046] Error executing CopyToMesh {{function_node IteratorGetNext__func_5602927597924918942_12989052755378955192_1628789561242421788_0}} End of sequence [[{{node tf.StatefulPartitionedCall/eager_operation}}]] Encountered when executing an operation using EagerExecutor. This error cancels all future operations and poisons their output tensors.
[  FAILED  ] DTensorDatasetTest.testForInIterationEager
INFO:tensorflow:time(__main__.DTensorDatasetTest.testForInIterationEager): 2.68s
I0328 05:50:36.988229 281473542091648 test_util.py:2462] time(__main__.DTensorDatasetTest.testForInIterationEager): 2.68s
[ RUN      ] DTensorDatasetTest.testIterOnBatchedDatasetGraph
I0328 05:50:36.989587 281473542091648 mesh_util.py:35] This is client 0 of 1 clients
I0328 05:50:36.989788 281473542091648 mesh_util.py:36] Number of global CPU devices: 16
2023-03-28 05:50:37.263094: I tensorflow/core/grappler/optimizers/data/replicate_on_split.cc:32] Running replicate on split optimization
2023-03-28 05:50:37.567886: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]]
2023-03-28 05:50:37.568938: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]]
2023-03-28 05:50:37.648816: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]]
2023-03-28 05:50:37.796437: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]]
2023-03-28 05:50:39.122805: I tensorflow/dtensor/cc/dtensor_device.cc:1492] DTensor cache key lookup missed for __inference_train_620. DTensor is (re-)computing its SPMD transformation.
2023-03-28 05:50:39.131019: I tensorflow/dtensor/cc/dtensor_device.cc:1561] DTensor cache key lookup missed for __inference_train_620. DTensor is (re-)computing its ExecutionFunctions.
2023-03-28 05:50:39.282027: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype float and shape [1] [[{{node Placeholder/_1}}]]
INFO:tensorflow:time(__main__.DTensorDatasetTest.testIterOnBatchedDatasetGraph): 2.81s
I0328 05:50:39.795730 281473542091648 test_util.py:2462] time(__main__.DTensorDatasetTest.testIterOnBatchedDatasetGraph): 2.81s
[       OK ] DTensorDatasetTest.testIterOnBatchedDatasetGraph
[ RUN      ] DTensorDatasetTest.testIterWithLayouts1 (images_sharding=['unsharded', 'unsharded', 'unsharded', 'unsharded'], labels_sharding=['unsharded', 'unsharded'], is_graph=True)
I0328 05:50:39.798059 281473542091648 mesh_util.py:35] This is client 0 of 1 clients
I0328 05:50:39.798280 281473542091648 mesh_util.py:36] Number of global CPU devices: 16
2023-03-28 05:50:40.068350: I tensorflow/core/grappler/optimizers/data/replicate_on_split.cc:32] Running replicate on split optimization
2023-03-28 05:50:40.139393: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]]
2023-03-28 05:50:40.140578: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]]
2023-03-28 05:50:40.401296: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]]
2023-03-28 05:50:40.402474: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]]
INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:40.589123: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]] 2023-03-28 05:50:40.689373: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:40.690555: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:40.781965: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:40.783150: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:40.894478: I 
tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:40.895656: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:40.979968: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:40.981103: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]] 2023-03-28 05:50:41.075095: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]] 2023-03-28 05:50:41.076278: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node 
Placeholder/_17}}]] 2023-03-28 05:50:41.176854: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:41.178040: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:41.262663: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:41.264956: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]] 2023-03-28 05:50:41.357889: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:41.359076: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 
'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:41.442933: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:41.444154: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]] 2023-03-28 05:50:41.531072: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]] 2023-03-28 05:50:41.532306: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]] 2023-03-28 05:50:41.672211: I tensorflow/dtensor/cc/dtensor_device.cc:1492] DTensor cache key lookup missed for __inference_train_855. DTensor is (re-)computing its SPMD transformation. 2023-03-28 05:50:41.681191: I tensorflow/dtensor/cc/dtensor_device.cc:1561] DTensor cache key lookup missed for __inference_train_855. DTensor is (re-)computing its ExecutionFunctions. 
2023-03-28 05:50:41.933012: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype float and shape [1] [[{{node Placeholder/_1}}]] INFO:tensorflow:time(__main__.DTensorDatasetTest.testIterWithLayouts1 (images_sharding=['unsharded', 'unsharded', 'unsharded', 'unsharded'], labels_sharding=['unsharded', 'unsharded'], is_graph=True)): 2.85s I0328 05:50:42.643256 281473542091648 test_util.py:2462] time(__main__.DTensorDatasetTest.testIterWithLayouts1 (images_sharding=['unsharded', 'unsharded', 'unsharded', 'unsharded'], labels_sharding=['unsharded', 'unsharded'], is_graph=True)): 2.85s [ OK ] DTensorDatasetTest.testIterWithLayouts1 (images_sharding=['unsharded', 'unsharded', 'unsharded', 'unsharded'], labels_sharding=['unsharded', 'unsharded'], is_graph=True) [ RUN ] DTensorDatasetTest.testIterWithLayouts7 (images_sharding=['unsharded', 'width', 'height', 'unsharded'], labels_sharding=['unsharded', 'unsharded'], is_graph=True) I0328 05:50:42.646020 281473542091648 mesh_util.py:35] This is client 0 of 1 clients I0328 05:50:42.646281 281473542091648 mesh_util.py:36] Number of global CPU devices: 16 I0328 05:50:42.646506 281473542091648 mesh_util.py:39] Global device IDs: [[[ 0 1] [ 2 3]] [[ 4 5] [ 6 7]] [[ 8 9] [10 11]] [[12 13] [14 15]]] I0328 05:50:42.647170 281473542091648 mesh_util.py:40] Local device IDs: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] I0328 05:50:42.647377 281473542091648 mesh_util.py:41] Local devices: ['/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1', '/job:localhost/replica:0/task:0/device:CPU:2', '/job:localhost/replica:0/task:0/device:CPU:3', '/job:localhost/replica:0/task:0/device:CPU:4', '/job:localhost/replica:0/task:0/device:CPU:5', '/job:localhost/replica:0/task:0/device:CPU:6', 
'/job:localhost/replica:0/task:0/device:CPU:7', '/job:localhost/replica:0/task:0/device:CPU:8', '/job:localhost/replica:0/task:0/device:CPU:9', '/job:localhost/replica:0/task:0/device:CPU:10', '/job:localhost/replica:0/task:0/device:CPU:11', '/job:localhost/replica:0/task:0/device:CPU:12', '/job:localhost/replica:0/task:0/device:CPU:13', '/job:localhost/replica:0/task:0/device:CPU:14', '/job:localhost/replica:0/task:0/device:CPU:15'] 2023-03-28 05:50:42.942511: I tensorflow/core/grappler/optimizers/data/replicate_on_split.cc:32] Running replicate on split optimization 2023-03-28 05:50:43.024386: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:43.025615: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:43.142635: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:43.143887: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:43.236205: I 
tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:43.237426: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:43.349936: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]] 2023-03-28 05:50:43.351149: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:43.441140: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:43.442357: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node 
Placeholder/_5}}]] 2023-03-28 05:50:43.546572: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:43.547806: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:43.637901: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:43.639117: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:43.727187: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:43.728435: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' 
with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:43.839016: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:43.840282: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:43.941125: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:43.942307: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]] 2023-03-28 05:50:44.123225: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:44.124401: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a 
value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]] 2023-03-28 05:50:44.271060: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:44.272216: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:44.410321: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:44.411532: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:44.517581: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:44.518778: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this 
message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_5' with dtype float and shape [1] [[{{node Placeholder/_5}}]] 2023-03-28 05:50:44.659511: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:44.660619: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_17' with dtype int32 and shape [3,4] [[{{node Placeholder/_17}}]] 2023-03-28 05:50:44.818909: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:44.820029: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_18' with dtype int32 and shape [3,2] [[{{node Placeholder/_18}}]] 2023-03-28 05:50:44.947487: I tensorflow/dtensor/cc/dtensor_device.cc:1492] DTensor cache key lookup missed for __inference_train_1094. DTensor is (re-)computing its SPMD transformation. 2023-03-28 05:50:44.955187: I tensorflow/dtensor/cc/dtensor_device.cc:1561] DTensor cache key lookup missed for __inference_train_1094. DTensor is (re-)computing its ExecutionFunctions. 
2023-03-28 05:50:45.189878: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_1' with dtype float and shape [1] [[{{node Placeholder/_1}}]]
INFO:tensorflow:time(__main__.DTensorDatasetTest.testIterWithLayouts7 (images_sharding=['unsharded', 'width', 'height', 'unsharded'], labels_sharding=['unsharded', 'unsharded'], is_graph=True)): 3.27s
[ OK ] DTensorDatasetTest.testIterWithLayouts7 (images_sharding=['unsharded', 'width', 'height', 'unsharded'], labels_sharding=['unsharded', 'unsharded'], is_graph=True)
[ RUN ] DTensorIteratorSpecTest.testFromTensorList
I0328 05:50:45.916289 281473542091648 mesh_util.py:35] This is client 0 of 1 clients
I0328 05:50:45.916502 281473542091648 mesh_util.py:36] Number of global CPU devices: 8
I0328 05:50:45.916685 281473542091648 mesh_util.py:39] Global device IDs: [0 1 2 3 4 5 6 7]
I0328 05:50:45.917112 281473542091648 mesh_util.py:40] Local device IDs: [0, 1, 2, 3, 4, 5, 6, 7]
I0328 05:50:45.917279 281473542091648 mesh_util.py:41] Local devices: ['/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1', '/job:localhost/replica:0/task:0/device:CPU:2', '/job:localhost/replica:0/task:0/device:CPU:3', '/job:localhost/replica:0/task:0/device:CPU:4', '/job:localhost/replica:0/task:0/device:CPU:5', '/job:localhost/replica:0/task:0/device:CPU:6', '/job:localhost/replica:0/task:0/device:CPU:7']
2023-03-28 05:50:46.173561: I tensorflow/core/grappler/optimizers/data/replicate_on_split.cc:32] Running replicate on split optimization
2023-03-28 05:50:46.228177: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_4' with dtype float and shape [8,8,3] [[{{node Placeholder/_4}}]]
2023-03-28 05:50:46.229125: I tensorflow/core/common_runtime/executor.cc:1210] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_16' with dtype int32 and shape [1,4] [[{{node Placeholder/_16}}]]
INFO:tensorflow:time(__main__.DTensorIteratorSpecTest.testFromTensorList): 1.63s
[ OK ] DTensorIteratorSpecTest.testFromTensorList
[ RUN ] InputUtilHelpersTest.testShardCounts5 (mesh_dims=[('batch', 2), ('height', 4), ('width', 2)], layout_specs=['batch', 'width', 'height'], batch_dim='batch', counts=[1, 2, 4])
I0328 05:50:47.543154 281473542091648 mesh_util.py:35] This is client 0 of 1 clients
I0328 05:50:47.543310 281473542091648 mesh_util.py:36] Number of global CPU devices: 16
I0328 05:50:47.543449 281473542091648 mesh_util.py:39] Global device IDs: [[[ 0 1] [ 2 3] [ 4 5] [ 6 7]] [[ 8 9] [10 11] [12 13] [14 15]]]
I0328 05:50:47.543854 281473542091648 mesh_util.py:40] Local device IDs: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
I0328 05:50:47.543983 281473542091648 mesh_util.py:41] Local devices: ['/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1', '/job:localhost/replica:0/task:0/device:CPU:2', '/job:localhost/replica:0/task:0/device:CPU:3', '/job:localhost/replica:0/task:0/device:CPU:4', '/job:localhost/replica:0/task:0/device:CPU:5', '/job:localhost/replica:0/task:0/device:CPU:6', '/job:localhost/replica:0/task:0/device:CPU:7', '/job:localhost/replica:0/task:0/device:CPU:8', '/job:localhost/replica:0/task:0/device:CPU:9', '/job:localhost/replica:0/task:0/device:CPU:10', '/job:localhost/replica:0/task:0/device:CPU:11', '/job:localhost/replica:0/task:0/device:CPU:12', '/job:localhost/replica:0/task:0/device:CPU:13', '/job:localhost/replica:0/task:0/device:CPU:14', '/job:localhost/replica:0/task:0/device:CPU:15']
INFO:tensorflow:time(__main__.InputUtilHelpersTest.testShardCounts5 (mesh_dims=[('batch', 2), ('height', 4), ('width', 2)], layout_specs=['batch', 'width', 'height'], batch_dim='batch', counts=[1, 2, 4])): 0.0s
[ OK ] InputUtilHelpersTest.testShardCounts5 (mesh_dims=[('batch', 2), ('height', 4), ('width', 2)], layout_specs=['batch', 'width', 'height'], batch_dim='batch', counts=[1, 2, 4])
======================================================================
ERROR: testForInIterationEager (__main__.DTensorDatasetTest)
DTensorDatasetTest.testForInIterationEager
testForInIterationEager(False)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/dtensor/python/tests/input_util_test.runfiles/absl_py/absl/testing/parameterized.py", line 316, in bound_param_test
    return test_method(self, *testcase_params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/dtensor/python/tests/input_util_test.runfiles/org_tensorflow/tensorflow/dtensor/python/tests/input_util_test.py", line 186, in testForInIteration
    self.assertDTensorEqual(output, images_layout, d_output)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/dtensor/python/tests/input_util_test.runfiles/org_tensorflow/tensorflow/dtensor/python/tests/test_util.py", line 303, in assertDTensorEqual
    unpacked = [t.numpy() for t in api.unpack(result_dtensor)]
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/dtensor/python/tests/input_util_test.runfiles/org_tensorflow/tensorflow/dtensor/python/api.py", line 352, in unpack
    return _dtensor_device().unpack(tensor)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/dtensor/python/tests/input_util_test.runfiles/org_tensorflow/tensorflow/dtensor/python/dtensor_device.py", line 250, in unpack
    tensors = _pywrap_dtensor_device.Unpack(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: cannot create std::vector larger than max_size()
---------------------------------------------------------------------- Ran 6 tests in 13.235s FAILED (errors=1) ================================================================================ ==================== Test output for //tensorflow/python/kernel_tests/nn_ops:pooling_ops_test_cpu (shard 8 of 10): 2023-03-28 05:50:56.627169: I tensorflow/core/util/port.cc:116] Experimental oneDNN custom operations are on. If you experience issues, please turn them off by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. Running tests under Python 3.11.2: /usr/local/bin/python3 [ RUN ] PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test.py:225: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. W0328 05:51:00.499648 281472815166336 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test.py:225: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 
[ SKIPPED ] PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.01s I0328 05:51:00.506175 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.01s [ RUN ] PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:00.506956 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1] 2023-03-28 05:51:00.550047: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:372] MLIR V1 optimization pass is not enabled 2023-03-28 05:51:00.563203: E ./tensorflow/core/graph/mkl_graph_util.h:182] oneDNN BFloat16 support are only on platforms with AVX512. Falling back to default implementation if present. 
INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.07s I0328 05:51:00.573997 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.07s [ OK ] PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds [ FAILED ] PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds): 0.08s I0328 05:51:00.650828 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds): 0.08s [ RUN ] PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.652106 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.652808 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePadding5 (pool_func=, 
data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1] I0328 05:51:00.653236 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:00.665033 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.666382 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [2, 2, 2, 2] 16 [1, 2, 1, 1] [1, 1, 1, 1] I0328 05:51:00.666857 281472815166336 pooling_ops_test.py:247] Running NHWC test. 
False [2, 2, 2, 2] 16 [1, 2, 1, 1] [1, 1, 1, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.02s I0328 05:51:00.686603 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.687922 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.688517 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] 
PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.689073 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 8, 8, 8] 512 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:00.689441 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 8, 8, 8] 512 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:00.701219 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolSamePadding_211 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePadding_211 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePadding_211 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.702428 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePadding_211 
(pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1] I0328 05:51:00.702836 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:00.713143 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.714302 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.714888 281472815166336 test_util.py:2462] 
time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.715436 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 1, 1, 10] 10 [1, 1, 1, 2] [1, 1, 1, 2] I0328 05:51:00.715803 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 1, 1, 10] 10 [1, 1, 1, 2] [1, 1, 1, 2] INFO:tensorflow:time(__main__.PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:00.730348 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [1, 2, 2, 6] 24 [1, 1, 1, 3] [1, 1, 1, 3] I0328 05:51:00.731335 281472815166336 pooling_ops_test.py:247] Running NHWC test. 
True [1, 2, 2, 6] 24 [1, 1, 1, 3] [1, 1, 1, 3] INFO:tensorflow:time(__main__.PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True)): 0.16s I0328 05:51:00.889375 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True)): 0.16s [ OK ] PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True) [ RUN ] PoolingTest.testDepthwiseMaxPoolingWithArgmax INFO:tensorflow:time(__main__.PoolingTest.testDepthwiseMaxPoolingWithArgmax): 0.07s I0328 05:51:00.958413 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testDepthwiseMaxPoolingWithArgmax): 0.07s [ OK ] PoolingTest.testDepthwiseMaxPoolingWithArgmax [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:00.959684 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s I0328 05:51:00.960323 281472815166336 
test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s I0328 05:51:00.960878 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 1] 9 [1, 1, 1, 1] [1, 2, 2, 1] I0328 05:51:00.961236 281472815166336 pooling_ops_test.py:247] Running NHWC test. 
False [1, 3, 3, 1] 9 [1, 1, 1, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.02s I0328 05:51:00.980571 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.981876 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.982502 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] 
PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.983059 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:00.983594 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] I0328 05:51:00.983953 281472815166336 pooling_ops_test.py:247] Running NHWC test. 
False [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:00.994635 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] I0328 05:51:00.995572 281472815166336 pooling_ops_test.py:247] Running NHWC test. True [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.02s I0328 05:51:01.012956 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.02s [ OK ] PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) [ RUN ] PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.014194 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ 
RUN ] PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.014801 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.015346 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:01.015706 281472815166336 pooling_ops_test.py:247] Running NHWC test. 
True [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.01s I0328 05:51:01.029177 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.01s [ OK ] PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) [ RUN ] PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.030396 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.030970 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. 
False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:01.031339 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:01.042181 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s I0328 05:51:01.043330 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.043957 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, 
data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.044514 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.045068 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:01.045431 281472815166336 pooling_ops_test.py:247] Running NHWC test. 
False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s I0328 05:51:01.056654 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.057950 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.058611 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] 
PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.059170 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.0s I0328 05:51:01.059659 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.060198 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', 
data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.060734 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.061259 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolFwd_maxpool4 INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolFwd_maxpool4): 0.0s I0328 05:51:01.061647 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolFwd_maxpool4): 0.0s [ OK ] PoolingTest.testMaxPoolFwd_maxpool4 [ RUN ] PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors): 0.04s I0328 05:51:01.098614 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors): 0.04s [ OK ] PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors [ RUN ] PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.103026 281472815166336 test_util.py:2462] 
time(__main__.PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ OK ] PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ RUN ] PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.107378 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ OK ] PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ RUN ] PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.111071 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ OK ] PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ RUN ] PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.0s I0328 05:51:01.114581 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 
0.0s [ OK ] PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 7, 7, 1] 49 [1, 2, 2, 1] [1, 3, 3, 1] I0328 05:51:01.115338 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 7, 7, 1] 49 [1, 2, 2, 1] [1, 3, 3, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:01.130002 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.131278 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) 
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.131918 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.132466 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.0s I0328 05:51:01.132944 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) 
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.133479 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.134013 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.134540 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. 
False [1, 6, 6, 1] 36 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:01.134889 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 6, 6, 1] 36 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.02s I0328 05:51:01.150491 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.151773 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.152374 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, 
use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.152921 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 2, 3, 3] 18 [1, 2, 2, 1] [1, 2, 2, 1] I0328 05:51:01.153271 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 2, 3, 3] 18 [1, 2, 2, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.02s I0328 05:51:01.169370 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.170595 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] 
PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.171195 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.171737 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1] I0328 05:51:01.172105 281472815166336 pooling_ops_test.py:247] Running NHWC test. 
True [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.01s I0328 05:51:01.185770 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.01s [ OK ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True) [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.187035 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.187663 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] 
PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.188212 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 4, 4, 4] 64 [1, 2, 2, 1] [1, 2, 2, 1] I0328 05:51:01.188569 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 4, 4, 4] 64 [1, 2, 2, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:01.203446 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.204668 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s [ RUN ] 
PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.205287 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.205836 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1] I0328 05:51:01.206219 281472815166336 pooling_ops_test.py:247] Running NHWC test. 
False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s I0328 05:51:01.227264 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.228691 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.229373 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False) 
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.229955 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 1, 2, 1] I0328 05:51:01.230322 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 1, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s I0328 05:51:01.250031 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.251396 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, 
use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.252011 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.252554 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.253085 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ 
SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:01.253609 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 2, 1, 1] I0328 05:51:01.253957 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 2, 1, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:01.264953 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.266269 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, 
data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:01.266726 281472815166336 pooling_ops_test.py:247] Running NHWC test. False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:01.277934 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s I0328 05:51:01.279139 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s I0328 
05:51:01.279823 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:01.280389 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolingWithArgmax INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolingWithArgmax): 0.03s I0328 05:51:01.314723 281472815166336 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolingWithArgmax): 0.03s [ OK ] PoolingTest.testMaxPoolingWithArgmax ====================================================================== FAIL: testAvgPoolGradOutputMemoryOutOfBounds (__main__.PoolingTest) PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test.py", line 2337, in testAvgPoolGradOutputMemoryOutOfBounds with self.assertRaisesRegex( AssertionError: InvalidArgumentError not raised ---------------------------------------------------------------------- Ran 96 tests in 0.816s FAILED (failures=1, skipped=65) 
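The shard's one failure above reports `AssertionError: InvalidArgumentError not raised`: the test wraps the gradient op in `assertRaisesRegex`, and the context manager itself fails when the wrapped code completes without raising. A minimal sketch of that assertion pattern, using hypothetical names and `ValueError` as a stand-in for TensorFlow's `InvalidArgumentError`:

```python
import unittest


class AssertRaisesPatternSketch(unittest.TestCase):
    # Illustrative only: mirrors the shape of the failing test, where an
    # op expected to raise completes normally and assertRaisesRegex
    # therefore raises AssertionError ("... not raised") itself.
    def test_expected_error_not_raised(self):
        # The outer context catches the AssertionError that the inner
        # assertRaisesRegex emits when its body never raises ValueError.
        with self.assertRaises(AssertionError):
            with self.assertRaisesRegex(ValueError, "out of bounds"):
                pass  # op returned normally -> "ValueError not raised"
```

In the real test the body runs `AvgPoolGrad` with out-of-bounds output memory, so a passing run requires the kernel to reject the input; here the no-op body makes the same failure mode reproducible in isolation.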
================================================================================ ==================== Test output for //tensorflow/python/kernel_tests/nn_ops:pooling_ops_test_cpu (shard 8 of 10): 2023-03-28 05:51:02.999071: I tensorflow/core/util/port.cc:116] Experimental oneDNN custom operations are on. If you experience issues, please turn them off by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. Running tests under Python 3.11.2: /usr/local/bin/python3 [ RUN ] PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test.py:225: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. W0328 05:51:06.030899 281473789686656 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test.py:225: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 
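The deprecation warning above points callers away from `tf.test.is_gpu_available` toward `tf.config.list_physical_devices('GPU')`. A minimal sketch of the recommended replacement, guarded so it degrades gracefully when TensorFlow is not installed (the guard is an assumption of this sketch, not part of the logged test):

```python
# Replacement suggested by the deprecation notice: enumerate physical
# GPUs via tf.config instead of calling the deprecated
# tf.test.is_gpu_available().
try:
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    gpu_available = len(gpus) > 0
except ImportError:
    # TensorFlow absent in this environment; treat as no GPU.
    gpu_available = False

print(gpu_available)
```

Tests like the ones in this log use the result to decide whether `use_gpu=True` parameterizations run or are skipped, which is why the CPU-only shard reports so many `[ SKIPPED ]` cases.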
[ SKIPPED ] PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.01s I0328 05:51:06.040006 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.01s [ RUN ] PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:06.040732 281473789686656 pooling_ops_test.py:247] Running NHWC test. False [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1] 2023-03-28 05:51:06.116594: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:372] MLIR V1 optimization pass is not enabled 2023-03-28 05:51:06.129758: E ./tensorflow/core/graph/mkl_graph_util.h:182] oneDNN BFloat16 support are only on platforms with AVX512. Falling back to default implementation if present. 
INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.1s I0328 05:51:06.137605 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.1s [ OK ] PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds [ FAILED ] PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds): 0.07s I0328 05:51:06.208489 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds): 0.07s [ RUN ] PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.209843 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.210530 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePadding5 (pool_func=, data_format='NHWC', 
data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1] I0328 05:51:06.210959 281473789686656 pooling_ops_test.py:247] Running NHWC test. False [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:06.222245 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.223520 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [2, 2, 2, 2] 16 [1, 2, 1, 1] [1, 1, 1, 1] I0328 05:51:06.223989 281473789686656 pooling_ops_test.py:247] Running NHWC test. 
False [2, 2, 2, 2] 16 [1, 2, 1, 1] [1, 1, 1, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.02s I0328 05:51:06.239599 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.240935 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.241556 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] 
PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.242121 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 8, 8, 8] 512 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:06.242492 281473789686656 pooling_ops_test.py:247] Running NHWC test. False [1, 8, 8, 8] 512 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:06.254612 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolSamePadding_211 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePadding_211 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePadding_211 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.255924 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePadding_211 
(pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1] I0328 05:51:06.256437 281473789686656 pooling_ops_test.py:247] Running NHWC test. False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:06.267423 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.268682 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.269306 281473789686656 test_util.py:2462] 
time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.269869 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 1, 1, 10] 10 [1, 1, 1, 2] [1, 1, 1, 2] I0328 05:51:06.270239 281473789686656 pooling_ops_test.py:247] Running NHWC test. False [1, 1, 1, 10] 10 [1, 1, 1, 2] [1, 1, 1, 2] INFO:tensorflow:time(__main__.PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.02s I0328 05:51:06.285655 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [1, 2, 2, 6] 24 [1, 1, 1, 3] [1, 1, 1, 3] I0328 05:51:06.286782 281473789686656 pooling_ops_test.py:247] Running NHWC test. 
True [1, 2, 2, 6] 24 [1, 1, 1, 3] [1, 1, 1, 3] INFO:tensorflow:time(__main__.PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True)): 0.02s I0328 05:51:06.306350 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True)): 0.02s [ OK ] PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True) [ RUN ] PoolingTest.testDepthwiseMaxPoolingWithArgmax INFO:tensorflow:time(__main__.PoolingTest.testDepthwiseMaxPoolingWithArgmax): 0.03s I0328 05:51:06.339794 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testDepthwiseMaxPoolingWithArgmax): 0.03s [ OK ] PoolingTest.testDepthwiseMaxPoolingWithArgmax [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:06.341236 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s I0328 05:51:06.341981 281473789686656 
test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s I0328 05:51:06.342602 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 1] 9 [1, 1, 1, 1] [1, 2, 2, 1] I0328 05:51:06.342980 281473789686656 pooling_ops_test.py:247] Running NHWC test. 
False [1, 3, 3, 1] 9 [1, 1, 1, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.02s I0328 05:51:06.361897 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.363224 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.363860 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] 
PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.364434 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.364984 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] I0328 05:51:06.365344 281473789686656 pooling_ops_test.py:247] Running NHWC test. 
False [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:06.376448 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] I0328 05:51:06.377431 281473789686656 pooling_ops_test.py:247] Running NHWC test. True [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.02s I0328 05:51:06.394843 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.02s [ OK ] PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) [ RUN ] PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.396153 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ 
RUN ] PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.396802 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:06.397360 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:06.397723 281473789686656 pooling_ops_test.py:247] Running NHWC test. 
True [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.01s
I0328 05:51:06.411471 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.01s
[ OK ] PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)
[ RUN ] PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.412755 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.413397 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test.
False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1]
I0328 05:51:06.413789 281473789686656 pooling_ops_test.py:247] Running NHWC test.
False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s
I0328 05:51:06.424784 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s
[ OK ] PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s
I0328 05:51:06.425978 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.426705 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.427277 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.427832 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test.
False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1]
I0328 05:51:06.428197 281473789686656 pooling_ops_test.py:247] Running NHWC test.
False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s
I0328 05:51:06.439128 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s
[ OK ] PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.440444 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.441122 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.441698 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.0s
I0328 05:51:06.442206 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.442764 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', data_type=tf.float32, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', data_type=tf.float32, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.443310 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.443856 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolFwd_maxpool4
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolFwd_maxpool4): 0.0s
I0328 05:51:06.444260 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolFwd_maxpool4): 0.0s
[ OK ] PoolingTest.testMaxPoolFwd_maxpool4
[ RUN ] PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors): 0.03s
I0328 05:51:06.478914 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors): 0.03s
[ OK ] PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors
[ RUN ] PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.483961 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
[ OK ] PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)
[ RUN ] PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.487537 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
[ OK ] PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)
[ RUN ] PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.490850 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s
[ OK ] PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)
[ RUN ] PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.0s
I0328 05:51:06.494004 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.0s
[ OK ] PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test.
False [1, 7, 7, 1] 49 [1, 2, 2, 1] [1, 3, 3, 1]
I0328 05:51:06.494663 281473789686656 pooling_ops_test.py:247] Running NHWC test.
False [1, 7, 7, 1] 49 [1, 2, 2, 1] [1, 3, 3, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s
I0328 05:51:06.509320 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s
[ OK ] PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.510664 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.511321 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.511883 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.0s
I0328 05:51:06.512372 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.512917 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.513469 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.514006 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test.
False [1, 6, 6, 1] 36 [1, 3, 3, 1] [1, 2, 2, 1]
I0328 05:51:06.514361 281473789686656 pooling_ops_test.py:247] Running NHWC test.
False [1, 6, 6, 1] 36 [1, 3, 3, 1] [1, 2, 2, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s
I0328 05:51:06.525454 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s
[ OK ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.526781 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.527425 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.527985 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test.
False [1, 2, 3, 3] 18 [1, 2, 2, 1] [1, 2, 2, 1]
I0328 05:51:06.528342 281473789686656 pooling_ops_test.py:247] Running NHWC test.
False [1, 2, 3, 3] 18 [1, 2, 2, 1] [1, 2, 2, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.02s
I0328 05:51:06.544548 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.02s
[ OK ] PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.545884 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.546579 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.547147 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)
INFO:tensorflow:Running NHWC test.
True [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1]
I0328 05:51:06.547522 281473789686656 pooling_ops_test.py:247] Running NHWC test.
True [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.01s
I0328 05:51:06.561970 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.01s
[ OK ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)
[ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.563216 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.563822 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.564370 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test.
False [1, 4, 4, 4] 64 [1, 2, 2, 1] [1, 2, 2, 1]
I0328 05:51:06.564730 281473789686656 pooling_ops_test.py:247] Running NHWC test.
False [1, 4, 4, 4] 64 [1, 2, 2, 1] [1, 2, 2, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.02s
I0328 05:51:06.580888 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.02s
[ OK ] PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.582168 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.582765 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.583319 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test.
False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1]
I0328 05:51:06.583679 281473789686656 pooling_ops_test.py:247] Running NHWC test.
False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s
I0328 05:51:06.604368 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s
[ OK ] PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.605810 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.606543 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.607104 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test.
False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 1, 2, 1]
I0328 05:51:06.607465 281473789686656 pooling_ops_test.py:247] Running NHWC test.
False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 1, 2, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s
I0328 05:51:06.626987 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s
[ OK ] PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.628357 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.628976 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.629525 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.630062 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)
[ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
I0328 05:51:06.630596 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s
[ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test.
False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 2, 1, 1]
I0328 05:51:06.630950 281473789686656 pooling_ops_test.py:247] Running NHWC test.
False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 2, 1, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s
I0328 05:51:06.642427 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s
[ OK ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.643711 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test.
False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1]
I0328 05:51:06.644129 281473789686656 pooling_ops_test.py:247] Running NHWC test.
False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1]
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s
I0328 05:51:06.654772 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s
[ OK ] PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)
[ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s
I0328 05:51:06.655932 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s
I0328
05:51:06.656672 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
I0328 05:51:06.657246 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolingWithArgmax
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolingWithArgmax): 0.06s
I0328 05:51:06.717500 281473789686656 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolingWithArgmax): 0.06s
[ OK ] PoolingTest.testMaxPoolingWithArgmax
======================================================================
FAIL: testAvgPoolGradOutputMemoryOutOfBounds (__main__.PoolingTest)
PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test.py", line 2337, in testAvgPoolGradOutputMemoryOutOfBounds
    with self.assertRaisesRegex(
AssertionError: InvalidArgumentError not raised
----------------------------------------------------------------------
Ran 96 tests in 0.688s
FAILED (failures=1, skipped=65)
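The `AssertionError: InvalidArgumentError not raised` above is the standard failure mode of `unittest`'s `assertRaisesRegex` context manager: when the guarded block completes without raising, the context manager itself raises `AssertionError("<ExceptionName> not raised")`. A minimal standalone sketch of that mechanism (this is an illustrative example using `ValueError`, not the actual TensorFlow test):

```python
import unittest

class AssertRaisesDemo(unittest.TestCase):
    # Mirrors the failure mode in the log: assertRaisesRegex fails with
    # "<Error> not raised" when the guarded block finishes without error.
    def check(self, should_raise):
        with self.assertRaisesRegex(ValueError, "out of bounds"):
            if should_raise:
                raise ValueError("index out of bounds")

demo = AssertRaisesDemo("check")  # hypothetical harness, not the TF test runner
demo.check(True)  # passes: the expected error was raised and matched the regex

try:
    demo.check(False)  # the guarded block never raised, so the assertion fails
except AssertionError as e:
    print(e)  # prints "ValueError not raised"
```

In the log's case, `testAvgPoolGradOutputMemoryOutOfBounds` expected the avg-pool gradient op to reject out-of-bounds output memory with an `InvalidArgumentError`; the op completed instead, so the test failed at the context-manager exit.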
================================================================================
==================== Test output for //tensorflow/python/kernel_tests/nn_ops:pooling_ops_test_cpu (shard 8 of 10):
2023-03-28 05:51:08.589397: I tensorflow/core/util/port.cc:116] Experimental oneDNN custom operations are on. If you experience issues, please turn them off by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Running tests under Python 3.11.2: /usr/local/bin/python3
[ RUN ] PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)
WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test.py:225: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
W0328 05:51:10.028750 281473488876416 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test.py:225: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
[ SKIPPED ] PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)
INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.01s
I0328 05:51:10.034587 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolEmpty5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.01s
[ RUN ] PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)
INFO:tensorflow:Running NHWC test. False [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1]
I0328 05:51:10.035192 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1]
2023-03-28 05:51:10.098859: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:372] MLIR V1 optimization pass is not enabled
2023-03-28 05:51:10.158954: E ./tensorflow/core/graph/mkl_graph_util.h:182] oneDNN BFloat16 support are only on platforms with AVX512. Falling back to default implementation if present.
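Both shard headers note that experimental oneDNN custom operations are enabled, and the bfloat16 fallback message above comes from that same oneDNN path. The log's own suggested mitigation is to set `TF_ENABLE_ONEDNN_OPTS=0` before TensorFlow is loaded. A minimal sketch of doing that from Python (setting the variable only; the actual bazel rerun is outside this snippet):

```python
import os

# Per the log's suggestion: disable the experimental oneDNN custom ops when
# they are suspected of causing issues. This must happen before TensorFlow
# is imported, since the flag is read at library initialization.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

# The suspect target would then be rerun, e.g. (illustrative, flags vary):
#   bazel test //tensorflow/python/kernel_tests/nn_ops:pooling_ops_test_cpu
```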
INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.13s I0328 05:51:10.163268 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.13s [ OK ] PoolingTest.testAvgPoolEmptyInput3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds [ FAILED ] PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds): 0.07s I0328 05:51:10.229924 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds): 0.07s [ RUN ] PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.231040 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolKernelSmallerThanStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.231726 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePadding5 (pool_func=, 
data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1] I0328 05:51:10.232155 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.242479 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow3 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.243610 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch11 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [2, 2, 2, 2] 16 [1, 2, 1, 1] [1, 1, 1, 1] I0328 05:51:10.244066 281473488876416 pooling_ops_test.py:247] Running NHWC test. 
False [2, 2, 2, 2] 16 [1, 2, 1, 1] [1, 1, 1, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.256538 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_21 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.257702 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindowMultiBatch_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.258305 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingNonSquareWindow_27 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] 
PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.258867 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_45 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 8, 8, 8] 512 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:10.259237 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 8, 8, 8] 512 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.270662 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testAvgPoolSamePaddingPacket_83 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolSamePadding_211 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolSamePadding_211 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolSamePadding_211 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.271779 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolSamePadding_211 
(pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1] I0328 05:51:10.272198 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.282512 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testAvgPoolValidPadding1 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.283586 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolValidPadding9 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.284182 281473488876416 test_util.py:2462] 
time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride7 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.284733 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testAvgPoolValidPaddingUnevenStride_25 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 1, 1, 10] 10 [1, 1, 1, 2] [1, 1, 1, 2] I0328 05:51:10.285107 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 1, 1, 10] 10 [1, 1, 1, 2] [1, 1, 1, 2] INFO:tensorflow:time(__main__.PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.298185 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testDepthwiseMaxPool1x1DepthWindow3 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [1, 2, 2, 6] 24 [1, 1, 1, 3] [1, 1, 1, 3] I0328 05:51:10.299066 281473488876416 pooling_ops_test.py:247] Running NHWC test. 
True [1, 2, 2, 6] 24 [1, 1, 1, 3] [1, 1, 1, 3] INFO:tensorflow:time(__main__.PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True)): 0.02s I0328 05:51:10.314484 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True)): 0.02s [ OK ] PoolingTest.testDepthwiseMaxPool2x2DepthWindow11 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=True) [ RUN ] PoolingTest.testDepthwiseMaxPoolingWithArgmax INFO:tensorflow:time(__main__.PoolingTest.testDepthwiseMaxPoolingWithArgmax): 0.02s I0328 05:51:10.331909 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testDepthwiseMaxPoolingWithArgmax): 0.02s [ OK ] PoolingTest.testDepthwiseMaxPoolingWithArgmax [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.333024 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.333629 281473488876416 
test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.334189 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 1] 9 [1, 1, 1, 1] [1, 2, 2, 1] I0328 05:51:10.334545 281473488876416 pooling_ops_test.py:247] Running NHWC test. 
False [1, 3, 3, 1] 9 [1, 1, 1, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.346928 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testKernelSmallerThanStrideSame1_41 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.348028 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame1_50 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.348611 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] 
PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.349164 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.349699 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] I0328 05:51:10.350052 281473488876416 pooling_ops_test.py:247] Running NHWC test. 
False [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.359501 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testKernelSmallerThanStrideSame2_40 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] I0328 05:51:10.360305 281473488876416 pooling_ops_test.py:247] Running NHWC test. True [1, 4, 4, 1] 16 [1, 1, 1, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.01s I0328 05:51:10.374728 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.01s [ OK ] PoolingTest.testKernelSmallerThanStrideSame2_5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) [ RUN ] PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.375818 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ 
RUN ] PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.376426 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.376963 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:10.377318 281473488876416 pooling_ops_test.py:247] Running NHWC test. 
True [0, 8, 8, 8] 0 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.03s I0328 05:51:10.410130 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.03s [ OK ] PoolingTest.testMaxPoolEmptyInput5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) [ RUN ] PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.411154 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.411716 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. 
False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:10.412088 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.421882 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolExplicitPadding2_10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s I0328 05:51:10.422749 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.423317 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, 
data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.423856 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding2_38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.424394 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:10.424743 281473488876416 pooling_ops_test.py:247] Running NHWC test. 
False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.433580 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolExplicitPadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.434425 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.434977 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] 
PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.435513 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.0s I0328 05:51:10.435997 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPaddingAdvanced8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.436547 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D16 (pool_func=, data_format='NWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', 
data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.437069 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D25 (pool_func=, data_format='NCW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.437591 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolExplicitPadding_1D34 (pool_func=, data_format='NCW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolFwd_maxpool4 INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolFwd_maxpool4): 0.0s I0328 05:51:10.437973 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolFwd_maxpool4): 0.0s [ OK ] PoolingTest.testMaxPoolFwd_maxpool4 [ RUN ] PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors): 0.03s I0328 05:51:10.471922 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors): 0.03s [ OK ] PoolingTest.testMaxPoolGradWithArgmaxEagerShapeErrors [ RUN ] PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.476078 281473488876416 test_util.py:2462] 
time(__main__.PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ OK ] PoolingTest.testMaxPoolInvalidFilterSize13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ RUN ] PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.478980 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ OK ] PoolingTest.testMaxPoolInvalidFilterSize22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ RUN ] PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.481674 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ OK ] PoolingTest.testMaxPoolInvalidFilterSize31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ RUN ] PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.0s I0328 05:51:10.484276 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 
0.0s [ OK ] PoolingTest.testMaxPoolInvalidFilterSize6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 7, 7, 1] 49 [1, 2, 2, 1] [1, 3, 3, 1] I0328 05:51:10.484808 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 7, 7, 1] 49 [1, 2, 2, 1] [1, 3, 3, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.498587 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolKernelSmallerThanStrideValid4 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.499659 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding12 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) 
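
A note on the `Running NHWC test.` lines above, such as `False [1, 7, 7, 1] 49 [1, 2, 2, 1] [1, 3, 3, 1]`: the five values appear to be, in order, the `use_gpu` flag, the input shape `[N, H, W, C]`, the total element count, `ksize`, and `strides` (an inference from the parameter order in `pooling_ops_test.py`, not a documented log format). The `testMaxPoolKernelSmallerThanStrideValid` case just logged can be sketched in pure Python; the helper name `max_pool_valid` is mine, not from the test file:

```python
# Pure-Python sketch of the VALID max pooling this test exercises.
# Single-channel H x W grid; kh/kw = window, sh/sw = strides.
def max_pool_valid(image, kh, kw, sh, sw):
    """2-D max pooling with VALID padding: only fully covered windows."""
    h, w = len(image), len(image[0])
    out_h = (h - kh) // sh + 1
    out_w = (w - kw) // sw + 1
    return [[max(image[r * sh + i][c * sw + j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)]
            for r in range(out_h)]

# Mirrors the logged parameters: input [1, 7, 7, 1], ksize [1, 2, 2, 1],
# strides [1, 3, 3, 1] -- a 2x2 window stepping 3, so columns 2, 5 and
# rows 2, 5 are never visited.
img = [[r * 7 + c for c in range(7)] for r in range(7)]
print(max_pool_valid(img, 2, 2, 3, 3))  # [[8, 11], [29, 32]]
```

With the kernel smaller than the stride, some input elements contribute to no window at all, which is exactly the edge case the test name flags.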
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.500232 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding21 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.500772 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding30 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.0s I0328 05:51:10.501250 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPadding5 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=False, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False) 
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.501783 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv13 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.502320 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv22 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.502842 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv31 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. 
False [1, 6, 6, 1] 36 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:10.503189 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 6, 6, 1] 36 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.512818 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolNegativeInputExpPaddingAdv6 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.513813 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding14 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.514368 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding23 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, 
use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.514910 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding32 (pool_func=, data_format='NCHW', data_type=tf.float64, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 2, 3, 3] 18 [1, 2, 2, 1] [1, 2, 2, 1] I0328 05:51:10.515259 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 2, 3, 3] 18 [1, 2, 2, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.527887 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolSamePadding7 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.528894 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow15 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] 
PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.529462 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow24 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.530000 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow33 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True) INFO:tensorflow:Running NHWC test. True [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1] I0328 05:51:10.530363 281473488876416 pooling_ops_test.py:247] Running NHWC test. 
True [1, 2, 2, 1] 4 [1, 1, 2, 1] [1, 1, 1, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.01s I0328 05:51:10.542547 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True)): 0.01s [ OK ] PoolingTest.testMaxPoolSamePaddingNonSquareWindow8 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=False, v2=True) [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.543583 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.544170 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ SKIPPED ] 
PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.544710 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 4, 4, 4] 64 [1, 2, 2, 1] [1, 2, 2, 1] I0328 05:51:10.545067 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 4, 4, 4] 64 [1, 2, 2, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.557931 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolSamePaddingPacket4_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.559000 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_17 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s [ RUN ] 
PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.559572 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_26 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.560112 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolSamePaddingPacket8_35 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1] I0328 05:51:10.560463 281473488876416 pooling_ops_test.py:247] Running NHWC test. 
False [1, 3, 3, 3] 27 [1, 2, 2, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s I0328 05:51:10.578631 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testMaxPoolValidPadding0 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.579760 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding18 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.580336 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding27 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False) 
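
The `SamePadding` / `ValidPadding` test families above differ only in output-length arithmetic. Under TensorFlow's conventions, SAME padding yields `ceil(in / stride)` outputs per spatial dimension, while VALID yields `ceil((in - window + 1) / stride)`; a small sketch (the helper `out_len` is my name for it):

```python
import math

# Per-dimension output length for pooling/convolution, following the
# standard TensorFlow SAME/VALID conventions.
def out_len(n, k, s, padding):
    if padding == "SAME":
        return math.ceil(n / s)       # pad input so every stride lands
    if padding == "VALID":
        return math.ceil((n - k + 1) / s)  # fully covered windows only
    raise ValueError(padding)

# testMaxPoolValidPaddingUnevenStride: input 4, window 2, strides 1 and 2
print(out_len(4, 2, 1, "VALID"), out_len(4, 2, 2, "VALID"))  # 3 2
# testMaxPoolSamePadding: input 3, window 2, stride 2
print(out_len(3, 2, 2, "SAME"))  # 2
```

The "uneven stride" cases pass strides like `[1, 1, 2, 1]` and `[1, 2, 1, 1]`, so the H and W dimensions get different output lengths from the same formula.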
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.580871 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPadding36 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 1, 2, 1] I0328 05:51:10.581222 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 1, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s I0328 05:51:10.597037 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False)): 0.02s [ OK ] PoolingTest.testMaxPoolValidPaddingUnevenStride1 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.598165 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride19 (pool_func=, data_format='NHWC', data_type=tf.float64, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, 
use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.598723 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride28 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.599257 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_16 (pool_func=, data_format='NHWC', data_type=tf.float16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) [ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.599781 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_25 (pool_func=, data_format='NCHW', data_type=tf.float32, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) [ 
SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s I0328 05:51:10.600297 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_34 (pool_func=, data_format='NCHW', data_type=tf.bfloat16, use_gpu=True, v2=False)): 0.0s [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 2, 1, 1] I0328 05:51:10.600642 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 4, 4, 1] 16 [1, 2, 2, 1] [1, 2, 1, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.610714 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolValidPaddingUnevenStride2_9 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s I0328 05:51:10.611758 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolValidPaddingUnevenStride38 (pool_func=, 
data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) INFO:tensorflow:Running NHWC test. False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] I0328 05:51:10.612150 281473488876416 pooling_ops_test.py:247] Running NHWC test. False [1, 3, 3, 1] 9 [1, 3, 3, 1] [1, 2, 2, 1] INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s I0328 05:51:10.621738 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False)): 0.01s [ OK ] PoolingTest.testMaxPoolZeroExplicitPadding10 (pool_func=, data_format='NHWC', data_type=tf.bfloat16, use_gpu=False, v2=False) [ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s I0328 05:51:10.622695 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding2 (pool_func=, data_format='NHWC', data_type=tf.float32, use_gpu=False, v2=True)): 0.0s [ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True) [ SKIPPED ] PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True) INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s I0328 
05:51:10.623293 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding29 (pool_func=, data_format='NCHW', data_type=tf.float16, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)
[ SKIPPED ] PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
I0328 05:51:10.623849 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolZeroExplicitPadding38 (pool_func=, data_format='NCHW_VECT_C', data_type=tf.float32, use_gpu=True, v2=True)): 0.0s
[ RUN ] PoolingTest.testMaxPoolingWithArgmax
INFO:tensorflow:time(__main__.PoolingTest.testMaxPoolingWithArgmax): 0.02s
I0328 05:51:10.648166 281473488876416 test_util.py:2462] time(__main__.PoolingTest.testMaxPoolingWithArgmax): 0.02s
[ OK ] PoolingTest.testMaxPoolingWithArgmax
======================================================================
FAIL: testAvgPoolGradOutputMemoryOutOfBounds (__main__.PoolingTest)
PoolingTest.testAvgPoolGradOutputMemoryOutOfBounds
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu.runfiles/org_tensorflow/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test.py", line 2337, in testAvgPoolGradOutputMemoryOutOfBounds
    with self.assertRaisesRegex(
AssertionError: InvalidArgumentError not raised
----------------------------------------------------------------------
Ran 96 tests in 0.621s

FAILED (failures=1, skipped=65)
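The single failure above, `AssertionError: InvalidArgumentError not raised`, means the guarded operation completed normally inside an `assertRaisesRegex` block: the test expected an `InvalidArgumentError` for out-of-bounds gradient output memory, and none was thrown. A minimal stdlib sketch of that failure mode, with hypothetical stand-in methods in place of the real `AvgPoolGrad` call:

```python
import unittest

class RaisesRegexDemo(unittest.TestCase):
    # Hypothetical stand-ins for the real TF op under test.
    def run_op_that_raises(self):
        raise ValueError("Illegal out_backprop shape")

    def run_op_that_succeeds(self):
        return 0

    def test_error_is_raised(self):
        # Passes: the block raises an error whose message matches the regex.
        with self.assertRaisesRegex(ValueError, "out_backprop"):
            self.run_op_that_raises()

    def test_error_not_raised(self):
        # Mirrors the log's failure mode: if the block completes without
        # raising, assertRaisesRegex itself fails with "<Error> not raised".
        with self.assertRaises(AssertionError):
            with self.assertRaisesRegex(ValueError, "out_backprop"):
                self.run_op_that_succeeds()

suite = unittest.TestLoader().loadTestsFromTestCase(RaisesRegexDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In the actual test this usually points at a validation check that no longer fires on this build (here, aarch64 CPU with oneDNN custom ops enabled), rather than at the assertion mechanics.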
================================================================================
==================== Test output for //tensorflow/python/distribute/failure_handling:gce_failure_handler_test (shard 7 of 8):
2023-03-28 06:06:22.017592: I tensorflow/core/util/port.cc:116] Experimental oneDNN custom operations are on. If you experience issues, please turn them off by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Running tests under Python 3.11.2: /usr/local/bin/python3
[ RUN ] GceFailureHandlingTest.test_basic_run_test_inputarg_manager_strategyoption_MWMSmultiworker
INFO:tensorflow:Using local port 15108
I0328 06:06:26.747537 281472867201920 test_util.py:3794] Using local port 15108
INFO:tensorflow:Using local port 17526
I0328 06:06:26.748115 281472867201920 test_util.py:3794] Using local port 17526
INFO:tensorflow:Using local port 24604
I0328 06:06:26.748491 281472867201920 test_util.py:3794] Using local port 24604
INFO:tensorflow:Using local port 21699
I0328 06:06:26.748854 281472867201920 test_util.py:3794] Using local port 21699
2023-03-28 06:06:27.954950: I tensorflow/core/util/port.cc:116] Experimental oneDNN custom operations are on. If you experience issues, please turn them off by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-03-28 06:06:28.449190: I tensorflow/core/util/port.cc:116] Experimental oneDNN custom operations are on. If you experience issues, please turn them off by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
INFO:tensorflow:Cluster starting.
I0328 06:06:34.862568 281472867201920 gce_failure_handler_test.py:317] Cluster starting.
[worker-0]: I0328 06:06:35.367465 281473374647168 multi_process_runner.py:840] Subprocess with PID 2571574 (worker, 0) is now being started.
[worker-1]: I0328 06:06:35.427349 281473374647168 multi_process_runner.py:840] Subprocess with PID 2571577 (worker, 1) is now being started.
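Each worker subprocess in this test learns its role from the `TF_CONFIG` environment variable, whose JSON value (cluster spec, own task index, RPC layer) is echoed in the records that follow. A stdlib sketch of how such a value can be built, assuming the structure shown in the log (the helper name is ours):

```python
import json
import os

def make_tf_config(worker_hosts, task_index):
    """Build a TF_CONFIG JSON string matching the structure in the log."""
    return json.dumps({
        "cluster": {"worker": worker_hosts},
        "task": {"type": "worker", "index": task_index},
        "rpc_layer": "grpc",
    })

hosts = ["localhost:15108", "localhost:17526",
         "localhost:24604", "localhost:21699"]
# Worker 0 would export this before creating its
# MultiWorkerMirroredStrategy instance.
os.environ["TF_CONFIG"] = make_tf_config(hosts, 0)
print(os.environ["TF_CONFIG"])
```

Every worker receives the same `cluster` block; only `task.index` differs, which is why the four `TF_CONFIG` records below are identical except for the index.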
[worker-0]: I0328 06:06:35.367784 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:15108", "localhost:17526", "localhost:24604", "localhost:21699"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}' [worker-2]: I0328 06:06:35.453598 281473374647168 multi_process_runner.py:840] Subprocess with PID 2571582 (worker, 2) is now being started. [worker-3]: I0328 06:06:35.468572 281473374647168 multi_process_runner.py:840] Subprocess with PID 2571598 (worker, 3) is now being started. [worker-2]: I0328 06:06:35.453941 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:15108", "localhost:17526", "localhost:24604", "localhost:21699"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}' [worker-3]: I0328 06:06:35.468903 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:15108", "localhost:17526", "localhost:24604", "localhost:21699"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}' [worker-1]: I0328 06:06:35.427679 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:15108", "localhost:17526", "localhost:24604", "localhost:21699"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}' [worker-1]: 2023-03-28 06:06:35.560036: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:17526 [worker-2]: 2023-03-28 06:06:35.577409: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:24604 [worker-0]: 2023-03-28 06:06:35.687002: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:15108 [worker-0]: 2023-03-28 06:06:35.727081: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:0 has connected to coordination service. 
Incarnation: 16564869542279732330 [worker-0]: 2023-03-28 06:06:35.727665: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-3]: 2023-03-28 06:06:35.729132: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:21699 [worker-0]: 2023-03-28 06:06:35.732349: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:3 has connected to coordination service. Incarnation: 12719034876079732047 [worker-3]: 2023-03-28 06:06:35.735183: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-0]: 2023-03-28 06:06:36.562855: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 4338204155643670598 [worker-1]: 2023-03-28 06:06:36.563181: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-0]: 2023-03-28 06:06:36.606435: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:2 has connected to coordination service. Incarnation: 15185655284776258902 [worker-2]: 2023-03-28 06:06:36.607189: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. 
[worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-3]: I0328 06:06:36.616424 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-1]: I0328 06:06:36.610431 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available 
devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-2]: I0328 06:06:36.627524 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: I0328 06:06:36.636809 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0328 06:06:36.702098 281473374647168 
mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0328 06:06:36.703272 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0328 06:06:36.703493 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0328 06:06:36.706026 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0328 06:06:36.706540 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0328 06:06:36.706757 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0328 06:06:36.718566 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0328 06:06:36.719016 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0328 06:06:36.719232 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0328 06:06:36.730522 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0328 06:06:36.730964 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0328 06:06:36.731174 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Start watcher for peer's signal. [worker-2]: INFO:tensorflow:Start watcher for peer's signal. 
[worker-2]: I0328 06:06:36.900495 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-1]: INFO:tensorflow:Start watcher for peer's signal. [worker-1]: I0328 06:06:36.915174 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-0]: I0328 06:06:36.898823 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-0]: INFO:tensorflow:Start polling for termination signal. [worker-0]: I0328 06:06:36.916912 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-2]: INFO:tensorflow:Start polling for termination signal. [worker-2]: I0328 06:06:36.916902 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-2]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-2]: I0328 06:06:36.936417 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. [worker-2]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: Instructions for updating: [worker-2]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-2]: W0328 06:06:36.936775 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-0]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-0]: I0328 06:06:36.936595 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. [worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: Instructions for updating: [worker-0]: Instructions for updating: [worker-2]: Track steps using a tf.Variable saved in checkpoint instead. [worker-0]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-0]: W0328 06:06:36.936916 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: INFO:tensorflow:Start training at 0 [worker-0]: Instructions for updating: [worker-0]: Track steps using a tf.Variable saved in checkpoint instead. [worker-0]: INFO:tensorflow:Start training at 0 [worker-0]: I0328 06:06:36.937072 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-2]: I0328 06:06:36.936930 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-3]: INFO:tensorflow:Start watcher for peer's signal. [worker-3]: I0328 06:06:36.963703 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-1]: INFO:tensorflow:Start polling for termination signal. [worker-1]: I0328 06:06:36.968778 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-3]: INFO:tensorflow:Start polling for termination signal. [worker-3]: I0328 06:06:36.971233 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-1]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-1]: I0328 06:06:36.986639 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. 
[worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. [worker-1]: W0328 06:06:36.986975 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. [worker-1]: INFO:tensorflow:Start training at 0 [worker-1]: I0328 06:06:36.987144 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-3]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-3]: I0328 06:06:37.016412 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. 
[worker-3]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-3]: Instructions for updating: [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. [worker-3]: W0328 06:06:37.016905 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-3]: Instructions for updating: [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-3]: INFO:tensorflow:Start training at 0
[worker-3]: I0328 06:06:37.017076 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:37.088027 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:37.134142 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:37.158908 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:37.243157 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:37.411275 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:37.430560 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:37.455021 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:37.451034 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:37.550223 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:37.580467 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:37.562319 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:37.602365 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:37.779845 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:37.761828 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:37.777715 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:37.801351 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:37.944088 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:37.960244 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:37.976795 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:37.979923 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Termination notice available.
[worker-0]: I0328 06:06:38.005404 281454743122400 gce_failure_handler_test.py:142] Termination notice available.
[worker-0]: INFO:tensorflow:Member 0 has received termination notice.
[worker-0]: I0328 06:06:38.013132 281454743122400 failure_handling.py:710] Member 0 has received termination notice.
[worker-0]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9f060> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: W0328 06:06:38.063273 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9f060> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: INFO:tensorflow:Termination caught in main thread on preempted worker
[worker-0]: I0328 06:06:38.063623 281473374647168 failure_handling.py:1158] Termination caught in main thread on preempted worker
[worker-3]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9e980> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9f060> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-3]: W0328 06:06:38.063145 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9e980> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: W0328 06:06:38.063794 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9f060> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9c360> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: W0328 06:06:38.067973 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9c360> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:38.074595 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:RUN_TO_CHECKPOINT set to 6
[worker-3]: I0328 06:06:38.080170 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:38.079879 281473374647168 failure_handling.py:1167] RUN_TO_CHECKPOINT set to 6
[worker-0]: INFO:tensorflow:PreemptionCheckpointHandler: RECEIVED_SIGNAL_RUN_TO_CHECKPOINT_0 set, preemption awareness acknowledged
[worker-0]: I0328 06:06:38.081454 281454751576544 failure_handling.py:1241] PreemptionCheckpointHandler: RECEIVED_SIGNAL_RUN_TO_CHECKPOINT_0 set, preemption awareness acknowledged
[worker-0]: INFO:tensorflow:Sigterm acknowledgement from replica 0 received
[worker-0]: I0328 06:06:38.082472 281473374647168 failure_handling.py:1176] Sigterm acknowledgement from replica 0 received
[worker-0]: INFO:tensorflow:Sigterm acknowledgement from replica 1 received
[worker-3]: INFO:tensorflow:PreemptionCheckpointHandler: RECEIVED_SIGNAL_RUN_TO_CHECKPOINT_3 set, preemption awareness acknowledged
[worker-0]: I0328 06:06:38.083477 281473374647168 failure_handling.py:1176] Sigterm acknowledgement from replica 1 received
[worker-3]: I0328 06:06:38.083516 281447310946784 failure_handling.py:1241] PreemptionCheckpointHandler: RECEIVED_SIGNAL_RUN_TO_CHECKPOINT_3 set, preemption awareness acknowledged
[worker-0]: INFO:tensorflow:Sigterm acknowledgement from replica 2 received
[worker-0]: I0328 06:06:38.086185 281473374647168 failure_handling.py:1176] Sigterm acknowledgement from replica 2 received
[worker-0]: INFO:tensorflow:Sigterm acknowledgement from replica 3 received
[worker-0]: I0328 06:06:38.086755 281473374647168 failure_handling.py:1176] Sigterm acknowledgement from replica 3 received
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:38.093497 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:PreemptionCheckpointHandler: RECEIVED_SIGNAL_RUN_TO_CHECKPOINT_2 set, preemption awareness acknowledged
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:38.089482 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:PreemptionCheckpointHandler: RECEIVED_SIGNAL_RUN_TO_CHECKPOINT_1 set, preemption awareness acknowledged
[worker-1]: I0328 06:06:38.096434 281447310946784 failure_handling.py:1241] PreemptionCheckpointHandler: RECEIVED_SIGNAL_RUN_TO_CHECKPOINT_1 set, preemption awareness acknowledged
[worker-2]: I0328 06:06:38.086926 281449257103840 failure_handling.py:1241] PreemptionCheckpointHandler: RECEIVED_SIGNAL_RUN_TO_CHECKPOINT_2 set, preemption awareness acknowledged
[worker-3]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d07c0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d0180> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-3]: W0328 06:06:38.148840 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d07c0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: W0328 06:06:38.148913 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d0180> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-3]: INFO:tensorflow:epoch 0 finished
[worker-0]: INFO:tensorflow:epoch 0 finished
[worker-3]: I0328 06:06:38.149183 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-0]: I0328 06:06:38.149216 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-3]: INFO:tensorflow:PreemptionCheckpointHandler: Starting saving a checkpoint.
[worker-3]: I0328 06:06:38.149454 281473374647168 failure_handling.py:1062] PreemptionCheckpointHandler: Starting saving a checkpoint.
[worker-0]: INFO:tensorflow:PreemptionCheckpointHandler: Starting saving a checkpoint.
[worker-0]: I0328 06:06:38.149461 281473374647168 failure_handling.py:1062] PreemptionCheckpointHandler: Starting saving a checkpoint.
[worker-1]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d0180> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c8400> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: W0328 06:06:38.191382 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c8400> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: INFO:tensorflow:epoch 0 finished
[worker-2]: I0328 06:06:38.191712 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-2]: INFO:tensorflow:PreemptionCheckpointHandler: Starting saving a checkpoint.
[worker-2]: I0328 06:06:38.191974 281473374647168 failure_handling.py:1062] PreemptionCheckpointHandler: Starting saving a checkpoint.
[worker-1]: W0328 06:06:38.158105 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d0180> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: INFO:tensorflow:epoch 0 finished
[worker-1]: I0328 06:06:38.158428 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-1]: INFO:tensorflow:PreemptionCheckpointHandler: Starting saving a checkpoint.
[worker-1]: I0328 06:06:38.158697 281473374647168 failure_handling.py:1062] PreemptionCheckpointHandler: Starting saving a checkpoint.
[worker-0]: INFO:tensorflow:Checkpoint finished at path /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/284377733e18a9a7ee6d6d7363a8b705_9a5b7uw/tmp4swbv36s/fh_ckpt/
[worker-0]: I0328 06:06:38.212171 281473374647168 failure_handling.py:1077] Checkpoint finished at path /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/284377733e18a9a7ee6d6d7363a8b705_9a5b7uw/tmp4swbv36s/fh_ckpt/
[worker-0]: INFO:tensorflow:Shut down watcher for one's own termination signal
[worker-0]: I0328 06:06:38.212440 281473374647168 failure_handling.py:737] Shut down watcher for one's own termination signal
[worker-0]: INFO:tensorflow:Shut down watcher for peer's termination signal.
[worker-0]: I0328 06:06:38.213664 281473374647168 failure_handling.py:771] Shut down watcher for peer's termination signal.
[worker-0]: INFO:tensorflow:PreemptionCheckpointHandler: checkpoint saved. Exiting.
[worker-0]: I0328 06:06:38.213812 281473374647168 failure_handling.py:1127] PreemptionCheckpointHandler: checkpoint saved. Exiting.
[worker-3]: INFO:tensorflow:Checkpoint finished at path /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/284377733e18a9a7ee6d6d7363a8b705_9a5b7uw/tmp4swbv36s/fh_ckpt/workertemp_3/
[worker-3]: I0328 06:06:38.218397 281473374647168 failure_handling.py:1077] Checkpoint finished at path /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/284377733e18a9a7ee6d6d7363a8b705_9a5b7uw/tmp4swbv36s/fh_ckpt/workertemp_3/
[worker-2]: INFO:tensorflow:Checkpoint finished at path /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/284377733e18a9a7ee6d6d7363a8b705_9a5b7uw/tmp4swbv36s/fh_ckpt/workertemp_2/
[worker-2]: I0328 06:06:38.268265 281473374647168 failure_handling.py:1077] Checkpoint finished at path /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/284377733e18a9a7ee6d6d7363a8b705_9a5b7uw/tmp4swbv36s/fh_ckpt/workertemp_2/
[worker-1]: INFO:tensorflow:Checkpoint finished at path /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/284377733e18a9a7ee6d6d7363a8b705_9a5b7uw/tmp4swbv36s/fh_ckpt/workertemp_1/
[worker-1]: I0328 06:06:38.388030 281473374647168 failure_handling.py:1077] Checkpoint finished at path /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/_tmp/284377733e18a9a7ee6d6d7363a8b705_9a5b7uw/tmp4swbv36s/fh_ckpt/workertemp_1/
[worker-2]: INFO:tensorflow:Shut down watcher for one's own termination signal
[worker-2]: I0328 06:06:39.016303 281473374647168 failure_handling.py:737] Shut down watcher for one's own termination signal
[worker-1]: INFO:tensorflow:Shut down watcher for one's own termination signal
[worker-1]: I0328 06:06:39.026313 281473374647168 failure_handling.py:737] Shut down watcher for one's own termination signal
[worker-2]: INFO:tensorflow:Shut down watcher for peer's termination signal.
[worker-1]: INFO:tensorflow:Shut down watcher for peer's termination signal.
[worker-1]: I0328 06:06:39.037541 281473374647168 failure_handling.py:771] Shut down watcher for peer's termination signal.
[worker-1]: INFO:tensorflow:PreemptionCheckpointHandler: checkpoint saved. Exiting.
[worker-2]: I0328 06:06:39.036586 281473374647168 failure_handling.py:771] Shut down watcher for peer's termination signal.
[worker-1]: I0328 06:06:39.037732 281473374647168 failure_handling.py:1127] PreemptionCheckpointHandler: checkpoint saved. Exiting.
[worker-2]: INFO:tensorflow:PreemptionCheckpointHandler: checkpoint saved. Exiting.
[worker-2]: I0328 06:06:39.036797 281473374647168 failure_handling.py:1127] PreemptionCheckpointHandler: checkpoint saved. Exiting.
[worker-3]: INFO:tensorflow:Shut down watcher for one's own termination signal
[worker-3]: I0328 06:06:39.066214 281473374647168 failure_handling.py:737] Shut down watcher for one's own termination signal
[worker-3]: INFO:tensorflow:Shut down watcher for peer's termination signal.
[worker-3]: I0328 06:06:39.068549 281473374647168 failure_handling.py:771] Shut down watcher for peer's termination signal.
[worker-3]: INFO:tensorflow:PreemptionCheckpointHandler: checkpoint saved. Exiting.
[worker-3]: I0328 06:06:39.068735 281473374647168 failure_handling.py:1127] PreemptionCheckpointHandler: checkpoint saved. Exiting.
INFO:tensorflow:restarting workers
I0328 06:06:40.987494 281472867201920 gce_failure_handler_test.py:323] restarting workers
[worker-0]: I0328 06:06:41.035164 281473374647168 multi_process_runner.py:840] Subprocess with PID 2581396 (worker, 0) is now being started.
[worker-0]: I0328 06:06:41.035500 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:15108", "localhost:17526", "localhost:24604", "localhost:21699"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}'
INFO:tensorflow:workers restarted
I0328 06:06:41.086567 281472867201920 gce_failure_handler_test.py:327] workers restarted
[worker-2]: I0328 06:06:41.115591 281473374647168 multi_process_runner.py:840] Subprocess with PID 2581822 (worker, 2) is now being started.
[worker-3]: I0328 06:06:41.116699 281473374647168 multi_process_runner.py:840] Subprocess with PID 2581878 (worker, 3) is now being started.
[worker-2]: I0328 06:06:41.115911 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:15108", "localhost:17526", "localhost:24604", "localhost:21699"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}'
[worker-3]: I0328 06:06:41.117004 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:15108", "localhost:17526", "localhost:24604", "localhost:21699"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}'
[worker-1]: I0328 06:06:41.195519 281473374647168 multi_process_runner.py:840] Subprocess with PID 2581641 (worker, 1) is now being started.
[worker-1]: I0328 06:06:41.195844 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:15108", "localhost:17526", "localhost:24604", "localhost:21699"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}'
[worker-0]: 2023-03-28 06:06:41.214286: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:15108
[worker-0]: 2023-03-28 06:06:41.259448: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 1896842490858791998
[worker-0]: 2023-03-28 06:06:41.278843: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-3]: 2023-03-28 06:06:41.332010: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:21699
[worker-2]: 2023-03-28 06:06:41.337089: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:24604
[worker-0]: 2023-03-28 06:06:41.346193: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:2 has connected to coordination service. Incarnation: 8794380542656509512
[worker-1]: 2023-03-28 06:06:41.354298: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:17526
[worker-2]: 2023-03-28 06:06:41.356138: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-0]: 2023-03-28 06:06:41.423182: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 11990155136708552694
[worker-0]: 2023-03-28 06:06:41.426457: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:3 has connected to coordination service. Incarnation: 10773852209011367975
[worker-1]: 2023-03-28 06:06:41.426817: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-3]: 2023-03-28 06:06:41.427755: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1']
[worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-3]: I0328 06:06:41.443530 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1']
[worker-1]: I0328 06:06:41.443633 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-0]: I0328 06:06:41.442133 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',)
[worker-1]: I0328 06:06:41.503314 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',)
[worker-1]: INFO:tensorflow:Check health not enabled.
[worker-1]: I0328 06:06:41.504312 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled.
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0328 06:06:41.504522 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-2]: I0328 06:06:41.518048 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0328 06:06:41.536036 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. 
[worker-0]: I0328 06:06:41.536676 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0328 06:06:41.536879 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0328 06:06:41.564750 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0328 06:06:41.565595 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0328 06:06:41.565800 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0328 06:06:41.663406 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0328 06:06:41.663872 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0328 06:06:41.664065 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:15108', 'localhost:17526', 'localhost:24604', 'localhost:21699']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Start watcher for peer's signal. [worker-2]: INFO:tensorflow:Start watcher for peer's signal. 
[worker-0]: I0328 06:06:42.007828 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-2]: I0328 06:06:42.015625 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-3]: I0328 06:06:42.022624 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-0]: I0328 06:06:42.027168 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-1]: I0328 06:06:42.029543 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-2]: I0328 06:06:42.038101 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-0]: I0328 06:06:42.046417 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-0]: W0328 06:06:42.046894 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-0]: Instructions for updating:
[worker-0]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-3]: I0328 06:06:42.049760 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-0]: I0328 06:06:42.047074 281473374647168 gce_failure_handler_test.py:194] Start training at 6
[worker-2]: I0328 06:06:42.056528 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-2]: W0328 06:06:42.057001 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-2]: Instructions for updating:
[worker-2]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-2]: I0328 06:06:42.057166 281473374647168 gce_failure_handler_test.py:194] Start training at 6
[worker-1]: I0328 06:06:42.049048 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-3]: I0328 06:06:42.068029 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-1]: I0328 06:06:42.066840 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-1]: W0328 06:06:42.067296 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-1]: Instructions for updating:
[worker-1]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-3]: W0328 06:06:42.068459 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-3]: Instructions for updating:
[worker-3]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-1]: I0328 06:06:42.067467 281473374647168 gce_failure_handler_test.py:194] Start training at 6
[worker-3]: I0328 06:06:42.068618 281473374647168 gce_failure_handler_test.py:194] Start training at 6
[worker-0]: I0328 06:06:42.080844 281473374647168 gce_failure_handler_test.py:203] ['workertemp_2', 'workertemp_1', 'workertemp_3', 'ckpt-1.data-00000-of-00001', 'ckpt-1.index', 'checkpoint']
[worker-3]: I0328 06:06:42.147084 281473374647168 gce_failure_handler_test.py:203] ['workertemp_2', 'workertemp_1', 'workertemp_3', 'ckpt-1.data-00000-of-00001', 'ckpt-1.index', 'checkpoint']
[worker-1]: I0328 06:06:42.157193 281473374647168 gce_failure_handler_test.py:203] ['workertemp_2', 'workertemp_1', 'workertemp_3', 'ckpt-1.data-00000-of-00001', 'ckpt-1.index', 'checkpoint']
[worker-2]: I0328 06:06:42.160458 281473374647168 gce_failure_handler_test.py:203] ['workertemp_2', 'workertemp_1', 'workertemp_3', 'ckpt-1.data-00000-of-00001', 'ckpt-1.index', 'checkpoint']
[worker-0]: I0328 06:06:42.176821 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:42.234865 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:42.278749 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:42.338724 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:42.541544 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
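[editor's note] The deprecation warnings above tell the test to stop reading `PreemptionCheckpointHandler.total_run_calls` and instead track the step count in a `tf.Variable` saved in the checkpoint. As a minimal TF-free sketch of that pattern (a plain dict stands in for `tf.train.Checkpoint`, and all names here are illustrative, not TensorFlow APIs):

```python
# Sketch: keep the step counter *inside* the checkpointed state, so a restarted
# worker resumes from the saved step instead of asking the runtime how many
# times train_step was called. A module-level dict stands in for real
# checkpoint storage purely for illustration.

checkpoint_store: dict = {}

def save(state: dict) -> None:
    # Persist the step alongside the model weights, as one atomic unit.
    checkpoint_store.update(state)

def restore() -> dict:
    # Returns an empty dict on a fresh start, the saved state after a restart.
    return dict(checkpoint_store)

state = restore()
step = state.get("step", 0)          # resume exactly where the checkpoint left off
weights = state.get("weights", 0.0)

for _ in range(3):                   # a few "training steps"
    step += 1
    weights -= 0.1                   # pretend gradient update
    save({"step": step, "weights": weights})

# After a simulated preemption, a fresh restore continues from step 3.
resumed = restore()
```

The point of the recommended pattern is that the step count survives preemption for free, because it is saved and restored with everything else in the checkpoint.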
[worker-1]: I0328 06:06:42.530696 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:06:42.529807 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:06:42.540693 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:06:42.660468 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:06:42.662513 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:06:42.682838 
281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:06:42.670513 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:06:42.795654 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:06:42.810890 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:06:42.830528 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:06:42.889120 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: 
INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:06:43.030501 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:06:43.040776 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:06:43.050558 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:06:43.070837 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4aa2ca0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. 
For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-0]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4aa6480> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-0]: W0328 06:06:43.154127 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4aa6480> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-2]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4aa7420> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. 
For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-2]: W0328 06:06:43.154889 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4aa7420> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-3]: W0328 06:06:43.148349 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4aa2ca0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. 
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:06:43.156768 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4aa8ae0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-1]: W0328 06:06:43.158777 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4aa8ae0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. 
[worker-2]: I0328 06:06:43.162958 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:06:43.162354 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:06:43.167806 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d4860> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-3]: W0328 06:06:43.238602 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d4860> triggered tf.function retracing. 
Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-3]: I0328 06:06:43.239028 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-0]: W0328 06:06:43.256921 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d4180> triggered tf.function retracing. [same retracing guidance as above omitted]
[worker-0]: I0328 06:06:43.257343 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-0]: I0328 06:06:43.265506 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:43.257176 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: W0328 06:06:43.277258 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d4540> triggered tf.function retracing. [same retracing guidance as above omitted]
[worker-2]: W0328 06:06:43.282894 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d4220> triggered tf.function retracing. [same retracing guidance as above omitted]
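The retracing warnings above repeat TensorFlow's standard guidance. As an illustration only (a plain-Python model of trace caching, not TensorFlow's actual implementation), the hypothetical `Traced` wrapper below shows why cause (3), passing Python objects, retraces on every new value, and how a `reduce_retracing`-style relaxation avoids retracing for varying shapes (cause (2)):

```python
# Hypothetical sketch of tf.function-style trace caching (not TensorFlow code).
# A "trace" is keyed by the input signature: Python scalars key by VALUE
# (so f(1), f(2), f(3) each retrace), while tensors key by shape/dtype only.

class Traced:
    def __init__(self, fn, reduce_retracing=False):
        self.fn = fn
        self.reduce_retracing = reduce_retracing
        self.traces = {}          # signature -> "compiled" function
        self.trace_count = 0

    def _signature(self, arg):
        if isinstance(arg, list):  # stand-in for a tensor: key by length ("shape")
            # With reduce_retracing, relax the shape immediately (for
            # simplicity) instead of tracing once per new shape.
            return ("tensor", None if self.reduce_retracing else len(arg))
        return ("python", arg)     # Python objects key by value -> retraces

    def __call__(self, arg):
        sig = self._signature(arg)
        if sig not in self.traces:
            self.trace_count += 1  # expensive: would rebuild the graph here
            self.traces[sig] = self.fn
        return self.traces[sig](arg)

f = Traced(lambda x: x if isinstance(x, list) else x + 1)
for step in range(5):
    f(step)                  # Python int per call: 5 signatures -> 5 traces
g = Traced(lambda x: x, reduce_retracing=True)
for n in range(1, 6):
    g(list(range(n)))        # varying "shape": relaxed -> 1 trace
print(f.trace_count, g.trace_count)   # 5 1
```

Moving the decorated function out of the loop (cause (1)) has the same effect as reusing one `Traced` instance here: the cache persists across calls.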
[worker-2]: I0328 06:06:43.283310 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-1]: I0328 06:06:43.277666 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[workers 0-3]: [repeated "Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1" lines, one per training step per worker, omitted]
[worker-0]: I0328 06:06:44.330565 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-1]: I0328 06:06:44.330813 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-2]: I0328 06:06:44.336503 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-3]: I0328 06:06:44.341508 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[workers 0-3]: [repeated Collective all_reduce lines omitted]
[worker-3]: I0328 06:06:45.116606 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-0]: I0328 06:06:45.122188 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-2]: I0328 06:06:45.128600 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-1]: I0328 06:06:45.128967 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[workers 0-3]: [repeated Collective all_reduce lines omitted]
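The flood of `Collective all_reduce tensors` lines records one collective reduction per training step over a group of 4 workers (`group_size = 4`). As a minimal, implementation-agnostic sketch (plain Python, not TensorFlow's collective ops): every participant contributes a tensor and every participant receives the identical reduction. Real implementations (ring, tree, NCCL) differ only in communication pattern, not in the result.

```python
def all_reduce_sum(per_worker):
    """per_worker: list of equal-length vectors, one per group member.

    Returns one reduced copy per member -- after an all-reduce, every
    worker holds the same elementwise sum of all contributions.
    """
    group_size = len(per_worker)
    reduced = [sum(vals) for vals in zip(*per_worker)]
    return [list(reduced) for _ in range(group_size)]  # broadcast to all

# group_size = 4, as in the log; one gradient vector per worker.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
out = all_reduce_sum(grads)
print(out[0])   # [16.0, 20.0] -- every worker sees the same sum
```

This is why each step in the log shows one all-reduce per worker: MultiWorkerMirroredStrategy aggregates gradients across the group so every replica applies the same update.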
[workers 0-3]: [repeated Collective all_reduce lines omitted]
[worker-3]: I0328 06:06:46.137421 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-3]: I0328 06:06:46.137764 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-3]: I0328 06:06:46.146360 281473374647168 failure_handling.py:771] Shut down watcher for peer's termination signal.
[worker-0]: I0328 06:06:46.163326 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-0]: I0328 06:06:46.163678 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-2]: I0328 06:06:46.169074 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-2]: I0328 06:06:46.169430 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-1]: I0328 06:06:46.169627 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-1]: I0328 06:06:46.169956 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-3]: I0328 06:06:46.176428 281473374647168 failure_handling.py:737] Shut down watcher for one's own termination signal
[worker-3]: 2023-03-28 06:06:46.511930: E tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:737] Coordination agent is in ERROR: UNAVAILABLE: failed to connect to all addresses
[worker-3]: Additional GRPC error information from remote target /job:worker/replica:0/task:0:
[worker-3]: :{"created":"@1679983606.511785863","description":"Failed to pick subchannel","file":"external/com_github_grpc_grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3940,"referenced_errors":[{"created":"@1679983606.507007983","description":"failed to connect to all addresses","file":"external/com_github_grpc_grpc/src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":392,"grpc_status":14}]}
[worker-3]: 2023-03-28 06:06:46.511988: E tensorflow/core/common_runtime/base_collective_executor.cc:249] BaseCollectiveExecutor::StartAbort UNAVAILABLE: failed to connect to all addresses [same GRPC error detail as above omitted]
I0328 06:06:49.086658 281472867201920 multi_process_runner.py:646] worker-0 exit code: 0
I0328 06:06:49.086972 281472867201920 multi_process_runner.py:646] worker-1 exit code: 0
I0328 06:06:49.087103 281472867201920 multi_process_runner.py:646] worker-2 exit code: 0
I0328 06:06:49.087215 281472867201920 multi_process_runner.py:646] worker-3 exit code: 0
I0328 06:06:49.089375 281472867201920 multi_process_runner.py:662] Joining log reading threads.
I0328 06:06:49.089597 281472867201920 multi_process_runner.py:665] Joined log reading threads.
I0328 06:06:49.234052 281472867201920 test_util.py:2462] time(__main__.GceFailureHandlingTest.test_basic_run_test_inputarg_manager_strategyoption_MWMSmultiworker): 22.49s
[ OK ] GceFailureHandlingTest.test_basic_run_test_inputarg_manager_strategyoption_MWMSmultiworker
[ RUN ] GceFailureHandlingTest.test_grace_period_continue_training_test_apiwrappingtrain_False_inputarg_manager_strategyoption_MWMSmultiworker
I0328 06:06:49.534170 281472867201920 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/device:CPU:0',)
I0328 06:06:49.534768 281472867201920 collective_all_reduce_strategy.py:447] Single-worker MultiWorkerMirroredStrategy with local_devices = ('/device:CPU:0',), communication = CommunicationImplementation.AUTO
I0328 06:06:49.590663 281472867201920 failure_handling.py:683] Start polling for termination signal.
I0328 06:06:49.591642 281472867201920 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
W0328 06:06:49.591964 281472867201920 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. Instructions for updating: Track steps using a tf.Variable saved in checkpoint instead.
I0328 06:06:49.592194 281472867201920 gce_failure_handler_test.py:194] Start training at 0
WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffed674d4e0> triggered tf.function retracing.
Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. W0328 06:06:49.936868 281472867201920 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffed674d4e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xfffed674cc20> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. 
W0328 06:06:49.964386 281472867201920 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xfffed674cc20> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. INFO:tensorflow:epoch 0 finished I0328 06:06:49.964866 281472867201920 gce_failure_handler_test.py:192] epoch 0 finished INFO:tensorflow:epoch 1 finished I0328 06:06:50.326811 281472867201920 gce_failure_handler_test.py:192] epoch 1 finished INFO:tensorflow:epoch 2 finished I0328 06:06:50.796627 281472867201920 gce_failure_handler_test.py:192] epoch 2 finished INFO:tensorflow:epoch 3 finished I0328 06:06:51.092483 281472867201920 gce_failure_handler_test.py:192] epoch 3 finished INFO:tensorflow:epoch 4 finished I0328 06:06:51.396471 281472867201920 gce_failure_handler_test.py:192] epoch 4 finished INFO:tensorflow:Training finished. I0328 06:06:51.396883 281472867201920 gce_failure_handler_test.py:244] Training finished. 
INFO:tensorflow:time(__main__.GceFailureHandlingTest.test_grace_period_continue_training_test_apiwrappingtrain_False_inputarg_manager_strategyoption_MWMSmultiworker): 2.18s I0328 06:06:51.412839 281472867201920 test_util.py:2462] time(__main__.GceFailureHandlingTest.test_grace_period_continue_training_test_apiwrappingtrain_False_inputarg_manager_strategyoption_MWMSmultiworker): 2.18s [ OK ] GceFailureHandlingTest.test_grace_period_continue_training_test_apiwrappingtrain_False_inputarg_manager_strategyoption_MWMSmultiworker [ RUN ] GceFailureHandlingTest.test_grace_period_continue_training_test_apiwrappingtrain_True_inputarg_manager_strategyoption_MWMSmultiworker INFO:tensorflow:Using MirroredStrategy with devices ('/device:CPU:0',) I0328 06:06:51.459447 281472867201920 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/device:CPU:0',) INFO:tensorflow:Single-worker MultiWorkerMirroredStrategy with local_devices = ('/device:CPU:0',), communication = CommunicationImplementation.AUTO I0328 06:06:51.459925 281472867201920 collective_all_reduce_strategy.py:447] Single-worker MultiWorkerMirroredStrategy with local_devices = ('/device:CPU:0',), communication = CommunicationImplementation.AUTO INFO:tensorflow:Start polling for termination signal. I0328 06:06:51.494345 281472867201920 failure_handling.py:683] Start polling for termination signal. INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. I0328 06:06:51.516367 281472867201920 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. 
INFO:tensorflow:Start training at 0 I0328 06:06:51.516855 281472867201920 gce_failure_handler_test.py:194] Start training at 0 INFO:tensorflow:epoch 0 finished I0328 06:06:52.003058 281472867201920 gce_failure_handler_test.py:192] epoch 0 finished INFO:tensorflow:epoch 1 finished I0328 06:06:52.326141 281472867201920 gce_failure_handler_test.py:192] epoch 1 finished INFO:tensorflow:epoch 2 finished I0328 06:06:52.765258 281472867201920 gce_failure_handler_test.py:192] epoch 2 finished INFO:tensorflow:epoch 3 finished I0328 06:06:52.996472 281472867201920 gce_failure_handler_test.py:192] epoch 3 finished INFO:tensorflow:epoch 4 finished I0328 06:06:53.146316 281472867201920 gce_failure_handler_test.py:192] epoch 4 finished INFO:tensorflow:Training finished. I0328 06:06:53.146695 281472867201920 gce_failure_handler_test.py:244] Training finished. INFO:tensorflow:time(__main__.GceFailureHandlingTest.test_grace_period_continue_training_test_apiwrappingtrain_True_inputarg_manager_strategyoption_MWMSmultiworker): 1.72s I0328 06:06:53.152401 281472867201920 test_util.py:2462] time(__main__.GceFailureHandlingTest.test_grace_period_continue_training_test_apiwrappingtrain_True_inputarg_manager_strategyoption_MWMSmultiworker): 1.72s [ OK ] GceFailureHandlingTest.test_grace_period_continue_training_test_apiwrappingtrain_True_inputarg_manager_strategyoption_MWMSmultiworker [ RUN ] GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_False_graceperiod_0_inputarg_manager_strategyoption_MWMSmultiworker INFO:tensorflow:Using local port 21829 I0328 06:06:53.156922 281472867201920 test_util.py:3794] Using local port 21829 INFO:tensorflow:Using local port 21243 I0328 06:06:53.157712 281472867201920 test_util.py:3794] Using local port 21243 INFO:tensorflow:Using local port 22894 I0328 06:06:53.158097 281472867201920 test_util.py:3794] Using local port 22894 INFO:tensorflow:Using local port 19018 I0328 06:06:53.158460 281472867201920 
test_util.py:3794] Using local port 19018 INFO:tensorflow:Cluster starting. I0328 06:06:53.702390 281472867201920 gce_failure_handler_test.py:405] Cluster starting. [worker-0]: I0328 06:06:53.914472 281473374647168 multi_process_runner.py:840] Subprocess with PID 2625548 (worker, 0) is now being started. [worker-0]: I0328 06:06:53.914893 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21829", "localhost:21243", "localhost:22894", "localhost:19018"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}' [worker-2]: I0328 06:06:53.924031 281473374647168 multi_process_runner.py:840] Subprocess with PID 2625884 (worker, 2) is now being started. [worker-1]: I0328 06:06:53.924962 281473374647168 multi_process_runner.py:840] Subprocess with PID 2625864 (worker, 1) is now being started. [worker-2]: I0328 06:06:53.924392 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21829", "localhost:21243", "localhost:22894", "localhost:19018"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}' [worker-3]: I0328 06:06:53.925826 281473374647168 multi_process_runner.py:840] Subprocess with PID 2626018 (worker, 3) is now being started. 
[worker-3]: I0328 06:06:53.926184 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21829", "localhost:21243", "localhost:22894", "localhost:19018"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}' [worker-1]: I0328 06:06:53.925297 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21829", "localhost:21243", "localhost:22894", "localhost:19018"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}' [worker-1]: 2023-03-28 06:06:54.004279: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:21243 [worker-2]: 2023-03-28 06:06:54.008008: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:22894 [worker-0]: 2023-03-28 06:06:54.020820: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:21829 [worker-0]: 2023-03-28 06:06:54.036784: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 12608115896814633562 [worker-0]: 2023-03-28 06:06:54.042997: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 16607689209158525398 [worker-2]: 2023-03-28 06:06:54.043710: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-0]: 2023-03-28 06:06:54.043247: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-0]: 2023-03-28 06:06:54.043496: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:2 has connected to coordination service. 
Incarnation: 8456654114468834786 [worker-1]: 2023-03-28 06:06:54.059451: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-3]: 2023-03-28 06:06:54.176960: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:19018 [worker-0]: 2023-03-28 06:06:54.194298: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:3 has connected to coordination service. Incarnation: 3448465452453945264 [worker-3]: 2023-03-28 06:06:54.195344: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-2]: I0328 06:06:54.201745 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', 
'/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: I0328 06:06:54.199355 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-1]: I0328 06:06:54.217330 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', 
'/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-3]: I0328 06:06:54.256947 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0328 06:06:54.335828 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0328 06:06:54.337071 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0328 06:06:54.337745 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0328 06:06:54.388795 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0328 06:06:54.389252 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0328 06:06:54.389459 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0328 06:06:54.458292 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0328 06:06:54.458760 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0328 06:06:54.458970 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0328 06:06:54.513112 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0328 06:06:54.514292 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0328 06:06:54.514958 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Start watcher for peer's signal. [worker-1]: INFO:tensorflow:Start watcher for peer's signal. 
[worker-1]: I0328 06:06:54.835121 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-0]: INFO:tensorflow:Start watcher for peer's signal.
[worker-0]: I0328 06:06:54.849656 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-1]: INFO:tensorflow:Start polling for termination signal.
[worker-1]: I0328 06:06:54.856732 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-1]: Exception in thread WorkerTerminationSignalWatcher-1:
[worker-1]: Traceback (most recent call last):
[worker-1]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-1]:     self.run()
[worker-1]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-1]:     self._target(*self._args, **self._kwargs)
[worker-1]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-1]:     if self._termination_watcher_fn():
[worker-1]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-1]:     elif frequent_send and not maintenance_event.is_set():
[worker-1]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]: AttributeError: 'str' object has no attribute 'is_set'
[worker-3]: I0328 06:06:54.816667 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-3]: INFO:tensorflow:Start polling for termination signal.
[worker-3]: I0328 06:06:54.879637 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-3]: Exception in thread WorkerTerminationSignalWatcher-3:
[worker-3]: Traceback (most recent call last):
[worker-3]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-3]:     self.run()
[worker-3]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-3]:     self._target(*self._args, **self._kwargs)
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-3]:     if self._termination_watcher_fn():
[worker-3]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-3]:     elif frequent_send and not maintenance_event.is_set():
[worker-3]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-3]: AttributeError: 'str' object has no attribute 'is_set'
[worker-3]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-3]: I0328 06:06:54.882825 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-3]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-3]: Instructions for updating:
[worker-3]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-3]: W0328 06:06:54.883098 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-3]: Instructions for updating:
[worker-3]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-3]: INFO:tensorflow:Start training at 0
[worker-3]: I0328 06:06:54.883253 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-1]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-1]: I0328 06:06:54.886197 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-0]: INFO:tensorflow:Start polling for termination signal.
[worker-0]: I0328 06:06:54.886619 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-1]: Instructions for updating:
[worker-1]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-1]: W0328 06:06:54.886523 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-1]: Instructions for updating:
[worker-1]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-1]: INFO:tensorflow:Start training at 0
[worker-1]: I0328 06:06:54.886679 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-2]: INFO:tensorflow:Start watcher for peer's signal.
[worker-2]: I0328 06:06:54.897408 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-0]: Exception in thread WorkerTerminationSignalWatcher-0:
[worker-0]: Traceback (most recent call last):
[worker-0]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-0]:     self.run()
[worker-0]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-0]:     self._target(*self._args, **self._kwargs)
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-0]:     if self._termination_watcher_fn():
[worker-0]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-0]:     elif frequent_send and not maintenance_event.is_set():
[worker-0]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]: AttributeError: 'str' object has no attribute 'is_set'
[worker-0]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-0]: I0328 06:06:54.919795 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-0]: Instructions for updating:
[worker-0]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-0]: W0328 06:06:54.920179 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-0]: Instructions for updating:
[worker-0]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-0]: INFO:tensorflow:Start training at 0
[worker-0]: I0328 06:06:54.920337 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-2]: INFO:tensorflow:Start polling for termination signal.
[worker-2]: I0328 06:06:54.936707 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-2]: Exception in thread WorkerTerminationSignalWatcher-2:
[worker-2]: Traceback (most recent call last):
[worker-2]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-2]:     self.run()
[worker-2]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-2]:     self._target(*self._args, **self._kwargs)
[worker-2]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-2]:     if self._termination_watcher_fn():
[worker-2]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-2]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-2]:     elif frequent_send and not maintenance_event.is_set():
[worker-2]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-2]: AttributeError: 'str' object has no attribute 'is_set'
[worker-2]: I0328 06:06:54.958071 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
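The AttributeError that kills the WorkerTerminationSignalWatcher thread on both workers comes from `mock_termination_watcher_function_gce` receiving a plain string where it expects a `threading.Event`. A minimal stand-alone reproduction of the failure mode (the function and argument names below are illustrative, not the test's actual helpers):

```python
import threading

def mock_watcher(maintenance_event, frequent_send=True):
    # Mirrors the failing check from the traceback: this works when
    # maintenance_event is a threading.Event, but raises AttributeError
    # when a plain string is passed in its place.
    if frequent_send and not maintenance_event.is_set():
        return True
    return False

# With a real Event the check behaves as intended.
assert mock_watcher(threading.Event()) is True

# With a string (as in the log), the same attribute access raises.
try:
    mock_watcher("maintenance")
except AttributeError as e:
    assert "is_set" in str(e)
```

Because the watcher runs on a daemon thread, the exception does not fail the test directly; the thread simply dies and termination signals are never observed.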
[worker-2]: W0328 06:06:54.958387 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-2]: Instructions for updating:
[worker-2]: Track steps using a tf.Variable saved in checkpoint instead.
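The deprecation notice advises counting steps in state that is saved and restored with the checkpoint, rather than via `PreemptionCheckpointHandler.total_run_calls`. In TensorFlow that means a `tf.Variable` registered on a `tf.train.Checkpoint`; the underlying idea can be sketched framework-free (the `save`/`restore` helpers below are stand-ins for checkpointing, not TF API):

```python
import json
import os
import tempfile

def save(state, path):
    # Stand-in for checkpoint saving: persist the training state.
    with open(path, "w") as f:
        json.dump(state, f)

def restore(path):
    # Stand-in for checkpoint restore after a preemption/restart.
    with open(path) as f:
        return json.load(f)

# The step counter lives inside the saved state, so a restarted worker
# resumes from the checkpointed step instead of re-counting run calls.
ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
state = {"step": 0}
for _ in range(3):
    state["step"] += 1
save(state, ckpt)

resumed = restore(ckpt)
assert resumed["step"] == 3
```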
[worker-2]: I0328 06:06:54.958544 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-1]: I0328 06:06:55.005490 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:55.009437 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:55.098060 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:55.131704 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:55.269641 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:55.290198 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:55.321354 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:55.321338 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:55.507099 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:55.510993 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:55.518711 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:55.522443 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:55.601792 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:55.605691 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:55.605920 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:55.629640 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:55.721004 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:55.730777 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:55.749994 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:55.791584 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: W0328 06:06:55.919256 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9e340> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: W0328 06:06:55.924339 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a94720> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: W0328 06:06:55.936522 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9a5c0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: W0328 06:06:55.940203 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9f600> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
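The retracing warnings fire because each call presents arguments that `tf.function` cannot match to an already-compiled trace. Conceptually, `tf.function` keys its traces the way the toy cache below keys results: tensors match by shape/dtype, but plain Python objects match by value/identity, so a fresh object (or a new shape) on every call forces a new trace. This is an analogy for the caching behavior described in the warning, not the actual `tf.function` implementation:

```python
# Toy model of tf.function trace caching: a "signature" stands in for
# the argument spec (dtype, shape) that tf.function derives per call.
trace_cache = {}

def traced_call(signature):
    # Cache miss means an expensive "retrace"; hit reuses the old trace.
    if signature not in trace_cache:
        trace_cache[signature] = f"trace#{len(trace_cache)}"
    return trace_cache[signature]

# The same signature reuses the cached trace: no retracing.
traced_call(("float32", (4, 4)))
traced_call(("float32", (4, 4)))
assert len(trace_cache) == 1

# A different shape (or a distinct Python object) triggers a retrace,
# which is what accumulates into the "5 out of the last 5 calls" warning.
traced_call(("float32", (8, 4)))
assert len(trace_cache) == 2
```

`reduce_retracing=True`, mentioned in the warning, tells `tf.function` to relax trace signatures (e.g. generalizing shapes) so fewer distinct cache keys arise.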
[worker-0]: I0328 06:06:55.959753 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:55.960359 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:55.980182 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:56.349898 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: W0328 06:06:56.427311 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c89a0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: I0328 06:06:56.427635 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-1]: W0328 06:06:56.431801 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c8d60> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: I0328 06:06:56.432095 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-2]: W0328 06:06:56.436408 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0ccae0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: I0328 06:06:56.436681 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-1]: I0328 06:06:56.445099 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:56.460210 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:56.481829 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: W0328 06:06:56.497108 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c89a0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-3]: I0328 06:06:56.497413 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-3]: I0328 06:06:56.544188 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:56.669464 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:56.680566 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:56.681422 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:56.684108 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:56.889707 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:56.889706 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:56.910140 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:56.930288 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:57.168572 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:57.207347 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:57.219857 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:57.239174 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:57.317946 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:57.339946 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:57.340766 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:57.397257 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:57.489668 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:57.489666 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:57.509551 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:57.529709 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:57.578459 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-3]: I0328 06:06:57.578157 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-0]: I0328 06:06:57.584857 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:epoch 1 finished
[worker-2]: INFO:tensorflow:epoch 1 finished
[worker-2]: I0328 06:06:57.581766 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-1]: I0328 06:06:57.578968 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:57.600022 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:57.599226 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:57.643459 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:57.702680 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:57.710588 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:57.739763 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:57.759579 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:57.861967 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:57.871810 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:57.871732 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:57.929174 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:58.032946 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:58.027260 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:58.041773 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:58.061429 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:58.229967 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:58.251632 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:58.270520 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:58.279677 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:58.429666 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:58.468057 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:58.452240 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:58.509291 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:epoch 2 finished
[worker-0]: I0328 06:06:58.651051 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-2]: INFO:tensorflow:epoch 2 finished
[worker-1]: INFO:tensorflow:epoch 2 finished
[worker-3]: INFO:tensorflow:epoch 2 finished
[worker-1]: I0328 06:06:58.650917 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-3]: I0328 06:06:58.655541 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-2]: I0328 06:06:58.651098 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:58.660744 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:58.661336 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:58.664716 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:58.687989 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:58.800687 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:58.799490 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:58.817935 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:58.819768 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:58.998316 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:58.998914 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:59.031522 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:59.051951 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:59.189879 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:59.228174 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:59.230729 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:59.259731 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:59.379661 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:59.382755 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:59.390116 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:59.419203 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:59.500271 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:59.529775 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:59.531994 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:59.579811 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:epoch 3 finished
[worker-3]: I0328 06:06:59.708722 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-0]: INFO:tensorflow:epoch 3 finished
[worker-0]: I0328 06:06:59.713615 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-2]: INFO:tensorflow:epoch 3 finished
[worker-2]: I0328 06:06:59.714128 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-1]: INFO:tensorflow:epoch 3 finished
[worker-1]: I0328 06:06:59.714178 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:06:59.739610 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:06:59.739614 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:06:59.759427 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:06:59.773219 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:00.029438 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:00.041084 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:00.049893 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:00.060038 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:00.191542 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:00.221316 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:00.219315 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:00.221323 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:00.359826 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:00.349317 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:00.349646 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:00.371199 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:00.491356 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:00.483115 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:00.489314 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:00.519578 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:00.567459 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:00.571093 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:00.571134 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:00.579562 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:epoch 4 finished
[worker-3]: I0328 06:07:00.666879 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-3]: INFO:tensorflow:Training finished.
[worker-3]: I0328 06:07:00.667096 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-0]: INFO:tensorflow:epoch 4 finished
[worker-1]: INFO:tensorflow:epoch 4 finished
[worker-1]: I0328 06:07:00.694014 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-1]: INFO:tensorflow:Training finished.
[worker-1]: I0328 06:07:00.694224 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-0]: I0328 06:07:00.692597 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-0]: INFO:tensorflow:Training finished.
[worker-0]: I0328 06:07:00.692845 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-2]: INFO:tensorflow:epoch 4 finished
[worker-2]: I0328 06:07:00.706376 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-2]: INFO:tensorflow:Training finished.
[worker-2]: I0328 06:07:00.706616 281473374647168 gce_failure_handler_test.py:244] Training finished.
INFO:tensorflow:restarting workers
I0328 06:07:01.917149 281472867201920 gce_failure_handler_test.py:411] restarting workers
INFO:tensorflow:workers restarted
I0328 06:07:02.095808 281472867201920 gce_failure_handler_test.py:415] workers restarted
[worker-0]: I0328 06:07:02.152937 281473374647168 multi_process_runner.py:840] Subprocess with PID 2658602 (worker, 0) is now being started.
[worker-0]: I0328 06:07:02.153285 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21829", "localhost:21243", "localhost:22894", "localhost:19018"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}'
[worker-2]: I0328 06:07:02.335736 281473374647168 multi_process_runner.py:840] Subprocess with PID 2658911 (worker, 2) is now being started.
[worker-3]: I0328 06:07:02.405360 281473374647168 multi_process_runner.py:840] Subprocess with PID 2658918 (worker, 3) is now being started.
[worker-3]: I0328 06:07:02.405672 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21829", "localhost:21243", "localhost:22894", "localhost:19018"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}'
[worker-1]: I0328 06:07:02.397992 281473374647168 multi_process_runner.py:840] Subprocess with PID 2658901 (worker, 1) is now being started.
[worker-2]: I0328 06:07:02.336074 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21829", "localhost:21243", "localhost:22894", "localhost:19018"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}' [worker-1]: I0328 06:07:02.398342 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:21829", "localhost:21243", "localhost:22894", "localhost:19018"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}' [worker-1]: 2023-03-28 06:07:02.443040: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:21243 [worker-2]: 2023-03-28 06:07:02.443612: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:22894 [worker-3]: 2023-03-28 06:07:02.453498: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:19018 [worker-0]: 2023-03-28 06:07:02.471615: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:21829 [worker-0]: 2023-03-28 06:07:02.486201: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:2 has connected to coordination service. Incarnation: 15898918648068835688 [worker-2]: 2023-03-28 06:07:02.486547: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-0]: 2023-03-28 06:07:02.492753: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:3 has connected to coordination service. Incarnation: 13451992889501474509 [worker-0]: 2023-03-28 06:07:02.492800: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:0 has connected to coordination service. 
Incarnation: 10762765175433915127
[worker-3]: 2023-03-28 06:07:02.493059: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-0]: 2023-03-28 06:07:02.493053: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
INFO:tensorflow:Termination notice available.
I0328 06:07:02.668598 281462997512672 gce_failure_handler_test.py:142] Termination notice available.
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.11/logging/__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 687, in format
    record.message = record.getMessage()
                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 377, in getMessage
    msg = msg % self.args
          ~~~~^~~~~~~~~~~
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.11/threading.py", line 995, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 696, in _poll_termination_signal
    self._maybe_set_received_own_sigterm()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 701, in _maybe_set_received_own_sigterm
    logging.info('Received termination notice.',
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/platform/tf_logging.py", line 198, in info
    get_logger().info(msg, *args, **kwargs)
Message: 'Received termination notice.'
Arguments: ('single_worker',)
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.11/logging/__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/logging/__init__.py", line 1025, in format
    return prefix + super(PythonFormatter, self).format(record)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 687, in format
    record.message = record.getMessage()
                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 377, in getMessage
    msg = msg % self.args
          ~~~~^~~~~~~~~~~
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.11/threading.py", line 995, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 696, in _poll_termination_signal
    self._maybe_set_received_own_sigterm()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 701, in _maybe_set_received_own_sigterm
    logging.info('Received termination notice.',
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/platform/tf_logging.py", line 198, in info
    get_logger().info(msg, *args, **kwargs)
  File "/usr/lib/python3.11/logging/__init__.py", line 1489, in info
    self._log(INFO, msg, args, **kwargs)
  File "/usr/lib/python3.11/logging/__init__.py", line 1634, in _log
    self.handle(record)
  File "/usr/lib/python3.11/logging/__init__.py", line 1644, in handle
    self.callHandlers(record)
  File "/usr/lib/python3.11/logging/__init__.py", line 1706, in callHandlers
    hdlr.handle(record)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/logging/__init__.py", line 988, in handle
    return self._current_handler.handle(record)
  File "/usr/lib/python3.11/logging/__init__.py", line 978, in handle
    self.emit(record)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/logging/__init__.py", line 925, in emit
    super(PythonHandler, self).emit(record)
Message: 'Received termination notice.'
Arguments: ('single_worker',)
Exception ignored in:
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 775, in __del__
    self._stop_poll_termination_signal_thread()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 734, in _stop_poll_termination_signal_thread
    self._poll_termination_signal_thread.join()
  File "/usr/lib/python3.11/threading.py", line 1109, in join
    raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
[worker-0]: 2023-03-28 06:07:02.681526: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 11788690304605726605
[worker-1]: 2023-03-28 06:07:02.682368: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
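The two logging errors above both come from `logging.info('Received termination notice.', ...)` being handed an extra positional argument while the message string contains no `%` placeholder, so `record.getMessage()` fails with `TypeError: not all arguments converted during string formatting` (the logging machinery catches it and prints the error rather than crashing the thread). A minimal sketch reproducing the failure and the placeholder fix; the `LogRecord` fields here are illustrative, not the exact ones absl builds:

```python
import logging

# Message has no '%s', but args is non-empty -> 'msg % args' raises TypeError.
bad = logging.LogRecord(
    name="tf", level=logging.INFO, pathname="failure_handling.py",
    lineno=701, msg="Received termination notice.",
    args=("single_worker",), exc_info=None)
try:
    bad.getMessage()  # "Received termination notice." % ("single_worker",)
    raised = False
except TypeError:
    raised = True

# The fix is a placeholder that consumes the argument.
good = logging.LogRecord(
    name="tf", level=logging.INFO, pathname="failure_handling.py",
    lineno=701, msg="Received termination notice for %s.",
    args=("single_worker",), exc_info=None)
print(raised, good.getMessage())
```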
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-3]: I0328 06:07:02.700164 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-2]: I0328 06:07:02.737578 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available 
devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: I0328 06:07:02.717027 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-1]: I0328 06:07:02.761179 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0328 06:07:02.828115 281473374647168 
mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0328 06:07:02.829352 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0328 06:07:02.830001 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0328 06:07:02.974498 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0328 06:07:02.975719 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0328 06:07:02.976402 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0328 06:07:02.993318 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0328 06:07:02.993777 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0328 06:07:02.993981 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0328 06:07:02.958958 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0328 06:07:03.027993 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0328 06:07:03.028227 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:21829', 'localhost:21243', 'localhost:22894', 'localhost:19018']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Start watcher for peer's signal. [worker-2]: I0328 06:07:03.143601 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-2]: INFO:tensorflow:Start polling for termination signal. 
[worker-2]: I0328 06:07:03.176265 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-0]: INFO:tensorflow:Start watcher for peer's signal.
[worker-0]: I0328 06:07:03.189051 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-1]: INFO:tensorflow:Start watcher for peer's signal.
[worker-1]: I0328 06:07:03.199604 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-3]: INFO:tensorflow:Start watcher for peer's signal.
[worker-3]: I0328 06:07:03.197359 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-2]: Exception in thread WorkerTerminationSignalWatcher-2:
[worker-2]: Traceback (most recent call last):
[worker-2]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-2]:     self.run()
[worker-2]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-2]:     self._target(*self._args, **self._kwargs)
[worker-2]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-2]:     if self._termination_watcher_fn():
[worker-2]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-2]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-2]:     elif frequent_send and not maintenance_event.is_set():
[worker-2]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-2]: AttributeError: 'str' object has no attribute 'is_set'
[worker-2]:
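Every `WorkerTerminationSignalWatcher` thread dies the same way: the test's watcher function receives a plain string where it expects a `threading.Event`, so `maintenance_event.is_set()` raises `AttributeError`. A minimal sketch of the type mismatch, with a hypothetical `poll_once` standing in for the watcher body:

```python
import threading

# Hypothetical stand-in for the body of mock_termination_watcher_function_gce;
# it assumes maintenance_event is a threading.Event.
def poll_once(maintenance_event, frequent_send=True):
    if frequent_send and not maintenance_event.is_set():
        return "send termination notice"
    return "idle"

result = poll_once(threading.Event())  # an Event works: is_set() exists

try:
    poll_once("maintenance")  # a str does not: AttributeError, as in the log
    crashed = False
except AttributeError:
    crashed = True
print(result, crashed)
```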
INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-2]: I0328 06:07:03.209188 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. [worker-2]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: Instructions for updating: [worker-2]: Track steps using a tf.Variable saved in checkpoint instead. [worker-2]: W0328 06:07:03.209510 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: Instructions for updating: [worker-2]: Track steps using a tf.Variable saved in checkpoint instead. [worker-2]: INFO:tensorflow:Start training at 0 [worker-2]: I0328 06:07:03.209661 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-0]: INFO:tensorflow:Start polling for termination signal. [worker-0]: I0328 06:07:03.217257 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-3]: INFO:tensorflow:Start polling for termination signal. [worker-3]: I0328 06:07:03.234302 281473374647168 failure_handling.py:683] Start polling for termination signal. 
[worker-1]: INFO:tensorflow:Start polling for termination signal.
[worker-1]: I0328 06:07:03.236691 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-0]: Exception in thread WorkerTerminationSignalWatcher-0:
[worker-0]: Traceback (most recent call last):
[worker-0]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-0]:     self.run()
[worker-0]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-0]:     self._target(*self._args, **self._kwargs)
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-0]:     if self._termination_watcher_fn():
[worker-0]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-0]:     elif frequent_send and not maintenance_event.is_set():
[worker-0]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]: AttributeError: 'str' object has no attribute 'is_set'
[worker-0]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-0]: I0328 06:07:03.238016 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: Track steps using a tf.Variable saved in checkpoint instead. [worker-0]: W0328 06:07:03.238311 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-0]: INFO:tensorflow:Start training at 0
[worker-0]: I0328 06:07:03.238461 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-3]: Exception in thread WorkerTerminationSignalWatcher-3:
[worker-3]: Traceback (most recent call last):
[worker-3]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-3]:     self.run()
[worker-3]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-3]:     self._target(*self._args, **self._kwargs)
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-3]:     if self._termination_watcher_fn():
[worker-3]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-3]:     elif frequent_send and not maintenance_event.is_set():
[worker-3]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-3]: AttributeError: 'str' object has no attribute 'is_set'
[worker-3]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-3]: I0328 06:07:03.256223 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-3]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-3]: Instructions for updating: [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. [worker-3]: W0328 06:07:03.256586 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-3]: Instructions for updating: [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-3]: INFO:tensorflow:Start training at 0
[worker-3]: I0328 06:07:03.256744 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-1]: Exception in thread WorkerTerminationSignalWatcher-1:
[worker-1]: Traceback (most recent call last):
[worker-1]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-1]:     self.run()
[worker-1]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-1]:     self._target(*self._args, **self._kwargs)
[worker-1]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-1]:     if self._termination_watcher_fn():
[worker-1]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-1]:     elif frequent_send and not maintenance_event.is_set():
[worker-1]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]: AttributeError: 'str' object has no attribute 'is_set'
[worker-1]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-1]: I0328 06:07:03.276194 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. [worker-1]: W0328 06:07:03.276560 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-1]: I0328 06:07:03.276718 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-2]: I0328 06:07:03.486409 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:03.511492 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:03.521904 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:03.553399 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:03.750361 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:03.770236 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:03.790883 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:03.799932 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:03.917376 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:03.961030 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:03.981899 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:04.031472 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:04.268475 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:04.265792 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:04.256695 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:04.268923 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:04.389822 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:04.418107 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:04.443045 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:04.462035 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: W0328 06:07:04.607205 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to <function ….wrapped_fn at 0xfffef4a9e480> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: W0328 06:07:04.627778 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to <function ….wrapped_fn at 0xfffef4a9f1a0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: W0328 06:07:04.631955 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to <function ….wrapped_fn at 0xfffef4a9c540> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: W0328 06:07:04.635649 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to <function ….wrapped_fn at 0xfffef4a9eb60> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
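The repeated records above show each of the four workers contributing one tensor to a collective all_reduce with group_size = 4. As a minimal, illustrative sketch of what such a reduction computes (TensorFlow's CollectiveAllReduce uses ring or NCCL communication rather than a central gather; the function name `all_reduce_sum` here is hypothetical), a sum all-reduce leaves every participant holding the elementwise sum of all contributions:

```python
# Illustrative sum all-reduce over a group of 4 workers (not TensorFlow's
# actual collective implementation, which avoids gathering to one place).

def all_reduce_sum(contributions):
    """Return, for every worker, the elementwise sum of all workers' tensors."""
    summed = [sum(vals) for vals in zip(*contributions)]
    # Every member of the group receives the same reduced tensor.
    return [list(summed) for _ in contributions]

# group_size = 4: one gradient-like vector per worker
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
reduced = all_reduce_sum(grads)
print(reduced[0])  # [16.0, 20.0] on every worker
```

In the log, each training step triggers exactly one such reduction ("1 all_reduces") across the 4-worker group.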
[worker-1]: I0328 06:07:04.637031 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:04.643674 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:04.643640 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:04.667348 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: W0328 06:07:04.754122 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to <function ….wrapped_fn at 0xffff9c0d0180> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: W0328 06:07:04.753986 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to <function ….wrapped_fn at 0xffff9c0ccfe0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-3]: W0328 06:07:04.759063 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to <function ….wrapped_fn at 0xffff9c0d0860> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: W0328 06:07:04.748557 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to <function ….wrapped_fn at 0xffff9c0d0400> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
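The retracing warnings above describe how tf.function re-traces whenever it sees a new input signature: Python objects enter the trace cache key by value, while tensors are keyed only by shape and dtype. A small pure-Python sketch of that caching behavior (an illustrative model, not TensorFlow's real implementation; `FakeTensor` and `TracedFn` are invented names) shows why passing Python scalars in a loop causes a trace per call:

```python
# Toy model of tf.function's trace cache: Python arguments are keyed by
# value, tensor-like arguments by (shape, dtype) only, so looping over
# Python ints "retraces" on every iteration while same-shaped tensors reuse
# one trace.
from dataclasses import dataclass


@dataclass(frozen=True)
class FakeTensor:
    shape: tuple
    dtype: str


class TracedFn:
    def __init__(self, fn):
        self.fn = fn
        self.traces = {}  # cache of seen input signatures

    def _key(self, arg):
        if isinstance(arg, FakeTensor):
            return (arg.shape, arg.dtype)  # tensors: shape/dtype only
        return ("py", arg)                 # Python objects: by value

    def __call__(self, arg):
        key = self._key(arg)
        if key not in self.traces:         # cache miss -> "retrace"
            self.traces[key] = True
        return self.fn(arg)


step = TracedFn(lambda x: x)
for i in range(5):
    step(i)                  # five distinct Python values
print(len(step.traces))      # 5 traces -> would trigger the warning

step2 = TracedFn(lambda x: x)
for _ in range(5):
    step2(FakeTensor(shape=(2,), dtype="f32"))  # same signature each call
print(len(step2.traces))     # 1 trace
```

This is why the warning suggests passing tensors instead of Python objects, or using `reduce_retracing=True` to let tf.function relax signatures it has already seen.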
[worker-3]: I0328 06:07:04.759357 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-2]: I0328 06:07:04.754430 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-0]: I0328 06:07:04.748869 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-1]: I0328 06:07:04.754413 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-1]: I0328 06:07:04.762088 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:04.766815 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:04.780462 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:04.800546 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:04.952678 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:04.969008 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:04.950742 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:04.958031 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:05.106989 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:05.129939 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:05.129504 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:05.140945 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:05.289955 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:05.302434 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:05.301354 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:05.320007 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:05.433834 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:05.455408 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:05.453837 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:05.470308 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:05.567370 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:05.579579 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:05.599777 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:05.606796 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:05.852178 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-1]: I0328 06:07:05.857769 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-0]: I0328 06:07:05.862271 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-2]: I0328 06:07:05.866538 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-3]: I0328 06:07:05.859987 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:05.869812 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:05.890679 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:05.897123 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:05.962947 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:05.973221 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:05.972961 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:05.980873 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:06.085472 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:06.091806 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:06.116401 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:06.132234 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:06.258372 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:06.267162 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:06.290585 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:06.301428 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:06.471470 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:06.495890 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:06.494349 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation =
CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:06.546466 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:06.752667 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:06.772877 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:06.786860 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:06.777703 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, 
num_packs = 1 [worker-0]: INFO:tensorflow:epoch 2 finished [worker-0]: I0328 06:07:06.840706 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished [worker-3]: INFO:tensorflow:epoch 2 finished [worker-3]: I0328 06:07:06.840565 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished [worker-2]: INFO:tensorflow:epoch 2 finished [worker-2]: I0328 06:07:06.845139 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:06.847236 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:epoch 2 finished [worker-1]: I0328 06:07:06.852708 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:06.860065 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:06.851612 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:06.899508 281473374647168 
cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:06.982917 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:06.979796 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:06.987134 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:07.020303 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:07.077631 281473374647168 cross_device_ops.py:1151] Collective 
all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:07.089490 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:07.090282 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:07.136724 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:07.224012 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:07.235724 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, 
num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:07.243088 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:07.249531 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:07.306525 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:07.306913 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:07.313185 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, 
implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:07.323224 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:07.419600 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:07.424759 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:07.439676 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:07.482814 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = 
CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:epoch 3 finished [worker-3]: I0328 06:07:07.546790 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:07.553504 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:epoch 3 finished [worker-0]: I0328 06:07:07.566821 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-1]: INFO:tensorflow:epoch 3 finished [worker-2]: INFO:tensorflow:epoch 3 finished [worker-1]: I0328 06:07:07.571725 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-2]: I0328 06:07:07.575364 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:07.576646 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:07.581964 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 
06:07:07.606043 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:07.688784 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:07.699966 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:07.693453 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:07.709725 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:07.841659 281473374647168 
cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:07.844465 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:07.891595 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:07.879131 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:07.996234 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:08.017288 281473374647168 cross_device_ops.py:1151] Collective 
all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:08.023031 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:08.020894 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:08.078051 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:08.078530 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:08.093114 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, 
num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:08.101352 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:08.180024 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:08.182541 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:08.186363 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:08.188649 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, 
implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:epoch 4 finished [worker-3]: I0328 06:07:08.248054 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-3]: INFO:tensorflow:Training finished. [worker-3]: I0328 06:07:08.248301 281473374647168 gce_failure_handler_test.py:244] Training finished. [worker-0]: INFO:tensorflow:epoch 4 finished [worker-0]: I0328 06:07:08.267055 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-0]: INFO:tensorflow:Training finished. [worker-0]: I0328 06:07:08.267285 281473374647168 gce_failure_handler_test.py:244] Training finished. [worker-1]: INFO:tensorflow:epoch 4 finished [worker-1]: I0328 06:07:08.279825 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-1]: INFO:tensorflow:Training finished. [worker-1]: I0328 06:07:08.280065 281473374647168 gce_failure_handler_test.py:244] Training finished. [worker-2]: INFO:tensorflow:epoch 4 finished [worker-2]: I0328 06:07:08.296594 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-2]: INFO:tensorflow:Training finished. [worker-2]: I0328 06:07:08.296833 281473374647168 gce_failure_handler_test.py:244] Training finished. I0328 06:07:09.336908 281472867201920 multi_process_runner.py:646] worker-0 exit code: 0 I0328 06:07:09.337178 281472867201920 multi_process_runner.py:646] worker-1 exit code: 0 I0328 06:07:09.337293 281472867201920 multi_process_runner.py:646] worker-2 exit code: 0 I0328 06:07:09.337395 281472867201920 multi_process_runner.py:646] worker-3 exit code: 0 I0328 06:07:09.349287 281472867201920 multi_process_runner.py:662] Joining log reading threads. I0328 06:07:09.349582 281472867201920 multi_process_runner.py:665] Joined log reading threads. 
INFO:tensorflow:time(__main__.GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_False_graceperiod_0_inputarg_manager_strategyoption_MWMSmultiworker): 16.76s
I0328 06:07:09.916444 281472867201920 test_util.py:2462] time(__main__.GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_False_graceperiod_0_inputarg_manager_strategyoption_MWMSmultiworker): 16.76s
[       OK ] GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_False_graceperiod_0_inputarg_manager_strategyoption_MWMSmultiworker
[ RUN      ] GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_False_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker
INFO:tensorflow:Using local port 18407
I0328 06:07:09.917883 281472867201920 test_util.py:3794] Using local port 18407
INFO:tensorflow:Using local port 24929
I0328 06:07:09.918245 281472867201920 test_util.py:3794] Using local port 24929
INFO:tensorflow:Using local port 15928
I0328 06:07:09.918572 281472867201920 test_util.py:3794] Using local port 15928
INFO:tensorflow:Using local port 24716
I0328 06:07:09.918893 281472867201920 test_util.py:3794] Using local port 24716
INFO:tensorflow:Cluster starting.
I0328 06:07:10.347091 281472867201920 gce_failure_handler_test.py:405] Cluster starting.
[worker-0]: I0328 06:07:10.567578 281473374647168 multi_process_runner.py:840] Subprocess with PID 2699083 (worker, 0) is now being started.
[worker-0]: I0328 06:07:10.567937 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:18407", "localhost:24929", "localhost:15928", "localhost:24716"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}'
[worker-1]: I0328 06:07:10.789185 281473374647168 multi_process_runner.py:840] Subprocess with PID 2699398 (worker, 1) is now being started.
[worker-2]: I0328 06:07:10.789624 281473374647168 multi_process_runner.py:840] Subprocess with PID 2699603 (worker, 2) is now being started.
[worker-1]: I0328 06:07:10.789495 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:18407", "localhost:24929", "localhost:15928", "localhost:24716"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}'
[worker-1]: 2023-03-28 06:07:10.824400: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:24929
[worker-0]: 2023-03-28 06:07:10.864861: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:18407
[worker-2]: I0328 06:07:10.789925 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:18407", "localhost:24929", "localhost:15928", "localhost:24716"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}'
[worker-3]: I0328 06:07:10.913722 281473374647168 multi_process_runner.py:840] Subprocess with PID 2699725 (worker, 3) is now being started.
[worker-3]: I0328 06:07:10.914047 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:18407", "localhost:24929", "localhost:15928", "localhost:24716"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}'
[worker-0]: 2023-03-28 06:07:10.960362: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 16749353587426151820
[worker-1]: 2023-03-28 06:07:10.960657: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-0]: 2023-03-28 06:07:10.960428: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 18373787363029994253
[worker-0]: 2023-03-28 06:07:10.960773: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-3]: 2023-03-28 06:07:11.060899: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:24716
[worker-0]: 2023-03-28 06:07:11.064432: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:3 has connected to coordination service. Incarnation: 15115061801146974282
[worker-3]: 2023-03-28 06:07:11.069836: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-2]: 2023-03-28 06:07:11.385364: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:15928
[worker-0]: 2023-03-28 06:07:11.398098: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:2 has connected to coordination service. Incarnation: 7350149499785752254
[worker-2]: 2023-03-28 06:07:11.398839: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-3]: I0328 06:07:11.417362 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-1]: I0328 06:07:11.402429 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available 
devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: I0328 06:07:11.438007 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-2]: I0328 06:07:11.471602 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-0]: INFO:tensorflow:Using MirroredStrategy with 
devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0328 06:07:11.798652 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0328 06:07:11.799109 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0328 06:07:11.799315 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0328 06:07:11.784450 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0328 06:07:11.896620 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0328 06:07:11.896862 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0328 06:07:11.948571 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0328 06:07:11.949048 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0328 06:07:11.949254 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0328 06:07:11.978333 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0328 06:07:11.978796 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0328 06:07:11.978996 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Start watcher for peer's signal. [worker-1]: I0328 06:07:12.079581 281473374647168 failure_handling.py:634] Start watcher for peer's signal. 
[worker-0]: INFO:tensorflow:Start watcher for peer's signal.
[worker-3]: INFO:tensorflow:Start watcher for peer's signal.
[worker-3]: I0328 06:07:12.108274 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-0]: I0328 06:07:12.107929 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-1]: INFO:tensorflow:Start polling for termination signal.
[worker-1]: I0328 06:07:12.116265 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-2]: INFO:tensorflow:Start watcher for peer's signal.
[worker-2]: I0328 06:07:12.122217 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-0]: INFO:tensorflow:Start polling for termination signal.
[worker-0]: I0328 06:07:12.126632 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-1]: Exception in thread WorkerTerminationSignalWatcher-1:
[worker-1]: Traceback (most recent call last):
[worker-1]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-1]:     self.run()
[worker-1]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-1]:     self._target(*self._args, **self._kwargs)
[worker-1]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-1]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-1]: I0328 06:07:12.137281 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. [worker-1]: W0328 06:07:12.137607 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-1]: INFO:tensorflow:Start training at 0
[worker-1]: I0328 06:07:12.137758 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-1]:     if self._termination_watcher_fn():
[worker-1]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-1]:     elif frequent_send and not maintenance_event.is_set():
[worker-1]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]: AttributeError: 'str' object has no attribute 'is_set'
[worker-0]: Exception in thread WorkerTerminationSignalWatcher-0:
[worker-0]: Traceback (most recent call last):
[worker-0]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-0]:     self.run()
[worker-0]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-0]:     self._target(*self._args, **self._kwargs)
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-0]:     if self._termination_watcher_fn():
[worker-0]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-0]:     elif frequent_send and not maintenance_event.is_set():
[worker-0]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]: AttributeError: 'str' object has no attribute 'is_set'
[worker-0]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-0]: I0328 06:07:12.153162 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-0]: Instructions for updating:
[worker-0]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-0]: W0328 06:07:12.153491 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-0]: Instructions for updating:
[worker-3]: INFO:tensorflow:Start polling for termination signal.
[worker-3]: I0328 06:07:12.157160 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-0]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-0]: INFO:tensorflow:Start training at 0
[worker-0]: I0328 06:07:12.153642 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-2]: INFO:tensorflow:Start polling for termination signal.
[worker-2]: I0328 06:07:12.176265 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-2]: Exception in thread WorkerTerminationSignalWatcher-2:
[worker-2]: Traceback (most recent call last):
[worker-2]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-2]:     self.run()
[worker-2]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-2]:     self._target(*self._args, **self._kwargs)
[worker-2]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-2]:     if self._termination_watcher_fn():
[worker-2]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-2]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-2]:     elif frequent_send and not maintenance_event.is_set():
[worker-2]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-2]: AttributeError: 'str' object has no attribute 'is_set'
[worker-2]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-2]: I0328 06:07:12.199789 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-2]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: Instructions for updating: [worker-2]: Track steps using a tf.Variable saved in checkpoint instead. [worker-2]: W0328 06:07:12.200110 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: Instructions for updating: [worker-2]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-2]: INFO:tensorflow:Start training at 0
[worker-2]: I0328 06:07:12.200262 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-3]: Exception in thread WorkerTerminationSignalWatcher-3:
[worker-3]: Traceback (most recent call last):
[worker-3]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-3]:     self.run()
[worker-3]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-3]:     self._target(*self._args, **self._kwargs)
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-3]:     if self._termination_watcher_fn():
[worker-3]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-3]:     elif frequent_send and not maintenance_event.is_set():
[worker-3]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-3]: AttributeError: 'str' object has no attribute 'is_set'
[worker-3]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-3]: I0328 06:07:12.234357 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
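[Editor's note] Every worker's WorkerTerminationSignalWatcher thread dies with the same AttributeError: `mock_termination_watcher_function_gce` calls `maintenance_event.is_set()`, which exists on `threading.Event` but not on `str`, so the test parameterization has evidently passed a plain string for `maintenance_event`. A minimal sketch (hypothetical helper name, not the actual test code) reproducing the failure mode:

```python
import threading

def mock_termination_watcher(maintenance_event, frequent_send=True):
    # Mirrors the failing branch at gce_failure_handler_test.py:145:
    # .is_set() is a threading.Event method, not a str method.
    if frequent_send and not maintenance_event.is_set():
        return True
    return False

# With a real (unset) Event the watcher reports a pending termination.
assert mock_termination_watcher(threading.Event()) is True

# Passing a string instead reproduces the AttributeError seen on all
# four workers' watcher threads.
try:
    mock_termination_watcher("maintenance")
    raised = False
except AttributeError:
    raised = True
assert raised
```

Because the exception is raised inside a daemon watcher thread, the workers keep training afterwards; the watcher simply stops polling.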
[worker-3]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-3]: Instructions for updating: [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. [worker-3]: W0328 06:07:12.234714 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-3]: Instructions for updating: [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-3]: INFO:tensorflow:Start training at 0 [worker-3]: I0328 06:07:12.234866 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:12.331470 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:12.512653 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:12.507002 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:12.549681 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:13.009205 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 
all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:13.011738 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:13.019567 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:13.039674 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:13.113912 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:13.114794 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, 
group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:13.123771 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:13.139487 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:13.230025 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:13.252735 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:13.249921 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = 
CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:13.269990 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:13.363707 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:13.379855 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:13.381872 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:13.409752 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, 
num_packs = 1
[worker-3]: W0328 06:07:13.486391 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9a3e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: W0328 06:07:13.491902 281473374647168 polymorphic_function.py:158] (same "5 out of the last 5 calls" retracing warning, for .wrapped_fn at 0xfffef4aa2200>)
[worker-2]: W0328 06:07:13.496850 281473374647168 polymorphic_function.py:158] (same retracing warning, for .wrapped_fn at 0xfffef4a999e0>)
[worker-1]: W0328 06:07:13.520448 281473374647168 polymorphic_function.py:158] (same retracing warning, for .wrapped_fn at 0xfffef4a991c0>)
[worker-2]: I0328 06:07:13.504186 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:13.509766 281473374647168 cross_device_ops.py:1151] (same Collective all_reduce message)
[worker-3]: I0328 06:07:13.539777 281473374647168 cross_device_ops.py:1151] (same Collective all_reduce message)
[worker-1]: I0328 06:07:13.539785 281473374647168 cross_device_ops.py:1151] (same Collective all_reduce message)
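The retracing warnings above name two remedies: define the @tf.function once outside the loop, and pass reduce_retracing=True so calls with different shapes share a relaxed trace. A minimal standalone sketch (not code from this test; the trace counter is only for illustration) of the second remedy:

```python
import tensorflow as tf

trace_count = 0

@tf.function(reduce_retracing=True)
def square(x):
    global trace_count
    trace_count += 1  # runs only while tracing, not on every call
    return x * x

square(tf.constant([1.0, 2.0]))            # first trace, shape [2]
square(tf.constant([1.0, 2.0, 3.0]))       # retraces once with a relaxed shape
after_two = trace_count
square(tf.constant([1.0, 2.0, 3.0, 4.0]))  # reuses the relaxed trace
print(trace_count == after_two)
```

Without reduce_retracing=True each new input shape would compile a fresh ConcreteFunction, which is what triggers the "5 out of the last 5 calls" warning in the log.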
[worker-1]: W0328 06:07:13.640334 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c91c0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: I0328 06:07:13.640767 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-3]: W0328 06:07:13.638036 281473374647168 polymorphic_function.py:158] (same "6 out of the last 6 calls" retracing warning, for .wrapped_fn at 0xfffef7040540>)
[worker-3]: I0328 06:07:13.638341 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-2]: W0328 06:07:13.640352 281473374647168 polymorphic_function.py:158] (same retracing warning, for .wrapped_fn at 0xffff9c0c8d60>)
[worker-1]: I0328 06:07:13.648634 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: W0328 06:07:13.656505 281473374647168 polymorphic_function.py:158] (same retracing warning, for .wrapped_fn at 0xfffef4aa1300>)
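The repeated "Collective all_reduce ... implementation = CommunicationImplementation.AUTO" records come from the cross-device reduction each step performs under MultiWorkerMirroredStrategy. A standalone sketch of where that setting comes from (this is not the harness's actual setup; with no TF_CONFIG it builds a single-worker strategy, so group_size is 1 rather than the 4 seen above):

```python
import tensorflow as tf

# AUTO is the default shown in the log; RING or NCCL could be pinned instead.
options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.AUTO
)
strategy = tf.distribute.MultiWorkerMirroredStrategy(communication_options=options)

def replica_fn():
    # Triggers a "Collective all_reduce tensors: 1 all_reduces, ..." log line.
    ctx = tf.distribute.get_replica_context()
    return ctx.all_reduce(tf.distribute.ReduceOp.SUM, tf.constant(1.0))

result = strategy.run(replica_fn)
print(strategy.num_replicas_in_sync, float(result))
```

Summing 1.0 across replicas yields the replica count, which is how the logged group_size can be checked from inside a step.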
[worker-0]: I0328 06:07:13.656800 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-2]: I0328 06:07:13.658905 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[... the same Collective all_reduce message logged repeatedly by workers 0-3 (cross_device_ops.py:1151) between 06:07:13.659926 and 06:07:14.421582 ...]
[worker-3]: I0328 06:07:14.488955 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-0]: I0328 06:07:14.489030 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-2]: I0328 06:07:14.493012 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-1]: I0328 06:07:14.493427 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[... the same Collective all_reduce message logged repeatedly by workers 0-3 between 06:07:14.495550 and 06:07:15.029565 ...]
[worker-0]: I0328 06:07:15.072622 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-3]: I0328 06:07:15.076276 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-1]: I0328 06:07:15.077097 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-2]: I0328 06:07:15.096433 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[... the same Collective all_reduce message logged repeatedly by workers 0-3 between 06:07:15.080149 and 06:07:15.542506 ...]
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = 
CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:15.823421 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:15.832489 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:15.859398 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:15.902112 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:epoch 3 finished [worker-0]: INFO:tensorflow:epoch 3 finished [worker-3]: I0328 06:07:16.106976 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-0]: I0328 06:07:16.107226 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-1]: INFO:tensorflow:epoch 3 finished [worker-1]: I0328 06:07:16.111375 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-2]: INFO:tensorflow:epoch 3 finished [worker-2]: I0328 06:07:16.111757 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = 
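The repeated cross_device_ops messages above each describe one collective all-reduce over group_size = 4: every worker contributes its local tensor and every worker receives the combined result. A conceptual pure-Python sketch of that contract (a simple sum over a central list, not how TensorFlow's actual ring or NCCL collectives are implemented):

```python
def all_reduce(worker_tensors):
    """Conceptual all-reduce: sum element-wise across all workers,
    then hand every worker its own copy of the combined tensor."""
    total = [sum(vals) for vals in zip(*worker_tensors)]
    return [total[:] for _ in worker_tensors]

# Four workers (group_size = 4), each holding one local gradient tensor.
grads = [[1.0, 2.0], [1.0, 2.0], [1.0, 2.0], [1.0, 2.0]]
print(all_reduce(grads))  # every worker ends up with [4.0, 8.0]
```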
CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:16.113560 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:16.114633 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:16.120191 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:16.159796 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:16.209408 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:16.212213 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:16.228782 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:16.239691 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:16.308379 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:16.334726 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:16.337713 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:16.380334 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:16.454609 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:16.463272 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:16.469907 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:16.488044 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:16.673193 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:16.673507 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:16.719658 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:16.744245 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:16.907523 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:16.919903 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:16.923133 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:16.940680 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:17.027621 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-3]: I0328 06:07:17.027863 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-1]: I0328 06:07:17.029741 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-1]: I0328 06:07:17.029955 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-2]: I0328 06:07:17.031959 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-2]: I0328 06:07:17.032173 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-0]: I0328 06:07:17.043709 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-0]: I0328 06:07:17.043938 281473374647168 gce_failure_handler_test.py:244] Training finished.
I0328 06:07:18.577166 281472867201920 gce_failure_handler_test.py:411] restarting workers
I0328 06:07:18.676477 281472867201920 gce_failure_handler_test.py:415] workers restarted
[worker-0]: I0328 06:07:18.737004 281473374647168 multi_process_runner.py:840] Subprocess with PID 2736019 (worker, 0) is now being started.
[worker-0]: I0328 06:07:18.737359 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:18407", "localhost:24929", "localhost:15928", "localhost:24716"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}'
[worker-1]: I0328 06:07:18.759644 281473374647168 multi_process_runner.py:840] Subprocess with PID 2736028 (worker, 1) is now being started.
[worker-1]: I0328 06:07:18.760002 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:18407", "localhost:24929", "localhost:15928", "localhost:24716"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}'
[worker-3]: I0328 06:07:18.823075 281473374647168 multi_process_runner.py:840] Subprocess with PID 2736468 (worker, 3) is now being started.
[worker-3]: I0328 06:07:18.823389 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:18407", "localhost:24929", "localhost:15928", "localhost:24716"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}'
[worker-2]: I0328 06:07:18.841095 281473374647168 multi_process_runner.py:840] Subprocess with PID 2736032 (worker, 2) is now being started.
[worker-2]: I0328 06:07:18.841443 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:18407", "localhost:24929", "localhost:15928", "localhost:24716"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}'
[worker-0]: 2023-03-28 06:07:18.912752: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:18407
[worker-0]: 2023-03-28 06:07:18.916886: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 5965085747014253923
[worker-0]: 2023-03-28 06:07:18.926521: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-1]: 2023-03-28 06:07:18.936792: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:24929
[worker-0]: 2023-03-28 06:07:18.940212: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 16401876010410908157
[worker-1]: 2023-03-28 06:07:18.940885: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-2]: 2023-03-28 06:07:18.958104: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:15928
[worker-0]: 2023-03-28 06:07:18.976258: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:2 has connected to coordination service.
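The TF_CONFIG values above are how each restarted subprocess learns the cluster layout: the same worker address list for everyone, plus that process's own task index. A minimal sketch of writing and reading such a config (the ports are copied from this log; a real cluster substitutes its own host:port pairs):

```python
import json
import os

# Build a TF_CONFIG like the one the test runner exports for worker 0.
tf_config = {
    "cluster": {
        "worker": [
            "localhost:18407",
            "localhost:24929",
            "localhost:15928",
            "localhost:24716",
        ]
    },
    "task": {"type": "worker", "index": 0},
    "rpc_layer": "grpc",
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)

# Read it back the way a worker process would on startup.
parsed = json.loads(os.environ["TF_CONFIG"])
num_workers = len(parsed["cluster"]["worker"])
task_id = parsed["task"]["index"]
print(num_workers, task_id)  # 4 0
```

TensorFlow's `tf.distribute.MultiWorkerMirroredStrategy` reads this environment variable itself; the manual parse here is only to show the structure.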
Incarnation: 16357111190661386297
[worker-2]: 2023-03-28 06:07:18.977059: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-3]: 2023-03-28 06:07:19.082327: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:24716
[worker-0]: 2023-03-28 06:07:19.086197: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:3 has connected to coordination service. Incarnation: 11900548501078822168
[worker-3]: 2023-03-28 06:07:19.086874: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-0]: I0328 06:07:19.090485 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-1]: I0328 06:07:19.096854 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-2]: I0328 06:07:19.097017 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-3]: I0328 06:07:19.118473 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1']
[worker-1]: I0328 06:07:19.239758 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',)
[worker-1]: I0328 06:07:19.240339 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled.
[worker-1]: I0328 06:07:19.240560 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-2]: I0328 06:07:19.267473 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',)
[worker-2]: I0328 06:07:19.267974 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled.
[worker-2]: I0328 06:07:19.268190 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-0]: I0328 06:07:19.271073 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',)
[worker-3]: I0328 06:07:19.271082 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',)
[worker-3]: I0328 06:07:19.271531 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled.
[worker-0]: I0328 06:07:19.271589 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled.
[worker-3]: I0328 06:07:19.271752 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-0]: I0328 06:07:19.271797 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:18407', 'localhost:24929', 'localhost:15928', 'localhost:24716']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-0]: I0328 06:07:19.332318 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
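Each worker builds the same MultiWorkerMirroredStrategy from the shared cluster_spec, but its local_devices tuple is derived from its own task id, as the lines above show. A small illustrative helper (hypothetical, not part of the TensorFlow API) that reproduces the device strings seen in this log; in TensorFlow itself this all happens inside `tf.distribute.MultiWorkerMirroredStrategy()`, which reads TF_CONFIG:

```python
def local_devices_for(task_id: int, num_local_devices: int = 1):
    """Hypothetical helper: build the per-worker device tuple that
    MultiWorkerMirroredStrategy reports as local_devices in the log."""
    return tuple(
        f"/job:worker/task:{task_id}/device:CPU:{i}"
        for i in range(num_local_devices)
    )

# Matches the log line for worker 1: ('/job:worker/task:1/device:CPU:0',)
print(local_devices_for(1))
```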
[worker-2]: INFO:tensorflow:Start watcher for peer's signal. [worker-3]: INFO:tensorflow:Start watcher for peer's signal. [worker-2]: I0328 06:07:19.333642 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-0]: INFO:tensorflow:Start polling for termination signal. [worker-3]: I0328 06:07:19.333645 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-2]: INFO:tensorflow:Start polling for termination signal. [worker-0]: I0328 06:07:19.333737 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-2]: I0328 06:07:19.334348 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-3]: INFO:tensorflow:Start polling for termination signal. [worker-2]: Exception in thread WorkerTerminationSignalWatcher-2: [worker-3]: I0328 06:07:19.335121 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-2]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-3]: Exception in thread WorkerTerminationSignalWatcher-3: [worker-3]: Traceback (most recent call last): [worker-3]: File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner [worker-3]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-3]: I0328 06:07:19.335674 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. [worker-2]: Traceback (most recent call last): [worker-3]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. 
[worker-2]: File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner [worker-2]: I0328 06:07:19.334791 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. [worker-3]: Instructions for updating: [worker-2]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: Instructions for updating: [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. [worker-3]: W0328 06:07:19.335934 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-3]: Instructions for updating: [worker-2]: W0328 06:07:19.335332 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. [worker-2]: Instructions for updating: [worker-3]: INFO:tensorflow:Start training at 0 [worker-2]: Track steps using a tf.Variable saved in checkpoint instead. [worker-3]: I0328 06:07:19.336092 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-2]: INFO:tensorflow:Start training at 0 [worker-3]: self.run() [worker-2]: I0328 06:07:19.335842 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-2]: self.run() [worker-2]: File "/usr/lib/python3.11/threading.py", line 975, in run [worker-3]: File "/usr/lib/python3.11/threading.py", line 975, in run [worker-2]: self._target(*self._args, **self._kwargs) [worker-3]: self._target(*self._args, **self._kwargs) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal [worker-3]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal [worker-2]: if self._termination_watcher_fn(): [worker-3]: if self._termination_watcher_fn(): [worker-2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce [worker-2]: elif frequent_send and not maintenance_event.is_set(): [worker-3]: elif frequent_send and not maintenance_event.is_set(): [worker-2]: ^^^^^^^^^^^^^^^^^^^^^^^^ [worker-3]: ^^^^^^^^^^^^^^^^^^^^^^^^ [worker-2]: AttributeError: 'str' object has no attribute 'is_set' [worker-3]: AttributeError: 'str' object has no attribute 'is_set' [worker-0]: Exception in thread WorkerTerminationSignalWatcher-0: [worker-0]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. 
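The repeated `AttributeError: 'str' object has no attribute 'is_set'` in the watcher threads above means the mocked termination watcher received a plain string where it expected a `threading.Event`. A minimal stdlib sketch of the same failure mode (the `watcher` helper here is hypothetical, not the test's actual code):

```python
import threading

def watcher(maintenance_event):
    # Mirrors `elif frequent_send and not maintenance_event.is_set():`
    # from the traceback: the code assumes a threading.Event, so a plain
    # string argument raises AttributeError at the .is_set() call.
    return not maintenance_event.is_set()

event = threading.Event()
print(watcher(event))        # an unset Event -> True

try:
    watcher("maintenance")   # a str has no is_set(), as in the log
except AttributeError as e:
    print(e)                 # 'str' object has no attribute 'is_set'
```

This matches the log: the exception escapes `_poll_termination_signal` and kills each `WorkerTerminationSignalWatcher-N` thread, while the main training loop continues.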
[worker-0]: Traceback (most recent call last): [worker-0]: File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner [worker-0]: I0328 06:07:19.346397 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. [worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: Track steps using a tf.Variable saved in checkpoint instead. [worker-1]: INFO:tensorflow:Start watcher for peer's signal. [worker-1]: I0328 06:07:19.352521 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-1]: INFO:tensorflow:Start polling for termination signal. [worker-1]: I0328 06:07:19.353218 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-1]: Exception in thread WorkerTerminationSignalWatcher-1: [worker-1]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-1]: Traceback (most recent call last): [worker-1]: File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner [worker-1]: I0328 06:07:19.353686 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. 
[worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. [worker-0]: W0328 06:07:19.346916 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-0]: INFO:tensorflow:Start training at 0 [worker-0]: I0328 06:07:19.347161 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-0]: self.run() [worker-0]: File "/usr/lib/python3.11/threading.py", line 975, in run [worker-0]: self._target(*self._args, **self._kwargs) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal [worker-0]: if self._termination_watcher_fn(): [worker-0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce [worker-0]: elif frequent_send and not maintenance_event.is_set(): [worker-0]: ^^^^^^^^^^^^^^^^^^^^^^^^ [worker-0]: AttributeError: 'str' object has no attribute 'is_set' [worker-1]: W0328 06:07:19.354068 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-1]: INFO:tensorflow:Start training at 0 [worker-1]: I0328 06:07:19.354291 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-1]: self.run() [worker-1]: File "/usr/lib/python3.11/threading.py", line 975, in run [worker-1]: self._target(*self._args, **self._kwargs) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal [worker-1]: if self._termination_watcher_fn(): [worker-1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce [worker-1]: elif frequent_send and not maintenance_event.is_set(): [worker-1]: ^^^^^^^^^^^^^^^^^^^^^^^^ [worker-1]: AttributeError: 'str' object has no attribute 'is_set' [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:19.424534 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:19.427632 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, 
implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:19.457410 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:19.470939 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:19.610509 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:19.604926 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:19.604448 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = 
CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:19.649823 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:19.763650 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:19.780886 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:19.781861 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:19.794777 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, 
num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:19.973400 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:19.969727 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:19.979958 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:19.989948 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:20.109966 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: 
INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:20.109786 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:20.119757 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:20.129963 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9cfe0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-0]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9fe20> triggered tf.function retracing. 
Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-0]: W0328 06:07:20.245980 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9fe20> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-2]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9c400> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. 
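The "5 out of the last 5 calls ... triggered tf.function retracing" warnings fire because every call presents a new input signature, so each call traces a fresh concrete function. A small sketch of cause (2) from the warning text and of the `reduce_retracing=True` option it recommends (function names are illustrative):

```python
import tensorflow as tf

@tf.function
def square(x):
    return x * x

# Five distinct input shapes -> five separate traces, the pattern the
# "5 out of the last 5 calls" warning is complaining about.
for n in range(1, 6):
    square(tf.ones([n]))
print(square.experimental_get_tracing_count())   # one trace per shape

@tf.function(reduce_retracing=True)
def relaxed(x):
    return x * x

# With reduce_retracing=True the traced shape is generalized after a
# couple of retraces, so later shapes reuse an existing trace.
for n in range(1, 6):
    relaxed(tf.ones([n]))
print(relaxed.experimental_get_tracing_count())  # fewer traces
```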
[worker-2]: W0328 06:07:20.250517 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9c400> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-1]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9f420> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-1]: W0328 06:07:20.256624 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9f420> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. 
For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-3]: W0328 06:07:20.238560 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9cfe0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:20.256345 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:20.273265 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:20.279978 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = 
CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:20.269928 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d00e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-3]: W0328 06:07:20.369115 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d00e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-3]: INFO:tensorflow:epoch 0 finished [worker-0]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c89a0> triggered tf.function retracing. 
Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-0]: W0328 06:07:20.375494 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c89a0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-0]: INFO:tensorflow:epoch 0 finished [worker-0]: I0328 06:07:20.375814 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished [worker-2]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c89a0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. 
For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-2]: W0328 06:07:20.380700 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c89a0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-2]: INFO:tensorflow:epoch 0 finished [worker-2]: I0328 06:07:20.380990 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished [worker-1]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0cc0e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-1]: W0328 06:07:20.380602 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0cc0e0> triggered tf.function retracing. 
Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: INFO:tensorflow:epoch 0 finished
[worker-1]: I0328 06:07:20.380922 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-3]: I0328 06:07:20.369464 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:20.388344 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:epoch 1 finished
[worker-3]: I0328 06:07:21.460426 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-0]: INFO:tensorflow:epoch 1 finished
[worker-0]: I0328 06:07:21.459187 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-2]: INFO:tensorflow:epoch 1 finished
[worker-2]: I0328 06:07:21.467767 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-1]: INFO:tensorflow:epoch 1 finished
[worker-1]: I0328 06:07:21.467496 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-3]: INFO:tensorflow:epoch 2 finished
[worker-0]: INFO:tensorflow:epoch 2 finished
[worker-2]: INFO:tensorflow:epoch 2 finished
[worker-2]: I0328 06:07:22.755738 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-0]: I0328 06:07:22.755325 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-3]: I0328 06:07:22.750749 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-1]: INFO:tensorflow:epoch 2 finished
[worker-1]: I0328 06:07:22.772732 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-3]: INFO:tensorflow:epoch 3 finished
[worker-3]: I0328 06:07:23.771955 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-0]: INFO:tensorflow:epoch 3 finished
[worker-0]: I0328 06:07:23.776809 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-2]: INFO:tensorflow:epoch 3 finished
[worker-2]: I0328 06:07:23.781663 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-1]: INFO:tensorflow:epoch 3 finished
[worker-1]: I0328 06:07:23.796539 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-3]: INFO:tensorflow:epoch 4 finished
[worker-3]: I0328 06:07:24.978492 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-3]: INFO:tensorflow:Training finished.
[worker-3]: I0328 06:07:24.978725 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-0]: INFO:tensorflow:epoch 4 finished
[worker-0]: I0328 06:07:24.997648 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-1]: INFO:tensorflow:epoch 4 finished
[worker-1]: I0328 06:07:24.999374 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-2]: INFO:tensorflow:epoch 4 finished
[worker-2]: I0328 06:07:25.000267 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished
[worker-2]: INFO:tensorflow:Training finished.
[worker-2]: I0328 06:07:25.000472 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-0]: INFO:tensorflow:Training finished.
[worker-0]: I0328 06:07:24.997899 281473374647168 gce_failure_handler_test.py:244] Training finished.
[worker-1]: INFO:tensorflow:Training finished.
[worker-1]: I0328 06:07:24.999611 281473374647168 gce_failure_handler_test.py:244] Training finished.
I0328 06:07:26.646261 281472867201920 multi_process_runner.py:646] worker-0 exit code: 0
I0328 06:07:26.646523 281472867201920 multi_process_runner.py:646] worker-1 exit code: 0
I0328 06:07:26.646636 281472867201920 multi_process_runner.py:646] worker-2 exit code: 0
I0328 06:07:26.646736 281472867201920 multi_process_runner.py:646] worker-3 exit code: 0
I0328 06:07:26.649235 281472867201920 multi_process_runner.py:662] Joining log reading threads.
I0328 06:07:26.649418 281472867201920 multi_process_runner.py:665] Joined log reading threads.
INFO:tensorflow:time(__main__.GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_False_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker): 17.04s
I0328 06:07:26.958974 281472867201920 test_util.py:2462] time(__main__.GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_False_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker): 17.04s
[ OK ] GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_False_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker
[ RUN ] GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_True_graceperiod_0_inputarg_manager_strategyoption_MWMSmultiworker
INFO:tensorflow:Using local port 22895
I0328 06:07:26.960757 281472867201920 test_util.py:3794] Using local port 22895
INFO:tensorflow:Using local port 17953
I0328 06:07:26.961162 281472867201920 test_util.py:3794] Using local port 17953
INFO:tensorflow:Using local port 19861
I0328 06:07:26.961493 281472867201920 test_util.py:3794] Using local port 19861
INFO:tensorflow:Using local port 16674
I0328 06:07:26.961822 281472867201920 test_util.py:3794] Using local port 16674
INFO:tensorflow:Cluster starting.
I0328 06:07:27.137490 281472867201920 gce_failure_handler_test.py:405] Cluster starting.
[worker-0]: I0328 06:07:27.537711 281473374647168 multi_process_runner.py:840] Subprocess with PID 2754209 (worker, 0) is now being started.
[worker-0]: I0328 06:07:27.538053 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22895", "localhost:17953", "localhost:19861", "localhost:16674"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}'
[worker-1]: I0328 06:07:27.947537 281473374647168 multi_process_runner.py:840] Subprocess with PID 2754949 (worker, 1) is now being started.
[worker-2]: I0328 06:07:27.987588 281473374647168 multi_process_runner.py:840] Subprocess with PID 2754955 (worker, 2) is now being started.
[worker-0]: 2023-03-28 06:07:27.996752: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:22895
[worker-2]: I0328 06:07:27.988041 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22895", "localhost:17953", "localhost:19861", "localhost:16674"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}'
[worker-3]: I0328 06:07:28.028931 281473374647168 multi_process_runner.py:840] Subprocess with PID 2755019 (worker, 3) is now being started.
[worker-1]: I0328 06:07:27.947944 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22895", "localhost:17953", "localhost:19861", "localhost:16674"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}'
[worker-0]: 2023-03-28 06:07:28.038026: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 4299057887399151178
[worker-0]: 2023-03-28 06:07:28.038414: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-3]: I0328 06:07:28.029265 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22895", "localhost:17953", "localhost:19861", "localhost:16674"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}'
[worker-1]: 2023-03-28 06:07:28.296528: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:17953
[worker-0]: 2023-03-28 06:07:28.346433: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 7508287490637501054
[worker-1]: 2023-03-28 06:07:28.416361: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-3]: 2023-03-28 06:07:28.548799: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:16674
[worker-0]: 2023-03-28 06:07:28.556371: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:3 has connected to coordination service. Incarnation: 8226068829015403524
[worker-3]: 2023-03-28 06:07:28.557260: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-2]: 2023-03-28 06:07:28.598810: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:19861
[worker-0]: 2023-03-28 06:07:28.667168: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:2 has connected to coordination service. Incarnation: 16709900837690940471
[worker-2]: 2023-03-28 06:07:28.674113: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-1]: I0328 06:07:28.685310 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1']
[worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-0]: I0328 06:07:28.718019 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-3]: I0328 06:07:28.697669 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1']
[worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-2]: I0328 06:07:28.717622 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1']
[worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',)
[worker-2]: I0328 06:07:28.837122 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',)
[worker-2]: INFO:tensorflow:Check health not enabled.
[worker-2]: I0328 06:07:28.837742 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled.
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-2]: I0328 06:07:28.837954 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',)
[worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',)
[worker-0]: I0328 06:07:28.874769 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',)
[worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',)
[worker-3]: I0328 06:07:28.875097 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',)
[worker-0]: INFO:tensorflow:Check health not enabled.
[worker-1]: I0328 06:07:28.877096 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',)
[worker-3]: INFO:tensorflow:Check health not enabled.
[worker-0]: I0328 06:07:28.875418 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled.
[worker-3]: I0328 06:07:28.875619 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled.
[worker-1]: INFO:tensorflow:Check health not enabled.
[worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-1]: I0328 06:07:28.877694 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled.
[worker-3]: I0328 06:07:28.875828 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-0]: I0328 06:07:28.875633 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-1]: I0328 06:07:28.877901 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO
[worker-2]: INFO:tensorflow:Start watcher for peer's signal.
[worker-2]: I0328 06:07:29.040942 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-2]: INFO:tensorflow:Start polling for termination signal.
[worker-2]: I0328 06:07:29.056850 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-0]: INFO:tensorflow:Start watcher for peer's signal.
[worker-0]: I0328 06:07:29.057754 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-2]: Exception in thread WorkerTerminationSignalWatcher-2:
[worker-2]: Traceback (most recent call last):
[worker-2]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-2]:     self.run()
[worker-2]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-2]:     self._target(*self._args, **self._kwargs)
[worker-1]: INFO:tensorflow:Start watcher for peer's signal.
[worker-2]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-2]:     if self._termination_watcher_fn():
[worker-2]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-2]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-2]:     elif frequent_send and not maintenance_event.is_set():
[worker-2]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-2]: AttributeError: 'str' object has no attribute 'is_set'
[worker-2]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-2]: I0328 06:07:29.076455 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-2]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-2]: Instructions for updating:
[worker-2]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-2]: W0328 06:07:29.078989 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-2]: Instructions for updating:
[worker-2]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-2]: INFO:tensorflow:Start training at 0
[worker-2]: I0328 06:07:29.079176 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-3]: INFO:tensorflow:Start watcher for peer's signal.
[worker-3]: I0328 06:07:29.093447 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-1]: I0328 06:07:29.079708 281473374647168 failure_handling.py:634] Start watcher for peer's signal.
[worker-0]: INFO:tensorflow:Start polling for termination signal.
[worker-0]: I0328 06:07:29.094928 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-0]: Exception in thread WorkerTerminationSignalWatcher-0:
[worker-0]: Traceback (most recent call last):
[worker-1]: INFO:tensorflow:Start polling for termination signal.
[worker-3]: INFO:tensorflow:Start polling for termination signal.
[worker-1]: I0328 06:07:29.106889 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-3]: I0328 06:07:29.106954 281473374647168 failure_handling.py:683] Start polling for termination signal.
[worker-0]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-0]:     self.run()
[worker-0]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-0]:     self._target(*self._args, **self._kwargs)
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-0]:     if self._termination_watcher_fn():
[worker-0]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-0]:     elif frequent_send and not maintenance_event.is_set():
[worker-0]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]: AttributeError: 'str' object has no attribute 'is_set'
[worker-0]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-0]: I0328 06:07:29.106590 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-0]: Instructions for updating:
[worker-0]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-0]: W0328 06:07:29.109354 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-0]: Instructions for updating:
[worker-0]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-0]: INFO:tensorflow:Start training at 0
[worker-0]: I0328 06:07:29.109537 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-3]: Exception in thread WorkerTerminationSignalWatcher-3:
[worker-3]: Traceback (most recent call last):
[worker-3]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-3]:     self.run()
[worker-1]: Exception in thread WorkerTerminationSignalWatcher-1:
[worker-1]: Traceback (most recent call last):
[worker-1]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-3]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-3]:     self._target(*self._args, **self._kwargs)
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-3]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-3]: I0328 06:07:29.127452 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-3]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-3]: Instructions for updating:
[worker-3]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-3]: W0328 06:07:29.127890 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-3]: Instructions for updating:
[worker-3]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-3]: INFO:tensorflow:Start training at 0
[worker-3]: I0328 06:07:29.128059 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-3]:     if self._termination_watcher_fn():
[worker-3]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-3]:     elif frequent_send and not maintenance_event.is_set():
[worker-3]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-3]: AttributeError: 'str' object has no attribute 'is_set'
[worker-1]:     self.run()
[worker-1]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-1]:     self._target(*self._args, **self._kwargs)
[worker-1]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-1]:     if self._termination_watcher_fn():
[worker-1]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-1]:     elif frequent_send and not maintenance_event.is_set():
[worker-1]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]: AttributeError: 'str' object has no attribute 'is_set'
[worker-1]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-1]: I0328 06:07:29.130094 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-1]: Instructions for updating:
[worker-1]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-1]: W0328 06:07:29.130510 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-1]: Instructions for updating:
[worker-1]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-1]: INFO:tensorflow:Start training at 0
[worker-1]: I0328 06:07:29.130674 281473374647168 gce_failure_handler_test.py:194] Start training at 0
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:29.238605 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:29.238501 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:29.360640 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:29.361859 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:29.750185 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:29.780080 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:29.797129 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:29.760217 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:29.918205 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:29.920791 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:29.920922 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:29.931176 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:30.090211 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:30.090113 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:30.079892 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4,
implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:30.111808 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:30.244517 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:30.260244 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:30.259872 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:30.268043 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = 
CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9d080> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-1]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9e340> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-1]: W0328 06:07:30.391120 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9e340> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. 
For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-3]: W0328 06:07:30.385377 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9d080> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-0]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9e340> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-0]: W0328 06:07:30.396381 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9e340> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. 
For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:30.404433 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: WARNING:tensorflow:5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9f380> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-2]: W0328 06:07:30.417023 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to .wrapped_fn at 0xfffef4a9f380> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. 
For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:30.422457 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:30.439880 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:30.409586 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xfffef4a9c040> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details. 
[worker-3]: W0328 06:07:30.571962 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xfffef4a9c040> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-3]: INFO:tensorflow:epoch 0 finished
[worker-3]: I0328 06:07:30.572386 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-2]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d00e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: W0328 06:07:30.578529 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d00e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: INFO:tensorflow:epoch 0 finished
[worker-2]: I0328 06:07:30.578908 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-1]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d0c20> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: W0328 06:07:30.583737 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d0c20> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: INFO:tensorflow:epoch 0 finished
[worker-1]: I0328 06:07:30.584105 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-0]: WARNING:tensorflow:6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d0400> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: W0328 06:07:30.586536 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0d0400> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
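The retracing warnings above explain the mechanism: a traced function is cached per input signature, so calling it with a new signature each time (here, five calls, five signatures) pays the tracing cost on every call, while reduce_retracing relaxes the cache key so distinct shapes can share a trace. A minimal plain-Python analogy of that caching behavior (not TensorFlow's actual implementation; `TracedFn`, `trace_count`, and the length-based signature are illustrative assumptions):

```python
# Analogy for tf.function retracing: one cached "trace" per input signature.
# A new signature on every call => a new (expensive) trace on every call.

trace_count = 0

class TracedFn:
    """Caches a compiled 'trace' keyed by input signature (here: input length)."""

    def __init__(self, fn, reduce_retracing=False):
        self.fn = fn
        self.reduce_retracing = reduce_retracing
        self.cache = {}

    def __call__(self, xs):
        global trace_count
        # reduce_retracing analogue: relax the key so all shapes share one trace.
        key = None if self.reduce_retracing else len(xs)
        if key not in self.cache:
            trace_count += 1          # the expensive "tracing" step
            self.cache[key] = self.fn
        return self.cache[key](xs)

square_sum = TracedFn(lambda xs: sum(x * x for x in xs))
for n in range(1, 6):                 # five calls, five different "shapes"
    square_sum(list(range(n)))
strict_traces = trace_count           # retraces on every call, like the warning

trace_count = 0
relaxed = TracedFn(lambda xs: sum(x * x for x in xs), reduce_retracing=True)
for n in range(1, 6):
    relaxed(list(range(n)))
relaxed_traces = trace_count          # a single shared trace
```

In real TensorFlow the fixes named in the warning are to define the `@tf.function` once outside the loop, or pass `reduce_retracing=True` to `tf.function`.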
[worker-0]: INFO:tensorflow:epoch 0 finished
[worker-0]: I0328 06:07:30.586921 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:30.594165 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:30.599956 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:30.620543 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:30.610195 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:30.688982 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:30.690586 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:30.693229 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:30.740325 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:30.849400 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:30.853965 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:30.849398 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:30.870506 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:30.969816 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:30.983258 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:31.005745 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:31.022325 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:31.164374 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:31.169894 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:31.180084 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:31.190463 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:31.433229 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:31.440063 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:31.450071 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:31.459778 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:epoch 1 finished
[worker-3]: I0328 06:07:31.531412 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-0]: INFO:tensorflow:epoch 1 finished
[worker-0]: I0328 06:07:31.554691 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-2]: INFO:tensorflow:epoch 1 finished
[worker-2]: I0328 06:07:31.566396 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-1]: INFO:tensorflow:epoch 1 finished
[worker-1]: I0328 06:07:31.547431 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:31.569231 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:31.579793 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:31.600466 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:31.611493 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:31.721885 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:31.718464 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:31.733695 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:31.729767 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:31.890079 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:31.912171 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:31.900917 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:31.944126 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:32.110579 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:32.130439 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:32.129736 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:32.181920 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:32.238202 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:32.250833 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:32.269822 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:32.307547 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation =
CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:32.429806 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:32.429802 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:32.439943 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:32.440293 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:epoch 2 finished [worker-3]: I0328 06:07:32.548960 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished [worker-0]: INFO:tensorflow:epoch 2 finished [worker-0]: I0328 06:07:32.554394 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished [worker-1]: INFO:tensorflow:epoch 2 finished [worker-1]: I0328 06:07:32.559408 
281473374647168 gce_failure_handler_test.py:192] epoch 2 finished [worker-2]: INFO:tensorflow:epoch 2 finished [worker-2]: I0328 06:07:32.564049 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:32.597952 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:32.597383 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:32.582655 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:32.628277 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:32.784288 
281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:32.790896 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:32.810528 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:32.794324 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:32.903413 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:32.910290 281473374647168 
cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:32.920281 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:33.003628 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:33.118213 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:33.118715 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:33.122988 281473374647168 cross_device_ops.py:1151] Collective 
all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:33.123550 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:33.234926 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:33.239812 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:33.259947 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:33.281019 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, 
num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:33.341933 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:33.345520 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:33.349745 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:33.352896 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:epoch 3 finished [worker-0]: I0328 06:07:33.407193 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-2]: INFO:tensorflow:epoch 3 finished [worker-2]: I0328 06:07:33.407801 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-0]: INFO:tensorflow:Collective 
all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:33.414020 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:33.416679 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:epoch 3 finished [worker-3]: I0328 06:07:33.417971 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-1]: INFO:tensorflow:epoch 3 finished [worker-1]: I0328 06:07:33.426658 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:33.450053 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:33.439154 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 
[worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:33.524682 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:33.524666 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:33.524682 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:33.539818 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:33.649539 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective 
all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:33.659802 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:33.659882 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:33.709547 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:33.834704 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:33.840020 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:33.870812 281473374647168 cross_device_ops.py:1151] 
Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:33.870903 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:33.973789 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:33.979441 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:33.994743 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:34.000065 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 
all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:34.179618 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:34.200217 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:34.220900 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:34.241596 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:epoch 4 finished [worker-3]: I0328 06:07:34.366569 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-3]: INFO:tensorflow:Training finished. [worker-3]: I0328 06:07:34.366848 281473374647168 gce_failure_handler_test.py:244] Training finished. 
[worker-1]: INFO:tensorflow:epoch 4 finished [worker-1]: I0328 06:07:34.369920 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-1]: INFO:tensorflow:Training finished. [worker-1]: I0328 06:07:34.370186 281473374647168 gce_failure_handler_test.py:244] Training finished. [worker-2]: INFO:tensorflow:epoch 4 finished [worker-2]: I0328 06:07:34.378093 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-2]: INFO:tensorflow:Training finished. [worker-2]: I0328 06:07:34.378345 281473374647168 gce_failure_handler_test.py:244] Training finished. [worker-0]: INFO:tensorflow:epoch 4 finished [worker-0]: I0328 06:07:34.385301 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-0]: INFO:tensorflow:Training finished. [worker-0]: I0328 06:07:34.385564 281473374647168 gce_failure_handler_test.py:244] Training finished. INFO:tensorflow:restarting workers I0328 06:07:35.997032 281472867201920 gce_failure_handler_test.py:411] restarting workers INFO:tensorflow:workers restarted I0328 06:07:36.196532 281472867201920 gce_failure_handler_test.py:415] workers restarted [worker-0]: I0328 06:07:36.288207 281473374647168 multi_process_runner.py:840] Subprocess with PID 2774300 (worker, 0) is now being started. [worker-0]: I0328 06:07:36.288560 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22895", "localhost:17953", "localhost:19861", "localhost:16674"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}' [worker-3]: I0328 06:07:36.697437 281473374647168 multi_process_runner.py:840] Subprocess with PID 2774403 (worker, 3) is now being started. [worker-2]: I0328 06:07:36.717154 281473374647168 multi_process_runner.py:840] Subprocess with PID 2774336 (worker, 2) is now being started. 
[worker-3]: I0328 06:07:36.697762 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22895", "localhost:17953", "localhost:19861", "localhost:16674"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}' [worker-0]: 2023-03-28 06:07:36.976993: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:22895 [worker-2]: I0328 06:07:36.717476 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22895", "localhost:17953", "localhost:19861", "localhost:16674"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}' [worker-0]: 2023-03-28 06:07:37.029390: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 11557998636428467379 [worker-0]: 2023-03-28 06:07:37.046325: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-1]: I0328 06:07:37.067646 281473374647168 multi_process_runner.py:840] Subprocess with PID 2774322 (worker, 1) is now being started. [worker-1]: I0328 06:07:37.067939 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22895", "localhost:17953", "localhost:19861", "localhost:16674"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}' [worker-2]: 2023-03-28 06:07:37.187232: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:19861 [worker-0]: 2023-03-28 06:07:37.206664: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:2 has connected to coordination service. 
Incarnation: 9582527091198594613 [worker-3]: 2023-03-28 06:07:37.231756: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:16674 [worker-0]: 2023-03-28 06:07:37.235198: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:3 has connected to coordination service. Incarnation: 16672727682221091852 [worker-3]: 2023-03-28 06:07:37.246146: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-2]: 2023-03-28 06:07:37.266159: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-1]: 2023-03-28 06:07:37.447216: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:17953 [worker-0]: 2023-03-28 06:07:37.476264: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 935734843760036308 [worker-1]: 2023-03-28 06:07:37.496260: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. 
[worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-3]: I0328 06:07:37.501149 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-2]: I0328 06:07:37.505820 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available 
devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: I0328 06:07:37.516887 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-1]: I0328 06:07:37.566792 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0328 06:07:37.610895 281473374647168 
mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0328 06:07:37.611349 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0328 06:07:37.611552 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: I0328 06:07:37.689078 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-0]: I0328 06:07:37.689600 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0328 06:07:37.689812 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: I0328 06:07:37.757049 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-2]: INFO:tensorflow:Check health not enabled. [worker-2]: I0328 06:07:37.758135 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0328 06:07:37.758785 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0328 06:07:37.894324 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0328 06:07:37.894738 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0328 06:07:37.894936 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22895', 'localhost:17953', 'localhost:19861', 'localhost:16674']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: INFO:tensorflow:Start watcher for peer's signal. [worker-0]: I0328 06:07:38.142823 281473374647168 failure_handling.py:634] Start watcher for peer's signal. 
[worker-2]: INFO:tensorflow:Start watcher for peer's signal. [worker-2]: I0328 06:07:38.157271 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-1]: INFO:tensorflow:Start watcher for peer's signal. [worker-1]: I0328 06:07:38.159734 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-0]: INFO:tensorflow:Start polling for termination signal. [worker-0]: I0328 06:07:38.166442 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-3]: INFO:tensorflow:Start watcher for peer's signal. [worker-3]: I0328 06:07:38.172425 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-1]: INFO:tensorflow:Start polling for termination signal. [worker-1]: I0328 06:07:38.187177 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-3]: INFO:tensorflow:Start polling for termination signal. [worker-3]: I0328 06:07:38.188328 281473374647168 failure_handling.py:683] Start polling for termination signal. 
[worker-0]: Exception in thread WorkerTerminationSignalWatcher-0:
[worker-0]: Traceback (most recent call last):
[worker-0]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-0]:     self.run()
[worker-0]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-0]:     self._target(*self._args, **self._kwargs)
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-0]:     if self._termination_watcher_fn():
[worker-0]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-0]:     elif frequent_send and not maintenance_event.is_set():
[worker-0]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]: AttributeError: 'str' object has no attribute 'is_set'
[worker-0]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-0]: I0328 06:07:38.197509 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: Track steps using a tf.Variable saved in checkpoint instead. [worker-0]: W0328 06:07:38.197834 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: Track steps using a tf.Variable saved in checkpoint instead. [worker-0]: INFO:tensorflow:Start training at 0 [worker-0]: I0328 06:07:38.197986 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-2]: INFO:tensorflow:Start polling for termination signal. [worker-2]: I0328 06:07:38.216256 281473374647168 failure_handling.py:683] Start polling for termination signal. 
[worker-1]: Exception in thread WorkerTerminationSignalWatcher-1:
[worker-3]: Exception in thread WorkerTerminationSignalWatcher-3:
[worker-3]: Traceback (most recent call last):
[worker-3]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-1]: Traceback (most recent call last):
[worker-3]:     self.run()
[worker-1]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-3]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-1]:     self.run()
[worker-3]:     self._target(*self._args, **self._kwargs)
[worker-1]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-1]:     self._target(*self._args, **self._kwargs)
[worker-3]:     if self._termination_watcher_fn():
[worker-1]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-3]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]:     if self._termination_watcher_fn():
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-1]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-3]:     elif frequent_send and not maintenance_event.is_set():
[worker-1]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-3]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]:     elif frequent_send and not maintenance_event.is_set():
[worker-3]: AttributeError: 'str' object has no attribute 'is_set'
[worker-1]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-1]: AttributeError: 'str' object has no attribute 'is_set'
[worker-1]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-1]: I0328 06:07:38.230862 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-1]: Instructions for updating:
[worker-1]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-1]: W0328 06:07:38.231200 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: Instructions for updating: [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. [worker-1]: INFO:tensorflow:Start training at 0 [worker-1]: I0328 06:07:38.231354 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-3]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-3]: I0328 06:07:38.233925 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. [worker-3]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-3]: Instructions for updating: [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-3]: W0328 06:07:38.234234 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-3]: Instructions for updating:
[worker-3]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-3]: INFO:tensorflow:Start training at 0
[worker-2]: Exception in thread WorkerTerminationSignalWatcher-2:
[worker-2]: Traceback (most recent call last):
[worker-2]:   File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
[worker-2]:     self.run()
[worker-2]:   File "/usr/lib/python3.11/threading.py", line 975, in run
[worker-2]:     self._target(*self._args, **self._kwargs)
[worker-2]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal
[worker-2]:     if self._termination_watcher_fn():
[worker-2]:        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-2]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce
[worker-2]:     elif frequent_send and not maintenance_event.is_set():
[worker-2]:                                ^^^^^^^^^^^^^^^^^^^^^^^^
[worker-2]: AttributeError: 'str' object has no attribute 'is_set'
[worker-2]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored.
[worker-2]: I0328 06:07:38.239279 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored.
[worker-2]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-2]: Instructions for updating:
[worker-2]: Track steps using a tf.Variable saved in checkpoint instead.
[worker-2]: W0328 06:07:38.239594 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version.
[worker-2]: Instructions for updating:
[worker-2]: Track steps using a tf.Variable saved in checkpoint instead.
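Editor's note: all four workers hit the same `AttributeError: 'str' object has no attribute 'is_set'` inside `mock_termination_watcher_function_gce`: the watcher thread calls `maintenance_event.is_set()`, but this parameterization of the test apparently passed a plain string where a `threading.Event` was expected. A minimal sketch of that mismatch (the simplified function body and argument names are assumptions mirroring the traceback, not the actual test source):

```python
import threading

def mock_termination_watcher_function_gce(maintenance_event, frequent_send=True):
    # Hypothetical simplification of the test's watcher function: the real one
    # is polled from a WorkerTerminationSignalWatcher thread and expects
    # `maintenance_event` to be a threading.Event.
    if frequent_send and not maintenance_event.is_set():
        return True
    return False

# With a real Event the poll works as intended.
event = threading.Event()
assert mock_termination_watcher_function_gce(event) is True

# With a string, .is_set() does not exist, reproducing the logged failure.
try:
    mock_termination_watcher_function_gce("maintenance")
except AttributeError as e:
    assert "is_set" in str(e)
```

Because the exception occurs on a daemon watcher thread, the workers continue past it ("PreemptionCheckpointHandler initialized or restored"), so the failure surfaces only in the log rather than aborting the test immediately.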
[worker-3]: I0328 06:07:38.234386 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-2]: INFO:tensorflow:Start training at 0 [worker-2]: I0328 06:07:38.239748 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:38.387678 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:38.537552 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:38.645541 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:38.708088 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 
[worker-2]: I0328 06:07:38.873697 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:38.879537 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:38.889410 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:38.899802 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:39.365568 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective 
all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:39.421332 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:39.400920 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:39.377421 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:39.619320 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:39.619349 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:39.609709 281473374647168 cross_device_ops.py:1151] 
Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:39.639222 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:39.753231 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:39.771330 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:39.770279 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:39.789255 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 
all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: WARNING:tensorflow:5 out of the last 5 calls to <function ...<locals>.wrapped_fn at 0xfffef4a99580> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: WARNING:tensorflow:5 out of the last 5 calls to <function ...<locals>.wrapped_fn at 0xfffef4a9db20> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: W0328 06:07:39.907300 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to <function ...<locals>.wrapped_fn at 0xfffef4a9db20> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-3]: W0328 06:07:39.900828 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to <function ...<locals>.wrapped_fn at 0xfffef4a99580> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: WARNING:tensorflow:5 out of the last 5 calls to <function ...<locals>.wrapped_fn at 0xfffef4a9ea20> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: WARNING:tensorflow:5 out of the last 5 calls to <function ...<locals>.wrapped_fn at 0xfffef4a9c900> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: W0328 06:07:39.911532 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to <function ...<locals>.wrapped_fn at 0xfffef4a9ea20> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: W0328 06:07:39.911820 281473374647168 polymorphic_function.py:158] 5 out of the last 5 calls to <function ...<locals>.wrapped_fn at 0xfffef4a9c900> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-3]: I0328 06:07:39.931126 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:39.950247 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:39.949156 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:39.969477 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: W0328 06:07:40.051128 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0c8180> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-0]: W0328 06:07:40.051288 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0cc0e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-1]: W0328 06:07:40.051461 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0cc400> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-2]: W0328 06:07:40.055193 281473374647168 polymorphic_function.py:158] 6 out of the last 6 calls to .wrapped_fn at 0xffff9c0cc540> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[worker-3]: I0328 06:07:40.051436 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-0]: I0328 06:07:40.051572 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-1]: I0328 06:07:40.051736 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-2]: I0328 06:07:40.055466 281473374647168 gce_failure_handler_test.py:192] epoch 0 finished
[worker-3]: I0328 06:07:40.057950 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:40.059888 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:40.057947 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:40.070992 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:40.333128 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:40.370114 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:40.367198 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:40.389480 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:40.541732 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:40.559210 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:40.548884 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:40.629923 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:40.709330 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:40.709330 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:40.709709 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:40.719015 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:40.869669 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:40.889228 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:40.869553 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:40.883811 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:41.030955 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:41.037249 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:41.059340 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:41.097118 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:41.231032 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-2]: I0328 06:07:41.235743 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-3]: I0328 06:07:41.239947 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-0]: I0328 06:07:41.244161 281473374647168 gce_failure_handler_test.py:192] epoch 1 finished
[worker-3]: I0328 06:07:41.259239 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:41.249284 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:41.259247 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:41.270640 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:41.379310 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:41.407135 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:41.389689 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:41.449159 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:41.567057 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:41.570139 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:41.591426 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:41.612024 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:42.023840 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:42.043616 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:42.060091 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:42.199655 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:42.610136 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:42.640568 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:42.690687 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:42.760766 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:43.147663 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:43.189580 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:43.300198 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:43.317499 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:43.941288 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-3]: I0328 06:07:43.937090 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-3]: I0328 06:07:43.949607 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:43.970235 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:44.032343 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-1]: I0328 06:07:44.028663 281473374647168 gce_failure_handler_test.py:192] epoch 2 finished
[worker-2]: I0328 06:07:44.049812 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:44.071699 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:44.236691 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:44.229642 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:44.239879 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:44.249491 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:44.489466 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:44.479502 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:44.499918 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:44.499801 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:44.722016 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:44.719426 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:44.739524 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:44.762962 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:44.870334 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:44.872980 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:44.919962 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:44.960761 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-2]: I0328 06:07:45.113682 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-1]: I0328 06:07:45.113287 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: I0328 06:07:45.135631 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-0]: I0328 06:07:45.126731 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1
[worker-3]: INFO:tensorflow:epoch 3 finished
[worker-1]: I0328 06:07:45.251866 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished
[worker-3]: I0328 06:07:45.246584 281473374647168
gce_failure_handler_test.py:192] epoch 3 finished [worker-2]: INFO:tensorflow:epoch 3 finished [worker-2]: I0328 06:07:45.256289 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-0]: INFO:tensorflow:epoch 3 finished [worker-0]: I0328 06:07:45.276457 281473374647168 gce_failure_handler_test.py:192] epoch 3 finished [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:45.283662 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:45.327844 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:45.279747 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:45.351942 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, 
group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:45.469143 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:45.458731 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:45.469158 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:45.489609 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:45.594524 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = 
CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:45.609218 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:45.611534 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:45.620891 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:45.725300 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:45.737006 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, 
num_packs = 1 [worker-2]: I0328 06:07:45.739127 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:45.760075 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:45.893206 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:45.890500 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:45.903573 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 
06:07:45.903107 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: I0328 06:07:45.982693 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-0]: I0328 06:07:45.993633 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: INFO:tensorflow:Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-2]: I0328 06:07:46.023029 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-1]: I0328 06:07:46.020116 281473374647168 cross_device_ops.py:1151] Collective all_reduce tensors: 1 all_reduces, num_devices = 1, group_size = 4, implementation = CommunicationImplementation.AUTO, num_packs = 1 [worker-3]: INFO:tensorflow:epoch 4 finished [worker-3]: I0328 06:07:46.674345 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-3]: INFO:tensorflow:Training finished. 
[worker-3]: I0328 06:07:46.674605 281473374647168 gce_failure_handler_test.py:244] Training finished. [worker-0]: INFO:tensorflow:epoch 4 finished [worker-0]: I0328 06:07:46.676572 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-0]: INFO:tensorflow:Training finished. [worker-0]: I0328 06:07:46.676813 281473374647168 gce_failure_handler_test.py:244] Training finished. [worker-2]: INFO:tensorflow:epoch 4 finished [worker-2]: I0328 06:07:46.677670 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-2]: INFO:tensorflow:Training finished. [worker-2]: I0328 06:07:46.677897 281473374647168 gce_failure_handler_test.py:244] Training finished. [worker-1]: INFO:tensorflow:epoch 4 finished [worker-1]: I0328 06:07:46.689808 281473374647168 gce_failure_handler_test.py:192] epoch 4 finished [worker-1]: INFO:tensorflow:Training finished. [worker-1]: I0328 06:07:46.690628 281473374647168 gce_failure_handler_test.py:244] Training finished. INFO:tensorflow:Termination notice available. I0328 06:07:46.773259 281462989058528 gce_failure_handler_test.py:142] Termination notice available. 
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.11/logging/__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 687, in format
    record.message = record.getMessage()
                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 377, in getMessage
    msg = msg % self.args
          ~~~~^~~~~~~~~~~
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.11/threading.py", line 995, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 696, in _poll_termination_signal
    self._maybe_set_received_own_sigterm()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 701, in _maybe_set_received_own_sigterm
    logging.info('Received termination notice.',
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/platform/tf_logging.py", line 198, in info
    get_logger().info(msg, *args, **kwargs)
Message: 'Received termination notice.'
Arguments: ('single_worker',)
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib/python3.11/logging/__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/logging/__init__.py", line 1025, in format
    return prefix + super(PythonFormatter, self).format(record)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 687, in format
    record.message = record.getMessage()
                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/__init__.py", line 377, in getMessage
    msg = msg % self.args
          ~~~~^~~~~~~~~~~
TypeError: not all arguments converted during string formatting
Call stack:
  File "/usr/lib/python3.11/threading.py", line 995, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 696, in _poll_termination_signal
    self._maybe_set_received_own_sigterm()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 701, in _maybe_set_received_own_sigterm
    logging.info('Received termination notice.',
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/platform/tf_logging.py", line 198, in info
    get_logger().info(msg, *args, **kwargs)
  File "/usr/lib/python3.11/logging/__init__.py", line 1489, in info
    self._log(INFO, msg, args, **kwargs)
  File "/usr/lib/python3.11/logging/__init__.py", line 1634, in _log
    self.handle(record)
  File "/usr/lib/python3.11/logging/__init__.py", line 1644, in handle
    self.callHandlers(record)
  File "/usr/lib/python3.11/logging/__init__.py", line 1706, in callHandlers
    hdlr.handle(record)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/logging/__init__.py", line 988, in handle
    return self._current_handler.handle(record)
  File "/usr/lib/python3.11/logging/__init__.py", line 978, in handle
    self.emit(record)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/logging/__init__.py", line 925, in emit
    super(PythonHandler, self).emit(record)
Message: 'Received termination notice.'
Arguments: ('single_worker',)
Exception ignored in:
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 775, in __del__
    self._stop_poll_termination_signal_thread()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 734, in _stop_poll_termination_signal_thread
    self._poll_termination_signal_thread.join()
  File "/usr/lib/python3.11/threading.py", line 1109, in join
    raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
I0328 06:07:47.107829 281472867201920 multi_process_runner.py:646] worker-0 exit code: 0
I0328 06:07:47.108101 281472867201920 multi_process_runner.py:646] worker-1 exit code: 0
I0328 06:07:47.108215 281472867201920 multi_process_runner.py:646] worker-2 exit code: 0
I0328 06:07:47.108319 281472867201920 multi_process_runner.py:646] worker-3 exit code: 0
I0328 06:07:47.128489 281472867201920 multi_process_runner.py:662] Joining log reading threads.
I0328 06:07:47.128828 281472867201920 multi_process_runner.py:665] Joined log reading threads.
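The "--- Logging error ---" entries above come from a %-style logging call that was given a positional argument ("Arguments: ('single_worker',)") while its message, 'Received termination notice.', contains no % placeholder. A minimal standalone sketch (not the TensorFlow code, just an illustration of the same mismatch) reproduces the TypeError at the point the log shows, `LogRecord.getMessage`:

```python
import logging

# The message has no %s placeholder, but one extra argument is supplied,
# mirroring logging.info('Received termination notice.', 'single_worker').
record = logging.LogRecord(
    name="repro",
    level=logging.INFO,
    pathname="failure_handling.py",  # illustrative values only
    lineno=701,
    msg="Received termination notice.",
    args=("single_worker",),
    exc_info=None,
)

try:
    record.getMessage()  # internally: msg = msg % self.args
except TypeError as err:
    print(err)  # -> not all arguments converted during string formatting

# A placeholder for every argument avoids the formatting error:
logging.getLogger("repro").info(
    "Received termination notice for %s.", "single_worker")
```

Note the error surfaces only when a handler formats the record, which is why it appears as a non-fatal "Logging error" rather than failing the test.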
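The "RuntimeError: cannot join current thread" above is raised because cleanup (`__del__` -> `_stop_poll_termination_signal_thread`) ends up running on the polling thread itself and then tries to `join()` that same thread. A minimal sketch of this failure mode, with hypothetical names (not the `failure_handling.py` implementation):

```python
import threading


class Poller:
    """Hypothetical stand-in for an object owning a background thread."""

    def __init__(self):
        self._thread = threading.Thread(target=self._run)

    def _run(self):
        # Cleanup invoked from the worker thread itself: join() on the
        # currently running thread always raises RuntimeError.
        try:
            self.stop()
        except RuntimeError as err:
            print(err)  # -> cannot join current thread

    def stop(self):
        self._thread.join()


p = Poller()
p._thread.start()
p._thread.join()  # joining from the main thread is fine
```

The usual remedy is to have the cleanup path check `threading.current_thread()` and skip the `join()` when it would target itself.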
I0328 06:07:47.318811 281472867201920 test_util.py:2462] time(__main__.GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_True_graceperiod_0_inputarg_manager_strategyoption_MWMSmultiworker): 20.36s
[ OK ] GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_True_graceperiod_0_inputarg_manager_strategyoption_MWMSmultiworker
[ RUN ] GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_True_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker
I0328 06:07:47.320498 281472867201920 test_util.py:3794] Using local port 22633
I0328 06:07:47.320870 281472867201920 test_util.py:3794] Using local port 17942
I0328 06:07:47.321207 281472867201920 test_util.py:3794] Using local port 24068
I0328 06:07:47.321530 281472867201920 test_util.py:3794] Using local port 18697
I0328 06:07:48.197944 281472867201920 gce_failure_handler_test.py:405] Cluster starting.
[worker-1]: I0328 06:07:48.273371 281473374647168 multi_process_runner.py:840] Subprocess with PID 2817811 (worker, 1) is now being started.
[worker-1]: I0328 06:07:48.273678 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22633", "localhost:17942", "localhost:24068", "localhost:18697"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}'
[worker-3]: I0328 06:07:48.277043 281473374647168 multi_process_runner.py:840] Subprocess with PID 2817879 (worker, 3) is now being started.
[worker-3]: I0328 06:07:48.277334 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22633", "localhost:17942", "localhost:24068", "localhost:18697"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}'
[worker-0]: I0328 06:07:48.294769 281473374647168 multi_process_runner.py:840] Subprocess with PID 2817785 (worker, 0) is now being started.
[worker-3]: E0328 06:07:48.308191264 2817879 server_chttp2.cc:40] {"created":"@1679983668.308053830","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1679983668.308048774","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":341,"referenced_errors":[{"created":"@1679983668.308020166","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1679983668.308009395","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1679983668.308047539","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1679983668.308039818","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]}
[worker-3]: 2023-03-28 06:07:48.308324: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:601] UNKNOWN: Could not start gRPC server
[worker-3]: 2023-03-28 06:07:48.308993: E tensorflow/core/common_runtime/eager/context_distributed_manager.cc:699] Could not start gRPC server
[worker-0]: I0328 06:07:48.295114 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22633", "localhost:17942", "localhost:24068", "localhost:18697"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}'
[worker-1]: 2023-03-28 06:07:48.312445: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:17942
[worker-2]: I0328 06:07:48.314824 281473374647168 multi_process_runner.py:840] Subprocess with PID 2817816 (worker, 2) is now being started.
[worker-2]: I0328 06:07:48.315139 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22633", "localhost:17942", "localhost:24068", "localhost:18697"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}'
[worker-3]: Process _Process-37:
[worker-3]: Traceback (most recent call last):
[worker-3]:   File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
[worker-3]:     self.run()
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv
[worker-3]:     return self._actual_run()
[worker-3]:            ^^^^^^^^^^^^^^^^^^
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl
[worker-3]:     app.run(lambda _: self._run_impl())
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 312, in run
[worker-3]:     _run_main(main, args)
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 258, in _run_main
[worker-3]:     sys.exit(main(argv))
[worker-3]:     ^^^^^^^^^^
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in <lambda>
[worker-3]:     app.run(lambda _: self._run_impl())
[worker-3]:     ^^^^^^^^^^^^^^^^
[worker-3]:   File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
[worker-3]:     self._target(*self._args, **self._kwargs)
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__
[worker-3]:     six.reraise(*info.exc_info)
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/six_archive/six.py", line 719, in reraise
[worker-3]:     raise value
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained
[worker-3]:     return_value = fn(*args, **kwargs)
[worker-3]:                    ^^^^^^^^^^^^^^^^^^^
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 134, in worker_fn
[worker-3]:     strategy = collective_all_reduce_strategy.CollectiveAllReduceStrategy()
[worker-3]:                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 188, in __init__
[worker-3]:     CollectiveAllReduceExtended(
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 340, in __init__
[worker-3]:     self._initialize_strategy(self._cluster_resolver, devices=devices)
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 359, in _initialize_strategy
[worker-3]:     self._initialize_multi_worker(cluster_resolver)
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 531, in _initialize_multi_worker
[worker-3]:     context.context().ensure_initialized()
[worker-3]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 595, in ensure_initialized
[worker-3]:     pywrap_tfe.TFE_EnableCollectiveOps(context_handle, server_def_str)
[worker-3]: tensorflow.python.framework.errors_impl.UnknownError: Could not start gRPC server
[worker-2]: 2023-03-28 06:07:48.349263: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:24068
[worker-0]: 2023-03-28 06:07:48.351550: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:22633
[worker-0]: 2023-03-28 06:07:48.357378: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:2 has connected to coordination service. Incarnation: 10419139551169348552
[worker-2]: 2023-03-28 06:07:48.357915: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-0]: 2023-03-28 06:07:48.360996: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 13429356680451415295
[worker-0]: 2023-03-28 06:07:48.361161: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
[worker-0]: 2023-03-28 06:07:48.517596: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 1229928011653023296
[worker-1]: 2023-03-28 06:07:48.518278: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected.
I0328 06:08:18.476449 281472867201920 gce_failure_handler_test.py:411] restarting workers
I0328 06:08:18.596443 281472867201920 gce_failure_handler_test.py:415] workers restarted
[worker-1]: I0328 06:08:18.829638 281473374647168 multi_process_runner.py:840] Subprocess with PID 2959666 (worker, 1) is now being started.
[worker-0]: I0328 06:08:18.831491 281473374647168 multi_process_runner.py:840] Subprocess with PID 2959662 (worker, 0) is now being started.
[worker-2]: I0328 06:08:18.841459 281473374647168 multi_process_runner.py:840] Subprocess with PID 2959670 (worker, 2) is now being started.
[worker-0]: I0328 06:08:18.831776 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22633", "localhost:17942", "localhost:24068", "localhost:18697"]}, "task": {"type": "worker", "index": 0}, "rpc_layer": "grpc"}'
[worker-1]: I0328 06:08:18.829954 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22633", "localhost:17942", "localhost:24068", "localhost:18697"]}, "task": {"type": "worker", "index": 1}, "rpc_layer": "grpc"}'
[worker-2]: I0328 06:08:18.841750 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22633", "localhost:17942", "localhost:24068", "localhost:18697"]}, "task": {"type": "worker", "index": 2}, "rpc_layer": "grpc"}'
[worker-3]: I0328 06:08:18.845572 281473374647168 multi_process_runner.py:840] Subprocess with PID 2959676 (worker, 3) is now being started.
[worker-3]: I0328 06:08:18.845858 281473374647168 multi_process_runner.py:842] TF_CONFIG: '{"cluster": {"worker": ["localhost:22633", "localhost:17942", "localhost:24068", "localhost:18697"]}, "task": {"type": "worker", "index": 3}, "rpc_layer": "grpc"}'
[worker-1]: E0328 06:08:18.871915564 2959666 server_chttp2.cc:40] {"created":"@1679983698.871792401","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1679983698.871788611","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":341,"referenced_errors":[{"created":"@1679983698.871762613","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1679983698.871752597","description":"Address already in
use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1679983698.871788011","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1679983698.871780215","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]} [worker-2]: E0328 06:08:18.871761783 2959670 server_chttp2.cc:40] {"created":"@1679983698.871629704","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1679983698.871624699","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":341,"referenced_errors":[{"created":"@1679983698.871597496","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1679983698.871586175","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1679983698.871624104","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1679983698.871615998","description":"Address already in 
use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]} [worker-1]: 2023-03-28 06:08:18.872007: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:601] UNKNOWN: Could not start gRPC server [worker-1]: 2023-03-28 06:08:18.872438: E tensorflow/core/common_runtime/eager/context_distributed_manager.cc:699] Could not start gRPC server [worker-2]: 2023-03-28 06:08:18.871887: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:601] UNKNOWN: Could not start gRPC server [worker-2]: 2023-03-28 06:08:18.876059: E tensorflow/core/common_runtime/eager/context_distributed_manager.cc:699] Could not start gRPC server [worker-0]: E0328 06:08:18.877158058 2959662 server_chttp2.cc:40] {"created":"@1679983698.877030039","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1679983698.877026099","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_posix.cc","file_line":341,"referenced_errors":[{"created":"@1679983698.876998626","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1679983698.876988295","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1679983698.877025439","description":"Unable to configure socket","fd":9,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":215,"referenced_errors":[{"created":"@1679983698.877016963","description":"Address already in 
use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]} [worker-0]: 2023-03-28 06:08:18.877252: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:601] UNKNOWN: Could not start gRPC server [worker-2]: Process _Process-40: [worker-2]: Traceback (most recent call last): [worker-0]: 2023-03-28 06:08:18.896568: E tensorflow/core/common_runtime/eager/context_distributed_manager.cc:699] Could not start gRPC server [worker-2]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-1]: Process _Process-39: [worker-1]: Traceback (most recent call last): [worker-2]: self.run() [worker-1]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-1]: self.run() [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-2]: return self._actual_run() [worker-1]: return self._actual_run() [worker-2]: ^^^^^^^^^^^^^^^^^^ [worker-1]: ^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-2]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-2]: app.run(lambda _: self._run_impl()) [worker-1]: app.run(lambda _: self._run_impl()) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 312, in run [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 312, in run [worker-1]: _run_main(main, args) [worker-2]: _run_main(main, args) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-2]: sys.exit(main(argv)) [worker-1]: sys.exit(main(argv)) [worker-1]: ^^^^^^^^^^ [worker-2]: ^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-2]: 
app.run(lambda _: self._run_impl()) [worker-2]: ^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-2]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-2]: self._target(*self._args, **self._kwargs) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-1]: app.run(lambda _: self._run_impl()) [worker-2]: six.reraise(*info.exc_info) [worker-1]: ^^^^^^^^^^^^^^^^ [worker-1]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/six_archive/six.py", line 719, in reraise [worker-1]: self._target(*self._args, **self._kwargs) [worker-2]: raise value [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-1]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-1]: six.reraise(*info.exc_info) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/six_archive/six.py", line 719, in reraise [worker-1]: raise value [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-2]: return_value = fn(*args, **kwargs) [worker-2]: ^^^^^^^^^^^^^^^^^^^ [worker-1]: return_value = fn(*args, **kwargs) [worker-1]: ^^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 134, in worker_fn [worker-1]: strategy = collective_all_reduce_strategy.CollectiveAllReduceStrategy() [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 134, in worker_fn [worker-1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-2]: strategy = 
collective_all_reduce_strategy.CollectiveAllReduceStrategy() [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 188, in __init__ [worker-1]: CollectiveAllReduceExtended( [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 340, in __init__ [worker-1]: self._initialize_strategy(self._cluster_resolver, devices=devices) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 359, in _initialize_strategy [worker-2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-1]: self._initialize_multi_worker(cluster_resolver) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 188, in __init__ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 531, in _initialize_multi_worker 
[worker-2]: CollectiveAllReduceExtended( [worker-1]: context.context().ensure_initialized() [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 340, in __init__ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 595, in ensure_initialized [worker-2]: self._initialize_strategy(self._cluster_resolver, devices=devices) [worker-1]: pywrap_tfe.TFE_EnableCollectiveOps(context_handle, server_def_str) [worker-1]: tensorflow.python.framework.errors_impl.UnknownError: Could not start gRPC server [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 359, in _initialize_strategy [worker-2]: self._initialize_multi_worker(cluster_resolver) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 531, in _initialize_multi_worker [worker-2]: context.context().ensure_initialized() [worker-2]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 595, in ensure_initialized [worker-2]: pywrap_tfe.TFE_EnableCollectiveOps(context_handle, server_def_str) [worker-2]: tensorflow.python.framework.errors_impl.UnknownError: Could not start gRPC server [worker-3]: 2023-03-28 06:08:18.917862: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:450] Started server with target: grpc://localhost:18697 [worker-3]: 2023-03-28 06:08:18.922840: I tensorflow/tsl/distributed_runtime/coordination/coordination_service_agent.cc:298] Coordination agent has successfully connected. [worker-1]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-3]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-3]: I0328 06:08:18.927129 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', 
'/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1'] [worker-2]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-2]: I0328 06:08:18.926712 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: 2023-03-28 06:08:18.920518: I tensorflow/tsl/distributed_runtime/coordination/coordination_service.cc:525] /job:worker/replica:0/task:3 has connected to coordination service. 
Incarnation: 1145890003656025686 [worker-0]: INFO:tensorflow:Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-1]: I0328 06:08:18.926862 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: I0328 06:08:18.926718 281473374647168 collective_all_reduce_strategy.py:532] Enabled multi-worker collective ops with available devices: ['/job:worker/replica:0/task:0/device:CPU:0', '/job:worker/replica:0/task:0/device:CPU:1', '/job:worker/replica:0/task:1/device:CPU:0', '/job:worker/replica:0/task:1/device:CPU:1', '/job:worker/replica:0/task:2/device:CPU:0', '/job:worker/replica:0/task:2/device:CPU:1', '/job:worker/replica:0/task:3/device:CPU:0', '/job:worker/replica:0/task:3/device:CPU:1'] [worker-0]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-2]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:2/device:CPU:0',) [worker-0]: I0328 06:08:19.009466 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:0/device:CPU:0',) [worker-2]: I0328 06:08:19.009462 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices 
('/job:worker/task:2/device:CPU:0',) [worker-0]: INFO:tensorflow:Check health not enabled. [worker-2]: INFO:tensorflow:Check health not enabled. [worker-0]: I0328 06:08:19.009909 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-2]: I0328 06:08:19.009918 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-0]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22633', 'localhost:17942', 'localhost:24068', 'localhost:18697']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22633', 'localhost:17942', 'localhost:24068', 'localhost:18697']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: I0328 06:08:19.010116 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22633', 'localhost:17942', 'localhost:24068', 'localhost:18697']}, task_type = 'worker', task_id = 0, num_workers = 4, local_devices = ('/job:worker/task:0/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-2]: I0328 06:08:19.010115 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22633', 'localhost:17942', 'localhost:24068', 'localhost:18697']}, task_type = 'worker', task_id = 2, num_workers = 4, local_devices = ('/job:worker/task:2/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:3/device:CPU:0',) [worker-3]: I0328 06:08:19.021421 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices 
('/job:worker/task:3/device:CPU:0',) [worker-3]: INFO:tensorflow:Check health not enabled. [worker-3]: I0328 06:08:19.021855 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. [worker-3]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22633', 'localhost:17942', 'localhost:24068', 'localhost:18697']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: I0328 06:08:19.022057 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22633', 'localhost:17942', 'localhost:24068', 'localhost:18697']}, task_type = 'worker', task_id = 3, num_workers = 4, local_devices = ('/job:worker/task:3/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-0]: Process _Process-38: [worker-0]: Traceback (most recent call last): [worker-0]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-0]: self.run() [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-0]: return self._actual_run() [worker-0]: ^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-0]: app.run(lambda _: self._run_impl()) [worker-0]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 312, in run [worker-0]: _run_main(main, args) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-0]: sys.exit(main(argv)) [worker-0]: ^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-0]: app.run(lambda _: self._run_impl()) [worker-0]: ^^^^^^^^^^^^^^^^ [worker-0]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-0]: self._target(*self._args, **self._kwargs) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-0]: six.reraise(*info.exc_info) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/six_archive/six.py", line 719, in reraise [worker-0]: raise value [worker-0]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-0]: return_value = fn(*args, **kwargs) [worker-0]: ^^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 134, in worker_fn [worker-0]: strategy = collective_all_reduce_strategy.CollectiveAllReduceStrategy() [worker-0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 188, in __init__ [worker-0]: CollectiveAllReduceExtended( [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 340, in __init__ [worker-0]: self._initialize_strategy(self._cluster_resolver, devices=devices) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 359, in 
_initialize_strategy [worker-0]: self._initialize_multi_worker(cluster_resolver) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 531, in _initialize_multi_worker [worker-0]: context.context().ensure_initialized() [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/eager/context.py", line 595, in ensure_initialized [worker-0]: pywrap_tfe.TFE_EnableCollectiveOps(context_handle, server_def_str) [worker-0]: tensorflow.python.framework.errors_impl.UnknownError: Could not start gRPC server [worker-1]: INFO:tensorflow:Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: I0328 06:08:19.092420 281473374647168 mirrored_strategy.py:420] Using MirroredStrategy with devices ('/job:worker/task:1/device:CPU:0',) [worker-1]: INFO:tensorflow:Check health not enabled. [worker-1]: I0328 06:08:19.093569 281473374647168 collective_all_reduce_strategy.py:575] Check health not enabled. 
[worker-1]: INFO:tensorflow:MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22633', 'localhost:17942', 'localhost:24068', 'localhost:18697']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-1]: I0328 06:08:19.094252 281473374647168 collective_all_reduce_strategy.py:577] MultiWorkerMirroredStrategy with cluster_spec = {'worker': ['localhost:22633', 'localhost:17942', 'localhost:24068', 'localhost:18697']}, task_type = 'worker', task_id = 1, num_workers = 4, local_devices = ('/job:worker/task:1/device:CPU:0',), communication = CommunicationImplementation.AUTO [worker-3]: INFO:tensorflow:Start watcher for peer's signal. [worker-0]: INFO:tensorflow:Start watcher for peer's signal. [worker-3]: I0328 06:08:19.221878 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-3]: INFO:tensorflow:Start polling for termination signal. [worker-3]: I0328 06:08:19.224015 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-1]: INFO:tensorflow:Start watcher for peer's signal. [worker-2]: INFO:tensorflow:Start watcher for peer's signal. [worker-1]: I0328 06:08:19.234539 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-2]: I0328 06:08:19.234549 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-0]: I0328 06:08:19.222311 281473374647168 failure_handling.py:634] Start watcher for peer's signal. [worker-0]: INFO:tensorflow:Start polling for termination signal. [worker-0]: I0328 06:08:19.237363 281473374647168 failure_handling.py:683] Start polling for termination signal. 
[worker-0]: Exception in thread WorkerTerminationSignalWatcher-0: [worker-0]: Traceback (most recent call last): [worker-0]: File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner [worker-0]: self.run() [worker-0]: File "/usr/lib/python3.11/threading.py", line 975, in run [worker-0]: self._target(*self._args, **self._kwargs) [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal [worker-3]: Exception in thread WorkerTerminationSignalWatcher-3: [worker-0]: if self._termination_watcher_fn(): [worker-0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-0]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce [worker-0]: elif frequent_send and not maintenance_event.is_set(): [worker-0]: ^^^^^^^^^^^^^^^^^^^^^^^^ [worker-0]: AttributeError: 'str' object has no attribute 'is_set' [worker-3]: Traceback (most recent call last): [worker-3]: File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner [worker-3]: self.run() [worker-3]: File "/usr/lib/python3.11/threading.py", line 975, in run [worker-3]: self._target(*self._args, **self._kwargs) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in 
_poll_termination_signal [worker-3]: if self._termination_watcher_fn(): [worker-3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce [worker-3]: elif frequent_send and not maintenance_event.is_set(): [worker-3]: ^^^^^^^^^^^^^^^^^^^^^^^^ [worker-3]: AttributeError: 'str' object has no attribute 'is_set' [worker-3]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-3]: I0328 06:08:19.240837 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. [worker-3]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-3]: Instructions for updating: [worker-0]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. [worker-0]: I0328 06:08:19.242032 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. 
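The repeated `AttributeError` above can be annotated as follows. This is a hedged illustration, not the actual test code: the mock termination watcher calls `maintenance_event.is_set()`, which assumes a `threading.Event`, but this parameterized run apparently passed a plain string, so the attribute lookup fails exactly as in the tracebacks.

```python
import threading

def poll(maintenance_event, frequent_send=True):
    # Mirrors the failing branch in mock_termination_watcher_function_gce:
    # .is_set() only exists on threading.Event, not on str.
    if frequent_send and not maintenance_event.is_set():
        return "signal"
    return "no-signal"

print(poll(threading.Event()))   # an unset Event -> the "signal" branch

try:
    poll("maintenance")          # a str has no is_set() method
except AttributeError as e:
    print(e)                     # 'str' object has no attribute 'is_set'
```

Note that the watcher thread dies on this exception, so no termination signal is ever delivered to the workers.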
[worker-3]: W0328 06:08:19.241146 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-0]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: Track steps using a tf.Variable saved in checkpoint instead. [worker-0]: W0328 06:08:19.242311 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-0]: Instructions for updating: [worker-0]: Track steps using a tf.Variable saved in checkpoint instead. [worker-3]: Instructions for updating: [worker-0]: INFO:tensorflow:Start training at 0 [worker-3]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-3]: INFO:tensorflow:Start training at 0 [worker-3]: I0328 06:08:19.241297 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-3]: INFO:tensorflow:['workertemp_2', 'workertemp_1', 'workertemp_3'] [worker-3]: I0328 06:08:19.245995 281473374647168 gce_failure_handler_test.py:203] ['workertemp_2', 'workertemp_1', 'workertemp_3'] [worker-0]: I0328 06:08:19.242456 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-1]: INFO:tensorflow:Start polling for termination signal. [worker-2]: INFO:tensorflow:Start polling for termination signal. [worker-1]: I0328 06:08:19.266246 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-2]: I0328 06:08:19.266245 281473374647168 failure_handling.py:683] Start polling for termination signal. [worker-2]: Exception in thread WorkerTerminationSignalWatcher-2: [worker-1]: Exception in thread WorkerTerminationSignalWatcher-1: [worker-2]: Traceback (most recent call last): [worker-1]: Traceback (most recent call last): [worker-2]: File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner [worker-0]: INFO:tensorflow:['workertemp_2', 'workertemp_1', 'workertemp_3'] [worker-1]: File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner [worker-0]: I0328 06:08:19.292132 281473374647168 gce_failure_handler_test.py:203] ['workertemp_2', 'workertemp_1', 'workertemp_3'] [worker-2]: self.run() [worker-1]: self.run() [worker-2]: File "/usr/lib/python3.11/threading.py", line 975, in run [worker-1]: File "/usr/lib/python3.11/threading.py", line 975, in run [worker-1]: self._target(*self._args, **self._kwargs) [worker-2]: self._target(*self._args, **self._kwargs) [worker-2]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/failure_handling.py", line 692, in _poll_termination_signal [worker-1]: if self._termination_watcher_fn(): [worker-2]: if self._termination_watcher_fn(): [worker-2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 145, in mock_termination_watcher_function_gce [worker-1]: elif frequent_send and not maintenance_event.is_set(): [worker-2]: elif frequent_send and not maintenance_event.is_set(): [worker-1]: ^^^^^^^^^^^^^^^^^^^^^^^^ [worker-2]: ^^^^^^^^^^^^^^^^^^^^^^^^ [worker-2]: AttributeError: 'str' object has no attribute 'is_set' [worker-1]: AttributeError: 'str' object has no attribute 'is_set' [worker-2]: 
INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-1]: INFO:tensorflow:PreemptionCheckpointHandler initialized or restored. [worker-2]: I0328 06:08:19.274922 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. [worker-1]: I0328 06:08:19.288087 281473374647168 failure_handling.py:538] PreemptionCheckpointHandler initialized or restored. [worker-2]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: WARNING:tensorflow:From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: Instructions for updating: [worker-1]: Instructions for updating: [worker-2]: Track steps using a tf.Variable saved in checkpoint instead. [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-2]: W0328 06:08:19.275229 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-1]: W0328 06:08:19.288407 281473374647168 deprecation.py:364] From /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py:195: PreemptionCheckpointHandler.total_run_calls (from tensorflow.python.distribute.failure_handling.failure_handling) is deprecated and will be removed in a future version. [worker-2]: Instructions for updating: [worker-1]: Instructions for updating: [worker-2]: Track steps using a tf.Variable saved in checkpoint instead. [worker-1]: Track steps using a tf.Variable saved in checkpoint instead. 
[worker-2]: INFO:tensorflow:Start training at 0 [worker-1]: INFO:tensorflow:Start training at 0 [worker-2]: I0328 06:08:19.275386 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-1]: I0328 06:08:19.288561 281473374647168 gce_failure_handler_test.py:194] Start training at 0 [worker-2]: INFO:tensorflow:['workertemp_2', 'workertemp_1', 'workertemp_3'] [worker-2]: I0328 06:08:19.289302 281473374647168 gce_failure_handler_test.py:203] ['workertemp_2', 'workertemp_1', 'workertemp_3'] [worker-1]: INFO:tensorflow:['workertemp_2', 'workertemp_1', 'workertemp_3'] [worker-1]: I0328 06:08:19.321839 281473374647168 gce_failure_handler_test.py:203] ['workertemp_2', 'workertemp_1', 'workertemp_3'] [worker-3]: Process _Process-41: [worker-3]: Traceback (most recent call last): [worker-3]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-3]: self.run() [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-3]: return self._actual_run() [worker-3]: ^^^^^^^^^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-3]: app.run(lambda _: self._run_impl()) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 312, in run [worker-3]: _run_main(main, args) [worker-3]: 
File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-3]: sys.exit(main(argv)) [worker-3]: ^^^^^^^^^^ [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-3]: app.run(lambda _: self._run_impl()) [worker-3]: ^^^^^^^^^^^^^^^^ [worker-3]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-3]: self._target(*self._args, **self._kwargs) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-3]: six.reraise(*info.exc_info) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/six_archive/six.py", line 719, in reraise [worker-3]: raise value [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-3]: return_value = fn(*args, **kwargs) [worker-3]: ^^^^^^^^^^^^^^^^^^^ [worker-3]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 211, in worker_fn [worker-3]: self.assertNotEmpty(checkpoint_index) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/absltest.py", line 972, in assertNotEmpty [worker-3]: self.fail('{!r} has length of 0.'.format(container), msg) [worker-3]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/absltest.py", line 1814, in fail [worker-3]: return super(TestCase, self).fail(self._formatMessage(prefix, msg)) [worker-3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-3]: File "/usr/lib/python3.11/unittest/case.py", line 703, in fail [worker-3]: raise self.failureException(msg) [worker-3]: AssertionError: [] has length of 0. 
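The `AssertionError: [] has length of 0.` above can be sketched in isolation. This is a hedged reconstruction, not the test's source: with the watcher thread dead, no checkpoints are written, so `worker_fn` finds an empty checkpoint index and `assertNotEmpty` fails. absltest's `assertNotEmpty` reduces to a length check that formats the message seen in the log.

```python
checkpoint_index = []  # what the workers found on disk: no .index files

# Equivalent of absltest's self.assertNotEmpty(checkpoint_index):
if not len(checkpoint_index):
    message = '{!r} has length of 0.'.format(checkpoint_index)
    print(message)  # -> [] has length of 0.
```

All four workers hit this same assertion, which is why every worker exits with code 1 below.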
[worker-2]: Process _Process-36: [worker-2]: Traceback (most recent call last): [worker-2]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-2]: self.run() [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-2]: return self._actual_run() [worker-2]: ^^^^^^^^^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-2]: app.run(lambda _: self._run_impl()) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 312, in run [worker-2]: _run_main(main, args) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-2]: sys.exit(main(argv)) [worker-2]: ^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-2]: app.run(lambda _: self._run_impl()) [worker-2]: ^^^^^^^^^^^^^^^^ [worker-2]: File 
"/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-2]: self._target(*self._args, **self._kwargs) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-1]: Process _Process-35: [worker-2]: six.reraise(*info.exc_info) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/six_archive/six.py", line 719, in reraise [worker-2]: raise value [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-2]: return_value = fn(*args, **kwargs) [worker-2]: ^^^^^^^^^^^^^^^^^^^ [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 211, in worker_fn [worker-2]: self.assertNotEmpty(checkpoint_index) [worker-2]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/absltest.py", line 972, in assertNotEmpty [worker-2]: self.fail('{!r} has length of 0.'.format(container), msg) [worker-2]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/absltest.py", line 1814, in fail [worker-2]: return super(TestCase, self).fail(self._formatMessage(prefix, msg)) [worker-2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-2]: File "/usr/lib/python3.11/unittest/case.py", line 703, in fail [worker-2]: raise self.failureException(msg) [worker-2]: AssertionError: [] has length of 0. [worker-1]: Traceback (most recent call last): [worker-1]: File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap [worker-1]: self.run() [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv [worker-0]: Process _Process-34: [worker-1]: return self._actual_run() [worker-1]: ^^^^^^^^^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl [worker-1]: app.run(lambda _: self._run_impl()) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 312, in run [worker-1]: _run_main(main, args) [worker-1]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 258, in _run_main [worker-1]: sys.exit(main(argv)) [worker-1]: ^^^^^^^^^^ [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in [worker-1]: app.run(lambda _: self._run_impl()) [worker-1]: ^^^^^^^^^^^^^^^^ [worker-1]: File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run [worker-1]: self._target(*self._args, **self._kwargs) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__ [worker-1]: six.reraise(*info.exc_info) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/six_archive/six.py", line 719, in reraise [worker-1]: raise value [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained [worker-1]: return_value = fn(*args, **kwargs) [worker-1]: ^^^^^^^^^^^^^^^^^^^ [worker-1]: File 
"/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 211, in worker_fn [worker-1]: self.assertNotEmpty(checkpoint_index) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/absltest.py", line 972, in assertNotEmpty [worker-1]: self.fail('{!r} has length of 0.'.format(container), msg) [worker-1]: File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/absltest.py", line 1814, in fail [worker-1]: return super(TestCase, self).fail(self._formatMessage(prefix, msg)) [worker-1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [worker-1]: File "/usr/lib/python3.11/unittest/case.py", line 703, in fail [worker-1]: raise self.failureException(msg) [worker-1]: AssertionError: [] has length of 0. 
[worker-0]: Traceback (most recent call last):
[worker-0]:   File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
[worker-0]:     self.run()
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 755, in _run_with_setenv
[worker-0]:     return self._actual_run()
[worker-0]:            ^^^^^^^^^^^^^^^^^^
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in _run_with_absl
[worker-0]:     app.run(lambda _: self._run_impl())
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 312, in run
[worker-0]:     _run_main(main, args)
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/app.py", line 258, in _run_main
[worker-0]:     sys.exit(main(argv))
[worker-0]:              ^^^^^^^^^^
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_lib.py", line 54, in <lambda>
[worker-0]:     app.run(lambda _: self._run_impl())
[worker-0]:                       ^^^^^^^^^^^^^^^^
[worker-0]:   File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
[worker-0]:     self._target(*self._args, **self._kwargs)
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 866, in __call__
[worker-0]:     six.reraise(*info.exc_info)
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/six_archive/six.py", line 719, in reraise
[worker-0]:     raise value
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained
[worker-0]:     return_value = fn(*args, **kwargs)
[worker-0]:                    ^^^^^^^^^^^^^^^^^^^
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 211, in worker_fn
[worker-0]:     self.assertNotEmpty(checkpoint_index)
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/absltest.py", line 972, in assertNotEmpty
[worker-0]:     self.fail('{!r} has length of 0.'.format(container), msg)
[worker-0]:   File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/absltest.py", line 1814, in fail
[worker-0]:     return super(TestCase, self).fail(self._formatMessage(prefix, msg))
[worker-0]:     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker-0]:   File "/usr/lib/python3.11/unittest/case.py", line 703, in fail
[worker-0]:     raise self.failureException(msg)
[worker-0]: AssertionError: [] has length of 0.
I0328 06:08:20.305244 281472867201920 multi_process_runner.py:646] worker-0 exit code: 1
I0328 06:08:20.305476 281472867201920 multi_process_runner.py:646] worker-1 exit code: 1
I0328 06:08:20.305591 281472867201920 multi_process_runner.py:646] worker-2 exit code: 1
I0328 06:08:20.305696 281472867201920 multi_process_runner.py:646] worker-3 exit code: 1
[ FAILED ] GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_True_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker
INFO:tensorflow:time(__main__.GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_True_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker): 33.35s
I0328 06:08:20.667817 281472867201920 test_util.py:2462] time(__main__.GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_True_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker): 33.35s
======================================================================
FAIL: test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_True_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker (__main__.GceFailureHandlingTest)
GceFailureHandlingTest.test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_True_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker
test_multiple_workers_preempted_consecutively_test_apiwrappingtrain_True_graceperiod_7_inputarg_manager_strategyoption_MWMSmultiworker(api_wrapping_train=True, grace_period=7, input_arg='manager', strategy_option='MWMS_multi_worker')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/parameterized.py", line 314, in bound_param_test
    return test_method(self, **testcase_params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360, in decorated
    execute_test_method()
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343, in execute_test_method
    test_method(**kwargs_to_pass)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/combinations.py", line 559, in decorator
    test_method(self, **kwargs)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 417, in test_multiple_workers_preempted_consecutively
    mpr.join(timeout=250)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 649, in join
    self._reraise_if_subprocess_error(process_statuses)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 565, in _reraise_if_subprocess_error
    six.reraise(*process_status.exc_info)
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/six_archive/six.py", line 719, in reraise
    raise value
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/multi_process_runner.py", line 1060, in _run_contained
    return_value = fn(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/org_tensorflow/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.py", line 211, in worker_fn
    self.assertNotEmpty(checkpoint_index)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/absltest.py", line 972, in assertNotEmpty
    self.fail('{!r} has length of 0.'.format(container), msg)
    ^^^^^^^^^^^^^^^^^
  File "/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/distribute/failure_handling/gce_failure_handler_test.runfiles/absl_py/absl/testing/absltest.py", line 1814, in fail
    return super(TestCase, self).fail(self._formatMessage(prefix, msg))
    ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/unittest/case.py", line 703, in fail
    raise self.failureException(msg)
    ^^^^^^^^^^^^^^^^^
AssertionError: [] has length of 0.
----------------------------------------------------------------------
Ran 7 tests in 113.923s

FAILED (failures=1)
================================================================================
==================== Test output for //tensorflow/core/grappler/clusters:single_machine_test:
2023-03-28 05:57:47.647186: I tensorflow/core/util/port.cc:116] Experimental oneDNN custom operations are on. If you experience issues, please turn them off by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
[==========] Running 9 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 9 tests from SingleMachineTest
[ RUN ] SingleMachineTest.ClusterType
2023-03-28 05:57:47.697523: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
[ OK ] SingleMachineTest.ClusterType (8 ms)
[ RUN ] SingleMachineTest.CostModel
2023-03-28 05:57:47.706372: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:47.742682: I tensorflow/core/grappler/clusters/single_machine.cc:348] Cleaning up previous session
2023-03-28 05:57:47.746376: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:47.761273: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:57:47.762588: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:57:47.763241: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:57:47.763296: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:57:47.763501: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
[ OK ] SingleMachineTest.CostModel (58 ms)
[ RUN ] SingleMachineTest.Queue
2023-03-28 05:57:47.765102: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:47.766417: I tensorflow/core/grappler/clusters/single_machine.cc:348] Cleaning up previous session
2023-03-28 05:57:47.769768: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:47.780322: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:57:47.780374: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:57:47.792545: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:57:47.796539: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:57:47.797010: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
[ OK ] SingleMachineTest.Queue (1034 ms)
[ RUN ] SingleMachineTest.MultipleItems
2023-03-28 05:57:55.748414: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:55.750810: I tensorflow/core/grappler/clusters/single_machine.cc:348] Cleaning up previous session
2023-03-28 05:57:55.753651: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:55.791739: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:57:55.791799: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:57:55.792089: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:57:55.792129: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:57:55.792249: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
2023-03-28 05:57:55.792466: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:57:55.792482: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:57:55.792637: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:57:55.792659: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:57:55.792720: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
2023-03-28 05:57:55.794692: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:57:55.794713: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:57:55.794845: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:57:55.794867: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:57:55.794920: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
2023-03-28 05:57:55.795045: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:57:55.795060: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:57:55.795182: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:57:55.795202: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:57:55.795250: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
2023-03-28 05:57:55.795890: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:57:55.795908: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:57:55.796040: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:57:55.796060: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:57:55.796106: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
2023-03-28 05:57:55.796250: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:57:55.796266: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:57:55.796386: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:57:55.796406: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:57:55.796454: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
[ OK ] SingleMachineTest.MultipleItems (49 ms)
[ RUN ] SingleMachineTest.GraphOptimizations
2023-03-28 05:57:55.797673: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:55.799286: I tensorflow/core/grappler/clusters/single_machine.cc:348] Cleaning up previous session
2023-03-28 05:57:55.799667: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:55.800125: I tensorflow/core/grappler/clusters/single_machine.cc:348] Cleaning up previous session
2023-03-28 05:57:55.800711: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:55.811943: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:57:55.811984: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:57:55.812965: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:57:55.813003: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:57:55.813145: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
[ OK ] SingleMachineTest.GraphOptimizations (16 ms)
[ RUN ] SingleMachineTest.TimeOuts
2023-03-28 05:57:55.814455: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:55.815239: I tensorflow/core/grappler/clusters/single_machine.cc:348] Cleaning up previous session
2023-03-28 05:57:55.815765: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:57:55.817834: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:57:55.817875: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:58:00.826202: I tensorflow/core/grappler/clusters/single_machine.cc:348] Cleaning up previous session
2023-03-28 05:58:00.826441: I tensorflow/core/common_runtime/executor.cc:1210] [/job:localhost/replica:0/task:0/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): CANCELLED: Dequeue operation was cancelled
	 [[{{node dequeue}}]]
2023-03-28 05:58:00.826489: W tensorflow/core/kernels/queue_base.cc:285] _1_queue: Skipping cancelled dequeue attempt with queue not closed
2023-03-28 05:58:00.836272: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:58:00.836351: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:58:00.836376: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
2023-03-28 05:58:00.856406: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2023-03-28 05:58:00.858636: I tensorflow/tsl/profiler/lib/profiler_session.cc:104] Profiler session initializing.
2023-03-28 05:58:00.858670: I tensorflow/tsl/profiler/lib/profiler_session.cc:119] Profiler session started.
2023-03-28 05:58:05.859659: I tensorflow/core/common_runtime/executor.cc:1210] [/job:localhost/replica:0/task:0/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): CANCELLED: Dequeue operation was cancelled
	 [[{{node dequeue}}]]
2023-03-28 05:58:05.859744: W tensorflow/core/kernels/queue_base.cc:285] _2_queue: Skipping cancelled dequeue attempt with queue not closed
2023-03-28 05:58:05.866279: I tensorflow/tsl/profiler/lib/profiler_session.cc:70] Profiler session collecting data.
2023-03-28 05:58:05.866365: W tensorflow/core/profiler/convert/xplane_to_step_stats.cc:75] GPU trace was not collected.
2023-03-28 05:58:05.866391: I tensorflow/tsl/profiler/lib/profiler_session.cc:131] Profiler session tear down.
[ OK ] SingleMachineTest.TimeOuts (10099 ms)
[ RUN ] SingleMachineTest.InfiniteLoops
2023-03-28 05:58:05.914017: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
[WARNING] external/com_google_googletest/googletest/src/gtest-death-test.cc:1108:: Death tests use fork(), which is unsafe particularly in a threaded context. For this test, Google Test detected 10 threads. See https://github.com/google/googletest/blob/master/docs/advanced.md#death-tests-and-threads for more explanation and suggested solutions, especially if this is the last message you see before your test times out.
-- Test timed out at 2023-03-28 06:12:45 UTC --
================================================================================
//tensorflow/c:c_api_experimental_test PASSED in 29.8s
//tensorflow/c:c_api_function_test PASSED in 32.6s
//tensorflow/c:c_api_test_cpu PASSED in 37.0s
//tensorflow/c:c_test PASSED in 30.3s
//tensorflow/c:env_test_cpu PASSED in 29.5s
//tensorflow/c:kernels_test_cpu PASSED in 42.8s
//tensorflow/c:ops_test PASSED in 40.5s
//tensorflow/c:while_loop_test PASSED in 34.5s
//tensorflow/c/eager:c_api_cluster_test_cpu PASSED in 39.5s
//tensorflow/c/eager:c_api_remote_function_test_cpu PASSED in 38.2s
//tensorflow/c/eager:c_api_remote_test_cpu PASSED in 38.8s
//tensorflow/c/eager:c_api_test_cpu PASSED in 38.7s
//tensorflow/c/eager:custom_device_test PASSED in 34.6s
//tensorflow/c/eager/parallel_device:parallel_device_lib_test PASSED in 33.0s
//tensorflow/c/eager/parallel_device:parallel_device_remote_test PASSED in 33.5s
//tensorflow/c/eager/parallel_device:parallel_device_test PASSED in 36.6s
//tensorflow/c/experimental/filesystem/plugins/gcs:expiring_lru_cache_test PASSED in 0.2s
//tensorflow/c/experimental/filesystem/plugins/gcs:ram_file_block_cache_test PASSED in 2.3s //tensorflow/c/experimental/grappler:grappler_test PASSED in 29.2s //tensorflow/c/experimental/ops/gen/common:case_format_test PASSED in 0.7s //tensorflow/c/experimental/ops/gen/cpp:cpp_generator_test PASSED in 0.8s //tensorflow/c/experimental/ops/gen/cpp/renderers:renderer_test PASSED in 1.0s //tensorflow/c/experimental/saved_model/core:constant_loading_test PASSED in 21.3s //tensorflow/c/experimental/saved_model/core:object_graph_traversal_test PASSED in 18.5s //tensorflow/c/experimental/saved_model/core:saved_variable_loading_test PASSED in 36.4s //tensorflow/c/experimental/saved_model/core:signature_flattening_test PASSED in 13.5s //tensorflow/c/experimental/saved_model/core:tf_concrete_function_loading_test PASSED in 14.8s //tensorflow/c/experimental/saved_model/core/ops:restore_ops_test PASSED in 15.5s //tensorflow/c/experimental/saved_model/core/ops:variable_ops_test PASSED in 19.3s //tensorflow/c/experimental/saved_model/internal:saved_model_api_test PASSED in 36.6s //tensorflow/c/experimental/stream_executor:stream_executor_test PASSED in 0.2s //tensorflow/c/kernels:bitcast_op_test PASSED in 0.7s //tensorflow/c/kernels:summary_op_benchmark_test PASSED in 0.7s //tensorflow/c/kernels:summary_op_test PASSED in 0.8s //tensorflow/c/kernels:tensor_shape_utils_test PASSED in 0.1s //tensorflow/cc:cc_op_gen_test PASSED in 0.1s //tensorflow/cc:client_client_session_test PASSED in 3.6s //tensorflow/cc:coordinator_test PASSED in 5.6s //tensorflow/cc:framework_cc_ops_test PASSED in 3.3s //tensorflow/cc:framework_gradient_checker_test PASSED in 3.6s //tensorflow/cc:framework_gradients_test PASSED in 5.4s //tensorflow/cc:framework_scope_test PASSED in 0.5s //tensorflow/cc:framework_while_gradients_test PASSED in 3.3s //tensorflow/cc:gradients_array_grad_test PASSED in 29.0s //tensorflow/cc:gradients_data_flow_grad_test PASSED in 4.1s //tensorflow/cc:gradients_functional_grad_test 
PASSED in 2.6s //tensorflow/cc:gradients_image_grad_test PASSED in 7.3s //tensorflow/cc:gradients_linalg_grad_test PASSED in 4.0s //tensorflow/cc:gradients_manip_grad_test PASSED in 2.1s //tensorflow/cc:gradients_math_grad_test PASSED in 7.9s //tensorflow/cc:gradients_nn_grad_test PASSED in 14.3s //tensorflow/cc:gradients_resource_variable_grad_test PASSED in 2.1s //tensorflow/cc:ops_const_op_test PASSED in 0.8s //tensorflow/cc:ops_while_loop_test PASSED in 3.7s //tensorflow/cc:queue_runner_test PASSED in 12.3s //tensorflow/cc/experimental/base/tests:tensor_test PASSED in 0.1s //tensorflow/cc/experimental/base/tests:tensorhandle_test PASSED in 28.6s //tensorflow/cc/experimental/libexport:load_test PASSED in 0.5s //tensorflow/cc/experimental/libexport:save_test PASSED in 0.1s //tensorflow/cc/experimental/libtf:libtf_module_test PASSED in 30.8s //tensorflow/cc/experimental/libtf:libtf_object_test PASSED in 0.1s //tensorflow/cc/experimental/libtf:libtf_perf_test PASSED in 0.2s //tensorflow/cc/experimental/libtf:libtf_runtime_test PASSED in 33.5s //tensorflow/cc/experimental/libtf:libtf_transform_test PASSED in 34.5s //tensorflow/cc/experimental/libtf:libtf_value_test PASSED in 0.6s //tensorflow/cc/experimental/libtf:libtf_visit_test PASSED in 0.2s //tensorflow/cc/experimental/libtf/impl:iostream_test PASSED in 0.1s //tensorflow/cc/experimental/libtf/impl:none_test PASSED in 0.6s //tensorflow/cc/experimental/libtf/impl:scalars_test PASSED in 0.2s //tensorflow/cc/experimental/libtf/impl:string_test PASSED in 0.1s //tensorflow/cc/experimental/libtf/impl:tensor_spec_test PASSED in 0.2s //tensorflow/cc/saved_model:bundle_v2_test PASSED in 0.7s //tensorflow/cc/saved_model:fingerprinting_test PASSED in 1.1s //tensorflow/cc/saved_model:metrics_test PASSED in 0.2s //tensorflow/cc/saved_model:reader_test PASSED in 0.1s //tensorflow/cc/saved_model:saved_model_bundle_lite_test PASSED in 7.8s //tensorflow/cc/saved_model:saved_model_bundle_test PASSED in 8.3s 
//tensorflow/cc/saved_model:util_test PASSED in 0.1s //tensorflow/cc/saved_model/experimental/tests:saved_model_api_test PASSED in 34.2s //tensorflow/cc/tools:freeze_saved_model_test PASSED in 2.6s //tensorflow/compiler/aot:codegen_test PASSED in 35.9s //tensorflow/compiler/jit:compilability_check_util_test PASSED in 20.1s //tensorflow/compiler/jit:deadness_analysis_test PASSED in 12.4s //tensorflow/compiler/jit:device_compilation_cache_test PASSED in 5.2s //tensorflow/compiler/jit:device_compilation_cluster_signature_test PASSED in 7.7s //tensorflow/compiler/jit:device_compilation_profiler_test PASSED in 24.1s //tensorflow/compiler/jit:device_compiler_client_test PASSED in 4.7s //tensorflow/compiler/jit:device_compiler_disable_test PASSED in 20.1s //tensorflow/compiler/jit:device_executable_persistor_test PASSED in 22.8s //tensorflow/compiler/jit:device_util_test PASSED in 5.2s //tensorflow/compiler/jit:encapsulate_util_test PASSED in 1.5s //tensorflow/compiler/jit:node_matchers_test PASSED in 0.5s //tensorflow/compiler/jit:resource_operation_safety_analysis_test PASSED in 10.9s //tensorflow/compiler/jit:shape_inference_test PASSED in 1.6s //tensorflow/compiler/jit:xla_activity_listener_test PASSED in 26.4s //tensorflow/compiler/jit:xla_cluster_util_test PASSED in 13.2s //tensorflow/compiler/jit:xla_compile_util_test PASSED in 4.0s //tensorflow/compiler/jit:xla_kernel_creator_test PASSED in 10.3s //tensorflow/compiler/jit/tests:auto_clustering_test PASSED in 26.4s //tensorflow/compiler/mlir:mlir_graph_optimization_pass_test PASSED in 12.9s //tensorflow/compiler/mlir:register_common_dialects_test PASSED in 18.2s //tensorflow/compiler/mlir/lite:lstm_utils_test PASSED in 0.6s //tensorflow/compiler/mlir/lite:perception_ops_utils_test PASSED in 0.6s //tensorflow/compiler/mlir/lite:size_utils_test PASSED in 0.1s //tensorflow/compiler/mlir/lite:tftext_utils_test PASSED in 0.8s //tensorflow/compiler/mlir/lite/experimental/remat:rematerializer_test PASSED in 1.2s 
//tensorflow/compiler/mlir/lite/experimental/tac:execution_metadata_exporter_test PASSED in 4.1s //tensorflow/compiler/mlir/lite/experimental/tac/tests:compute-cost.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/experimental/tac/tests:device-transform-gpu.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/experimental/tac/tests:device-transform-nnapi.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/experimental/tac/tests:fold-constants-to-subgraph.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/experimental/tac/tests:get-alternative-subgraph.mlir.test PASSED in 2.0s //tensorflow/compiler/mlir/lite/experimental/tac/tests:get-op-cost.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/lite/experimental/tac/tests:pick-subgraphs.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/experimental/tac/tests:raise-target-subgraphs.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/lite/experimental/tac/tests:target-annotation.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/experimental/tac/tests/e2e:device-transform-nnapi.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/experimental/tac/tests/e2e:simple-graph.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/lite/metrics:error_collector_inst_test PASSED in 0.3s //tensorflow/compiler/mlir/lite/quantization:numerical_utils_test PASSED in 0.1s //tensorflow/compiler/mlir/lite/quantization/lite:quantize_model_test PASSED in 10.9s //tensorflow/compiler/mlir/lite/quantization/lite:quantize_weights_test PASSED in 10.9s //tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:fallback_to_flex_ops_default.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:fallback_to_flex_ops_legacy.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:tf_to_quant.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/quantization/tensorflow/tests:tf_to_quant_4bit.mlir.test PASSED in 0.9s 
//tensorflow/compiler/mlir/lite/quantization/tests:import_quant_stats.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/lite/sparsity:sparsify_model_test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:fold_broadcast.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/lite/stablehlo/tests:fuse_mhlo_convolution.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-inplaceupdate.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-skip-quantization-ops.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tf-fb-tf.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-add.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-broadcast_in_dim.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-clamp.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-compare.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-concat.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-constant.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-conv.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-dot.mlir.test PASSED in 1.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-gather.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-max.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-mul.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-pad.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-reshape.mlir.test PASSED in 0.6s 
//tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-rsqrt.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-scatter.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl-sub.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-stablehlo-tfl.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-add.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-broadcast.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-clamp.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-concat.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-constant.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-conv.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-max.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-mul.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-pad.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-reshape.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-rsqrt.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo-sub.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:legalize-tfl-stablehlo.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/stablehlo/tests:odml-to-stablehlo-allow-tf.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/stablehlo/tests:odml-to-stablehlo-smuggle-resize.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/stablehlo/tests:optimize.mlir.test 
PASSED in 0.9s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-clamp.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-concat.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-conv.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-division.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-logistic.mlir.test PASSED in 0.7s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-multiply.mlir.test PASSED in 1.2s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-reduce-window.mlir.test PASSED in 1.3s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-resize-bilinear.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo-subtract.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-serialize-stablehlo.mlir.test PASSED in 0.5s //tensorflow/compiler/mlir/lite/stablehlo/tests:tf-tfl-translate-tf-quantize.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/stablehlo/tests:unfuse_mhlo_batch_norm.mlir.test PASSED in 0.9s //tensorflow/compiler/mlir/lite/tests:analyze-variables.mlir.test PASSED in 1.5s //tensorflow/compiler/mlir/lite/tests:canonicalize.mlir.test PASSED in 0.6s //tensorflow/compiler/mlir/lite/tests:const-fold.mlir.test PASSED in 0.8s //tensorflow/compiler/mlir/lite/tests:decompose-hybrid-quantization.mlir.test PASSED in 1.8s //tensorflow/compiler/mlir/lite/tests:default_quant_params.mlir.test PASSED in 1.1s //tensorflow/compiler/mlir/lite/tests:dilated-conv.mlir.test PASSED in 1.0s //tensorflow/compiler/mlir/lite/tests:fuse-tftext.mlir.test PASSED in 2.0s 
//tensorflow/compiler/mlir/lite/tests:get-arithmetic-count.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:guarantee_func_has_one_use.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:inlining.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:insert_call_once_op.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-assert.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-hashtables.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-no-runtime-verification.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-variables.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests:legalize-tf-while.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:legalize-tf.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests:legalize_jax_random.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:lift_tflite_flex_ops.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list-default-to-single-batch.mlir.test PASSED in 5.8s
//tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list-enable-dynamic-update-slice.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:lower-static-tensor-list.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests:modify_io_nodes.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests:ops.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests:optimize-after-quantization.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests:optimize.mlir.test PASSED in 2.7s
//tensorflow/compiler/mlir/lite/tests:optimize_functional_ops.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:optimize_no_verify.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:optimize_op_order.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:partitioned-topological-sort.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests:pin-ops-with-side-effects.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests:post-quantize-dynamic-range.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests:post-quantize.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:prepare-composite-functions-tf.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-dynamic-range.mlir.test PASSED in 2.9s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-post-training-16bits.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-post-training.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize-signed.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests:prepare-quantize.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests:prepare-tf-fake-quant-4bit.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:prepare-tf-fake-quant.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests:prepare-tf-with-allowing-bf16-and-f16-type-legalization.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests:prepare-tf.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests:quantize-dynamic-range.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/lite/tests:quantize-numeric-verify.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests:quantize-variables.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/lite/tests:quantize.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/lite/tests:raise-custom-ops.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:reduce_while_operands.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests:shape-inference.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests:split-merged-operands.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:tfl_while_op_licm.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:tfl_while_outline.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests:trim-functions-tf.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests:unfold-large-splat-constant.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/debuginfo:v1_1.0_224_frozen.wrong_attr.line.part.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/debuginfo:v1_1.0_224_frozen.wrong_attr.stack.part.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/end2end:add.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/end2end:back2back_fake_quant.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/end2end:control_flow_v1.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/end2end:conv_2d.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/end2end:conv_2d_nchw.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests/end2end:custom_opdef.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/end2end:disallow_stateful_partitioned_call.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_per_channel.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_per_channel_4bit.pbtxt.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_without_identity.pbtxt.test PASSED in 1.9s
//tensorflow/compiler/mlir/lite/tests/end2end:fake_quant_without_identity_4bit.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests/end2end:graph-input-node.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests/end2end:graph_with_placeholder_with_default.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/end2end:if_op.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/end2end:quant_stats.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/end2end:unroll_batch_matmul.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/end2end:unroll_batch_matmul_disabled.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:basic_lstm.mlir.test PASSED in 5.8s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:bucketize.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:constants.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:control_edges.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:custom_op.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:dynamic_shape.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:external_constant.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:if_op.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:import_json.json.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:importer_test_min_max.cc.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:importer_test_min_max.cc.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:input_arrays.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:input_output_names_attr.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:legacy_reshape.json.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:lstm.json.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:lstm.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:many_attribute_op.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:math.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:matmul.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:multi_output_op.json.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:optional.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:optional_input.json.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:output_arrays.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:pruning.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:pruning_function_input_as_output.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:quant_stats.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:quantization.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:reshape.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:signature.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:signature_with_multiple_entry_points.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:simple.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:tf_variant_type.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:unranked_function_output.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:unranked_tensor.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/tests/flatbuffer2mlir:while_op.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2exec:tfl_while_op.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:basic_lstm.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:bucketize.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:custom_op_with_tflite_op.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:depthwise_conv2d.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:depthwise_conv2d_v2.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_builtin.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_custom.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_flex.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:disable_flex_enable_builtin.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:dynamic_shape_constant.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fake_quant.mlir.test PASSED in 3.9s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_exclusively.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_complex128.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_f64.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:flex_op_with_tflite_op.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fully_connected.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:fully_connected_v2.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:hashtable_resource.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:if_op.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:logical.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:low_bit_packing.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm_asym_attr.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:lstm_quantized.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:math.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:metadata.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:mul_v2.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:mul_v3.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:nn.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:numeric_verify.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:optional.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:quantization.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:reshape.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_output_override.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_with_multiple_entry_points.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:signature_def_with_no_inputs.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple_with_connected_control_nodes.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:simple_with_unconnected_control_nodes.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:svdf.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:svdf_v2.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:tf_entry_function.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:tfl_while_op.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:transpose_conv_optional.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:type_attr.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unidirectional_sequence_lstm.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unidirectional_sequence_rnn.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unranked_tensor.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:unsorted_segment_prod.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:variant_type_on_func.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:variant_type_on_op.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/lite/tests/mlir2flatbuffer:while_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/calibrator:calibrator_singleton_test PASSED in 0.2s
//tensorflow/compiler/mlir/quantization/tensorflow/calibrator:custom_aggregator_op_test PASSED in 14.4s
//tensorflow/compiler/mlir/quantization/tensorflow/cc:const_op_size_test PASSED in 0.4s
//tensorflow/compiler/mlir/quantization/tensorflow/cc:convert_asset_args_test PASSED in 6.2s
//tensorflow/compiler/mlir/quantization/tensorflow/cc:save_variables_test PASSED in 0.4s
//tensorflow/compiler/mlir/quantization/tensorflow/cc:status_macro_test PASSED in 0.1s
//tensorflow/compiler/mlir/quantization/tensorflow/debugging:mlir_dump_test PASSED in 0.1s
//tensorflow/compiler/mlir/quantization/tensorflow/python:concurrency_test PASSED in 38.3s
//tensorflow/compiler/mlir/quantization/tensorflow/python:pywrap_quantize_model_test PASSED in 14.6s
//tensorflow/compiler/mlir/quantization/tensorflow/python:representative_dataset_test PASSED in 7.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:cast_bf16_ops_to_f32.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_custom_aggregation_op_to_quant_stats.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_fake_quant_to_qdq.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_tf_quant_ops_to_mhlo.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:convert_tpu_model_to_cpu.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:duplicate_shape_determining_constants.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:fake_quant_e2e_flow.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:fake_quant_e2e_xla.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_custom_aggregation_ops.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_main_function.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions_drq.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_quantized_functions_weight_only.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_restore_op.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:insert_save_op.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:issue_ids_of_custom_aggregation_ops.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_drq.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_drq_min_elements.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:lift_quantizable_spots_as_functions_xla.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:mark_functions_noinline.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:merge_initializer_function_ops_to_main.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:merge_save_function_ops_to_main.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:optimize.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_lifting.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_drq.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_drq_per_channel.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_ptq.mlir.test PASSED in 4.0s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:prepare_quantize_ptq_per_channel.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:preprocess_op.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions.mlir.test PASSED in 2.4s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_drq.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_weight_only.mlir.test PASSED in 3.3s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_composite_functions_xla.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_drq.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:quantize_xla.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:remove_var_init_by_const.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:replace_cast_hacks_with_tf_xla_ops.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:replace_cast_hacks_with_tf_xla_ops_large_constants.mlir.test PASSED in 20.1s
//tensorflow/compiler/mlir/quantization/tensorflow/tests:unfreeze_constants.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/quantization/tensorflow/utils:tf_to_xla_attribute_utils_test PASSED in 33.8s
//tensorflow/compiler/mlir/tensorflow:bridge_logger_test PASSED in 4.0s
//tensorflow/compiler/mlir/tensorflow:cluster_util_test PASSED in 0.3s
//tensorflow/compiler/mlir/tensorflow:convert_tensor_test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow:convert_type_test PASSED in 0.2s
//tensorflow/compiler/mlir/tensorflow:device_util_test PASSED in 0.3s
//tensorflow/compiler/mlir/tensorflow:dump_graph_test PASSED in 0.4s
//tensorflow/compiler/mlir/tensorflow:dump_mlir_util_test PASSED in 8.3s
//tensorflow/compiler/mlir/tensorflow:error_util_test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow:tf_saved_model_test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow:tpu_rewrite_device_util_test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:add_functions_for_exported_names.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:annotate-parameter-replication.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:batchmatmul_to_einsum.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:breakup-islands.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:cannonicalize_ops_outside_compilation.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:canonicalize.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:canonicalize_compile_and_replicate_attributes.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests:check_control_dependencies.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:cluster_formation.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:cluster_ops_by_policy.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:cluster_outlining.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:cluster_tf_ops_pass.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:constant-fold.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests:constant_op_device_assignment.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:convert-tf-control-flow-to-scf.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:convert_control_to_data_outputs.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:convert_launch_func_to_tf_call.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tensorflow/tests:convert_session_initializer_to_function.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:convert_to_legacy_compile_and_replicate_attributes.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests:decompose_reduce_dataset.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:decompose_resource_ops.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests:device_assignment.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:device_assignment_by_func_attr.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests:device_attribute_to_launch.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:device_canonicalize.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:device_copy.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:drop_while_shape_invariant.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:einsum.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:empty-main.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:end-to-end-tpu-reshard-variables.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests:executor_canonicalize.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests:executor_island_coarsening.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:executor_island_materialize_const.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:extract_head_tail_outside_compilation.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:extract_outside_compilation.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:extract_tpu_copy_with_dynamic_shape_op.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests:fold-broadcast.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:freeze_variables.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:func-attr-invalid.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests:func-attr.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:functional-control-flow-to-cfg.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:functional-control-flow-to-regions.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:functionalize-if-fail.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:functionalize-if.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:fused_kernel_matcher.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:gpu_fusion.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:graph_pruning.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:graph_pruning_preserve_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:group_by_dialect.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests:guarantee-all-funcs-one-use.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:hoist_loop_invariant.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests:hoist_replicate_invariant_resource_writes.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:host_launch_to_outside_compiled.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import_invalid.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:init_text_file_to_import_saved_model.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:inlining.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:isolate-placer.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:launch_outlining.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:launch_to_device_attribute.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:launch_to_device_attribute_legacy.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_gpu_cc_60.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_gpu_cc_70.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_to_nchw.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_layout_assignment_to_nhwc.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_move_transposes_begin.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_move_transposes_end.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_to_nchw.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:layout_optimization_to_nhwc.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:legalize_hlo.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg_arg_control_dep.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:legalize_tfg_with_control_flow.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:localize_var_handles.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:lower_globals_to_ml_program.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:lower_globals_to_ml_program_invalid.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests:lower_quantized.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:lower_tf.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:lower_variable_ops_to_ml_program.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:mark_input_output_aliases.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests:mark_ops_for_outside_compilation.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:materialize_passthrough_op.mlir.test PASSED in 2.0s
//tensorflow/compiler/mlir/tensorflow/tests:merge_control_flow.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:mlprogram.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:name_anonymous_iterators.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests:optimize-arg-operand-constraint.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:optimize.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests:order_by_dialect.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:outside_compiled_to_host_launch.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests:parallel_execute_to_islands.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:parallel_execute_to_islands_legacy.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:prepare_tpu_computation_for_tf_export.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests:promote_resources_to_args.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:promote_resources_to_args_functions.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:promote_var_handles_to_args.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:readonly_references_to_resources.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:region-control-flow-to-functional.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:remove_unused_arguments.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:remove_unused_while_results.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:replica_id_to_device_ordinal.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests:replicate_invariant_op_hoisting.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:replicate_tensor_list_init_ops.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests:replicate_to_island.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:replicate_to_island_legacy.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:resource-alias-analysis-test.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests:resource-device-inference.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:resource_analyzer.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:resource_inlining.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:resource_op_lifting.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests:rewrite_tpu_embedding_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:roundtrip-tf-executor.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:shape_inference.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:side-effect-analysis-test.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:sink_constant.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:split_into_island_per_op.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests:stack_ops_decomposition.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:strip_noinline.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:strip_saved_module_metadata.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:strip_tf_attributes.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests:tensor_array_ops_decomposition.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:tensor_list_ops_decomposition.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:tf-executor-to-functional.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf-functional-to-executor.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:tf-ops.mlir.test PASSED in 3.0s
//tensorflow/compiler/mlir/tensorflow/tests:tf-reduce-identity.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_data_fuse_map_and_batch.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_data_fuse_pmap_and_batch.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_device_index_selector.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests:tf_device_ops.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:tf_device_ops_invalid.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_invalid.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_location_roundtrip.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_printer.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tf_executor_ops_side_effect.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf_optimize.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_deduplicate_bound_input_bindings.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_assets.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_global_tensors.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_freeze_global_tensors_mutable_tensors.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_initialize_variables_in_session_init.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_initialize_variables_in_session_init_fail.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_lift_variables.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_lift_variables_invalid_session.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_mark_initialized_variables.mlir.test PASSED in 1.8s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_ops.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_ops_invalid.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_optimize_global_tensors.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_optimize_global_tensors_interprocedural.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:tf_saved_model_remove_vars_in_session_initializer.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_side_effect.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tf_trait_folds.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-annotate-dynamic-shape-inputs.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-cluster-cleanup-attributes.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-dynamic-layout-pass.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-merge-variables-with-execute.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-multiple-while-body-func.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-resource-read-for-write.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tpu-variable-runtime-reformatting.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_cluster_formation.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_colocate_composite_resource_ops.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_device_propagation.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_host_computation_expansion.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_identity_pruning.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_parallel_execute_sink_resource_write.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_partitioned_op_conversion.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_reorder_replicate_and_partitioned_inputs.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_resource_partitioning.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_rewrite.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_sharding_identification.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_space_to_depth_pass.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_tail_with_tobool_op.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_update_embedding_enqueue_op_inputs.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_validate_inputs-test.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:tpu_validate_inputs.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests:transpose-op.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests:unroll-batch-matmul.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests:update_control_dependencies.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:warn_when_using_deprecated_dumps.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests:while_licm.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests:xla_cluster_formation.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests:xla_inline_device_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests:xla_rewrite.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:add.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:argument-sharding-invalid.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:argument-sharding.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:constant-folding-hook.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:constant-folding.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph-resource.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph-resource.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:graph.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:mlir-module-serialized-str-attr.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:replicate-tensor-list-init-ops.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:result-sharding.mlir.test PASSED in 1.7s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:serialized-mlir-module-str-attr-invalid.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:serialized-mlir-module-str-attr.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:shape-inference-after-legalization.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:shape-inference.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/compile_mlir_util:stablehlo_add.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_coarsening:executor_tpuv1_island_coarsening.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_coarsening:while_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_inlining:executor_tpuv1_inline_tpu_island.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_island_inlining:while_op.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:case_op.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:executor_tpuv1_outline_tpu_island.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/executor_tpuv1_outline_island:while_op.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:add.pbtxt.test PASSED in 1.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-as-fetch.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-control-dep.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-data-type-with-subtype.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-data-type.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-multi-data-type-with-subtype.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:arg-retval-attrs.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:case_op.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:const-values.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:device-arg-retval-attr.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:empty-input-shapes.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:empty-value-attr.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:feed-as-fetch.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:feed-control-dep.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:force_shared_name_for_resource_ops.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:function-func-attr.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:functional-if-ops.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:functional-while-ops.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function-control-ret.pbtxt.test PASSED in 2.3s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function-retval-of-arg.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-as-function.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-custom-operation.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-default-attr.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-device-retval.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-empty-tensor-content.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-func-attr.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-call.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-control-ret-diff-island.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-control-ret-same-island.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-defs.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-input-shapes.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-name-bug.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-function-resource-args.pbtxt.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-gradient-def.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-input-func-arg-name-collision.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-library.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-malformed.pbtxt.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-scalar-input.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-uint8-return.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-undefined-output.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-version-info.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:graph-while-loop.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:invalid-output-index.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:legacy-fed-input-without-inputs.pbtxt.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:merge_node_with_function.pbtxt.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:mlir_passthrough_op.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:multi-output-feeds.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:multiple-use-next-iteration.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:node-locations.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:output-shapes-attr.pbtxt.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:output-shapes.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:parse_example.pbtxt.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:parse_example_v2.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:partial-device-name.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:prune_unused_nodes.pbtxt.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:quint8-const.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:shape-attrs.pbtxt.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:stateful-attribute.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:string-attr.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:switch_n.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:target.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:tensor-list.pbtxt.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:tf-data-pipeline.pbtxt.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir:unregistered_kernel.pbtxt.test PASSED in 2.3s
//tensorflow/compiler/mlir/tensorflow/tests/graphdef2mlir/batch_use_same_function:saved_model.pbtxt.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:aliasing_arg_attr.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:case.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:convert_tensor.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:derived_shape_attr.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:derived_size_attr.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:device-arg-retval-attr.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:export_main_to_flib.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:fetch_feed_names.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:func_attr.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:func_list_attr.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-control-ret.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-order.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-resource-args-handle-info.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:function-resource-args.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:functional-if-ops.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:functional-while-ops.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:graph-as-function.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:infer_derived_attribute.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:invalid_input.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:legalized_name.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:missing-main.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:noop.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:optional_symbol_ref.mlir.test PASSED in 1.3s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:output-shapes-attr.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:parse_example.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:parse_example_v2.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:preserve-entry-func-names.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:ref-type-attr.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:ref-while-loop.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:shape_list_attr.mlir.test PASSED in 1.4s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:simple.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:simple_tf_dialect_op.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:stringescape.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:switchn.mlir.test PASSED in 1.2s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf-gradient-attr.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf-legacy-call.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_add.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_identity_n.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:tf_tpu_embedding_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:type_attr.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:type_list_attr.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:unique_name.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:unique_output_name.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tensorflow/tests/mlir2graphdef:while-loop.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tensorflow/tests/tf_to_hlo_pipeline:sccp-post-shape-inference.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tensorflow/tests/tpu_bridge_v1:end_to_end.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tf2xla:compile_mlir_util_test PASSED in 6.0s
//tensorflow/compiler/mlir/tf2xla/api/v1:legalize_tf_test PASSED in 0.5s
//tensorflow/compiler/mlir/tf2xla/tests:adjust-layout.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tf2xla/tests:convert-mhlo-quant-to-int.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tf2xla/tests:hlo_xla_runtime_pipeline.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tf2xla/tests:hlo_xla_sparsification.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-BatchMatMulV2.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-binary-elementwise.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-collective.mlir.test PASSED in 1.5s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-communication.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-include-tf2xla-fallback.mlir.test PASSED in 2.1s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-no-tf2xla-fallback.mlir.test PASSED in 4.8s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-prefer-tf2xla.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-types.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-with-tf2xla-hlo-importer.mlir.test PASSED in 1.9s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf-with-tf2xla.mlir.test PASSED in 1.6s
//tensorflow/compiler/mlir/tf2xla/tests:legalize-tf.mlir.test PASSED in 12.2s
//tensorflow/compiler/mlir/tf2xla/tests:tfxla_device_specific_transformations_cpu.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tf2xla/tests:tfxla_device_specific_transformations_gpu.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tf2xla/tests:verify-tfxla-legalization-no-chlo.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tf2xla/tests:verify-tfxla-legalization.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tf2xla/transforms:tf2xla_rewriter_test PASSED in 4.7s
//tensorflow/compiler/mlir/tf2xla/transforms:verify_tfxla_legalization_test PASSED in 4.4s
//tensorflow/compiler/mlir/tf2xla/transforms:xla_legalize_targets_test PASSED in 0.6s
//tensorflow/compiler/mlir/tfr:graph_decompose_test PASSED in 8.0s
//tensorflow/compiler/mlir/tfr:node_expansion_test PASSED in 9.5s
//tensorflow/compiler/mlir/tfr:op_reg_gen_test PASSED in 14.3s
//tensorflow/compiler/mlir/tfr:tfr_decompose_ctx_test PASSED in 4.7s
//tensorflow/compiler/mlir/tfr:tfr_gen_test PASSED in 14.6s
//tensorflow/compiler/mlir/tfr/examples/customization:test_ops_test PASSED in 16.3s
//tensorflow/compiler/mlir/tfr/examples/pad:pad_ops_test PASSED in 22.6s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:buffer_deallocation.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:buffer_reuse.mlir.test PASSED in 1.1s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:bufferize.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:copy_cleanup.mlir.test PASSED in 0.4s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:embed_tf_framework.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:invalid.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:isinf.mlir.test PASSED in 0.7s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:ops.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:parallel_loops_to_sequential.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:rewrite_tf_framework_assert.mlir.test PASSED in 2.3s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tanh.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tf-legalize-to-lmhlo.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_abi_knowledge.mlir.test PASSED in 1.0s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_framework_legalize_to_llvm.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_kernel_gpu_launch_to_llvm.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tools/kernel_gen/tests:tf_to_jit_invocations.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tosa/tests:convert-tfl-uint8.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tosa/tests:fuse-bias-tf.mlir.test PASSED in 0.5s
//tensorflow/compiler/mlir/tosa/tests:lower-complex-types.mlir.test PASSED in 0.9s
//tensorflow/compiler/mlir/tosa/tests:strip-quant-types.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tosa/tests:tf-tfl-to-tosa-pipeline.mlir.test PASSED in 0.8s
//tensorflow/compiler/mlir/tosa/tests:tf-to-tosa-pipeline.mlir.test PASSED in 2.2s
//tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-dequantize_softmax.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-pipeline-filtered.mlir.test PASSED in 0.6s
//tensorflow/compiler/mlir/tosa/tests:tfl-to-tosa-pipeline.mlir.test PASSED in 7.3s
//tensorflow/compiler/tests:adadelta_test_cpu PASSED in 15.4s
//tensorflow/compiler/tests:adagrad_da_test_cpu PASSED in 12.2s
//tensorflow/compiler/tests:adagrad_test_cpu PASSED in 8.9s
//tensorflow/compiler/tests:adam_test_cpu PASSED in 13.5s
//tensorflow/compiler/tests:add_n_test_cpu PASSED in 7.3s
//tensorflow/compiler/tests:argminmax_test_cpu PASSED in 15.1s
//tensorflow/compiler/tests:argminmax_test_cpu_mlir_bridge_test PASSED in 20.7s
//tensorflow/compiler/tests:bucketize_op_test_cpu PASSED in 7.6s
//tensorflow/compiler/tests:bucketize_op_test_cpu_mlir_bridge_test PASSED in 6.2s
//tensorflow/compiler/tests:case_test_cpu PASSED in 6.2s
//tensorflow/compiler/tests:cast_ops_test_cpu PASSED in 7.1s
//tensorflow/compiler/tests:cast_ops_test_cpu_mlir_bridge_test PASSED in 8.0s
//tensorflow/compiler/tests:categorical_op_test_cpu PASSED in 10.5s
//tensorflow/compiler/tests:categorical_op_test_cpu_mlir_bridge_test PASSED in 11.2s
//tensorflow/compiler/tests:cholesky_op_test_cpu PASSED in 16.4s
//tensorflow/compiler/tests:cholesky_op_test_cpu_mlir_bridge_test PASSED in 15.5s
//tensorflow/compiler/tests:clustering_test_cpu PASSED in 7.8s
//tensorflow/compiler/tests:clustering_test_cpu_mlir_bridge_test PASSED in 6.6s
//tensorflow/compiler/tests:concat_ops_test_cpu PASSED in 8.8s
//tensorflow/compiler/tests:concat_ops_test_cpu_mlir_bridge_test PASSED in 9.2s
//tensorflow/compiler/tests:cond_test_cpu PASSED in 8.6s
//tensorflow/compiler/tests:const_arg_test_cpu PASSED in 6.3s
//tensorflow/compiler/tests:const_test_cpu PASSED in 7.8s
//tensorflow/compiler/tests:data_format_ops_test_cpu PASSED in 13.4s
//tensorflow/compiler/tests:data_format_ops_test_cpu_mlir_bridge_test PASSED in 13.9s
//tensorflow/compiler/tests:dense_layer_test_cpu PASSED in 13.8s
//tensorflow/compiler/tests:dynamic_slice_ops_test_cpu PASSED in 8.8s
//tensorflow/compiler/tests:dynamic_slice_ops_test_cpu_mlir_bridge_test PASSED in 10.1s
//tensorflow/compiler/tests:dynamic_stitch_test_cpu PASSED in 7.0s
//tensorflow/compiler/tests:dynamic_stitch_test_cpu_mlir_bridge_test PASSED in 6.9s
//tensorflow/compiler/tests:eager_test_cpu PASSED in 15.6s
//tensorflow/compiler/tests:einsum_op_test_cpu PASSED in 7.5s
//tensorflow/compiler/tests:einsum_op_test_cpu_mlir_bridge_test PASSED in 9.8s
//tensorflow/compiler/tests:ensure_shape_op_test_cpu PASSED in 7.2s
//tensorflow/compiler/tests:extract_image_patches_op_test_cpu PASSED in 7.6s
//tensorflow/compiler/tests:extract_image_patches_op_test_cpu_mlir_bridge_test PASSED in 8.9s
//tensorflow/compiler/tests:fake_quant_ops_test_cpu PASSED in 14.2s
//tensorflow/compiler/tests:fake_quant_ops_test_cpu_mlir_bridge_test PASSED in 15.5s
//tensorflow/compiler/tests:fifo_queue_test_cpu PASSED in 8.3s
//tensorflow/compiler/tests:fifo_queue_test_cpu_mlir_bridge_test PASSED in 8.6s
//tensorflow/compiler/tests:ftrl_ops_test_cpu PASSED in 18.0s
//tensorflow/compiler/tests:ftrl_ops_test_cpu_mlir_bridge_test PASSED in 16.8s
//tensorflow/compiler/tests:ftrl_test_cpu PASSED in 18.8s
//tensorflow/compiler/tests:function_test_cpu PASSED in 8.7s
//tensorflow/compiler/tests:function_test_cpu_mlir_bridge_test PASSED in 7.5s
//tensorflow/compiler/tests:gather_nd_op_test_cpu PASSED in 7.5s
//tensorflow/compiler/tests:gather_nd_op_test_cpu_mlir_bridge_test PASSED in 7.6s
//tensorflow/compiler/tests:gather_test_cpu PASSED in 42.9s
//tensorflow/compiler/tests:gather_test_cpu_mlir_bridge_test PASSED in 56.1s
//tensorflow/compiler/tests:jit_test_cpu PASSED in 48.2s
//tensorflow/compiler/tests:listdiff_op_test_cpu PASSED in 14.2s
//tensorflow/compiler/tests:listdiff_op_test_cpu_mlir_bridge_test PASSED in 13.4s
//tensorflow/compiler/tests:lrn_ops_test_cpu PASSED in 6.7s
//tensorflow/compiler/tests:lrn_ops_test_cpu_mlir_bridge_test PASSED in 7.1s
//tensorflow/compiler/tests:lstm_test_cpu PASSED in 22.4s
//tensorflow/compiler/tests:manip_ops_test_cpu PASSED in 10.7s
//tensorflow/compiler/tests:manip_ops_test_cpu_mlir_bridge_test PASSED in 14.3s
//tensorflow/compiler/tests:matrix_band_part_test_cpu PASSED in 42.5s
//tensorflow/compiler/tests:matrix_band_part_test_cpu_mlir_bridge_test PASSED in 40.4s
//tensorflow/compiler/tests:matrix_inverse_op_test_cpu PASSED in 19.7s
//tensorflow/compiler/tests:matrix_inverse_op_test_cpu_mlir_bridge_test PASSED in 21.0s
//tensorflow/compiler/tests:matrix_solve_op_test_cpu PASSED in 9.0s
//tensorflow/compiler/tests:matrix_solve_op_test_cpu_mlir_bridge_test PASSED in 7.9s
//tensorflow/compiler/tests:matrix_triangular_solve_op_test_cpu PASSED in 26.2s
//tensorflow/compiler/tests:matrix_triangular_solve_op_test_cpu_mlir_bridge_test PASSED in 35.4s
//tensorflow/compiler/tests:momentum_test_cpu PASSED in 12.2s
//tensorflow/compiler/tests:nary_ops_test_cpu PASSED in 8.7s
//tensorflow/compiler/tests:nary_ops_test_cpu_mlir_bridge_test PASSED in 10.4s
//tensorflow/compiler/tests:nullary_ops_test_cpu PASSED in 9.3s
//tensorflow/compiler/tests:nullary_ops_test_cpu_mlir_bridge_test PASSED in 6.5s
//tensorflow/compiler/tests:placeholder_test_cpu PASSED in 6.7s
//tensorflow/compiler/tests:placeholder_test_cpu_mlir_bridge_test PASSED in 9.6s
//tensorflow/compiler/tests:proximal_adagrad_test_cpu PASSED in 8.9s
//tensorflow/compiler/tests:proximal_gradient_descent_test_cpu PASSED in 8.5s
//tensorflow/compiler/tests:quantized_ops_test_cpu PASSED in 8.8s
//tensorflow/compiler/tests:reduce_window_test_cpu PASSED in 7.9s
//tensorflow/compiler/tests:reduce_window_test_cpu_mlir_bridge_test PASSED in 7.4s
//tensorflow/compiler/tests:reshape_op_test_cpu PASSED in 7.8s
//tensorflow/compiler/tests:reshape_op_test_cpu_mlir_bridge_test PASSED in 8.4s
//tensorflow/compiler/tests:reverse_ops_test_cpu PASSED in 10.5s
//tensorflow/compiler/tests:reverse_ops_test_cpu_mlir_bridge_test PASSED in 10.7s
//tensorflow/compiler/tests:reverse_sequence_op_test_cpu PASSED in 8.2s
//tensorflow/compiler/tests:reverse_sequence_op_test_cpu_mlir_bridge_test PASSED in 8.1s
//tensorflow/compiler/tests:risc_ops_test_cpu_mlir_bridge_test PASSED in 6.4s
//tensorflow/compiler/tests:rmsprop_test_cpu PASSED in 14.4s
//tensorflow/compiler/tests:scatter_nd_op_test_cpu PASSED in 20.1s
//tensorflow/compiler/tests:scatter_nd_op_test_cpu_mlir_bridge_test PASSED in 28.9s
//tensorflow/compiler/tests:searchsorted_op_test_cpu PASSED in 10.2s
//tensorflow/compiler/tests:searchsorted_op_test_cpu_mlir_bridge_test PASSED in 10.1s
//tensorflow/compiler/tests:segment_reduction_ops_test_cpu PASSED in 20.8s
//tensorflow/compiler/tests:segment_reduction_ops_test_cpu_mlir_bridge_test PASSED in 43.8s
//tensorflow/compiler/tests:self_adjoint_eig_op_test_cpu PASSED in 16.2s
//tensorflow/compiler/tests:self_adjoint_eig_op_test_cpu_mlir_bridge_test PASSED in 16.7s
//tensorflow/compiler/tests:slice_ops_test_cpu PASSED in 17.2s
//tensorflow/compiler/tests:slice_ops_test_cpu_mlir_bridge_test PASSED in 27.8s
//tensorflow/compiler/tests:sparse_to_dense_op_test_cpu PASSED in 8.1s
//tensorflow/compiler/tests:sparse_to_dense_op_test_cpu_mlir_bridge_test PASSED in 7.8s
//tensorflow/compiler/tests:stack_ops_test_cpu PASSED in 7.0s
//tensorflow/compiler/tests:tensor_list_ops_test_cpu PASSED in 9.0s
//tensorflow/compiler/tests:tridiagonal_matmul_ops_test_cpu PASSED in 14.2s
//tensorflow/compiler/tests:tridiagonal_matmul_ops_test_cpu_mlir_bridge_test PASSED in 17.3s
//tensorflow/compiler/tests:tridiagonal_solve_ops_test_cpu PASSED in 11.1s
//tensorflow/compiler/tests:tridiagonal_solve_ops_test_cpu_mlir_bridge_test PASSED in 17.0s
//tensorflow/compiler/tests:unique_ops_test_cpu PASSED in 6.0s
//tensorflow/compiler/tests:variable_ops_test_cpu PASSED in 25.9s
//tensorflow/compiler/tests:variable_ops_test_cpu_mlir_bridge_test PASSED in 16.3s
//tensorflow/compiler/tests:where_op_test_cpu PASSED in 8.1s
//tensorflow/compiler/tests:while_test_cpu PASSED in 9.8s
//tensorflow/compiler/tests:xla_call_module_test_cpu PASSED in 11.5s
//tensorflow/compiler/tests:xla_custom_call_ops_test_cpu PASSED in 6.4s
//tensorflow/compiler/tests:xla_device_gpu_test_cpu PASSED in 7.1s
//tensorflow/compiler/tests:xla_device_test_cpu PASSED in 13.1s
//tensorflow/compiler/tests:xla_device_test_cpu_mlir_bridge_test PASSED in 15.2s
//tensorflow/compiler/tests:xla_ops_test_cpu PASSED in 31.6s
//tensorflow/compiler/tests:xla_ops_test_cpu_mlir_bridge_test PASSED in 40.7s
//tensorflow/compiler/tests:xla_test_test PASSED in 6.0s
//tensorflow/compiler/tf2xla:const_analysis_test PASSED in 6.9s
//tensorflow/compiler/tf2xla:cpu_function_runtime_test PASSED in 0.3s
//tensorflow/compiler/tf2xla:functionalize_cond_test PASSED in 0.7s
//tensorflow/compiler/tf2xla:functionalize_control_flow_test PASSED in 0.7s
//tensorflow/compiler/tf2xla:fused_batchnorm_reserve_space_test_cpu PASSED in 30.0s
//tensorflow/compiler/tf2xla:graph_compiler_test PASSED in 5.1s
//tensorflow/compiler/tf2xla:literal_util_test PASSED in 1.0s
//tensorflow/compiler/tf2xla:resource_operation_table_test PASSED in 7.0s
//tensorflow/compiler/tf2xla:resource_util_test_cpu PASSED in 2.6s
//tensorflow/compiler/tf2xla:sharding_util_test PASSED in 0.8s
//tensorflow/compiler/tf2xla:tf2xla_test PASSED in 17.0s
//tensorflow/compiler/tf2xla:tf2xla_util_test PASSED in 1.4s
//tensorflow/compiler/tf2xla:xla_compiler_test PASSED in 15.9s
//tensorflow/compiler/tf2xla:xla_jit_compiled_cpu_function_test PASSED in 12.7s
//tensorflow/compiler/tf2xla:xla_op_registry_test PASSED in 8.3s
//tensorflow/compiler/tf2xla/kernels:rng_converter_utils_test PASSED in 2.1s
//tensorflow/compiler/xla:array2d_test PASSED in 0.4s
//tensorflow/compiler/xla:array3d_test PASSED in 0.1s
//tensorflow/compiler/xla:array4d_test PASSED in 0.3s
//tensorflow/compiler/xla:array_test PASSED in 0.1s
//tensorflow/compiler/xla:bit_cast_test PASSED in 0.2s
//tensorflow/compiler/xla:comparison_util_test PASSED in 0.1s
//tensorflow/compiler/xla:debug_options_parsers_test PASSED in 0.4s
//tensorflow/compiler/xla:index_util_test PASSED in 0.7s
//tensorflow/compiler/xla:iterator_util_test PASSED in 0.3s
//tensorflow/compiler/xla:layout_test PASSED in 0.6s
//tensorflow/compiler/xla:layout_util_test PASSED in 0.2s
//tensorflow/compiler/xla:literal_test PASSED in 0.2s
//tensorflow/compiler/xla:parse_flags_from_env_test PASSED in 0.4s
//tensorflow/compiler/xla:permutation_util_test PASSED in 0.7s
//tensorflow/compiler/xla:primitive_util_test PASSED in 0.2s
//tensorflow/compiler/xla:refcounting_hash_map_test PASSED in 0.5s
//tensorflow/compiler/xla:reference_util_test PASSED in 0.3s
//tensorflow/compiler/xla:shape_test PASSED in 0.1s
//tensorflow/compiler/xla:shape_tree_test PASSED in 0.7s
//tensorflow/compiler/xla:shape_util_test PASSED in 3.2s
//tensorflow/compiler/xla:status_macros_test PASSED in 0.7s
//tensorflow/compiler/xla:text_literal_reader_test PASSED in 0.2s
//tensorflow/compiler/xla:text_literal_writer_test PASSED in 0.2s
//tensorflow/compiler/xla:types_test PASSED in 0.1s
//tensorflow/compiler/xla:util_test PASSED in 0.1s
//tensorflow/compiler/xla:window_util_test PASSED in 0.2s
//tensorflow/compiler/xla/client:padding_test PASSED in 0.2s
//tensorflow/compiler/xla/client:xla_builder_test PASSED in 0.5s
//tensorflow/compiler/xla/client/lib:arithmetic_test_cpu PASSED in 9.5s
//tensorflow/compiler/xla/client/lib:comparators_test_cpu PASSED in 11.8s
//tensorflow/compiler/xla/client/lib:constants_test_cpu PASSED in 8.5s
//tensorflow/compiler/xla/client/lib:logdet_test_cpu PASSED in 9.5s
//tensorflow/compiler/xla/client/lib:math_test_cpu PASSED in 16.4s
//tensorflow/compiler/xla/client/lib:matrix_test_cpu PASSED in 12.9s
//tensorflow/compiler/xla/client/lib:pooling_test_cpu PASSED in 9.1s
//tensorflow/compiler/xla/client/lib:qr_test_cpu PASSED in 15.3s
//tensorflow/compiler/xla/client/lib:slicing_test_cpu PASSED in 8.5s
//tensorflow/compiler/xla/client/lib:sorting_test_cpu PASSED in 9.4s
//tensorflow/compiler/xla/examples/axpy:stablehlo_compile_test PASSED in 10.1s
//tensorflow/compiler/xla/experimental/conv_emitter:conv_emitter_test PASSED in 1.5s
//tensorflow/compiler/xla/hlo/evaluator:hlo_evaluator_test PASSED in 6.5s
//tensorflow/compiler/xla/hlo/transforms:hlo_constant_splitter_test PASSED in 1.0s
//tensorflow/compiler/xla/hlo/utils:hlo_live_range_test PASSED in 0.9s
//tensorflow/compiler/xla/hlo/utils:hlo_matchers_test PASSED in 1.4s
//tensorflow/compiler/xla/hlo/utils:hlo_sharding_util_test PASSED in 0.3s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:collective_ops.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:collective_ops_to_cpu_runtime.mlir.test PASSED in 1.1s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:fft.mlir.test PASSED in 1.5s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:legalize_i1_vector_transfers.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:lmhlo_custom_call.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:lmhlo_infeed.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:remove_copies_to_out_params.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:rng_bit_generator.mlir.test PASSED in 1.0s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:xla_abi_legalization.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:xla_cpu_memref_element_cast_to_llvm.mlir.test PASSED in 1.4s
//tensorflow/compiler/xla/mlir/backends/cpu/transforms/tests:xla_cpu_outfeed.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:add_hlo_trace.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:gpu_launch.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:gpu_memcpy.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:gpu_memset.mlir.test PASSED in 1.5s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_case.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_custom_call.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_fft.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_gpu_cholesky.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_gpu_conv.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_gpu_cublas_lt_matmul.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_gpu_gemm.mlir.test PASSED in 1.2s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_infeed.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_outfeed.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_send_recv.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:lmhlo_while.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:memref_get_global_to_arg.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/backends/gpu/transforms/tests:outline_cuda_graphs.mlir.test PASSED in 1.0s
//tensorflow/compiler/xla/mlir/framework/tests:legalize-xla-framework.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir/framework/tests:outline-with-xla-framework.mlir.test PASSED in 1.1s
//tensorflow/compiler/xla/mlir/framework/tests:xla-framework.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/math/transforms/tests:math_optimization.mlir.test PASSED in 1.5s
//tensorflow/compiler/xla/mlir/memref/transforms/tests:aligned_allocations.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/runtime/ir/tests:ops.mlir.test PASSED in 1.0s
//tensorflow/compiler/xla/mlir/runtime/ir/tests:ops_verify.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir/runtime/ir/tests:testlib.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir/runtime/transforms:calling_convention_test PASSED in 0.4s
//tensorflow/compiler/xla/mlir/runtime/transforms:type_converter_test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/runtime/transforms/tests:compilation_pipeline.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir/runtime/transforms/tests:convert_asserts.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/runtime/transforms/tests:convert_custom_calls.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir/runtime/transforms/tests:export_functions.mlir.test PASSED in 1.4s
//tensorflow/compiler/xla/mlir/runtime/transforms/tests:ordinal_assignment.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir/runtime/transforms/tests:rt_to_llvm.mlir.test PASSED in 1.1s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:erase-op-without-results.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:inline-scf-while.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:reduce-scf-forall-bounds.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:replace-op-with-constant.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:replace-op-with-value.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:replace-operand-with-constant.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:return-operands-of-terminator-operands.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/rewrites/tests:truncate-function.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/tests:bisect.mlir.test PASSED in 1.1s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/tests:no-bug.mlir.test PASSED in 2.6s
//tensorflow/compiler/xla/mlir/tools/mlir_bisect/tests:snapshot.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/tools/mlir_replay/public:execution_trace_utils_test PASSED in 0.3s
//tensorflow/compiler/xla/mlir/utils:error_util_test PASSED in 0.1s
//tensorflow/compiler/xla/mlir/xla_cpu/tests:bufferize.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir/xla_cpu/tests:invalid.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir/xla_cpu/tests:ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/chlo/chlo_legalize_to_hlo_broadcasts.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/chlo/chlo_legalize_to_hlo_no_broadcasts.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/chlo/chlo_legalize_to_mhlo.mlir.test PASSED in 1.3s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/chlo/sparse_chlo_legalize_to_linalg.mlir.test PASSED in 1.2s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/buffer_reuse.mlir.test PASSED in 1.3s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/canonicalize.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/convert_deallocation_ops_to_llvm.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/deallocate.mlir.test PASSED in 2.2s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/deallocation_ops.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/deallocation_to_scf.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/deallocation/split_alloc_tensors.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/add_debug_info.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/bufferization.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/collapse-shape.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/collect_stats.mlir.test PASSED in 1.0s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/compose_extract_insert_slice.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/conv_2d_nhwc_hwcf.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/dot.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/duplicate_fusions.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/fibonacci.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/fusion_outlining.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/fusion_planning_for_cpu.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/inline_fusion_clusters.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/map_bcast_map.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/map_matmul.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/map_reduce.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/map_reduce_map.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/map_reshape_map.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/matmul.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/reduce_1d.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/reduce_2d.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/reduce_window.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/reverse.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/scatter.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/sort.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/cpu_tiling/transpose.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/greedy_fusion.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/invalid.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/lower_vectors.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/nested_tiling_softmax.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/rewrite_forall_to_for.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/simplify_dead_copy.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/tile_by_one.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/tiling_softmax.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/vectorize_copy.mlir.test PASSED in 1.2s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/gml_st/vectorize_for_cpu.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/lhlo-legalize-select-and-scatter.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/lhlo-legalize-to-affine.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/lhlo-legalize-to-gpu.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/lhlo-legalize-to-parallel-loops.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/lhlo-legalize-to-tensor-op.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo/ops.mlir.test PASSED in 2.3s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/lhlo_gpu/lhlo_gpu_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/attrs.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/broadcast_propagation.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/bitcast.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/canonicalize.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/concatenate.mlir.test PASSED in 1.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/convert.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/convolution.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/custom_call.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/folder_limit.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/reduce.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/reshape.mlir.test PASSED in 1.0s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/reverse.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/scatter.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/transpose.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/tuple.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/canonicalize/while.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/constraint_fusion.mlir.test PASSED in 1.2s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/convert_to_signless.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/expand_hlo_tuples.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/expand_ops_simplifier.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/group_reduction_dimensions.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-collapse-elementwise-map.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-einsum-to-dot-general.mlir.test PASSED in 1.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-gather-to-torch-index-select.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-rng-to-linalg.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-shape-ops-to-standard.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-sort.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-arithmetic.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-lhlo-only-dynamic.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-lhlo-unranked.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-lhlo.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-linalg.mlir.test PASSED in 3.0s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-memref-unranked.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-memref.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-stablehlo-experimental.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/hlo-legalize-to-stablehlo.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/inlining.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/invalid.mlir.test PASSED in 1.2s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/legalize-control-flow.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/legalize-hlo-shape-computations.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/legalize-mhlo-to-thlo.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/legalize-to-std.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/lower-complex.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/lower-general-dot.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/materialize-broadcasts.mlir.test PASSED in 1.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/merge_assuming_ops.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_bytecode_customizations.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_canonicalize_dot.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_canonicalize_gather.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_canonicalize_reduction.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_canonicalize_scatter.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_flatten_tuple.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_infer_shape_type_methods.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_ops_prettyprint.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/mhlo_reduce_pretty_print.mlir.test PASSED in 2.1s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/ops.mlir.test PASSED in 1.3s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/optimize-hlo.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/prepare-for-export.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/reify-result-types.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/restrict_max_rank.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/shape_legalize_to_hlo.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/shape_reification.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sink-constants-to-control-flow.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sparse_gendot_lower.mlir.test PASSED in 5.9s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sparse_lower.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sparse_ops.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sparse_rewriting.mlir.test PASSED in 1.3s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/sparse_transpose.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/stablehlo-legalize-to-hlo.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/symbolic-shape-optimization.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/unfuse_batch_norm.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_bounds.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_conv_op.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_reduce_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_reduce_window_op.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_scatter_op.mlir.test PASSED in 1.3s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_select_and_scatter_op.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/verifier_while_op.mlir.test PASSED in 8.3s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/mhlo/while_prettyprint.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/bufferize.mlir.test PASSED in 1.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/canonicalize.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/invalid.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/legalize_sort.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:Dialect/thlo/tiling.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir_hlo/tests:alloc_to_arg.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:assuming-structural-propagation.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:buffer_packing.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:bufferize.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:bufferize_one_shot.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:collapse_parallel_loops_to_1d_pass.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:detensorize_scf_ops.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:index_type_llvm_lowering.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/mlir_hlo/tests:legalize-trigonometric-to-approximation.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:lower_index_cast.mlir.test PASSED in 1.0s
//tensorflow/compiler/xla/mlir_hlo/tests:propagate_static_shapes.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:rank-specialization.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/mlir_hlo/tests:scalarization.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:shape-component-analysis.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/mlir_hlo/tests:shape_simplification.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:test_userange.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/mlir_hlo/tests:tile_loops.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tests:unbufferize.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tests:unroll-loops.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tools/mlir_interpreter/framework/tests:interpreter_value_test PASSED in 0.1s
//tensorflow/compiler/xla/mlir_hlo/tools/mlir_interpreter/framework/tests:tensor_or_memref_test PASSED in 0.1s
//tensorflow/compiler/xla/mlir_hlo/tosa/tests:binary.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tosa/tests:nullary.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tosa/tests:prepare-mhlo.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/mlir_hlo/tosa/tests:ternary.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/mlir_hlo/tosa/tests:unary.mlir.test PASSED in 0.7s
//tensorflow/compiler/xla/pjrt:host_callback_test PASSED in 0.3s
//tensorflow/compiler/xla/pjrt:lru_cache_test PASSED in 0.2s
//tensorflow/compiler/xla/pjrt:pjrt_api_test PASSED in 0.3s
//tensorflow/compiler/xla/pjrt:pjrt_client_test_cpu PASSED in 9.8s
//tensorflow/compiler/xla/pjrt:pjrt_compiler_test PASSED in 0.6s
//tensorflow/compiler/xla/pjrt:pjrt_executable_test PASSED in 0.4s
//tensorflow/compiler/xla/pjrt:pjrt_stream_executor_client_test PASSED in 12.0s
//tensorflow/compiler/xla/pjrt:semaphore_test PASSED in 0.2s
//tensorflow/compiler/xla/pjrt:tf_pjrt_client_test PASSED in 9.9s
//tensorflow/compiler/xla/pjrt:tfrt_cpu_pjrt_client_test PASSED in 7.0s
//tensorflow/compiler/xla/pjrt:tracked_device_buffer_test PASSED in 8.6s
//tensorflow/compiler/xla/pjrt:tracked_tfrt_cpu_device_buffer_test PASSED in 0.2s
//tensorflow/compiler/xla/pjrt:transpose_test PASSED in 62.7s
//tensorflow/compiler/xla/pjrt/c:pjrt_c_api_cpu_test PASSED in 7.8s
//tensorflow/compiler/xla/pjrt/c:pjrt_c_api_helpers_test PASSED in 0.4s
//tensorflow/compiler/xla/pjrt/distributed:client_server_test PASSED in 45.4s
//tensorflow/compiler/xla/pjrt/distributed:service_test PASSED in 6.8s
//tensorflow/compiler/xla/pjrt/gpu:se_gpu_pjrt_client_test PASSED in 3.1s
//tensorflow/compiler/xla/python:outfeed_receiver_test_cpu PASSED in 11.3s
//tensorflow/compiler/xla/python/ifrt:array_test PASSED in 0.3s
//tensorflow/compiler/xla/python/ifrt:array_test_no_impl PASSED in 0.4s
//tensorflow/compiler/xla/python/ifrt:client_test_no_impl PASSED in 0.2s
//tensorflow/compiler/xla/python/ifrt:executable_test_no_impl PASSED in 1.0s
//tensorflow/compiler/xla/python/ifrt:future_test PASSED in 0.2s
//tensorflow/compiler/xla/python/ifrt:index_domain_test PASSED in 0.9s
//tensorflow/compiler/xla/python/ifrt:index_test PASSED in 0.5s
//tensorflow/compiler/xla/python/ifrt:shape_test PASSED in 0.8s
//tensorflow/compiler/xla/python/ifrt:sharding_test PASSED in 0.3s
//tensorflow/compiler/xla/python/ifrt:tuple_test_no_impl PASSED in 0.3s
//tensorflow/compiler/xla/python/pjrt_ifrt:pjrt_array_impl_test_tfrt_cpu PASSED in 15.8s
//tensorflow/compiler/xla/python/pjrt_ifrt:pjrt_client_impl_test_tfrt_cpu PASSED in 7.1s
//tensorflow/compiler/xla/python/pjrt_ifrt:pjrt_executable_impl_test_tfrt_cpu PASSED in 8.3s
//tensorflow/compiler/xla/python/pjrt_ifrt:pjrt_tuple_impl_test_tfrt_cpu PASSED in 7.6s
//tensorflow/compiler/xla/python_api:xla_literal_test PASSED in 0.8s
//tensorflow/compiler/xla/python_api:xla_shape_test PASSED in 1.0s
//tensorflow/compiler/xla/rpc:grpc_client_test PASSED in 1.8s
//tensorflow/compiler/xla/runtime:arguments_test PASSED in 0.2s
//tensorflow/compiler/xla/runtime:async_runtime_test PASSED in 0.2s
//tensorflow/compiler/xla/runtime:custom_call_test PASSED in 1.7s
//tensorflow/compiler/xla/runtime:diagnostics_test PASSED in 0.1s
//tensorflow/compiler/xla/runtime:executable_test PASSED in 1.6s
//tensorflow/compiler/xla/runtime:ffi_test PASSED in 1.1s
//tensorflow/compiler/xla/runtime:map_by_type_test PASSED in 0.4s
//tensorflow/compiler/xla/runtime:module_test PASSED in 0.2s
//tensorflow/compiler/xla/runtime:results_test PASSED in 0.2s
//tensorflow/compiler/xla/runtime:state_test PASSED in 0.1s
//tensorflow/compiler/xla/runtime:symbolic_shape_test PASSED in 0.3s
//tensorflow/compiler/xla/runtime:type_id_test PASSED in 0.2s
//tensorflow/compiler/xla/service:algebraic_simplifier_overflow_test_cpu PASSED in 8.4s
//tensorflow/compiler/xla/service:algebraic_simplifier_test PASSED in 2.6s
//tensorflow/compiler/xla/service:all_gather_broadcast_reorder_test PASSED in 1.6s
//tensorflow/compiler/xla/service:all_gather_combiner_test PASSED in 0.8s
//tensorflow/compiler/xla/service:all_gather_decomposer_test PASSED in 1.4s
//tensorflow/compiler/xla/service:all_reduce_combiner_test PASSED in 1.5s
//tensorflow/compiler/xla/service:all_reduce_contiguous_test PASSED in 0.9s
//tensorflow/compiler/xla/service:all_reduce_folder_test PASSED in 1.3s
//tensorflow/compiler/xla/service:all_reduce_promotion_test PASSED in 4.0s
//tensorflow/compiler/xla/service:all_reduce_reassociate_test PASSED in 1.1s
//tensorflow/compiler/xla/service:all_reduce_simplifier_test PASSED in 1.2s
//tensorflow/compiler/xla/service:ar_crs_combiner_test PASSED in 1.0s
//tensorflow/compiler/xla/service:async_collective_creator_test PASSED in 1.4s
//tensorflow/compiler/xla/service:async_op_canonicalizer_test PASSED in 1.9s
//tensorflow/compiler/xla/service:batch_dot_simplification_test PASSED in 2.2s
//tensorflow/compiler/xla/service:batchnorm_expander_test_cpu PASSED in 7.9s
//tensorflow/compiler/xla/service:bfloat16_conversion_folding_test PASSED in 1.3s
//tensorflow/compiler/xla/service:bfloat16_propagation_test PASSED in 1.5s
//tensorflow/compiler/xla/service:bitcast_dtypes_expander_test PASSED in 1.4s
//tensorflow/compiler/xla/service:broadcast_canonicalizer_test PASSED in 1.2s
//tensorflow/compiler/xla/service:buffer_assignment_test PASSED in 11.2s
//tensorflow/compiler/xla/service:call_graph_test PASSED in 0.9s
//tensorflow/compiler/xla/service:call_inliner_test PASSED in 2.5s
//tensorflow/compiler/xla/service:change_op_data_type_test PASSED in 0.9s
//tensorflow/compiler/xla/service:collective_ops_utils_test PASSED in 0.3s
//tensorflow/compiler/xla/service:collectives_schedule_linearizer_test PASSED in 1.1s
//tensorflow/compiler/xla/service:compilation_environments_test PASSED in 0.1s
//tensorflow/compiler/xla/service:conditional_canonicalizer_test PASSED in 1.1s
//tensorflow/compiler/xla/service:conditional_code_motion_test PASSED in 2.1s
//tensorflow/compiler/xla/service:conditional_simplifier_test PASSED in 1.2s
//tensorflow/compiler/xla/service:conditional_to_select_test PASSED in 1.7s
//tensorflow/compiler/xla/service:convert_async_collectives_to_sync_test PASSED in 1.3s
//tensorflow/compiler/xla/service:convert_mover_test PASSED in 1.2s
//tensorflow/compiler/xla/service:convert_operand_folding_test PASSED in 1.9s
//tensorflow/compiler/xla/service:convolution_4d_expander_test PASSED in 1.4s
//tensorflow/compiler/xla/service:convolution_group_converter_test PASSED in 1.2s
//tensorflow/compiler/xla/service:convolution_pred_expander_test PASSED in 1.0s
//tensorflow/compiler/xla/service:copy_insertion_test PASSED in 1.1s
//tensorflow/compiler/xla/service:custom_call_status_test PASSED in 0.3s
//tensorflow/compiler/xla/service:defuser_test PASSED in 1.4s
//tensorflow/compiler/xla/service:despecializer_test PASSED in 1.0s
//tensorflow/compiler/xla/service:dfs_hlo_visitor_with_default_test PASSED in 0.9s
//tensorflow/compiler/xla/service:dot_decomposer_test PASSED in 2.0s
//tensorflow/compiler/xla/service:dot_merger_test PASSED in 1.2s
//tensorflow/compiler/xla/service:dynamic_dimension_inference_test PASSED in 1.3s
//tensorflow/compiler/xla/service:dynamic_dimension_simplifier_test PASSED in 1.2s
//tensorflow/compiler/xla/service:dynamic_index_splitter_test PASSED in 0.9s
//tensorflow/compiler/xla/service:dynamic_padder_test_cpu PASSED in 15.7s
//tensorflow/compiler/xla/service:dynamic_parameter_binding_test PASSED in 1.0s
//tensorflow/compiler/xla/service:dynamic_update_slice_test_cpu PASSED in 9.4s
//tensorflow/compiler/xla/service:elemental_ir_emitter_test_cpu PASSED in 9.8s
//tensorflow/compiler/xla/service:flatten_call_graph_test PASSED in 1.2s
//tensorflow/compiler/xla/service:float_normalization_test PASSED in 0.9s
//tensorflow/compiler/xla/service:fusion_node_indexing_evaluation_test PASSED in 2.3s
//tensorflow/compiler/xla/service:gather_expander_test PASSED in 1.1s
//tensorflow/compiler/xla/service:gather_simplifier_test PASSED in 1.6s
//tensorflow/compiler/xla/service:heap_simulator_test PASSED in 1.4s
//tensorflow/compiler/xla/service:hlo_activation_analysis_test PASSED in 0.8s
//tensorflow/compiler/xla/service:hlo_alias_analysis_test PASSED in 2.8s
//tensorflow/compiler/xla/service:hlo_casting_utils_test PASSED in 8.8s
//tensorflow/compiler/xla/service:hlo_computation_deduplicator_test PASSED in 1.0s
//tensorflow/compiler/xla/service:hlo_computation_test PASSED in 4.8s
//tensorflow/compiler/xla/service:hlo_constant_folding_test PASSED in 5.7s
//tensorflow/compiler/xla/service:hlo_cost_analysis_test PASSED in 8.5s
//tensorflow/compiler/xla/service:hlo_creation_utils_test PASSED in 4.9s
//tensorflow/compiler/xla/service:hlo_cse_test PASSED in 11.8s
//tensorflow/compiler/xla/service:hlo_dataflow_analysis_test PASSED in 1.7s
//tensorflow/compiler/xla/service:hlo_dce_test PASSED in 1.5s
//tensorflow/compiler/xla/service:hlo_domain_test PASSED in 1.2s
//tensorflow/compiler/xla/service:hlo_element_type_converter_test PASSED in 1.4s
//tensorflow/compiler/xla/service:hlo_execution_profile_test PASSED in 8.2s
//tensorflow/compiler/xla/service:hlo_graph_dumper_test PASSED in 1.9s
//tensorflow/compiler/xla/service:hlo_input_output_alias_config_test PASSED in 1.2s
//tensorflow/compiler/xla/service:hlo_instruction_test PASSED in 1.4s
//tensorflow/compiler/xla/service:hlo_liveness_analysis_test PASSED in 1.2s
//tensorflow/compiler/xla/service:hlo_memory_scheduler_test PASSED in 1.8s
//tensorflow/compiler/xla/service:hlo_module_dce_test PASSED in 2.2s
//tensorflow/compiler/xla/service:hlo_module_metadata_test PASSED in 0.3s
//tensorflow/compiler/xla/service:hlo_module_test PASSED in 1.3s
//tensorflow/compiler/xla/service:hlo_opcode_test PASSED in 0.3s
//tensorflow/compiler/xla/service:hlo_ordering_test PASSED in 1.0s
//tensorflow/compiler/xla/service:hlo_parser_test PASSED in 0.7s
//tensorflow/compiler/xla/service:hlo_pass_pipeline_test PASSED in 1.4s
//tensorflow/compiler/xla/service:hlo_phi_graph_test PASSED in 0.2s
//tensorflow/compiler/xla/service:hlo_proto_util_test PASSED in 1.3s
//tensorflow/compiler/xla/service:hlo_reachability_test PASSED in 1.3s
//tensorflow/compiler/xla/service:hlo_rematerialization_test PASSED in 1.5s
//tensorflow/compiler/xla/service:hlo_rematerialization_test_utils_test PASSED in 1.4s
//tensorflow/compiler/xla/service:hlo_replication_analysis_test PASSED in 1.3s
//tensorflow/compiler/xla/service:hlo_schedule_test PASSED in 1.7s
//tensorflow/compiler/xla/service:hlo_sharding_test PASSED in 1.2s
//tensorflow/compiler/xla/service:hlo_verifier_test PASSED in 4.6s
//tensorflow/compiler/xla/service:indexed_array_analysis_test PASSED in 0.9s
//tensorflow/compiler/xla/service:instruction_fusion_test PASSED in 3.3s
//tensorflow/compiler/xla/service:latency_hiding_scheduler_test PASSED in 1.3s
//tensorflow/compiler/xla/service:layout_assignment_test PASSED in 8.3s
//tensorflow/compiler/xla/service:layout_normalization_test PASSED in 5.4s
//tensorflow/compiler/xla/service:logistic_expander_test PASSED in 0.9s
//tensorflow/compiler/xla/service:loop_schedule_linearizer_test PASSED in 1.1s
//tensorflow/compiler/xla/service:map_inliner_test PASSED in 1.7s
//tensorflow/compiler/xla/service:mapped_ptr_container_sorter_test PASSED in 0.1s
//tensorflow/compiler/xla/service:memory_space_assignment_best_fit_repacker_test PASSED in 0.3s
//tensorflow/compiler/xla/service:memory_space_assignment_test PASSED in 6.9s
//tensorflow/compiler/xla/service:memory_space_propagation_test PASSED in 1.1s
//tensorflow/compiler/xla/service:name_uniquer_test PASSED in 0.1s
//tensorflow/compiler/xla/service:operand_upcaster_test PASSED in 1.3s
//tensorflow/compiler/xla/service:optimize_input_output_buffer_alias_test PASSED in 1.3s
//tensorflow/compiler/xla/service:pattern_matcher_gmock_test PASSED in 0.2s
//tensorflow/compiler/xla/service:pattern_matcher_test PASSED in 1.0s
//tensorflow/compiler/xla/service:profile_guided_latency_estimator_test PASSED in 1.5s
//tensorflow/compiler/xla/service:real_imag_expander_test PASSED in 1.1s
//tensorflow/compiler/xla/service:reduce_decomposer_test PASSED in 1.6s
//tensorflow/compiler/xla/service:reduce_scatter_combiner_test PASSED in 1.5s
//tensorflow/compiler/xla/service:reduce_scatter_decomposer_test PASSED in 1.6s
//tensorflow/compiler/xla/service:reduce_scatter_reassociate_test PASSED in 1.0s
//tensorflow/compiler/xla/service:reshape_decomposer_test PASSED in 1.3s
//tensorflow/compiler/xla/service:reshape_mover_test PASSED in 0.8s
//tensorflow/compiler/xla/service:result_caster_test PASSED in 1.3s
//tensorflow/compiler/xla/service:root_instruction_sinker_test PASSED in 1.5s
//tensorflow/compiler/xla/service:scatter_expander_test PASSED in 1.1s
//tensorflow/compiler/xla/service:scatter_simplifier_test PASSED in 1.9s
//tensorflow/compiler/xla/service:select_and_scatter_expander_test PASSED in 0.9s
//tensorflow/compiler/xla/service:shape_inference_test PASSED in 0.2s
//tensorflow/compiler/xla/service:shaped_buffer_test PASSED in 8.6s
//tensorflow/compiler/xla/service:sharding_propagation_test PASSED in 5.5s
//tensorflow/compiler/xla/service:sharding_remover_test PASSED in 2.3s
//tensorflow/compiler/xla/service:simplify_fp_conversions_test PASSED in 0.9s
//tensorflow/compiler/xla/service:slice_sinker_test PASSED in 2.2s
//tensorflow/compiler/xla/service:sort_simplifier_test PASSED in 1.4s
//tensorflow/compiler/xla/service:space_to_batch_converter_test PASSED in 1.0s
//tensorflow/compiler/xla/service:stable_sort_expander_test PASSED in 0.9s
//tensorflow/compiler/xla/service:stochastic_convert_decomposer_test PASSED in 0.9s
//tensorflow/compiler/xla/service:stream_pool_test PASSED in 0.3s
//tensorflow/compiler/xla/service:topk_rewriter_test PASSED in 4.8s
//tensorflow/compiler/xla/service:transpose_folding_test PASSED in 2.0s
//tensorflow/compiler/xla/service:tuple_points_to_analysis_test PASSED in 1.9s
//tensorflow/compiler/xla/service:tuple_simplifier_test PASSED in 1.0s
//tensorflow/compiler/xla/service:tuple_util_test PASSED in 1.6s
//tensorflow/compiler/xla/service:while_loop_all_reduce_code_motion_test PASSED in 1.4s
//tensorflow/compiler/xla/service:while_loop_analysis_test PASSED in 0.8s
//tensorflow/compiler/xla/service:while_loop_concat_code_motion_test PASSED in 0.8s
//tensorflow/compiler/xla/service:while_loop_constant_sinking_test PASSED in 1.2s
//tensorflow/compiler/xla/service:while_loop_expensive_invariant_code_motion_test PASSED in 1.0s
//tensorflow/compiler/xla/service:while_loop_invariant_code_motion_test PASSED in 1.0s
//tensorflow/compiler/xla/service:while_loop_simplifier_test PASSED in 1.2s
//tensorflow/compiler/xla/service:while_loop_trip_count_annotator_test PASSED in 1.0s
//tensorflow/compiler/xla/service:while_util_test PASSED in 2.6s
//tensorflow/compiler/xla/service:xla_aot_compile_stablehlo_cpu_test PASSED in 7.8s
//tensorflow/compiler/xla/service:xla_debug_info_manager_test PASSED in 1.1s
//tensorflow/compiler/xla/service:zero_sized_hlo_elimination_test PASSED in 1.2s
//tensorflow/compiler/xla/service/cpu:conv_canonicalization_test PASSED in 2.2s
//tensorflow/compiler/xla/service/cpu:cpu_eigen_tensor_alignment_test PASSED in 2.4s
//tensorflow/compiler/xla/service/cpu:cpu_instruction_fusion_test PASSED in 1.4s
//tensorflow/compiler/xla/service/cpu:cpu_layout_assignment_test PASSED in 2.7s
//tensorflow/compiler/xla/service/cpu:ir_emission_utils_test PASSED in 2.5s
//tensorflow/compiler/xla/service/cpu:parallel_task_assignment_test PASSED in 3.3s
//tensorflow/compiler/xla/service/cpu:runtime_fft_test PASSED in 0.2s
//tensorflow/compiler/xla/service/cpu:shape_partition_test PASSED in 1.4s
//tensorflow/compiler/xla/service/cpu:xfeed_manager_test PASSED in 1.2s
//tensorflow/compiler/xla/service/cpu/tests:cpu_bytesizeof_test PASSED in 0.4s
//tensorflow/compiler/xla/service/cpu/tests:cpu_dyn_shape_test PASSED in 8.1s
//tensorflow/compiler/xla/service/cpu/tests:cpu_eigen_dot_operation_test PASSED in 9.0s
//tensorflow/compiler/xla/service/cpu/tests:cpu_external_constants_test PASSED in 27.5s
//tensorflow/compiler/xla/service/cpu/tests:cpu_fusion_test PASSED in 8.2s
//tensorflow/compiler/xla/service/cpu/tests:cpu_infeed_test PASSED in 9.7s
//tensorflow/compiler/xla/service/cpu/tests:cpu_intrinsic_test PASSED in 11.8s
//tensorflow/compiler/xla/service/cpu/tests:cpu_key_value_sort_test PASSED in 6.9s
//tensorflow/compiler/xla/service/cpu/tests:cpu_literal_caching_test PASSED in 8.2s
//tensorflow/compiler/xla/service/cpu/tests:cpu_noalias_test PASSED in 11.0s
//tensorflow/compiler/xla/service/cpu/tests:cpu_outfeed_test PASSED in 9.8s
//tensorflow/compiler/xla/service/cpu/tests:cpu_profiling_test PASSED in 10.4s
//tensorflow/compiler/xla/service/cpu/tests:cpu_spmd_compile_test PASSED in 9.5s
//tensorflow/compiler/xla/service/cpu/tests:cpu_topk_test PASSED in 10.6s
//tensorflow/compiler/xla/service/cpu/tests:cpu_vectorization_test PASSED in 12.7s
//tensorflow/compiler/xla/service/cpu/tests:cpu_while_test PASSED in 9.1s
//tensorflow/compiler/xla/service/cpu/tests:tree_reduction_rewriter_test PASSED in 10.1s
//tensorflow/compiler/xla/service/gpu:alias_passthrough_params_test PASSED in 1.2s
//tensorflow/compiler/xla/service/gpu:all_reduce_blueconnect_test PASSED in 2.2s
//tensorflow/compiler/xla/service/gpu:cublas_pad_for_gemms_test PASSED in 3.1s
//tensorflow/compiler/xla/service/gpu:cudnn_pad_for_convolutions_test PASSED in 1.7s
//tensorflow/compiler/xla/service/gpu:cudnn_simplify_padding_test PASSED in 1.7s
//tensorflow/compiler/xla/service/gpu:cudnn_support_utils_test PASSED in 1.2s
//tensorflow/compiler/xla/service/gpu:cudnn_vectorize_convolutions_test PASSED in 2.0s
//tensorflow/compiler/xla/service/gpu:fusion_merger_test PASSED in 1.8s
//tensorflow/compiler/xla/service/gpu:gemm_rewriter_triton_test PASSED in 1.8s
//tensorflow/compiler/xla/service/gpu:gpu_conv_padding_legalization_test PASSED in 1.9s
//tensorflow/compiler/xla/service/gpu:gpu_conv_rewriter_test PASSED in 1.3s
//tensorflow/compiler/xla/service/gpu:gpu_fusible_test PASSED in 4.2s
//tensorflow/compiler/xla/service/gpu:gpu_hlo_cost_analysis_test PASSED in 3.0s
//tensorflow/compiler/xla/service/gpu:gpu_performance_model_test PASSED in 1.4s
//tensorflow/compiler/xla/service/gpu:gpu_sanitize_constant_names_test PASSED in 2.1s
//tensorflow/compiler/xla/service/gpu:hlo_algorithm_denylist_test PASSED in 0.2s
//tensorflow/compiler/xla/service/gpu:hlo_fusion_stats_test PASSED in 0.9s
//tensorflow/compiler/xla/service/gpu:instruction_fusion_test PASSED in 1.7s
//tensorflow/compiler/xla/service/gpu:ir_emission_utils_test PASSED in 1.3s
//tensorflow/compiler/xla/service/gpu:matmul_utils_test PASSED in 1.1s
//tensorflow/compiler/xla/service/gpu:move_copy_to_users_test PASSED in 2.0s
//tensorflow/compiler/xla/service/gpu:multi_output_fusion_test PASSED in 2.4s
//tensorflow/compiler/xla/service/gpu:non_atomically_upgradeable_rw_lock_test PASSED in 0.1s
//tensorflow/compiler/xla/service/gpu:reduction_splitter_test PASSED in 1.5s
//tensorflow/compiler/xla/service/gpu:scatter_slice_simplifier_test PASSED in 1.1s
//tensorflow/compiler/xla/service/gpu:target_util_test PASSED in 0.5s
//tensorflow/compiler/xla/service/gpu:variadic_op_splitter_test PASSED in 2.2s
//tensorflow/compiler/xla/service/gpu:while_transformer_test PASSED in 1.9s
//tensorflow/compiler/xla/service/gpu/llvm_gpu_backend:utils_test PASSED in 0.4s
//tensorflow/compiler/xla/service/gpu/tests:gpu_reduce_scatter_creator_test PASSED in 1.5s
//tensorflow/compiler/xla/service/gpu/tests:reduction_degenerate_dim_remover_test PASSED in 2.5s
//tensorflow/compiler/xla/service/gpu/tests:reduction_dimension_grouper_test PASSED in 1.8s
//tensorflow/compiler/xla/service/gpu/tests:tree_reduction_rewriter_test PASSED in 2.1s
//tensorflow/compiler/xla/service/graphcycles:graphcycles_test PASSED in 1.4s
//tensorflow/compiler/xla/service/graphcycles:ordered_set_test PASSED in 0.1s
//tensorflow/compiler/xla/service/llvm_ir:alias_analysis_test PASSED in 10.3s
//tensorflow/compiler/xla/service/llvm_ir:ir_array_test PASSED in 0.6s
//tensorflow/compiler/xla/service/spmd:canonicalize_all_gather_for_cse_test PASSED in 2.0s
//tensorflow/compiler/xla/service/spmd:collective_permute_motion_test PASSED in 0.9s
//tensorflow/compiler/xla/service/spmd:partition_assignment_test PASSED in 0.8s
//tensorflow/compiler/xla/service/spmd:schedule_aware_collective_ops_cse_test PASSED in 2.1s
//tensorflow/compiler/xla/service/spmd:spmd_partitioner_test PASSED in 4.8s
//tensorflow/compiler/xla/service/spmd:stateful_rng_spmd_partitioner_test PASSED in 1.0s
//tensorflow/compiler/xla/stream_executor:dnn_test PASSED in 0.2s
//tensorflow/compiler/xla/stream_executor:stream_test PASSED in 0.5s
//tensorflow/compiler/xla/stream_executor/host:host_stream_test PASSED in 0.2s
//tensorflow/compiler/xla/tests:all_reduce_test_cpu PASSED in 10.1s
//tensorflow/compiler/xla/tests:axpy_simple_test_cpu PASSED in 7.1s
//tensorflow/compiler/xla/tests:bad_rng_shape_validation_test_cpu PASSED in 7.3s
//tensorflow/compiler/xla/tests:binop_scaling_test_cpu PASSED in 8.9s
//tensorflow/compiler/xla/tests:bitcast_convert_test_cpu PASSED in 8.6s
//tensorflow/compiler/xla/tests:broadcast_simple_test_cpu PASSED in 11.4s
//tensorflow/compiler/xla/tests:broadcast_test_cpu PASSED in 10.0s
//tensorflow/compiler/xla/tests:buffer_donation_test_cpu PASSED in 10.3s
//tensorflow/compiler/xla/tests:call_test_cpu PASSED in 9.3s
//tensorflow/compiler/xla/tests:check_execution_arity_test_cpu PASSED in 8.4s
//tensorflow/compiler/xla/tests:cholesky_test_cpu PASSED in 17.9s
//tensorflow/compiler/xla/tests:client_test_cpu PASSED in 8.4s
//tensorflow/compiler/xla/tests:collective_ops_test_cpu PASSED in 19.3s
//tensorflow/compiler/xla/tests:compilation_cache_test_cpu PASSED in 11.5s
//tensorflow/compiler/xla/tests:compute_constant_test_cpu PASSED in 8.3s
//tensorflow/compiler/xla/tests:concat_test_cpu PASSED in 8.4s
//tensorflow/compiler/xla/tests:constant_reduction_function_test_cpu PASSED in 8.3s
//tensorflow/compiler/xla/tests:constants_test_cpu PASSED in 7.8s
//tensorflow/compiler/xla/tests:convert_test_cpu PASSED in 9.3s
//tensorflow/compiler/xla/tests:copy_test_cpu PASSED in 14.5s
//tensorflow/compiler/xla/tests:cpu_gpu_fusion_test_cpu PASSED in 21.1s
//tensorflow/compiler/xla/tests:custom_call_test_cpu PASSED in 8.4s
//tensorflow/compiler/xla/tests:deallocation_test_cpu PASSED in 10.8s
//tensorflow/compiler/xla/tests:deconstruct_tuple_test_cpu PASSED in 8.4s
//tensorflow/compiler/xla/tests:deep_graph_test_cpu PASSED in 10.0s
//tensorflow/compiler/xla/tests:execution_profile_test_cpu PASSED in 7.7s
//tensorflow/compiler/xla/tests:fft_test_cpu PASSED in 8.2s
//tensorflow/compiler/xla/tests:float8_test_cpu PASSED in 8.5s
//tensorflow/compiler/xla/tests:floor_ceil_test_cpu PASSED in 8.4s
//tensorflow/compiler/xla/tests:fmax_fmin_test_cpu PASSED in 7.8s
//tensorflow/compiler/xla/tests:gather_operation_test_cpu PASSED in 21.3s
//tensorflow/compiler/xla/tests:get_dimension_size_test_cpu PASSED in 8.0s
//tensorflow/compiler/xla/tests:half_test_cpu PASSED in 11.8s
//tensorflow/compiler/xla/tests:hlo_metadata_test PASSED in 8.7s
//tensorflow/compiler/xla/tests:literal_test_util_test PASSED in 6.0s
//tensorflow/compiler/xla/tests:local_client_allocation_test_cpu PASSED in 9.4s
//tensorflow/compiler/xla/tests:local_client_aot_test PASSED in 0.0s
//tensorflow/compiler/xla/tests:log_test_cpu PASSED in 11.2s
//tensorflow/compiler/xla/tests:map_test_cpu PASSED in 10.0s
//tensorflow/compiler/xla/tests:matrix_ops_simple_test_cpu PASSED in 17.5s
//tensorflow/compiler/xla/tests:multidimensional_slice_test_cpu PASSED in 9.4s
//tensorflow/compiler/xla/tests:multiple_devices_on_host_test PASSED in 9.3s
//tensorflow/compiler/xla/tests:multithreaded_compilation_test_cpu PASSED in 9.8s
//tensorflow/compiler/xla/tests:outfeed_in_nested_computation_test_cpu PASSED in 8.0s
//tensorflow/compiler/xla/tests:pad_test_cpu PASSED in 11.9s
//tensorflow/compiler/xla/tests:pred_test_cpu PASSED in 9.5s
//tensorflow/compiler/xla/tests:query_inferred_shape_test_cpu PASSED in 8.0s
//tensorflow/compiler/xla/tests:reduce_hlo_test_cpu PASSED in 9.3s
//tensorflow/compiler/xla/tests:reduce_precision_test_cpu PASSED in 8.6s
//tensorflow/compiler/xla/tests:replay_test_cpu PASSED in 8.2s
//tensorflow/compiler/xla/tests:reshape_motion_test_cpu PASSED in 7.9s
//tensorflow/compiler/xla/tests:reverse_test_cpu PASSED in 7.8s
//tensorflow/compiler/xla/tests:round_trip_packed_literal_test_cpu PASSED in 8.3s
//tensorflow/compiler/xla/tests:round_trip_transfer_test_cpu PASSED in 10.7s
//tensorflow/compiler/xla/tests:sample_text_test_cpu PASSED in 9.4s
//tensorflow/compiler/xla/tests:scatter_test_cpu PASSED in 13.2s
//tensorflow/compiler/xla/tests:select_test_cpu PASSED in 7.6s
//tensorflow/compiler/xla/tests:test_utils_test_cpu PASSED in 8.4s
//tensorflow/compiler/xla/tests:token_hlo_test_cpu PASSED in 10.0s
//tensorflow/compiler/xla/tests:transfer_manager_test_cpu PASSED in 15.8s
//tensorflow/compiler/xla/tests:transpose_test_cpu PASSED in 8.1s
//tensorflow/compiler/xla/tests:tuple_test_cpu PASSED in 7.7s
//tensorflow/compiler/xla/tests:unary_op_test_cpu PASSED in 9.0s
//tensorflow/compiler/xla/tests:value_inference_test_cpu PASSED in 9.9s
//tensorflow/compiler/xla/tests:vector_ops_reduce_test_cpu PASSED in 9.1s
//tensorflow/compiler/xla/tests:vector_ops_simple_test_cpu PASSED in 7.4s
//tensorflow/compiler/xla/tests:while_test_cpu PASSED in 9.0s
//tensorflow/compiler/xla/tools:hlo_control_flow_flattening_test PASSED in 2.0s
//tensorflow/compiler/xla/tools:hlo_extractor_test PASSED in 1.1s
//tensorflow/compiler/xla/tools:hlo_module_loader_test PASSED in 1.7s
//tensorflow/compiler/xla/tools:interactive_graphviz_bin_test PASSED in 1.1s
//tensorflow/compiler/xla/tools:run_hlo_module_bin_test PASSED in 0.7s
//tensorflow/compiler/xla/tools/hlo_bisect:hlo_bisect_state_test PASSED in 0.9s
//tensorflow/compiler/xla/translate/hlo_to_mhlo:hlo_utils_test PASSED in 0.5s
//tensorflow/compiler/xla/translate/hlo_to_mhlo:mlir_hlo_builder_test PASSED in 0.8s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:bool_compare.hlotxt.test PASSED in 0.4s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:case_conditional.hlotxt.test PASSED in 0.7s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:dynamic_param.hlo.test PASSED in 0.5s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:entry_computation_layout.hlotxt.test PASSED in 1.0s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:frontend_attributes.hlotxt.test PASSED in 0.5s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:fully_connected_reference_model.hlotxt.test PASSED in 0.8s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:fusion.hlotxt.test PASSED in 1.3s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:if_conditional.hlotxt.test PASSED in 0.7s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:import.hlotxt.test PASSED in 0.7s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:import_async.hlotxt.test PASSED in 0.4s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:layouts_and_names.hlotxt.test PASSED in 1.5s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:location.hlotxt.test PASSED in 0.5s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:module_attributes.hlo.test PASSED in 0.7s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:simple.hlo.test PASSED in 0.7s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:spmd_module_sharding.hlo.test PASSED in 0.9s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:types.hlotxt.test PASSED in 0.5s
//tensorflow/compiler/xla/translate/hlo_to_mhlo/tests:while.hlotxt.test PASSED in 0.9s
//tensorflow/compiler/xla/translate/mhlo_to_hlo:type_to_shape_test PASSED in 1.0s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:add.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:case.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:dynamic.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:export-with-layouts.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:export.mlir.test PASSED in 1.9s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:export_and_check_layouts.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:export_large_constants.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:export_replicas.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:frontend_attributes.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:fusion.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:if.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:input_output_aliasing.mlir.test PASSED in 1.2s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:layouts_and_names.mlir.test PASSED in 1.6s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:location_to_op_metadata.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:missing_main.mlir.test PASSED in 0.5s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:module_attributes.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:multiple_return_tuple.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:opaque_elements_attr.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:rng_get_and_update_state.mlir.test PASSED in 0.6s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:sharding.mlir.test PASSED in 0.8s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:simple.mlir.test PASSED in 0.9s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:unsupported_type.mlir.test PASSED in 6.2s
//tensorflow/compiler/xla/translate/mhlo_to_hlo/tests:while.mlir.test PASSED in 0.4s
//tensorflow/compiler/xla/translate/mhlo_to_lhlo_with_xla/tests:hlo_text_to_lhlo_no_opt.hlotxt.test PASSED in 2.8s
//tensorflow/compiler/xla/translate/mhlo_to_lhlo_with_xla/tests:no_opt_ops.hlotxt.test PASSED in 0.6s
//tensorflow/compiler/xla/translate/mhlo_to_lhlo_with_xla/tests:non_identity_layouts.hlotxt.test PASSED in 0.4s
//tensorflow/compiler/xla/translate/mhlo_to_lhlo_with_xla/tests:ops.mlir.test PASSED in 6.6s
//tensorflow/compiler/xla/translate/mhlo_to_lhlo_with_xla/tests:passthrough.mlir.test PASSED in 0.8s
//tensorflow/core:__tensorflow_core_lib_core_legacy_lib_core_all_tests PASSED in 9.9s
//tensorflow/core:__tensorflow_core_lib_gtl_legacy_lib_gtl_tests PASSED in 0.5s
//tensorflow/core:__tensorflow_core_lib_monitoring_cell_reader_test PASSED in 37.2s
//tensorflow/core:__tensorflow_core_lib_monitoring_collection_registry_test PASSED in 0.2s
//tensorflow/core:__tensorflow_core_lib_monitoring_counter_test PASSED in 0.1s
//tensorflow/core:__tensorflow_core_lib_monitoring_gauge_test PASSED in 0.3s
//tensorflow/core:__tensorflow_core_lib_monitoring_metric_def_test PASSED in 0.1s
//tensorflow/core:__tensorflow_core_lib_monitoring_percentile_sampler_test PASSED in 0.3s
//tensorflow/core:__tensorflow_core_lib_monitoring_sampler_test PASSED in 0.2s
//tensorflow/core:__tensorflow_core_lib_monitoring_test_utils_test PASSED in 0.1s
//tensorflow/core:__tensorflow_core_lib_strings_legacy_low_level_library_tests PASSED in 0.1s
//tensorflow/core:__tensorflow_core_lib_wav_wav_io_test PASSED in 0.1s
//tensorflow/core:__tensorflow_core_util_mkl_util_test_srcs PASSED in 0.1s
//tensorflow/core:__tensorflow_tsl_lib_core_legacy_lib_core_all_tests PASSED in 0.4s
//tensorflow/core:lib_strings_ordered_code_test PASSED in 2.0s
//tensorflow/core:lib_strings_proto_serialization_test PASSED in 0.4s
//tensorflow/core/api_def:api_test PASSED in 3.5s
//tensorflow/core/api_def:update_api_def_test PASSED in 0.5s
//tensorflow/core/common_runtime:all_to_all_test_cpu PASSED in 0.9s
//tensorflow/core/common_runtime:arg_ret_placement_test PASSED in 0.7s
//tensorflow/core/common_runtime:buf_rendezvous_test PASSED in 0.8s
//tensorflow/core/common_runtime:collective_executor_mgr_test PASSED in 0.8s
//tensorflow/core/common_runtime:collective_param_resolver_local_test PASSED in 6.3s
//tensorflow/core/common_runtime:collective_rma_local_test PASSED in 1.3s
//tensorflow/core/common_runtime:composite_device_test PASSED in 0.4s
//tensorflow/core/common_runtime:cost_measurement_registry_test PASSED in 2.2s
//tensorflow/core/common_runtime:cost_util_test PASSED in 0.2s
//tensorflow/core/common_runtime:device_mgr_test PASSED in 1.2s
//tensorflow/core/common_runtime:device_propagation_test PASSED in 0.5s
//tensorflow/core/common_runtime:device_resolver_local_test PASSED in 1.8s
//tensorflow/core/common_runtime:device_set_test PASSED in 1.4s
//tensorflow/core/common_runtime:direct_session_test_cpu PASSED in 3.8s
//tensorflow/core/common_runtime:direct_session_with_debug_test PASSED in 3.0s
//tensorflow/core/common_runtime:direct_session_with_tracking_alloc_test PASSED in 2.2s
//tensorflow/core/common_runtime:dynamic_device_mgr_test PASSED in 1.0s
//tensorflow/core/common_runtime:eval_const_tensor_test PASSED in 0.7s
//tensorflow/core/common_runtime:executor_test PASSED in 1.7s
//tensorflow/core/common_runtime:function_optimization_registration_test PASSED in 1.2s
//tensorflow/core/common_runtime:function_optimization_registry_no_pass_test PASSED in 1.0s
//tensorflow/core/common_runtime:function_optimization_registry_pass_failure_test PASSED in 1.1s
//tensorflow/core/common_runtime:function_optimization_registry_test PASSED in 2.8s
//tensorflow/core/common_runtime:function_threadpool_test PASSED in 1.1s
//tensorflow/core/common_runtime:graph_constructor_test PASSED in 3.1s
//tensorflow/core/common_runtime:graph_runner_test PASSED in 0.7s
//tensorflow/core/common_runtime:hierarchical_tree_broadcaster_test_cpu PASSED in 3.6s
//tensorflow/core/common_runtime:inline_function_utils_test PASSED in 0.5s
//tensorflow/core/common_runtime:input_colocation_exemption_registry_test PASSED in 0.7s
//tensorflow/core/common_runtime:int32_fulltype_test PASSED in 0.6s
//tensorflow/core/common_runtime:isolate_placer_inspection_required_ops_pass_test PASSED in 0.8s
//tensorflow/core/common_runtime:lower_case_op_test PASSED in 3.0s
//tensorflow/core/common_runtime:lower_function_call_test PASSED in 2.8s
//tensorflow/core/common_runtime:lower_functional_ops_test PASSED in 10.1s
//tensorflow/core/common_runtime:lower_if_op_test PASSED in 2.5s
//tensorflow/core/common_runtime:lower_while_op_test PASSED in 2.9s
//tensorflow/core/common_runtime:mkl_cpu_allocator_test PASSED in 0.7s
//tensorflow/core/common_runtime:mkl_threadpool_device_test PASSED in 0.9s
//tensorflow/core/common_runtime:no_op_cost_measurement_test PASSED in 0.3s
//tensorflow/core/common_runtime:null_request_cost_accessor_test PASSED in 0.2s
//tensorflow/core/common_runtime:optimization_registry_test PASSED in 0.9s
//tensorflow/core/common_runtime:optimize_cross_host_control_deps_test PASSED in 16.5s
//tensorflow/core/common_runtime:optimize_function_graph_utils_test PASSED in 0.7s
//tensorflow/core/common_runtime:partitioning_utils_test PASSED in 1.1s
//tensorflow/core/common_runtime:pending_counts_test PASSED in 0.7s
//tensorflow/core/common_runtime:permuter_test_cpu PASSED in 9.2s
//tensorflow/core/common_runtime:placer_inspection_required_ops_utils_test PASSED in 0.8s
//tensorflow/core/common_runtime:placer_test PASSED in 0.7s
//tensorflow/core/common_runtime:process_function_library_runtime_test_cpu PASSED in 1.0s
//tensorflow/core/common_runtime:process_util_test PASSED in 0.1s
//tensorflow/core/common_runtime:quantize_training_test PASSED in 2.3s
//tensorflow/core/common_runtime:rendezvous_util_test PASSED in 0.5s
//tensorflow/core/common_runtime:replicate_per_replica_nodes_test PASSED in 8.2s
//tensorflow/core/common_runtime:request_cost_accessor_registry_test PASSED in 2.3s
//tensorflow/core/common_runtime:request_cost_test PASSED in 0.1s
//tensorflow/core/common_runtime:ring_gatherer_test_cpu PASSED in 2.6s
//tensorflow/core/common_runtime:ring_reducer_test_cpu PASSED in 6.3s
//tensorflow/core/common_runtime:scoped_allocator_mgr_test PASSED in 5.1s
//tensorflow/core/common_runtime:session_test PASSED in 1.5s
//tensorflow/core/common_runtime:shape_refiner_test PASSED in 0.9s
//tensorflow/core/common_runtime:single_threaded_executor_test PASSED in 1.3s
//tensorflow/core/common_runtime:threadpool_device_test PASSED in 1.1s
//tensorflow/core/common_runtime:type_inference_test PASSED in 2.4s
//tensorflow/core/common_runtime/eager:attr_builder_test PASSED in 29.0s
//tensorflow/core/common_runtime/eager:context_test PASSED in 17.2s
//tensorflow/core/common_runtime/eager:custom_device_test PASSED in 20.6s
//tensorflow/core/common_runtime/eager:eager_executor_test PASSED in 20.9s
//tensorflow/core/common_runtime/eager:eager_op_rewrite_registry_test PASSED in 1.4s
//tensorflow/core/common_runtime/eager:eager_operation_test PASSED in 11.3s
//tensorflow/core/common_runtime/eager:execute_node_test PASSED in 12.8s
//tensorflow/core/common_runtime/eager:execute_test PASSED in 50.3s
//tensorflow/core/common_runtime/eager:kernel_and_device_test PASSED in 1.5s
//tensorflow/core/common_runtime/eager:mkl_eager_op_rewrite_test PASSED in 15.3s
//tensorflow/core/common_runtime/eager:placement_test PASSED in 12.9s
//tensorflow/core/common_runtime/eager:placement_utils_test PASSED in 13.6s
//tensorflow/core/common_runtime/eager:tensor_handle_data_test PASSED in 13.7s
//tensorflow/core/common_runtime/eager:tensor_handle_test PASSED in 16.0s
//tensorflow/core/common_runtime/gpu:gpu_device_on_non_gpu_machine_test PASSED in 1.0s
//tensorflow/core/common_runtime/next_pluggable_device/c:plugin_c_api_test PASSED in 37.1s
//tensorflow/core/config:flags_py_test PASSED in 6.5s
//tensorflow/core/config:flags_test PASSED in 0.4s
//tensorflow/core/data:compression_utils_test PASSED in 1.8s
//tensorflow/core/data:dataset_utils_test PASSED in 1.5s
//tensorflow/core/data:hash_utils_test PASSED in 1.3s
//tensorflow/core/data:metric_utils_test PASSED in 6.2s
//tensorflow/core/data:name_utils_test PASSED in 0.4s
//tensorflow/core/data:rewrite_utils_test PASSED in 0.6s
//tensorflow/core/data:serialization_utils_test PASSED in 0.6s
//tensorflow/core/data:snapshot_utils_test PASSED in 0.9s
//tensorflow/core/data:split_utils_test PASSED in 0.7s
//tensorflow/core/data:standalone_save_restore_test PASSED in 2.7s
//tensorflow/core/data:standalone_test PASSED in 2.1s
//tensorflow/core/data:tfdataz_metrics_test PASSED in 3.0s
//tensorflow/core/data:unbounded_thread_pool_test PASSED in 0.8s
//tensorflow/core/data/service:auto_shard_rewriter_test PASSED in 1.0s
//tensorflow/core/data/service:common_test PASSED in 0.2s
//tensorflow/core/data/service:credentials_factory_test PASSED in 1.5s
//tensorflow/core/data/service:cross_trainer_cache_test PASSED in 1.8s
//tensorflow/core/data/service:data_service_test PASSED in 16.9s
//tensorflow/core/data/service:data_transfer_test PASSED in 1.4s
//tensorflow/core/data/service:dataset_store_test PASSED in 0.9s
//tensorflow/core/data/service:dispatcher_client_test PASSED in 6.7s
//tensorflow/core/data/service:dispatcher_state_test PASSED in 0.9s
//tensorflow/core/data/service:grpc_dispatcher_impl_test PASSED in 2.9s
//tensorflow/core/data/service:grpc_util_test PASSED in 0.8s
//tensorflow/core/data/service:grpc_worker_impl_test PASSED in 11.4s
//tensorflow/core/data/service:journal_test PASSED in 1.0s
//tensorflow/core/data/service:logging_utils_test PASSED in 0.3s
//tensorflow/core/data/service:task_runner_test PASSED in 4.1s
//tensorflow/core/data/service:test_util_test PASSED in 4.8s
//tensorflow/core/data/service:url_test PASSED in 0.6s
//tensorflow/core/data/service:utils_test PASSED in 0.6s
//tensorflow/core/data/service:validate_utils_test PASSED in 0.2s
//tensorflow/core/data/service:worker_client_test PASSED in 5.8s
//tensorflow/core/data/service:worker_impl_test PASSED in 5.1s
//tensorflow/core/data/service/client:data_service_client_test PASSED in 4.3s
//tensorflow/core/data/service/client:utils_test PASSED in 3.3s
//tensorflow/core/data/service/client:validate_utils_test PASSED in 8.9s
//tensorflow/core/data/service/snapshot:distributed_snapshot_test PASSED in 20.0s
//tensorflow/core/data/service/snapshot:file_utils_test PASSED in 0.7s
//tensorflow/core/data/service/snapshot:path_utils_test PASSED in 0.4s
//tensorflow/core/data/service/snapshot:snapshot_manager_test PASSED in 4.3s
//tensorflow/core/data/service/snapshot:snapshot_split_provider_test PASSED in 1.9s
//tensorflow/core/data/service/snapshot:snapshot_stream_writer_checkpoint_test PASSED in 7.3s
//tensorflow/core/data/service/snapshot:snapshot_stream_writer_test PASSED in 7.5s
//tensorflow/core/data/service/snapshot:utils_test PASSED in 0.4s
//tensorflow/core/debug:debug_graph_utils_test PASSED in 0.7s
//tensorflow/core/distributed_runtime:call_options_test PASSED in 0.1s
//tensorflow/core/distributed_runtime:cluster_function_library_runtime_test PASSED in 14.4s
//tensorflow/core/distributed_runtime:collective_param_resolver_distributed_test PASSED in 0.9s
//tensorflow/core/distributed_runtime:collective_rma_distributed_test PASSED in 0.8s
//tensorflow/core/distributed_runtime:device_resolver_distributed_test PASSED in 1.2s
//tensorflow/core/distributed_runtime:message_wrappers_test PASSED in 0.1s
//tensorflow/core/distributed_runtime:partial_run_mgr_test PASSED in 0.6s
//tensorflow/core/distributed_runtime:recent_request_ids_test PASSED in 0.2s
//tensorflow/core/distributed_runtime:request_id_test PASSED in 0.4s
//tensorflow/core/distributed_runtime:rpc_collective_executor_mgr_test PASSED in 1.3s
//tensorflow/core/distributed_runtime:server_lib_test PASSED in 0.1s
//tensorflow/core/distributed_runtime:session_mgr_test PASSED in 0.8s
//tensorflow/core/distributed_runtime:tensor_coding_test PASSED in 0.4s
//tensorflow/core/distributed_runtime/coordination:coordination_service_barrier_proxy_test PASSED in 2.8s
//tensorflow/core/distributed_runtime/eager:eager_service_impl_test PASSED in 23.9s
//tensorflow/core/distributed_runtime/eager:remote_mgr_test PASSED in 23.5s
//tensorflow/core/distributed_runtime/integration_test:c_api_coordination_test_cpu PASSED in 58.7s
//tensorflow/core/distributed_runtime/integration_test:c_api_multi_client_test_cpu PASSED in 44.9s
//tensorflow/core/distributed_runtime/integration_test:c_api_recoverable_jobs_test_cpu PASSED in 41.5s
//tensorflow/core/distributed_runtime/integration_test:c_api_session_coordination_test_cpu PASSED in 33.2s
//tensorflow/core/distributed_runtime/rpc:grpc_tensor_coding_test PASSED in 4.0s
//tensorflow/core/distributed_runtime/rpc:grpc_worker_cache_test PASSED in 2.0s
//tensorflow/core/distributed_runtime/rpc/eager:grpc_eager_client_test PASSED in 0.6s
//tensorflow/core/example:example_parser_configuration_test PASSED in 0.9s
//tensorflow/core/example:feature_util_test PASSED in 0.2s
//tensorflow/core/framework:allocator_test PASSED in 3.8s
//tensorflow/core/framework:attr_value_util_test PASSED in 0.9s
//tensorflow/core/framework:batch_util_test PASSED in 1.3s
//tensorflow/core/framework:bfloat16_test PASSED in 0.8s
//tensorflow/core/framework:common_shape_fns_test PASSED in 0.8s
//tensorflow/core/framework:dataset_test PASSED in 1.2s
//tensorflow/core/framework:device_base_test PASSED in 1.2s
//tensorflow/core/framework:disable_jit_test PASSED in 0.8s
//tensorflow/core/framework:framework_op_gen_lib_test PASSED in 0.5s
//tensorflow/core/framework:framework_op_segment_test PASSED in 0.7s
//tensorflow/core/framework:framework_resource_var_test PASSED in 0.1s
//tensorflow/core/framework:framework_run_handler_test PASSED in 3.1s
//tensorflow/core/framework:framework_run_handler_util_test PASSED in 2.3s
//tensorflow/core/framework:full_type_inference_util_test PASSED in 0.8s
//tensorflow/core/framework:full_type_util_test PASSED in 0.8s
//tensorflow/core/framework:function_test PASSED in 0.8s
//tensorflow/core/framework:graph_def_util_test PASSED in 0.8s
//tensorflow/core/framework:graph_to_functiondef_test PASSED in 1.3s
//tensorflow/core/framework:kernel_def_builder_test PASSED in 11.2s
//tensorflow/core/framework:kernel_def_util_test PASSED in 1.0s
//tensorflow/core/framework:memory_types_test PASSED in 0.7s
//tensorflow/core/framework:model_test PASSED in 0.7s
//tensorflow/core/framework:node_def_builder_test PASSED in 1.1s
//tensorflow/core/framework:node_def_util_test PASSED in 1.4s
//tensorflow/core/framework:node_properties_test PASSED in 0.6s
//tensorflow/core/framework:op_compatibility_test PASSED in 1.0s
//tensorflow/core/framework:op_def_builder_test PASSED in 1.0s
//tensorflow/core/framework:op_def_util_test PASSED in 0.8s
//tensorflow/core/framework:op_kernel_test PASSED in 1.0s
//tensorflow/core/framework:op_registration_test PASSED in 1.4s
//tensorflow/core/framework:partial_tensor_shape_test PASSED in 1.6s
//tensorflow/core/framework:rendezvous_test PASSED in 4.4s
//tensorflow/core/framework:resource_handle_test PASSED in 0.2s
//tensorflow/core/framework:resource_mgr_test PASSED in 2.2s
//tensorflow/core/framework:resource_op_kernel_test PASSED in 1.3s
//tensorflow/core/framework:shape_inference_test PASSED in 0.9s
//tensorflow/core/framework:shape_inference_testutil_test PASSED in 0.8s
//tensorflow/core/framework:tensor_shape_test PASSED in 7.5s
//tensorflow/core/framework:tensor_slice_test PASSED in 0.9s
//tensorflow/core/framework:tensor_test PASSED in 33.4s
//tensorflow/core/framework:tensor_testutil_test PASSED in 0.9s
//tensorflow/core/framework:tensor_util_test PASSED in 1.2s
//tensorflow/core/framework:tracking_allocator_test PASSED in 0.8s
//tensorflow/core/framework:types_test PASSED in 1.4s
//tensorflow/core/framework:variant_op_registry_test PASSED in 20.1s
//tensorflow/core/framework:variant_test PASSED in 2.0s
//tensorflow/core/framework/registration:registration_test PASSED in 0.6s
//tensorflow/core/function/capture:by_ref_capture_test PASSED in 8.3s
//tensorflow/core/function/capture:capture_container_test PASSED in 7.3s
//tensorflow/core/function/integration_test:side_inputs_manual_api_test PASSED in 21.8s
//tensorflow/core/function/integration_test:side_inputs_test PASSED in 16.9s
//tensorflow/core/function/polymorphism:function_cache_test PASSED in 6.0s
//tensorflow/core/function/polymorphism:function_type_test PASSED in 6.4s
//tensorflow/core/function/polymorphism:type_dispatch_test PASSED in 9.1s
//tensorflow/core/function/runtime_client:runtime_client_cc_test PASSED in 48.9s
//tensorflow/core/function/trace_type:default_types_test PASSED in 6.5s
//tensorflow/core/function/trace_type:serialization_test PASSED in 5.9s
//tensorflow/core/function/trace_type:trace_type_test PASSED in 10.0s
//tensorflow/core/graph:algorithm_test PASSED in 0.7s
//tensorflow/core/graph:collective_order_test PASSED in 0.5s
//tensorflow/core/graph:control_flow_test PASSED in 0.7s
//tensorflow/core/graph:costmodel_test PASSED in 1.1s
//tensorflow/core/graph:edgeset_test PASSED in 1.0s
//tensorflow/core/graph:graph_def_builder_test PASSED in 0.9s
//tensorflow/core/graph:graph_partition_test PASSED in 0.8s
//tensorflow/core/graph:graph_test PASSED in 0.9s
//tensorflow/core/graph:node_builder_test PASSED in 1.2s
//tensorflow/core/graph:optimizer_cse_test PASSED in 1.1s
//tensorflow/core/graph:subgraph_test PASSED in 0.8s
//tensorflow/core/graph:tensor_id_test PASSED in 0.9s
//tensorflow/core/graph:validate_test PASSED in 5.4s
//tensorflow/core/graph/regularization:simple_delete_test PASSED in 0.6s
//tensorflow/core/graph/regularization:util_test PASSED in 0.2s
//tensorflow/core/grappler:graph_topology_view_test PASSED in 0.2s
//tensorflow/core/grappler:graph_view_test PASSED in 3.4s
//tensorflow/core/grappler:grappler_item_builder_test PASSED in 1.7s
//tensorflow/core/grappler:grappler_item_test PASSED in 2.1s
//tensorflow/core/grappler:mutable_graph_view_test PASSED in 1.9s
//tensorflow/core/grappler:utils_test PASSED in 3.3s
//tensorflow/core/grappler/clusters:virtual_cluster_test PASSED in 2.0s
//tensorflow/core/grappler/costs:analytical_cost_estimator_test PASSED in 2.3s
//tensorflow/core/grappler/costs:cost_estimator_test PASSED in 0.4s
//tensorflow/core/grappler/costs:graph_memory_test PASSED in 2.3s
//tensorflow/core/grappler/costs:graph_properties_test PASSED in 5.0s
//tensorflow/core/grappler/costs:robust_stats_test PASSED in 0.3s
//tensorflow/core/grappler/costs:utils_test PASSED in 1.8s
//tensorflow/core/grappler/costs:virtual_placer_test PASSED in 0.4s
//tensorflow/core/grappler/costs:virtual_scheduler_test PASSED in 2.6s
//tensorflow/core/grappler/graph_analyzer:gen_node_test PASSED in 3.2s
//tensorflow/core/grappler/graph_analyzer:graph_analyzer_test PASSED in 2.6s
//tensorflow/core/grappler/graph_analyzer:hash_tools_test PASSED in 2.9s
//tensorflow/core/grappler/graph_analyzer:sig_node_test PASSED in 3.3s
//tensorflow/core/grappler/graph_analyzer:subgraph_test PASSED in 2.6s
//tensorflow/core/grappler/inputs:utils_test PASSED in 0.1s
//tensorflow/core/grappler/optimizers:arithmetic_optimizer_test_cpu PASSED in 8.9s
//tensorflow/core/grappler/optimizers:auto_parallel_test_cpu PASSED in 2.4s
//tensorflow/core/grappler/optimizers:common_subgraph_elimination_test_cpu PASSED in 2.0s
//tensorflow/core/grappler/optimizers:custom_graph_optimizer_registry_test_cpu PASSED in 4.9s
//tensorflow/core/grappler/optimizers:debug_stripper_test_cpu PASSED in 2.3s
//tensorflow/core/grappler/optimizers:dependency_optimizer_test_cpu PASSED in 1.6s
//tensorflow/core/grappler/optimizers:evaluation_utils_test PASSED in 0.6s
//tensorflow/core/grappler/optimizers:function_api_info_test PASSED in 0.9s
//tensorflow/core/grappler/optimizers:function_optimizer_test_cpu PASSED in 3.8s
//tensorflow/core/grappler/optimizers:generic_layout_optimizer_test_cpu PASSED in 4.6s
//tensorflow/core/grappler/optimizers:generic_layout_optimizer_transposer_factory_test PASSED in 0.6s
//tensorflow/core/grappler/optimizers:generic_layout_optimizer_transposer_test_cpu PASSED in 3.0s
//tensorflow/core/grappler/optimizers:graph_optimizer_stage_test_cpu PASSED in 2.5s
//tensorflow/core/grappler/optimizers:implementation_selector_test PASSED in 2.2s
//tensorflow/core/grappler/optimizers:loop_optimizer_test_cpu PASSED in 2.0s
//tensorflow/core/grappler/optimizers:memory_optimizer_test_cpu PASSED in 2.2s
//tensorflow/core/grappler/optimizers:meta_optimizer_test_cpu PASSED in 8.1s
//tensorflow/core/grappler/optimizers:mkl_remapper_test PASSED in 2.3s
//tensorflow/core/grappler/optimizers:model_pruner_test_cpu PASSED in 9.8s
//tensorflow/core/grappler/optimizers:pin_to_host_optimizer_test_cpu PASSED in 2.8s
//tensorflow/core/grappler/optimizers:scoped_allocator_optimizer_test PASSED in 3.6s
//tensorflow/core/grappler/optimizers:shape_optimizer_test_cpu PASSED in 2.0s
//tensorflow/core/grappler/optimizers:static_schedule_test_cpu PASSED in 2.5s
//tensorflow/core/grappler/optimizers:tfg_optimizer_hook_test PASSED in 0.4s
//tensorflow/core/grappler/optimizers/data:auto_shard_test PASSED in 0.5s
//tensorflow/core/grappler/optimizers/data:autotune_buffer_sizes_test PASSED in 0.9s
//tensorflow/core/grappler/optimizers/data:batch_parallelization_test PASSED in 0.5s
//tensorflow/core/grappler/optimizers/data:disable_intra_op_parallelism_test PASSED in 0.8s
//tensorflow/core/grappler/optimizers/data:disable_prefetch_legacy_autotune_test PASSED in 0.9s
//tensorflow/core/grappler/optimizers/data:enable_gradient_descent_test PASSED in 2.9s
//tensorflow/core/grappler/optimizers/data:filter_fusion_test PASSED in 0.5s
//tensorflow/core/grappler/optimizers/data:filter_parallelization_test PASSED in 0.6s
//tensorflow/core/grappler/optimizers/data:function_utils_test PASSED in 0.9s
//tensorflow/core/grappler/optimizers/data:fusion_utils_test PASSED in 0.8s
//tensorflow/core/grappler/optimizers/data:graph_utils_test PASSED in 0.9s
//tensorflow/core/grappler/optimizers/data:inject_prefetch_test PASSED in 1.0s
//tensorflow/core/grappler/optimizers/data:make_deterministic_test PASSED in 1.1s
//tensorflow/core/grappler/optimizers/data:make_sloppy_test PASSED in 11.4s
//tensorflow/core/grappler/optimizers/data:map_and_batch_fusion_test PASSED in 0.5s
//tensorflow/core/grappler/optimizers/data:map_and_filter_fusion_test PASSED in 0.7s
//tensorflow/core/grappler/optimizers/data:map_fusion_test PASSED in 0.6s
//tensorflow/core/grappler/optimizers/data:map_parallelization_test PASSED in 0.6s
//tensorflow/core/grappler/optimizers/data:noop_elimination_test PASSED in 0.4s
//tensorflow/core/grappler/optimizers/data:parallel_batch_test PASSED in 1.0s
//tensorflow/core/grappler/optimizers/data:replicate_on_split_test PASSED in 0.6s
//tensorflow/core/grappler/optimizers/data:shuffle_and_repeat_fusion_test PASSED in 0.4s
//tensorflow/core/grappler/optimizers/data:slack_test PASSED in 1.7s
//tensorflow/core/grappler/optimizers/data:split_utils_test PASSED in 2.5s
//tensorflow/core/grappler/optimizers/data:use_private_thread_pool_test PASSED in 0.9s
//tensorflow/core/grappler/optimizers/inference:batch_op_rewriter_test PASSED in 0.3s
//tensorflow/core/grappler/utils:canonicalizer_test PASSED in 11.6s
//tensorflow/core/grappler/utils:colocation_test PASSED in 0.7s
//tensorflow/core/grappler/utils:frame_test PASSED in 0.3s
//tensorflow/core/grappler/utils:functions_test PASSED in 1.9s
//tensorflow/core/grappler/utils:graph_view_internal_test PASSED in 0.9s
//tensorflow/core/grappler/utils:graph_view_test PASSED in 1.5s
//tensorflow/core/grappler/utils:grappler_test_test PASSED in 6.8s
//tensorflow/core/grappler/utils:pattern_utils_test PASSED in 0.9s
//tensorflow/core/grappler/utils:scc_test PASSED in 1.7s
//tensorflow/core/grappler/utils:symbolic_shapes_test PASSED in 0.2s
//tensorflow/core/grappler/utils:topological_sort_test PASSED in 0.7s
//tensorflow/core/grappler/utils:tpu_test PASSED in 0.2s
//tensorflow/core/grappler/utils:transitive_fanin_test PASSED in 1.5s
//tensorflow/core/grappler/utils:traversal_test PASSED in 0.6s
//tensorflow/core/grappler/verifiers:structure_verifier_test PASSED in 9.7s
//tensorflow/core/ir:interfaces_test PASSED in 0.3s
//tensorflow/core/ir:ops_test PASSED in 0.3s
//tensorflow/core/ir:shape_inference_utils_test PASSED in 0.3s
//tensorflow/core/ir:tf_op_registry_test PASSED in 0.2s
//tensorflow/core/ir:tf_op_wrapper_test PASSED in 0.1s
//tensorflow/core/ir:utility_test PASSED in 0.1s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:arg_as_control_ret.pbtxt.test PASSED in 0.9s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:backedge_segment.pbtxt.test PASSED in 0.7s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:empty.pbtxt.test PASSED in 0.6s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:error_during_backedge.pbtxt.test PASSED in 5.9s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_case_with_attr_inference.pbtxt.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_if_with_attr_inference.pbtxt.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_iterator_get_next_attr_inference.pbtxt.test PASSED in 0.4s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_underscore_output_shapes.pbtxt.test PASSED in 0.9s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:import_while_with_attr_inference.pbtxt.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infeed_dequeue.pbtxt.test PASSED in 1.0s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infer_arg_handle_type.pbtxt.test PASSED in 1.0s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:infer_with_output_shapes.pbtxt.test PASSED in 1.0s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_arg_name.pbtxt.test PASSED in 0.6s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_backedge_input_size.pbtxt.test PASSED in 0.7s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_duplicated_node_name.pbtxt.test PASSED in 1.2s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_edge_index.pbtxt.test PASSED in 0.7s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_edge_name.pbtxt.test PASSED in 0.6s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_attr_key.pbtxt.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_func_attr_key.pbtxt.test PASSED in 0.9s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_func_attr_name.pbtxt.test PASSED in 2.2s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_empty_op_type.pbtxt.test PASSED in 0.9s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_func_with_empty_name.pbtxt.test PASSED in 1.4s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_function_import.pbtxt.test PASSED in 1.3s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_control_result.pbtxt.test PASSED in 0.4s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_input.pbtxt.test PASSED in 0.7s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_name.pbtxt.test PASSED in 1.2s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_func_with_empty_result.pbtxt.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_function_attr_name.pbtxt.test PASSED in 0.6s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_generic_function_named_edge_index.pbtxt.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_handle_data.pbtxt.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_input.pbtxt.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_result.pbtxt.test PASSED in 0.9s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_control_result_value.pbtxt.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_data_result.pbtxt.test PASSED in 0.6s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_data_result_value.pbtxt.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_input.pbtxt.test PASSED in 1.1s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_missing_two_inputs.pbtxt.test PASSED in 0.7s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_named_edge_index.pbtxt.test PASSED in 0.4s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_op_name.pbtxt.test PASSED in 0.6s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:invalid_type_list.pbtxt.test PASSED in 0.4s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:legacy_call.pbtxt.test PASSED in 0.4s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:negative_shape.pbtxt.test PASSED in 0.7s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:negative_zero_constant.pbtxt.test PASSED in 0.8s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:three_nodes_with_attrs.pbtxt.test PASSED in 0.9s
//tensorflow/core/ir/importexport/tests/graphdef_to_mlir:version.pbtxt.test PASSED in 1.2s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:empty.mlir.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:fulltype.mlir.test PASSED in 0.5s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:func_with_no_args_or_results.mlir.test PASSED in 0.9s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:negative_zero_constant.mlir.test PASSED in 1.4s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:nested_legacy_call.mlir.test PASSED in 0.4s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:three_nodes_with_attrs.mlir.test PASSED in 1.2s
//tensorflow/core/ir/importexport/tests/mlir_to_graphdef:version.mlir.test PASSED in 1.3s
//tensorflow/core/ir/importexport/tests/saved_model:saved_model_roundtrip_test PASSED in 0.9s
//tensorflow/core/ir/tests:attributes.mlir.test PASSED in 0.6s
//tensorflow/core/ir/tests:canonicalize.mlir.test PASSED in 0.4s
//tensorflow/core/ir/tests:compatible_types.mlir.test PASSED in 0.7s
//tensorflow/core/ir/tests:concrete-ops.mlir.test PASSED in 0.5s
//tensorflow/core/ir/tests:generic_concrete_ops.mlir.test PASSED in 0.8s
//tensorflow/core/ir/tests:invalid-concrete-ops.mlir.test PASSED in 0.5s
//tensorflow/core/ir/tests:invalid-preserved-attrs.mlir.test PASSED in 8.1s
//tensorflow/core/ir/tests:invalid.mlir.test PASSED in 0.4s
//tensorflow/core/ir/tests:invalid_types.mlir.test PASSED in 0.8s
//tensorflow/core/ir/tests:ops.mlir.test PASSED in 0.6s
//tensorflow/core/ir/tests:region-invalid-ops.mlir.test PASSED in 1.5s
//tensorflow/core/ir/tests:region-ops-graph.mlir.test PASSED in 1.2s
//tensorflow/core/ir/tests:region-ops.mlir.test PASSED in 1.1s
//tensorflow/core/ir/tests:types.mlir.test PASSED in 1.3s
//tensorflow/core/ir/types:dialect_test PASSED in 1.3s
//tensorflow/core/kernels:as_string_op_test PASSED in 1.0s
//tensorflow/core/kernels:basic_ops_benchmark_test PASSED in 0.6s
//tensorflow/core/kernels:batch_kernels_env_test PASSED in 0.7s
//tensorflow/core/kernels:batch_kernels_test PASSED in 0.7s
//tensorflow/core/kernels:bias_op_test PASSED in 0.7s
//tensorflow/core/kernels:bincount_op_test_cpu PASSED in 0.5s
//tensorflow/core/kernels:broadcast_to_op_test_cpu PASSED in 0.4s
//tensorflow/core/kernels:cast_op_test_cpu PASSED in 1.3s
//tensorflow/core/kernels:checkpoint_callback_manager_test PASSED in 1.5s
//tensorflow/core/kernels:clustering_ops_test PASSED in 8.5s
//tensorflow/core/kernels:composite_tensor_variant_test PASSED in 0.4s
//tensorflow/core/kernels:concat_op_test PASSED in 0.5s
//tensorflow/core/kernels:constant_op_test_cpu PASSED in 0.6s
//tensorflow/core/kernels:control_flow_ops_test PASSED in 7.9s
//tensorflow/core/kernels:conv_grad_filter_ops_benchmark_test_cpu PASSED in 1.2s
//tensorflow/core/kernels:conv_grad_input_ops_benchmark_test_cpu PASSED in 0.6s
//tensorflow/core/kernels:conv_ops_benchmark_test_cpu PASSED in 1.4s
//tensorflow/core/kernels:conv_ops_test_cpu PASSED in 10.2s
//tensorflow/core/kernels:count_ops_test PASSED in 0.9s
//tensorflow/core/kernels:cross_op_test PASSED in 0.6s
//tensorflow/core/kernels:cwise_ops_test_cpu PASSED in 0.7s
//tensorflow/core/kernels:debug_ops_test PASSED in 1.8s
//tensorflow/core/kernels:decode_wav_op_test PASSED in 3.6s
//tensorflow/core/kernels:deep_conv2d_test PASSED in 0.4s
//tensorflow/core/kernels:dequantize_op_test PASSED in 0.7s
//tensorflow/core/kernels:diag_op_test_cpu PASSED in 0.8s
//tensorflow/core/kernels:dynamic_partition_op_test_cpu PASSED in 1.2s
//tensorflow/core/kernels:dynamic_stitch_op_test_cpu PASSED in 0.6s
//tensorflow/core/kernels:eigen_activations_test PASSED in 0.2s
//tensorflow/core/kernels:eigen_attention_test PASSED in 0.2s
//tensorflow/core/kernels:eigen_backward_cuboid_convolutions_test PASSED in 2.5s
//tensorflow/core/kernels:eigen_backward_spatial_convolutions_test PASSED in 0.3s
//tensorflow/core/kernels:eigen_benchmark_cpu_test PASSED in 0.1s
//tensorflow/core/kernels:eigen_mkldnn_contraction_kernel_test PASSED in 0.6s
//tensorflow/core/kernels:eigen_pooling_test PASSED in 0.3s
//tensorflow/core/kernels:encode_wav_op_test PASSED in 4.0s
//tensorflow/core/kernels:fingerprint_op_test PASSED in 1.1s
//tensorflow/core/kernels:fused_batch_norm_ex_op_test_cpu PASSED in 0.8s
//tensorflow/core/kernels:fused_batch_norm_op_test_cpu PASSED in 0.8s
//tensorflow/core/kernels:gather_nd_op_test_cpu PASSED in 0.7s
//tensorflow/core/kernels:gather_op_test_cpu PASSED in 0.6s
//tensorflow/core/kernels:guarantee_const_op_test PASSED in 0.8s
//tensorflow/core/kernels:identity_n_op_test PASSED in 0.6s
//tensorflow/core/kernels:identity_op_test PASSED in 1.2s
//tensorflow/core/kernels:immutable_constant_op_test PASSED in 1.0s
//tensorflow/core/kernels:in_topk_op_test PASSED in 0.5s
//tensorflow/core/kernels:isotonic_regression_op_test PASSED in 8.1s
//tensorflow/core/kernels:logging_ops_test PASSED in 1.8s
//tensorflow/core/kernels:lookup_ops_test PASSED in 0.8s
//tensorflow/core/kernels:loss_test PASSED in 0.5s
//tensorflow/core/kernels:lrn_op_test_cpu PASSED in 0.5s
//tensorflow/core/kernels:matmul_op_test_cpu PASSED in 3.3s
//tensorflow/core/kernels:merge_v2_checkpoints_op_test PASSED in 0.9s
//tensorflow/core/kernels:mfcc_dct_test PASSED in 0.4s
//tensorflow/core/kernels:mfcc_mel_filterbank_test PASSED in 0.1s
//tensorflow/core/kernels:mfcc_op_test_cpu PASSED in 2.7s
//tensorflow/core/kernels:mfcc_test PASSED in 0.4s
//tensorflow/core/kernels:multinomial_op_test_cpu PASSED in 0.7s
//tensorflow/core/kernels:nn_ops_test_cpu PASSED in 0.7s
//tensorflow/core/kernels:one_hot_op_test PASSED in 1.1s
//tensorflow/core/kernels:ops_testutil_test PASSED in 1.5s
//tensorflow/core/kernels:ops_util_test PASSED in 0.3s
//tensorflow/core/kernels:parameterized_truncated_normal_op_test_cpu PASSED in 0.5s
//tensorflow/core/kernels:parse_tensor_test PASSED in 1.0s
//tensorflow/core/kernels:quantization_utils_test PASSED in 1.2s
//tensorflow/core/kernels:quantize_and_dequantize_op_test_cpu PASSED in 0.9s
//tensorflow/core/kernels:quantize_down_and_shrink_range_op_test PASSED in 1.1s
//tensorflow/core/kernels:quantize_op_test PASSED in 0.7s
//tensorflow/core/kernels:quantized_activation_ops_test PASSED in 0.7s
//tensorflow/core/kernels:quantized_add_op_test PASSED in 1.2s
//tensorflow/core/kernels:quantized_batch_norm_op_test PASSED in 0.8s
//tensorflow/core/kernels:quantized_bias_add_op_test PASSED in 1.5s
//tensorflow/core/kernels:quantized_concat_op_test PASSED in 0.7s
//tensorflow/core/kernels:quantized_conv_ops_test PASSED in 0.6s
//tensorflow/core/kernels:quantized_instance_norm_test PASSED in 0.9s
//tensorflow/core/kernels:quantized_matmul_op_test PASSED in 0.9s
//tensorflow/core/kernels:quantized_mul_op_test PASSED in 1.8s
//tensorflow/core/kernels:quantized_pooling_ops_test PASSED in 1.2s
//tensorflow/core/kernels:quantized_reshape_op_test PASSED in 0.6s
//tensorflow/core/kernels:quantized_resize_bilinear_op_test PASSED in 2.8s
//tensorflow/core/kernels:ragged_fill_empty_rows_op_test PASSED in 1.2s
//tensorflow/core/kernels:ragged_gather_op_test PASSED in 0.8s
//tensorflow/core/kernels:ragged_range_op_test PASSED in 0.7s
//tensorflow/core/kernels:ragged_tensor_from_variant_op_test PASSED in 0.8s
//tensorflow/core/kernels:ragged_tensor_to_sparse_kernel_test PASSED in 0.8s
//tensorflow/core/kernels:ragged_tensor_to_tensor_op_test PASSED in 1.2s
//tensorflow/core/kernels:ragged_tensor_to_variant_op_test PASSED in 0.8s
//tensorflow/core/kernels:random_binomial_op_test_cpu PASSED in 0.5s
//tensorflow/core/kernels:random_index_shuffle_test PASSED in 0.2s
//tensorflow/core/kernels:random_op_test_cpu PASSED in 0.8s
//tensorflow/core/kernels:random_poisson_op_test_cpu PASSED in 0.6s
//tensorflow/core/kernels:range_sampler_test PASSED in 0.6s
//tensorflow/core/kernels:reduction_ops_test_cpu PASSED in 1.4s
//tensorflow/core/kernels:regex_replace_op_test PASSED in 0.8s
//tensorflow/core/kernels:requantization_range_op_test PASSED in 1.1s
//tensorflow/core/kernels:requantize_op_test PASSED in 0.8s
//tensorflow/core/kernels:resource_ops_test PASSED in 1.4s
//tensorflow/core/kernels:restore_op_test PASSED in 1.3s
//tensorflow/core/kernels:restore_v2_op_test PASSED in 0.8s
//tensorflow/core/kernels:reverse_op_test PASSED in 1.2s
//tensorflow/core/kernels:roll_op_test PASSED in 1.4s
//tensorflow/core/kernels:save_op_test PASSED in 1.3s
//tensorflow/core/kernels:save_v2_op_test PASSED in 0.6s
//tensorflow/core/kernels:scan_ops_test_cpu PASSED in 1.0s
//tensorflow/core/kernels:scatter_nd_op_test_cpu PASSED in 0.6s
//tensorflow/core/kernels:scatter_op_test PASSED in 0.6s
//tensorflow/core/kernels:scoped_allocator_ops_test_cpu PASSED in 8.5s
//tensorflow/core/kernels:sdca_ops_test PASSED in 2.4s
//tensorflow/core/kernels:segment_reduction_ops_test PASSED in 0.5s
//tensorflow/core/kernels:sendrecv_ops_test PASSED in 0.6s
//tensorflow/core/kernels:sequence_ops_test PASSED in 1.3s
//tensorflow/core/kernels:shape_ops_test PASSED in 0.5s
//tensorflow/core/kernels:slice_op_test PASSED in 0.8s
//tensorflow/core/kernels:spacetobatch_benchmark_test_cpu PASSED in 0.6s
//tensorflow/core/kernels:sparse_add_op_test PASSED in 0.7s
//tensorflow/core/kernels:sparse_dense_binary_op_shared_test PASSED in 0.9s
//tensorflow/core/kernels:sparse_fill_empty_rows_op_test_cpu PASSED in 0.6s
//tensorflow/core/kernels:sparse_matmul_op_test_cpu PASSED in 0.6s
//tensorflow/core/kernels:sparse_reduce_sum_op_test PASSED in 1.5s
//tensorflow/core/kernels:sparse_tensor_dense_matmul_op_test_cpu PASSED in 1.0s
//tensorflow/core/kernels:sparse_to_dense_op_test_cpu PASSED in 1.6s
//tensorflow/core/kernels:sparse_utils_test PASSED in 0.4s
//tensorflow/core/kernels:sparse_xent_op_test_cpu PASSED in 0.8s
//tensorflow/core/kernels:spectrogram_op_test_cpu PASSED in 2.4s
//tensorflow/core/kernels:spectrogram_test PASSED in 0.2s
//tensorflow/core/kernels:split_op_test_cpu PASSED in 0.8s
//tensorflow/core/kernels:split_v_op_test_cpu PASSED in 0.6s
//tensorflow/core/kernels:strided_slice_op_test PASSED in 1.0s
//tensorflow/core/kernels:string_format_op_test PASSED in 1.5s
//tensorflow/core/kernels:string_ngrams_op_test PASSED in 0.6s
//tensorflow/core/kernels:string_split_op_test PASSED in 0.5s
//tensorflow/core/kernels:substr_op_test PASSED in 1.1s
//tensorflow/core/kernels:summary_audio_op_test PASSED in 0.6s
//tensorflow/core/kernels:summary_image_op_test PASSED in 0.5s
//tensorflow/core/kernels:summary_op_test PASSED in 1.8s
//tensorflow/core/kernels:summary_tensor_op_test PASSED in 0.9s
//tensorflow/core/kernels:tensor_cord_test PASSED in 0.2s
//tensorflow/core/kernels:tensor_flag_utils_test PASSED in 0.5s
//tensorflow/core/kernels:tensor_map_test PASSED in 0.4s
//tensorflow/core/kernels:training_ops_test PASSED in 0.7s
//tensorflow/core/kernels:transpose_util_test PASSED in 0.5s
//tensorflow/core/kernels:unary_ops_composition_test_cpu PASSED in 2.4s
//tensorflow/core/kernels:unique_op_test PASSED in 0.5s
//tensorflow/core/kernels:variable_ops_test PASSED in 2.9s
//tensorflow/core/kernels:while_op_test PASSED in 0.8s
//tensorflow/core/kernels:xent_op_test_cpu PASSED in 0.8s
//tensorflow/core/kernels/batching_util:basic_batch_scheduler_test PASSED in 0.2s
//tensorflow/core/kernels/batching_util:batch_input_task_test PASSED in 1.1s
//tensorflow/core/kernels/batching_util:batch_resource_base_test PASSED in 0.1s
//tensorflow/core/kernels/batching_util:batch_scheduler_test PASSED in 0.9s
//tensorflow/core/kernels/batching_util:bounded_executor_test PASSED in 20.6s
//tensorflow/core/kernels/batching_util:input_split_metadata_test PASSED in 0.5s
//tensorflow/core/kernels/batching_util:periodic_function_test PASSED in 2.8s
//tensorflow/core/kernels/batching_util:serial_device_batch_scheduler_test PASSED in 3.0s
//tensorflow/core/kernels/batching_util:shared_batch_scheduler_test PASSED in 3.8s
//tensorflow/core/kernels/batching_util:threadsafe_status_test PASSED in 0.2s
//tensorflow/core/kernels/data:batch_dataset_op_test PASSED in 2.2s
//tensorflow/core/kernels/data:cache_dataset_ops_test PASSED in 1.6s
//tensorflow/core/kernels/data:concatenate_dataset_op_test PASSED in 0.7s
//tensorflow/core/kernels/data:filter_dataset_op_test PASSED in 2.4s
//tensorflow/core/kernels/data:finalize_dataset_op_test PASSED in 1.4s
//tensorflow/core/kernels/data:fixed_length_record_dataset_op_test PASSED in 1.1s
//tensorflow/core/kernels/data:flat_map_dataset_op_test PASSED in 3.2s
//tensorflow/core/kernels/data:get_options_op_test PASSED in 0.9s
//tensorflow/core/kernels/data:interleave_dataset_op_test PASSED in 1.1s
//tensorflow/core/kernels/data:iterator_ops_test PASSED in 1.2s
//tensorflow/core/kernels/data:map_dataset_op_test PASSED in 0.9s
//tensorflow/core/kernels/data:map_defun_op_test PASSED in 1.0s
//tensorflow/core/kernels/data:optimize_dataset_op_test PASSED in 1.4s
//tensorflow/core/kernels/data:options_dataset_op_test PASSED in 1.7s
//tensorflow/core/kernels/data:padded_batch_dataset_op_test PASSED in 1.3s
//tensorflow/core/kernels/data:parallel_batch_dataset_op_test PASSED in 1.2s
//tensorflow/core/kernels/data:parallel_filter_dataset_op_test PASSED in 2.6s
//tensorflow/core/kernels/data:parallel_interleave_dataset_op_test PASSED in 4.5s
//tensorflow/core/kernels/data:parallel_map_dataset_op_test PASSED in 3.0s
//tensorflow/core/kernels/data:prefetch_autotuner_test PASSED in 0.1s
//tensorflow/core/kernels/data:prefetch_dataset_op_test PASSED in 1.2s
//tensorflow/core/kernels/data:range_dataset_op_test PASSED in 0.7s
//tensorflow/core/kernels/data:reduce_dataset_op_test PASSED in 2.3s
//tensorflow/core/kernels/data:repeat_dataset_op_test PASSED in 1.3s
//tensorflow/core/kernels/data:rewrite_dataset_op_test PASSED in 1.1s
//tensorflow/core/kernels/data:shard_dataset_op_test PASSED in 1.3s
//tensorflow/core/kernels/data:shuffle_dataset_op_test PASSED in 1.6s
//tensorflow/core/kernels/data:skip_dataset_op_test PASSED in 1.0s
//tensorflow/core/kernels/data:sparse_tensor_slice_dataset_op_test PASSED in 1.5s
//tensorflow/core/kernels/data:take_dataset_op_test PASSED in 0.8s
//tensorflow/core/kernels/data:tensor_dataset_op_test PASSED in 1.7s
//tensorflow/core/kernels/data:tensor_slice_dataset_op_test PASSED in 0.8s
//tensorflow/core/kernels/data:text_line_dataset_op_test PASSED in 2.3s
//tensorflow/core/kernels/data:tf_record_dataset_op_test PASSED in 2.0s
//tensorflow/core/kernels/data:window_dataset_op_test PASSED in 3.0s
//tensorflow/core/kernels/data:zip_dataset_op_test PASSED in 1.1s
//tensorflow/core/kernels/data/experimental:assert_next_dataset_op_test PASSED in 1.1s
//tensorflow/core/kernels/data/experimental:assert_prev_dataset_op_test PASSED in 1.7s
//tensorflow/core/kernels/data/experimental:auto_shard_dataset_op_test PASSED in 1.6s
//tensorflow/core/kernels/data/experimental:directed_interleave_dataset_op_test PASSED in 1.6s
//tensorflow/core/kernels/data/experimental:list_dataset_op_test PASSED in 1.7s
//tensorflow/core/kernels/data/experimental:map_and_batch_dataset_op_test PASSED in 1.5s
//tensorflow/core/kernels/data/experimental:parallel_interleave_dataset_op_test PASSED in 1.8s
//tensorflow/core/kernels/data/experimental:random_dataset_op_test PASSED in 1.8s
//tensorflow/core/kernels/data/experimental:sampling_dataset_op_test PASSED in 0.6s
//tensorflow/core/kernels/data/experimental:save_dataset_op_test PASSED in 1.0s
//tensorflow/core/kernels/data/experimental:unique_dataset_op_test PASSED in 1.3s
//tensorflow/core/kernels/image:adjust_contrast_op_benchmark_test_cpu PASSED in 1.7s
//tensorflow/core/kernels/image:adjust_contrast_op_test PASSED in 1.2s
//tensorflow/core/kernels/image:colorspace_op_test PASSED in 0.9s
//tensorflow/core/kernels/image:crop_and_resize_op_benchmark_test_cpu PASSED in 0.7s
//tensorflow/core/kernels/image:crop_and_resize_op_test PASSED in 0.6s
//tensorflow/core/kernels/image:encode_jpeg_op_test PASSED in 1.3s
//tensorflow/core/kernels/image:mirror_pad_op_benchmark_test_cpu PASSED in 0.8s
//tensorflow/core/kernels/image:mirror_pad_op_test PASSED in 1.2s
//tensorflow/core/kernels/image:non_max_suppression_op_benchmark_test PASSED in 0.8s
//tensorflow/core/kernels/image:non_max_suppression_op_test PASSED in 2.9s
//tensorflow/core/kernels/image:resize_area_op_test PASSED in 1.9s
//tensorflow/core/kernels/image:resize_benchmark_test_cpu PASSED in 0.5s
//tensorflow/core/kernels/image:resize_bicubic_op_test PASSED in 4.3s
//tensorflow/core/kernels/image:resize_ops_test_cpu PASSED in 2.6s
//tensorflow/core/kernels/image:sampling_kernels_test PASSED in 0.5s
//tensorflow/core/kernels/image:scale_and_translate_op_test PASSED in 2.0s
//tensorflow/core/kernels/linalg:banded_triangular_solve_op_test_cpu PASSED in 0.5s
//tensorflow/core/kernels/linalg:matrix_triangular_solve_op_test_cpu PASSED in 1.8s
//tensorflow/core/kernels/mkl:mkl_conv_ops_test PASSED in 0.1s
//tensorflow/core/kernels/mkl:mkl_dequantize_op_test PASSED in 0.2s
//tensorflow/core/kernels/mkl:mkl_fused_batch_norm_op_test PASSED in 0.8s
//tensorflow/core/kernels/mkl:mkl_fused_ops_test PASSED in 2.7s
//tensorflow/core/kernels/mkl:mkl_matmul_op_benchmark PASSED in 0.3s
//tensorflow/core/kernels/mkl:mkl_qmatmul_op_test PASSED in 0.2s
//tensorflow/core/kernels/mkl:mkl_quantize_op_test PASSED in 0.3s
//tensorflow/core/kernels/mkl:mkl_quantized_concat_op_test PASSED in 0.7s
//tensorflow/core/kernels/mkl:mkl_quantized_conv_ops_perchannel_test PASSED in 0.3s
//tensorflow/core/kernels/mkl:mkl_quantized_conv_ops_test PASSED in 0.1s
//tensorflow/core/kernels/mkl:mkl_quantized_pooling_ops_test PASSED in 0.2s
//tensorflow/core/kernels/mkl:mkl_relu_op_test PASSED in 0.1s
//tensorflow/core/kernels/mkl:mkl_requantize_ops_test PASSED in 0.1s
//tensorflow/core/kernels/mkl:mkl_swish_op_test PASSED in 0.2s
//tensorflow/core/kernels/mkl:onednn_nn_ops_benchmark PASSED in 0.1s
//tensorflow/core/kernels/sparse:kernels_test PASSED in 1.7s
//tensorflow/core/kernels/uniform_quant_ops:math_utils_test PASSED in 0.7s
//tensorflow/core/kernels/uniform_quant_ops:tensor_utils_test PASSED in 0.8s
//tensorflow/core/kernels/uniform_quant_ops:uniform_dequantize_op_test PASSED in 0.7s
//tensorflow/core/kernels/uniform_quant_ops:uniform_quantize_op_test PASSED in 1.4s
//tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_add_op_test PASSED in 0.9s
//tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_clip_by_value_op_test PASSED in 1.6s
//tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_convolution_ops_test PASSED in 1.4s
//tensorflow/core/kernels/uniform_quant_ops:uniform_quantized_dot_ops_test PASSED in 0.7s
//tensorflow/core/kernels/uniform_quant_ops:uniform_requantize_op_test PASSED in 0.8s
//tensorflow/core/lib/db:sqlite_test PASSED in 0.4s
//tensorflow/core/lib/gif:lib_gif_io_test PASSED in 1.6s
//tensorflow/core/lib/jpeg:lib_jpeg_jpeg_mem_unittest PASSED in 1.1s
//tensorflow/core/ops:cudnn_rnn_ops_test_cc PASSED in 0.6s
//tensorflow/core/ops:ops_array_grad_test PASSED in 2.3s
//tensorflow/core/ops:ops_math_grad_test PASSED in 4.7s
//tensorflow/core/ops:ops_tests PASSED in 0.8s
//tensorflow/core/ops/compat:backwards_compatibility_test PASSED in 0.6s
//tensorflow/core/platform:__tensorflow_tsl_platform_profile_utils_cpu_utils_test PASSED in 0.2s
//tensorflow/core/platform:enable_tf2_utils_test PASSED in 0.1s
//tensorflow/core/platform:env_test PASSED in 3.0s
//tensorflow/core/platform:fake_python_env_test PASSED in 0.3s
//tensorflow/core/platform:file_system_test PASSED in 1.4s
//tensorflow/core/platform:platform_strings_test PASSED in 0.1s
//tensorflow/core/platform:ram_file_system_test PASSED in 25.2s
//tensorflow/core/platform:resource_loader_test PASSED in 0.2s
//tensorflow/core/platform:vmodule_benchmark_test PASSED in 0.7s
//tensorflow/core/platform:vmodule_test PASSED in 0.6s
//tensorflow/core/profiler/backends/cpu:host_tracer_test PASSED in 0.1s
//tensorflow/core/profiler/convert:hlo_proto_to_graph_view_test PASSED in 0.7s
//tensorflow/core/profiler/convert:hlo_proto_to_memory_visualization_utils_test PASSED in 0.3s
//tensorflow/core/profiler/convert:op_stats_to_pod_stats_test PASSED in 0.1s
//tensorflow/core/profiler/convert:op_stats_to_pod_viewer_test PASSED in 0.2s
//tensorflow/core/profiler/convert:op_stats_to_tf_stats_test PASSED in 0.7s
//tensorflow/core/profiler/convert:xplane_to_kernel_stats_db_test PASSED in 0.2s
//tensorflow/core/profiler/convert:xplane_to_memory_profile_test PASSED in 0.2s
//tensorflow/core/profiler/convert:xplane_to_op_metrics_db_test PASSED in 0.1s
//tensorflow/core/profiler/convert:xplane_to_op_stats_test PASSED in 0.7s
//tensorflow/core/profiler/convert:xplane_to_step_events_test PASSED in 1.1s
//tensorflow/core/profiler/convert:xplane_to_tf_functions_test PASSED in 1.0s
//tensorflow/core/profiler/convert:xplane_to_tool_names_test PASSED in 0.2s
//tensorflow/core/profiler/internal:tfprof_show_test PASSED in 1.5s
//tensorflow/core/profiler/internal:tfprof_stats_test PASSED in 0.7s
//tensorflow/core/profiler/internal:tfprof_tensor_test PASSED in 0.5s
//tensorflow/core/profiler/internal:tfprof_timeline_test PASSED in 1.0s
//tensorflow/core/profiler/internal/advisor:tfprof_advisor_test PASSED in 0.8s
//tensorflow/core/profiler/lib:profiler_disabled_test PASSED in 0.2s
//tensorflow/core/profiler/utils:derived_timeline_test PASSED in 0.2s
//tensorflow/core/profiler/utils:kernel_stats_utils_test PASSED in 0.1s
//tensorflow/core/profiler/utils:op_metrics_db_utils_test PASSED in 0.2s
//tensorflow/core/profiler/utils:step_intersection_test PASSED in 0.5s
//tensorflow/core/summary:schema_test PASSED in 0.3s
//tensorflow/core/summary:summary_db_writer_test PASSED in 0.5s
//tensorflow/core/summary:summary_file_writer_test PASSED in 0.5s
//tensorflow/core/tfrt/common:pjrt_state_test PASSED in 7.4s
//tensorflow/core/tfrt/common:pjrt_util_test PASSED in 6.2s
//tensorflow/core/tfrt/fallback:cost_recorder_test PASSED in 0.4s
//tensorflow/core/tfrt/fallback:fallback_state_test PASSED in 0.7s
//tensorflow/core/transforms:eval_utils_test PASSED in 2.1s
//tensorflow/core/transforms:graph_transform_wrapper_test PASSED in 0.2s
//tensorflow/core/util:bcast_test PASSED in 1.1s
//tensorflow/core/util:command_line_flags_test PASSED in 1.2s
//tensorflow/core/util:debug_data_dumper_test PASSED in 1.2s
//tensorflow/core/util:debug_events_writer_test PASSED in 0.6s
//tensorflow/core/util:dump_graph_test PASSED in 1.6s
//tensorflow/core/util:equal_graph_def_test PASSED in 0.9s
//tensorflow/core/util:events_writer_test PASSED in 4.3s
//tensorflow/core/util:example_proto_fast_parsing_test PASSED in 1.7s
//tensorflow/core/util:example_proto_helper_test PASSED in 1.1s
//tensorflow/core/util:exec_on_stall_test PASSED in 2.2s
//tensorflow/core/util:fake_clock_env_test PASSED in 2.1s
//tensorflow/core/util:incremental_barrier_test PASSED in 0.9s
//tensorflow/core/util:matmul_bcast_test PASSED in 0.8s
//tensorflow/core/util:memmapped_file_system_test PASSED in 0.9s
//tensorflow/core/util:overflow_test PASSED in 0.9s
//tensorflow/core/util:presized_cuckoo_map_test PASSED in 2.3s
//tensorflow/core/util:ragged_to_dense_util_test PASSED in 1.1s
//tensorflow/core/util:reffed_status_callback_test PASSED in 1.2s
//tensorflow/core/util:reporter_test PASSED in 1.8s
//tensorflow/core/util:saved_tensor_slice_util_test PASSED in 1.2s
//tensorflow/core/util:semver_test PASSED in 0.8s
//tensorflow/core/util:stat_summarizer_test PASSED in 1.0s
//tensorflow/core/util:strided_slice_op_test PASSED in 0.8s
//tensorflow/core/util:tensor_format_test PASSED in 1.6s
//tensorflow/core/util:tensor_slice_reader_test PASSED in 1.3s
//tensorflow/core/util:tensor_slice_set_test PASSED in 0.7s
//tensorflow/core/util:tensor_slice_util_test PASSED in 1.0s
//tensorflow/core/util:tensor_slice_writer_test PASSED in 2.4s
//tensorflow/core/util:work_sharder_test PASSED in 1.1s
//tensorflow/core/util/ctc:ctc_beam_search_test PASSED in 0.2s
//tensorflow/core/util/proto:descriptor_pool_registry_test PASSED in 0.6s
//tensorflow/core/util/proto:proto_utils_test PASSED in 1.3s
//tensorflow/core/util/quantization:uniform_quant_ops_params_test PASSED in 0.9s
//tensorflow/core/util/sparse:sparse_tensor_test PASSED in 0.3s
//tensorflow/core/util/tensor_bundle:tensor_bundle_test PASSED in 41.4s
//tensorflow/dtensor/mlir:dtensor_location_test PASSED in 0.1s
//tensorflow/dtensor/mlir:group_assignment_test PASSED in 0.1s
//tensorflow/dtensor/mlir/tests:annotate_global_shape.mlir.test PASSED in 0.8s
//tensorflow/dtensor/mlir/tests:cluster_function_conversion.mlir.test PASSED in 0.9s
//tensorflow/dtensor/mlir/tests:constant_folding.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:designate_resource_handle_mesh.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:device_mesh_cluster_coarsening.mlir.test PASSED in 0.8s
//tensorflow/dtensor/mlir/tests:dtensor_all_gather.mlir.test PASSED in 1.3s
//tensorflow/dtensor/mlir/tests:dtensor_all_scatter.mlir.test PASSED in 0.5s
//tensorflow/dtensor/mlir/tests:dtensor_allreduce_combine_optimization.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:dtensor_allreduce_lowering.mlir.test PASSED in 1.0s
//tensorflow/dtensor/mlir/tests:dtensor_allreduce_scatter_optimization.mlir.test PASSED in 0.5s
//tensorflow/dtensor/mlir/tests:dtensor_allreduce_sum_optimization.mlir.test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:dtensor_layout_must_execute.mlir.test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:dtensor_layout_to_xla_sharding_op.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:dtensor_mixed_precision_reduce.mlir.test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:dtensor_reduce_scatter_lowering.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:dtensor_remove_dtensorlayout.mlir.test PASSED in 0.5s
//tensorflow/dtensor/mlir/tests:dtensor_replace_auxiliary_layout_op.mlir.test PASSED in 0.5s
//tensorflow/dtensor/mlir/tests:dtensor_replace_relayout_with_identity.mlir.test PASSED in 1.1s
//tensorflow/dtensor/mlir/tests:dtensor_set_hlo_sharding.mlir.test PASSED in 1.0s
//tensorflow/dtensor/mlir/tests:dtensor_set_hlo_sharding_default.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:dtensor_xla_spmd_integration.mlir.test PASSED in 0.5s
//tensorflow/dtensor/mlir/tests:elide_identity_before_copy_to_mesh.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:function_renaming.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:handle_cross_cluster_dependencies.mlir.test PASSED in 1.3s
//tensorflow/dtensor/mlir/tests:handle_sparsetensors.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:layout_propagation_v2.mlir.test PASSED in 3.1s
//tensorflow/dtensor/mlir/tests:lower_send_recv.mlir.test PASSED in 0.5s
//tensorflow/dtensor/mlir/tests:merge_clusters.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:mesh_propagation.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:op_to_device_cluster.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:propagate_default_layout.mlir.test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:propagate_device_id_to_function.mlir.test PASSED in 0.4s
//tensorflow/dtensor/mlir/tests:restore_and_assign.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:restore_shape_inference.mlir.test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:set_default_sharding.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:sparse_expansion.mlir.test PASSED in 1.2s
//tensorflow/dtensor/mlir/tests:spmd_batchparallel.mlir.test PASSED in 0.9s
//tensorflow/dtensor/mlir/tests:spmd_concat.mlir.test PASSED in 0.5s
//tensorflow/dtensor/mlir/tests:spmd_conv.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:spmd_einsum.mlir.test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:spmd_expansion.mlir.test PASSED in 1.0s
//tensorflow/dtensor/mlir/tests:spmd_io_ops.mlir.test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:spmd_iterator.mlir.test PASSED in 1.1s
//tensorflow/dtensor/mlir/tests:spmd_matmul.mlir.test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:spmd_random.mlir.test PASSED in 1.4s
//tensorflow/dtensor/mlir/tests:spmd_save_restore.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:spmd_segment_sum.mlir.test PASSED in 0.5s
//tensorflow/dtensor/mlir/tests:spmd_slice.mlir.test PASSED in 0.5s
//tensorflow/dtensor/mlir/tests:spmd_softmax_loss.mlir.test PASSED in 0.5s
//tensorflow/dtensor/mlir/tests:spmd_squeeze.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:spmd_var_handle.mlir.test PASSED in 3.6s
//tensorflow/dtensor/mlir/tests:tf_dtensor_ops.mlir.test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:tpu_add_resource_device_attribute.mlir.test PASSED in 0.6s
//tensorflow/dtensor/mlir/tests:tpu_integration.mlir.test PASSED in 0.7s
//tensorflow/dtensor/mlir/tests:undo_merge_const_across_mesh.mlir.test PASSED in 1.3s
//tensorflow/dtensor/mlir/tests:update_tpu_metadata.mlir.test PASSED in 0.9s
//tensorflow/dtensor/python/tests:collective_combine_all_reduce_test_cpu PASSED in 16.9s
//tensorflow/dtensor/python/tests:collective_test_cpu PASSED in 17.6s
//tensorflow/dtensor/python/tests:config_test_cpu PASSED in 9.9s
//tensorflow/dtensor/python/tests:layout_test_cpu PASSED in 8.4s
//tensorflow/dtensor/python/tests:multi_client_test_cpu PASSED in 14.7s
//tensorflow/dtensor/python/tests:numpy_util_test_cpu PASSED in 9.0s
//tensorflow/dtensor/tests:executable_manager_test PASSED in 31.2s
//tensorflow/dtensor/tests:layout_to_xla_sharding_test PASSED in 0.6s
//tensorflow/dtensor/tests:tensor_layout_test PASSED in 0.3s
//tensorflow/examples/adding_an_op:fact_test PASSED in 14.0s
//tensorflow/examples/adding_an_op:zero_out_1_test PASSED in 15.8s
//tensorflow/examples/adding_an_op:zero_out_2_test PASSED in 16.5s
//tensorflow/examples/adding_an_op:zero_out_3_test PASSED in 15.4s
//tensorflow/examples/custom_ops_doc/multiplex_1:multiplex_1_test PASSED in 15.9s
//tensorflow/examples/custom_ops_doc/multiplex_2:multiplex_2_test_cpu PASSED in 14.7s
//tensorflow/examples/custom_ops_doc/multiplex_3:multiplex_3_test PASSED in 15.8s
//tensorflow/examples/custom_ops_doc/multiplex_4:multiplex_4_test PASSED in 17.2s
//tensorflow/examples/custom_ops_doc/simple_hash_table:simple_hash_table_test PASSED in 15.7s
//tensorflow/examples/custom_ops_doc/sleep:sleep_test PASSED in 15.6s
//tensorflow/examples/speech_commands:accuracy_utils_test PASSED in 2.6s
//tensorflow/examples/speech_commands:models_test PASSED in 13.8s
//tensorflow/examples/speech_commands:recognize_commands_test PASSED in 2.0s
//tensorflow/examples/wav_to_spectrogram:wav_to_spectrogram_test PASSED in 1.8s
//tensorflow/js:ts_op_gen_test PASSED in 0.4s
//tensorflow/python:array_grad_test_cpu PASSED in 10.1s
//tensorflow/python:autograph_ops_test PASSED in 7.4s
//tensorflow/python:batch_norm_benchmark_cpu PASSED in 6.7s
//tensorflow/python:bincount_ops_test PASSED in 8.6s
//tensorflow/python:bitwise_ops_test_cpu PASSED in 9.5s
//tensorflow/python:clip_ops_test PASSED in 9.3s
//tensorflow/python:clustering_ops_test PASSED in 22.6s
//tensorflow/python:collective_ops_benchmark_cpu PASSED in 8.2s
//tensorflow/python:collective_ops_gpu_test_2gpu PASSED in 10.1s
//tensorflow/python:collective_ops_gpu_test_cpu PASSED in 9.2s
//tensorflow/python:collective_ops_test PASSED in 16.8s
//tensorflow/python:collective_ops_xla_test PASSED in 8.1s
//tensorflow/python:compiled_collective_ops_gpu_test_2gpu PASSED in 8.1s
//tensorflow/python:compiled_collective_ops_gpu_test_cpu PASSED in 8.3s
//tensorflow/python:concat_benchmark_cpu PASSED in 7.6s
//tensorflow/python:control_flow_ops_benchmark_cpu PASSED in 7.3s
//tensorflow/python:control_flow_v2_enable_test PASSED in 8.2s
//tensorflow/python:control_flow_v2_toggles_test PASSED in 8.9s
//tensorflow/python:dequantize_op_test PASSED in 8.2s
//tensorflow/python:embedding_ops_test_cpu PASSED in 8.7s
//tensorflow/python:factory_ops_test_cpu PASSED in 9.7s
//tensorflow/python:functional_ops_test PASSED in 6.5s
//tensorflow/python:gradient_checker_v2_test_cpu PASSED in 29.6s
//tensorflow/python:gradients_test_cpu PASSED in 13.4s
//tensorflow/python:init_ops_test_cpu PASSED in 13.0s
//tensorflow/python:init_ops_v2_test_cpu PASSED in 10.6s
//tensorflow/python:math_grad_test_cpu PASSED in 16.3s
//tensorflow/python:math_ops_linspace_test_cpu PASSED in 8.8s
//tensorflow/python:math_ops_test_cpu PASSED in 22.6s
//tensorflow/python:matmul_benchmark_cpu PASSED in 8.5s
//tensorflow/python:nn_grad_test_cpu PASSED in 13.0s
//tensorflow/python:nn_loss_scaling_utilities_test PASSED in 11.9s
//tensorflow/python:nn_test_cpu PASSED in 52.5s
//tensorflow/python:nn_xent_test_cpu PASSED in 11.4s
//tensorflow/python:op_selector_test PASSED in 6.3s
//tensorflow/python:ops/array_ops_test PASSED in 6.5s
//tensorflow/python:quantized_conv_ops_test PASSED in 7.2s
//tensorflow/python:quantized_ops_test PASSED in 10.0s
//tensorflow/python:raw_ops_test_cpu PASSED in 9.8s
//tensorflow/python:rnn_grad_test_cpu PASSED in 7.9s
//tensorflow/python:script_ops_test PASSED in 9.2s
//tensorflow/python:sort_ops_test PASSED in 8.4s
//tensorflow/python:sparse_ops_test PASSED in 18.8s
//tensorflow/python:split_benchmark_cpu PASSED in 9.3s
//tensorflow/python:tensor_array_ops_test PASSED in 8.4s
//tensorflow/python:transpose_benchmark_cpu PASSED in 6.8s
//tensorflow/python:variable_spec_test PASSED in 6.8s
//tensorflow/python/autograph/converters:asserts_test PASSED in 8.1s
//tensorflow/python/autograph/converters:break_statements_test PASSED in 7.1s
//tensorflow/python/autograph/converters:call_trees_test PASSED in 7.0s
//tensorflow/python/autograph/converters:conditional_expressions_test PASSED in 5.7s
//tensorflow/python/autograph/converters:continue_statements_test PASSED in 9.2s
//tensorflow/python/autograph/converters:control_flow_test PASSED in 11.7s
//tensorflow/python/autograph/converters:directives_test PASSED in 6.4s
//tensorflow/python/autograph/converters:functions_test PASSED in 8.2s
//tensorflow/python/autograph/converters:list_comprehensions_test PASSED in 7.1s
//tensorflow/python/autograph/converters:lists_test PASSED in 7.5s
//tensorflow/python/autograph/converters:logical_expressions_test PASSED in 7.1s
//tensorflow/python/autograph/converters:return_statements_test PASSED in 7.4s
//tensorflow/python/autograph/converters:slices_test PASSED in 6.0s
//tensorflow/python/autograph/converters:variables_test PASSED in 7.9s
//tensorflow/python/autograph/core:converter_test PASSED in 7.7s
//tensorflow/python/autograph/core:function_wrappers_test PASSED in 6.9s
//tensorflow/python/autograph/impl:api_test PASSED in 12.7s
//tensorflow/python/autograph/impl:conversion_test PASSED in 10.1s
//tensorflow/python/autograph/lang:special_functions_test PASSED in 6.1s
//tensorflow/python/autograph/operators:conditional_expressions_test PASSED in 7.3s
//tensorflow/python/autograph/operators:control_flow_test PASSED in 16.6s
//tensorflow/python/autograph/operators:data_structures_test PASSED in 6.9s
//tensorflow/python/autograph/operators:exceptions_test PASSED in 8.6s
//tensorflow/python/autograph/operators:logical_test PASSED in 6.0s
//tensorflow/python/autograph/operators:py_builtins_test PASSED in 15.9s
//tensorflow/python/autograph/operators:slices_test PASSED in 6.0s
//tensorflow/python/autograph/operators:variables_test PASSED in 5.9s
//tensorflow/python/autograph/pyct:anno_test PASSED in 6.8s
//tensorflow/python/autograph/pyct:ast_util_test PASSED in 25.2s
//tensorflow/python/autograph/pyct:cache_test PASSED in 13.9s
//tensorflow/python/autograph/pyct:cfg_test PASSED in 8.0s
//tensorflow/python/autograph/pyct:error_utils_test PASSED in 7.2s
//tensorflow/python/autograph/pyct:inspect_utils_test PASSED in 7.8s
//tensorflow/python/autograph/pyct:loader_test PASSED in 6.9s
//tensorflow/python/autograph/pyct:naming_test PASSED in 6.5s
//tensorflow/python/autograph/pyct:origin_info_test PASSED in 5.4s
//tensorflow/python/autograph/pyct:parser_test PASSED in 6.1s
//tensorflow/python/autograph/pyct:pretty_printer_test PASSED in 5.9s
//tensorflow/python/autograph/pyct:qual_names_test PASSED in 6.4s
//tensorflow/python/autograph/pyct:templates_test PASSED in 6.5s
//tensorflow/python/autograph/pyct:transformer_test PASSED in 7.5s
//tensorflow/python/autograph/pyct:transpiler_test PASSED in 6.3s
//tensorflow/python/autograph/pyct/static_analysis:activity_test PASSED in 6.4s
//tensorflow/python/autograph/pyct/static_analysis:liveness_test PASSED in 25.5s
//tensorflow/python/autograph/pyct/static_analysis:reaching_definitions_test PASSED in 7.8s
//tensorflow/python/autograph/pyct/static_analysis:reaching_fndefs_test PASSED in 6.3s
//tensorflow/python/autograph/pyct/static_analysis:type_inference_test PASSED in 25.4s
//tensorflow/python/autograph/tests:assertion_test PASSED in 21.7s
//tensorflow/python/autograph/tests:basic_ifexp_test PASSED in 16.1s
//tensorflow/python/autograph/tests:call_to_builtin_function_test PASSED in 14.9s
//tensorflow/python/autograph/tests:call_to_lambda_function_test PASSED in 33.2s
//tensorflow/python/autograph/tests:call_to_named_tuple_test PASSED in 19.7s
//tensorflow/python/autograph/tests:call_to_numpy_function_test PASSED in 14.5s
//tensorflow/python/autograph/tests:call_to_print_function_test PASSED in 14.6s
//tensorflow/python/autograph/tests:call_to_tf_api_test PASSED in 15.6s
//tensorflow/python/autograph/tests:call_to_user_function_test PASSED in 14.1s
//tensorflow/python/autograph/tests:composite_names_in_control_flow_test PASSED in 23.2s
//tensorflow/python/autograph/tests:cond_basic_test PASSED in 23.3s
//tensorflow/python/autograph/tests:datasets_test PASSED in 21.9s
//tensorflow/python/autograph/tests:early_return_test PASSED in 18.5s
//tensorflow/python/autograph/tests:ext_slice_test PASSED in 22.5s
//tensorflow/python/autograph/tests:generator_test PASSED in 15.2s
//tensorflow/python/autograph/tests:logical_expression_test PASSED in 16.2s
//tensorflow/python/autograph/tests:loop_basic_test PASSED in 88.5s
//tensorflow/python/autograph/tests:loop_control_flow_illegal_cases_test PASSED in 24.2s
//tensorflow/python/autograph/tests:loop_created_variables_test PASSED in 26.3s
//tensorflow/python/autograph/tests:loop_scoping_test PASSED in 31.6s
//tensorflow/python/autograph/tests:loop_with_function_call_test PASSED in 26.2s
//tensorflow/python/autograph/tests:loop_with_variable_type_illegal_cases_test PASSED in 17.0s
//tensorflow/python/autograph/tests:loop_with_variable_type_test PASSED in 36.8s
//tensorflow/python/autograph/tests:nested_control_flow_test PASSED in 39.2s
//tensorflow/python/autograph/tests:type_annotations_test PASSED in 14.4s
//tensorflow/python/autograph/utils:context_managers_test PASSED in 7.3s
//tensorflow/python/autograph/utils:misc_test PASSED in 7.0s
//tensorflow/python/autograph/utils:tensor_list_test PASSED in 6.8s
//tensorflow/python/autograph/utils:tensors_test PASSED in 6.5s
//tensorflow/python/checkpoint:benchmarks_test PASSED in 7.0s
//tensorflow/python/checkpoint:checkpoint_management_test_cpu PASSED in 10.3s
//tensorflow/python/checkpoint:checkpoint_metrics_test PASSED in 15.7s
//tensorflow/python/checkpoint:checkpoint_test PASSED in 22.3s
//tensorflow/python/checkpoint:checkpoint_view_test PASSED in 7.9s
//tensorflow/python/checkpoint:checkpoint_with_v1_optimizers_test PASSED in 10.5s
//tensorflow/python/checkpoint:functional_saver_test_cpu PASSED in 9.3s
//tensorflow/python/checkpoint:restore_test PASSED in 8.0s
//tensorflow/python/checkpoint:save_util_v1_test PASSED in 8.6s
//tensorflow/python/checkpoint:saveable_compat_test PASSED in 8.1s
//tensorflow/python/checkpoint:tensor_callable_test PASSED in 7.3s
//tensorflow/python/checkpoint:trackable_view_test PASSED in 7.1s
//tensorflow/python/client:device_lib_test_cpu PASSED in 7.4s
//tensorflow/python/client:events_writer_test PASSED in 6.6s
//tensorflow/python/client:session_benchmark_cpu PASSED in 8.0s
//tensorflow/python/client:session_list_devices_test PASSED in 7.7s
//tensorflow/python/client:session_partial_run_test PASSED in 12.4s
//tensorflow/python/client:timeline_test_cpu PASSED in 7.3s
//tensorflow/python/client:virtual_gpu_test_cpu PASSED in 7.0s
//tensorflow/python/compat:compat_test PASSED in 7.8s
//tensorflow/python/compat:disable_v2_behavior_test PASSED in 8.5s
//tensorflow/python/compiler/mlir:mlir_test PASSED in 7.2s
//tensorflow/python/compiler/tensorrt:trt_convert_test_cpu PASSED in 14.4s
//tensorflow/python/compiler/tensorrt/test:batch_matmul_test_cpu PASSED in 7.7s
//tensorflow/python/compiler/tensorrt/test:biasadd_matmul_test_cpu PASSED in 7.7s
//tensorflow/python/compiler/tensorrt/test:binary_tensor_weight_broadcast_test_cpu PASSED in 6.8s
//tensorflow/python/compiler/tensorrt/test:bool_test_cpu PASSED in 7.6s
//tensorflow/python/compiler/tensorrt/test:cast_test_cpu PASSED in 8.4s
//tensorflow/python/compiler/tensorrt/test:concatenation_test_cpu PASSED in 9.3s
//tensorflow/python/compiler/tensorrt/test:const_broadcast_test_cpu PASSED in 8.0s
//tensorflow/python/compiler/tensorrt/test:data_dependent_shape_test_cpu PASSED in 9.4s
//tensorflow/python/compiler/tensorrt/test:dynamic_input_shapes_test_cpu PASSED in 9.2s
//tensorflow/python/compiler/tensorrt/test:identity_output_test_cpu PASSED in 7.7s
//tensorflow/python/compiler/tensorrt/test:int32_test_cpu PASSED in 7.3s
//tensorflow/python/compiler/tensorrt/test:lru_cache_test_cpu PASSED in 10.5s
//tensorflow/python/compiler/tensorrt/test:memory_alignment_test_cpu PASSED in 8.0s
//tensorflow/python/compiler/tensorrt/test:multi_connection_neighbor_engine_test_cpu PASSED in 8.4s
//tensorflow/python/compiler/tensorrt/test:neighboring_engine_test_cpu PASSED in 7.5s
//tensorflow/python/compiler/tensorrt/test:quantization_test_cpu PASSED in 7.4s
//tensorflow/python/compiler/tensorrt/test:rank_two_test_cpu PASSED in 8.3s
//tensorflow/python/compiler/tensorrt/test:reshape_transpose_test_cpu PASSED in 8.6s
//tensorflow/python/compiler/tensorrt/test:topk_test_cpu PASSED in 9.3s
//tensorflow/python/compiler/tensorrt/test:trt_engine_op_shape_test_cpu PASSED in 9.1s
//tensorflow/python/compiler/tensorrt/test:trt_mode_test_cpu PASSED in 8.1s
//tensorflow/python/compiler/tensorrt/test:unary_test_cpu PASSED in 7.8s
//tensorflow/python/compiler/tensorrt/test:vgg_block_nchw_test_cpu PASSED in 7.0s
//tensorflow/python/compiler/tensorrt/test:vgg_block_test_cpu PASSED in 7.7s
//tensorflow/python/compiler/xla:jit_compile_test_cpu PASSED in 7.8s
//tensorflow/python/compiler/xla:jit_test_cpu PASSED in 12.4s
//tensorflow/python/compiler/xla:xla_test_cpu PASSED in 23.3s
//tensorflow/python/compiler/xla/experimental:xla_sharding_test PASSED in 15.8s
//tensorflow/python/data/benchmarks:batch_benchmark PASSED in 6.7s
//tensorflow/python/data/benchmarks:filter_benchmark PASSED in 7.5s
//tensorflow/python/data/benchmarks:from_tensor_slices_benchmark PASSED in 8.4s
//tensorflow/python/data/benchmarks:interleave_benchmark PASSED in 7.6s
//tensorflow/python/data/benchmarks:list_files_benchmark PASSED in 9.0s
//tensorflow/python/data/benchmarks:map_benchmark PASSED in 6.7s
//tensorflow/python/data/benchmarks:meta_benchmark PASSED in 8.8s
//tensorflow/python/data/benchmarks:prefetch_benchmark PASSED in 7.8s
//tensorflow/python/data/benchmarks:range_benchmark PASSED in 6.6s
//tensorflow/python/data/experimental/benchmarks:autotune_benchmark PASSED in 6.9s
//tensorflow/python/data/experimental/benchmarks:csv_dataset_benchmark PASSED in 8.3s
//tensorflow/python/data/experimental/benchmarks:map_and_batch_benchmark PASSED in 6.3s
//tensorflow/python/data/experimental/benchmarks:map_defun_benchmark PASSED in 7.7s
//tensorflow/python/data/experimental/benchmarks:matching_files_benchmark PASSED in 7.8s
//tensorflow/python/data/experimental/benchmarks:optimize_benchmark PASSED in 8.2s
//tensorflow/python/data/experimental/benchmarks:parameter_value_benchmark PASSED in 9.1s
//tensorflow/python/data/experimental/benchmarks:rejection_resample_benchmark PASSED in 8.1s
//tensorflow/python/data/experimental/benchmarks:snapshot_dataset_benchmark PASSED in 7.9s
//tensorflow/python/data/experimental/benchmarks:unbatch_benchmark PASSED in 6.8s
//tensorflow/python/data/experimental/kernel_tests:assert_cardinality_test PASSED in 25.3s
//tensorflow/python/data/experimental/kernel_tests:assert_next_test PASSED in 9.5s
//tensorflow/python/data/experimental/kernel_tests:assert_prev_test PASSED in 10.0s
//tensorflow/python/data/experimental/kernel_tests:checkpoint_input_pipeline_hook_test PASSED in 16.8s
//tensorflow/python/data/experimental/kernel_tests:compression_ops_test PASSED in 12.4s
//tensorflow/python/data/experimental/kernel_tests:copy_to_device_test_cpu PASSED in 23.1s
//tensorflow/python/data/experimental/kernel_tests:dense_to_sparse_batch_test PASSED in 18.4s
//tensorflow/python/data/experimental/kernel_tests:from_list_test PASSED in 36.6s
//tensorflow/python/data/experimental/kernel_tests:io_test PASSED in 27.3s
//tensorflow/python/data/experimental/kernel_tests:lookup_ops_test PASSED in 8.7s
//tensorflow/python/data/experimental/kernel_tests:make_csv_dataset_test PASSED in 36.2s
//tensorflow/python/data/experimental/kernel_tests:make_saveable_from_iterator_test PASSED in 7.0s
//tensorflow/python/data/experimental/kernel_tests:make_tf_record_dataset_test PASSED in 50.9s
//tensorflow/python/data/experimental/kernel_tests:map_defun_op_test PASSED in 9.3s
//tensorflow/python/data/experimental/kernel_tests:matching_files_dataset_test PASSED in 17.1s
//tensorflow/python/data/experimental/kernel_tests:model_dataset_test PASSED in 9.3s
//tensorflow/python/data/experimental/kernel_tests:non_serializable_test PASSED in 8.6s
//tensorflow/python/data/experimental/kernel_tests:prefetch_to_device_test_cpu PASSED in 11.0s
//tensorflow/python/data/experimental/kernel_tests:prefetch_with_slack_test PASSED in 9.8s
//tensorflow/python/data/experimental/kernel_tests:shuffle_and_repeat_test PASSED in 31.4s
//tensorflow/python/data/experimental/kernel_tests:sleep_test PASSED in 7.2s
//tensorflow/python/data/experimental/kernel_tests:tf_record_writer_test PASSED in 14.1s
//tensorflow/python/data/experimental/kernel_tests:variant_test PASSED in 8.4s
//tensorflow/python/data/experimental/kernel_tests:wrap_unwrap_test_cpu PASSED in 8.5s
//tensorflow/python/data/experimental/kernel_tests/optimization:filter_fusion_test PASSED in 32.0s
//tensorflow/python/data/experimental/kernel_tests/optimization:filter_parallelization_test PASSED in 60.3s
//tensorflow/python/data/experimental/kernel_tests/optimization:grappler_test_cpu PASSED in 10.5s
//tensorflow/python/data/experimental/kernel_tests/optimization:make_deterministic_test PASSED in 27.0s
//tensorflow/python/data/experimental/kernel_tests/optimization:map_and_batch_fusion_test PASSED in 9.5s
//tensorflow/python/data/experimental/kernel_tests/optimization:map_and_filter_fusion_test PASSED in 20.9s
//tensorflow/python/data/experimental/kernel_tests/optimization:map_fusion_test PASSED in 18.5s
//tensorflow/python/data/experimental/kernel_tests/optimization:map_parallelization_test PASSED in 12.2s
//tensorflow/python/data/experimental/kernel_tests/optimization:noop_elimination_test PASSED in 11.9s
//tensorflow/python/data/experimental/kernel_tests/service:distributed_save_test PASSED in 15.5s
//tensorflow/python/data/experimental/kernel_tests/service:multi_device_test PASSED in 22.1s
//tensorflow/python/data/experimental/service:server_lib_test PASSED in 11.4s
//tensorflow/python/data/kernel_tests:as_numpy_iterator_test PASSED in 17.3s
//tensorflow/python/data/kernel_tests:bucket_by_sequence_length_test PASSED in 24.7s
//tensorflow/python/data/kernel_tests:cache_test PASSED in 77.2s
//tensorflow/python/data/kernel_tests:cardinality_test PASSED in 16.1s
//tensorflow/python/data/kernel_tests:checkpoint_test PASSED in 24.7s
//tensorflow/python/data/kernel_tests:concatenate_test PASSED in 44.6s
//tensorflow/python/data/kernel_tests:counter_test PASSED in 32.7s
//tensorflow/python/data/kernel_tests:dataset_spec_test PASSED in 7.7s
//tensorflow/python/data/kernel_tests:dataset_test PASSED in 30.9s
//tensorflow/python/data/kernel_tests:enumerate_test PASSED in 31.8s
//tensorflow/python/data/kernel_tests:from_sparse_tensor_slices_test PASSED in 7.3s
//tensorflow/python/data/kernel_tests:from_tensor_slices_test PASSED in 29.6s
//tensorflow/python/data/kernel_tests:from_tensors_test PASSED in 19.8s
//tensorflow/python/data/kernel_tests:get_single_element_test PASSED in 13.7s
//tensorflow/python/data/kernel_tests:ignore_errors_test PASSED in 18.0s
//tensorflow/python/data/kernel_tests:io_test PASSED in 52.3s
//tensorflow/python/data/kernel_tests:iterator_test_cpu PASSED in 16.4s
//tensorflow/python/data/kernel_tests:len_test PASSED in 7.6s
//tensorflow/python/data/kernel_tests:list_files_test PASSED in 12.2s
//tensorflow/python/data/kernel_tests:optional_test_cpu PASSED in 10.4s
//tensorflow/python/data/kernel_tests:options_test PASSED in 9.5s
//tensorflow/python/data/kernel_tests:placement_test_cpu PASSED in 9.4s
//tensorflow/python/data/kernel_tests:prefetch_test PASSED in 45.3s
//tensorflow/python/data/kernel_tests:random_test PASSED in 25.8s
//tensorflow/python/data/kernel_tests:range_test PASSED in 44.7s
//tensorflow/python/data/kernel_tests:rebatch_test PASSED in 7.5s
//tensorflow/python/data/kernel_tests:reduce_test_cpu PASSED in 25.3s
//tensorflow/python/data/kernel_tests:scan_test_cpu PASSED in 38.1s
//tensorflow/python/data/kernel_tests:sparse_batch_test PASSED in 16.9s
//tensorflow/python/data/kernel_tests:unbatch_test PASSED in 30.0s
//tensorflow/python/data/util:convert_test PASSED in 7.9s
//tensorflow/python/data/util:nest_test PASSED in 6.9s
//tensorflow/python/data/util:options_test PASSED in 7.8s
//tensorflow/python/data/util:random_seed_test PASSED in 9.6s
//tensorflow/python/data/util:sparse_test PASSED in 7.7s
//tensorflow/python/data/util:structure_test PASSED in 9.0s
//tensorflow/python/data/util:traverse_test PASSED in 7.9s
//tensorflow/python/debug/cli:analyzer_cli_test_cpu PASSED in 8.3s
//tensorflow/python/debug/cli:cli_config_test PASSED in 6.3s
//tensorflow/python/debug/cli:cli_shared_test PASSED in 6.7s
//tensorflow/python/debug/cli:command_parser_test PASSED in 5.6s
//tensorflow/python/debug/cli:curses_ui_test PASSED in 6.9s
//tensorflow/python/debug/cli:debugger_cli_common_test PASSED in 5.9s
//tensorflow/python/debug/cli:evaluator_test PASSED in 7.4s
//tensorflow/python/debug/cli:profile_analyzer_cli_test PASSED in 6.0s
//tensorflow/python/debug/cli:readline_ui_test PASSED in 6.3s
//tensorflow/python/debug/cli:tensor_format_test PASSED in 7.1s
//tensorflow/python/debug/lib:check_numerics_callback_test_cpu PASSED in 12.5s
//tensorflow/python/debug/lib:common_test PASSED in 6.8s
//tensorflow/python/debug/lib:debug_data_test PASSED in 6.1s
//tensorflow/python/debug/lib:debug_events_monitors_test PASSED in 8.2s
//tensorflow/python/debug/lib:debug_events_writer_test PASSED in 8.1s
//tensorflow/python/debug/lib:debug_gradients_test_cpu PASSED in 6.9s
//tensorflow/python/debug/lib:debug_graph_reconstruction_test_cpu PASSED in 14.4s
//tensorflow/python/debug/lib:debug_graphs_test PASSED in 6.0s
//tensorflow/python/debug/lib:debug_grappler_test_cpu PASSED in 8.0s
//tensorflow/python/debug/lib:debug_utils_test PASSED in 6.4s
//tensorflow/python/debug/lib:debug_v2_ops_test_cpu PASSED in 15.1s
//tensorflow/python/debug/lib:profiling_test PASSED in 6.3s
//tensorflow/python/debug/lib:session_debug_file_test_cpu PASSED in 13.6s
//tensorflow/python/debug/lib:session_debug_multi_gpu_test_cpu PASSED in 6.1s
//tensorflow/python/debug/lib:source_utils_test PASSED in 9.5s
//tensorflow/python/debug/wrappers:disk_usage_test PASSED in 7.1s
//tensorflow/python/debug/wrappers:dumping_wrapper_test PASSED in 6.5s
//tensorflow/python/debug/wrappers:framework_test PASSED in 6.3s
//tensorflow/python/debug/wrappers:local_cli_wrapper_test PASSED in 14.9s
//tensorflow/python/distribute:checkpoint_utils_test_2gpu PASSED in 11.3s
//tensorflow/python/distribute:checkpoint_utils_test_cpu PASSED in 10.8s
//tensorflow/python/distribute:checkpointing_test_2gpu PASSED in 10.2s
//tensorflow/python/distribute:checkpointing_test_cpu PASSED in 7.9s //tensorflow/python/distribute:collective_all_reduce_strategy_test_2gpu PASSED in 55.2s //tensorflow/python/distribute:collective_all_reduce_strategy_test_cpu PASSED in 56.4s //tensorflow/python/distribute:collective_all_reduce_strategy_test_xla_2gpu PASSED in 33.7s //tensorflow/python/distribute:collective_util_test PASSED in 6.9s //tensorflow/python/distribute:combinations_test_2gpu PASSED in 18.6s //tensorflow/python/distribute:combinations_test_cpu PASSED in 20.0s //tensorflow/python/distribute:cross_device_utils_test_cpu PASSED in 9.9s //tensorflow/python/distribute:custom_training_loop_gradient_test_2gpu PASSED in 11.5s //tensorflow/python/distribute:custom_training_loop_gradient_test_cpu PASSED in 9.3s //tensorflow/python/distribute:device_util_test_cpu PASSED in 8.8s //tensorflow/python/distribute:distribute_coordinator_test PASSED in 13.7s //tensorflow/python/distribute:distribute_lib_test PASSED in 9.9s //tensorflow/python/distribute:distribute_utils_test_2gpu PASSED in 10.5s //tensorflow/python/distribute:distribute_utils_test_cpu PASSED in 9.0s //tensorflow/python/distribute:input_ops_test_cpu PASSED in 12.5s //tensorflow/python/distribute:metrics_v1_test_2gpu PASSED in 44.5s //tensorflow/python/distribute:metrics_v1_test_cpu PASSED in 38.1s //tensorflow/python/distribute:mirrored_values_test_2gpu PASSED in 7.7s //tensorflow/python/distribute:mirrored_values_test_cpu PASSED in 10.6s //tensorflow/python/distribute:mirrored_variable_test_2gpu PASSED in 18.2s //tensorflow/python/distribute:mirrored_variable_test_cpu PASSED in 19.5s //tensorflow/python/distribute:multi_process_runner_no_init_test PASSED in 7.7s //tensorflow/python/distribute:multi_worker_continuous_run_test_cpu PASSED in 20.5s //tensorflow/python/distribute:multi_worker_util_test PASSED in 5.7s //tensorflow/python/distribute:numpy_dataset_test PASSED in 8.1s //tensorflow/python/distribute:one_device_strategy_test_cpu PASSED in 15.4s
//tensorflow/python/distribute:packed_distributed_variable_test PASSED in 7.0s //tensorflow/python/distribute:parameter_server_strategy_test_2gpu PASSED in 34.2s //tensorflow/python/distribute:parameter_server_strategy_test_cpu PASSED in 32.8s //tensorflow/python/distribute:parameter_server_strategy_v2_test_2gpu PASSED in 19.8s //tensorflow/python/distribute:parameter_server_strategy_v2_test_cpu PASSED in 27.1s //tensorflow/python/distribute:per_replica_test_2gpu PASSED in 9.0s //tensorflow/python/distribute:per_replica_test_cpu PASSED in 9.0s //tensorflow/python/distribute:ps_values_test_2gpu PASSED in 8.7s //tensorflow/python/distribute:ps_values_test_cpu PASSED in 10.3s //tensorflow/python/distribute:remote_mirrored_strategy_eager_test_cpu PASSED in 10.1s //tensorflow/python/distribute:sharded_variable_test PASSED in 27.5s //tensorflow/python/distribute:shared_variable_creator_test PASSED in 6.1s //tensorflow/python/distribute:strategy_combinations_test_cpu PASSED in 44.4s //tensorflow/python/distribute:template_mirrored_strategy_test_cpu PASSED in 8.7s //tensorflow/python/distribute:test_util_test_2gpu PASSED in 16.2s //tensorflow/python/distribute:test_util_test_cpu PASSED in 16.7s //tensorflow/python/distribute:tf_function_test_2gpu PASSED in 13.1s //tensorflow/python/distribute:tf_function_test_cpu PASSED in 9.8s //tensorflow/python/distribute:values_v2_test_cpu PASSED in 13.7s //tensorflow/python/distribute:warm_starting_util_test_2gpu PASSED in 10.5s //tensorflow/python/distribute:warm_starting_util_test_cpu PASSED in 9.2s //tensorflow/python/distribute/cluster_resolver:base_cluster_resolver_py_test PASSED in 7.9s //tensorflow/python/distribute/cluster_resolver:gce_cluster_resolver_py_test PASSED in 7.2s //tensorflow/python/distribute/cluster_resolver:kubernetes_cluster_resolver_py_test PASSED in 7.1s //tensorflow/python/distribute/cluster_resolver:sagemaker_cluster_resolver_py_test PASSED in 8.1s
//tensorflow/python/distribute/cluster_resolver:slurm_cluster_resolver_py_test PASSED in 7.6s //tensorflow/python/distribute/cluster_resolver:tfconfig_cluster_resolver_py_test PASSED in 7.1s //tensorflow/python/distribute/cluster_resolver/tpu:tpu_cluster_resolver_py_test PASSED in 10.0s //tensorflow/python/distribute/coordinator:metric_utils_test PASSED in 9.3s //tensorflow/python/distribute/coordinator:watchdog_test PASSED in 61.0s //tensorflow/python/distribute/experimental:dtensor_util_test_cpu PASSED in 10.9s //tensorflow/python/distribute/experimental:mirrored_strategy_test_cpu PASSED in 35.0s //tensorflow/python/distribute/integration_test:saved_model_test_cpu PASSED in 35.6s //tensorflow/python/distribute/parallel_device:parallel_device_test_cpu PASSED in 22.9s //tensorflow/python/distribute/v1:all_reduce_test PASSED in 42.9s //tensorflow/python/distribute/v1:cross_device_ops_test_2gpu PASSED in 65.8s //tensorflow/python/distribute/v1:cross_device_ops_test_cpu PASSED in 69.5s //tensorflow/python/dlpack:dlpack_test_cpu PASSED in 8.2s //tensorflow/python/eager:backprop_test_cpu PASSED in 98.6s //tensorflow/python/eager:benchmarks_test_cpu PASSED in 10.2s //tensorflow/python/eager:cancellation_test_cpu PASSED in 6.8s //tensorflow/python/eager:context_test_cpu PASSED in 8.3s //tensorflow/python/eager:core_test_cpu PASSED in 21.2s //tensorflow/python/eager:gradient_input_output_exclusions_test PASSED in 32.3s //tensorflow/python/eager:graph_only_ops_test_cpu PASSED in 6.7s //tensorflow/python/eager:lift_to_graph_test PASSED in 10.7s //tensorflow/python/eager:monitoring_test_cpu PASSED in 13.8s //tensorflow/python/eager:ops_test_cpu PASSED in 11.8s //tensorflow/python/eager:profiler_client_test PASSED in 7.3s //tensorflow/python/eager:profiler_test_cpu PASSED in 6.8s //tensorflow/python/eager:pywrap_tfe_test PASSED in 19.7s //tensorflow/python/eager:remote_benchmarks_test_cpu PASSED in 9.3s //tensorflow/python/eager:run_eager_op_as_function_test_cpu PASSED in 8.2s 
//tensorflow/python/eager:run_eager_op_as_function_xla_test_cpu PASSED in 6.1s //tensorflow/python/eager:tape_test PASSED in 7.8s //tensorflow/python/eager:tensor_test_cpu PASSED in 10.4s //tensorflow/python/eager:wrap_function_device_test_cpu PASSED in 7.9s //tensorflow/python/eager:wrap_function_test PASSED in 11.1s //tensorflow/python/eager/benchmarks:kpi_benchmark_test_cpu PASSED in 14.4s //tensorflow/python/eager/memory_tests:remote_memory_test_cpu PASSED in 7.7s //tensorflow/python/eager/polymorphic_function:argument_naming_test_cpu PASSED in 7.8s //tensorflow/python/eager/polymorphic_function:collection_test_cpu PASSED in 7.3s //tensorflow/python/eager/polymorphic_function:compiler_ir_test_cpu PASSED in 7.4s //tensorflow/python/eager/polymorphic_function:compiler_ir_test_cpu_mlir_bridge_test PASSED in 9.5s //tensorflow/python/eager/polymorphic_function:function_spec_test PASSED in 6.9s //tensorflow/python/eager/polymorphic_function:polymorphic_function_xla_jit_test_cpu PASSED in 24.4s //tensorflow/python/eager/polymorphic_function:polymorphic_function_xla_jit_test_cpu_mlir_bridge_test PASSED in 35.0s //tensorflow/python/eager/polymorphic_function:polymorphic_function_xla_test_cpu PASSED in 6.5s //tensorflow/python/eager/polymorphic_function:quarantine_test PASSED in 30.7s //tensorflow/python/feature_column:sequence_feature_column_integration_test PASSED in 8.7s //tensorflow/python/feature_column:serialization_test PASSED in 10.7s //tensorflow/python/framework:auto_control_deps_test PASSED in 24.6s //tensorflow/python/framework:c_api_util_test PASSED in 8.5s //tensorflow/python/framework:common_shapes_test PASSED in 7.2s //tensorflow/python/framework:composite_tensor_test PASSED in 10.6s //tensorflow/python/framework:config_test_2gpu PASSED in 11.2s //tensorflow/python/framework:config_test_cpu PASSED in 12.9s //tensorflow/python/framework:constant_op_test PASSED in 8.3s //tensorflow/python/framework:device_spec_test PASSED in 7.0s 
//tensorflow/python/framework:device_test PASSED in 7.5s //tensorflow/python/framework:dtypes_test PASSED in 15.0s //tensorflow/python/framework:error_interpolation_test PASSED in 8.6s //tensorflow/python/framework:errors_test PASSED in 7.9s //tensorflow/python/framework:extension_type_field_test PASSED in 9.0s //tensorflow/python/framework:extension_type_test PASSED in 19.2s //tensorflow/python/framework:file_system_test PASSED in 8.2s //tensorflow/python/framework:function_def_to_graph_test PASSED in 7.8s //tensorflow/python/framework:graph_building_benchmark_cpu PASSED in 6.4s //tensorflow/python/framework:graph_util_test PASSED in 7.8s //tensorflow/python/framework:immutable_dict_test PASSED in 7.4s //tensorflow/python/framework:importer_test PASSED in 16.6s //tensorflow/python/framework:indexed_slices_test PASSED in 6.9s //tensorflow/python/framework:kernels_test PASSED in 7.4s //tensorflow/python/framework:meta_graph_test PASSED in 9.5s //tensorflow/python/framework:node_file_writer_test_cpu PASSED in 7.6s //tensorflow/python/framework:offset_counter_helper_test PASSED in 0.7s //tensorflow/python/framework:op_allowlist_namespace_test PASSED in 1.8s //tensorflow/python/framework:op_callbacks_test_cpu PASSED in 9.1s //tensorflow/python/framework:op_def_library_test PASSED in 8.4s //tensorflow/python/framework:op_def_util_test PASSED in 6.9s //tensorflow/python/framework:ops_enable_eager_test PASSED in 2.5s //tensorflow/python/framework:ops_test PASSED in 20.7s //tensorflow/python/framework:proto_test PASSED in 6.3s //tensorflow/python/framework:py_context_manager_test PASSED in 6.8s //tensorflow/python/framework:python_api_dispatcher_test PASSED in 7.0s //tensorflow/python/framework:python_api_info_test PASSED in 7.7s //tensorflow/python/framework:python_api_parameter_converter_test PASSED in 14.0s //tensorflow/python/framework:python_op_gen_annotation_test PASSED in 3.7s //tensorflow/python/framework:python_op_gen_annotator_test PASSED in 0.1s 
//tensorflow/python/framework:python_tensor_converter_test PASSED in 7.5s //tensorflow/python/framework:random_seed_test PASSED in 6.9s //tensorflow/python/framework:registry_test PASSED in 8.5s //tensorflow/python/framework:smart_cond_test PASSED in 7.7s //tensorflow/python/framework:sparse_tensor_test PASSED in 8.5s //tensorflow/python/framework:subscribe_test PASSED in 9.4s //tensorflow/python/framework:tensor_shape_test PASSED in 8.2s //tensorflow/python/framework:tensor_test PASSED in 9.8s //tensorflow/python/framework:tensor_util_test PASSED in 8.8s //tensorflow/python/framework:test_combinations_test PASSED in 6.9s //tensorflow/python/framework:test_util_test_cpu PASSED in 13.6s //tensorflow/python/framework:tf2_test PASSED in 12.1s //tensorflow/python/framework:traceable_stack_test PASSED in 7.4s //tensorflow/python/framework:type_spec_test PASSED in 7.5s //tensorflow/python/framework:versions_test PASSED in 7.3s //tensorflow/python/framework/experimental:graph_building_test_cpu PASSED in 7.3s //tensorflow/python/framework/experimental:unified_api_test_cpu PASSED in 12.1s //tensorflow/python/grappler:arithmetic_optimizer_test_cpu PASSED in 9.6s //tensorflow/python/grappler:auto_mixed_precision_test_cpu PASSED in 12.1s //tensorflow/python/grappler:constant_folding_test_cpu PASSED in 7.9s //tensorflow/python/grappler:cost_analyzer_test PASSED in 9.6s //tensorflow/python/grappler:datasets_test PASSED in 9.5s //tensorflow/python/grappler:item_test PASSED in 7.6s //tensorflow/python/grappler:memory_optimizer_test PASSED in 15.5s //tensorflow/python/grappler:model_analyzer_test PASSED in 9.2s //tensorflow/python/grappler:remapper_test_cpu PASSED in 8.2s //tensorflow/python/grappler:tf_optimizer_test PASSED in 7.6s //tensorflow/python/kernel_tests:benchmark_test_cpu PASSED in 8.4s //tensorflow/python/kernel_tests:check_ops_test_cpu PASSED in 17.3s //tensorflow/python/kernel_tests:collective_ops_multi_worker_test PASSED in 27.1s 
//tensorflow/python/kernel_tests:composite_tensor_ops_test PASSED in 8.5s //tensorflow/python/kernel_tests:critical_section_test_cpu PASSED in 26.6s //tensorflow/python/kernel_tests:garbage_collection_test PASSED in 6.9s //tensorflow/python/kernel_tests:gradient_correctness_test_cpu PASSED in 8.2s //tensorflow/python/kernel_tests:histogram_ops_test_cpu PASSED in 7.3s //tensorflow/python/kernel_tests:logging_ops_test_cpu PASSED in 9.0s //tensorflow/python/kernel_tests:numerics_test_cpu PASSED in 6.8s //tensorflow/python/kernel_tests:template_test PASSED in 9.4s //tensorflow/python/kernel_tests:trace_op_test_cpu PASSED in 7.7s //tensorflow/python/kernel_tests/array_ops:batch_gather_op_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/array_ops:batch_scatter_ops_test PASSED in 8.2s //tensorflow/python/kernel_tests/array_ops:batchtospace_op_test_cpu PASSED in 13.2s //tensorflow/python/kernel_tests/array_ops:bcast_ops_test PASSED in 7.1s //tensorflow/python/kernel_tests/array_ops:bitcast_op_test_cpu PASSED in 8.0s //tensorflow/python/kernel_tests/array_ops:broadcast_to_ops_test_cpu PASSED in 28.6s //tensorflow/python/kernel_tests/array_ops:cast_op_test_cpu PASSED in 9.6s //tensorflow/python/kernel_tests/array_ops:constant_op_eager_test_cpu PASSED in 13.5s //tensorflow/python/kernel_tests/array_ops:constant_op_test_cpu PASSED in 11.6s //tensorflow/python/kernel_tests/array_ops:denormal_test_cpu PASSED in 7.3s //tensorflow/python/kernel_tests/array_ops:depthtospace_op_test_cpu PASSED in 10.2s //tensorflow/python/kernel_tests/array_ops:edit_distance_op_test PASSED in 9.6s //tensorflow/python/kernel_tests/array_ops:fingerprint_op_test PASSED in 6.4s //tensorflow/python/kernel_tests/array_ops:gather_nd_op_test_cpu PASSED in 8.4s //tensorflow/python/kernel_tests/array_ops:identity_n_op_py_test PASSED in 8.1s //tensorflow/python/kernel_tests/array_ops:identity_op_py_test PASSED in 7.9s //tensorflow/python/kernel_tests/array_ops:large_concat_op_test_cpu PASSED in 10.0s 
//tensorflow/python/kernel_tests/array_ops:manip_ops_test_cpu PASSED in 7.8s //tensorflow/python/kernel_tests/array_ops:one_hot_op_test_cpu PASSED in 7.6s //tensorflow/python/kernel_tests/array_ops:pad_op_test_cpu PASSED in 18.0s //tensorflow/python/kernel_tests/array_ops:reshape_op_test_cpu PASSED in 10.0s //tensorflow/python/kernel_tests/array_ops:reverse_sequence_op_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/array_ops:scalar_test_cpu PASSED in 10.7s //tensorflow/python/kernel_tests/array_ops:shape_ops_test_cpu PASSED in 14.4s //tensorflow/python/kernel_tests/array_ops:slice_op_test_cpu PASSED in 10.1s //tensorflow/python/kernel_tests/array_ops:spacetobatch_op_test_cpu PASSED in 14.1s //tensorflow/python/kernel_tests/array_ops:spacetodepth_op_test_cpu PASSED in 9.9s //tensorflow/python/kernel_tests/array_ops:stack_op_test_cpu PASSED in 14.6s //tensorflow/python/kernel_tests/array_ops:unique_op_test_cpu PASSED in 7.4s //tensorflow/python/kernel_tests/array_ops:unstack_op_test_cpu PASSED in 8.4s //tensorflow/python/kernel_tests/array_ops:where_op_test_cpu PASSED in 17.4s //tensorflow/python/kernel_tests/control_flow:cond_v2_test_cpu PASSED in 53.0s //tensorflow/python/kernel_tests/control_flow:control_flow_util_test PASSED in 9.5s //tensorflow/python/kernel_tests/control_flow:control_flow_util_v2_test PASSED in 7.7s //tensorflow/python/kernel_tests/control_flow:py_func_test_cpu PASSED in 15.2s //tensorflow/python/kernel_tests/control_flow:scan_ops_test_cpu PASSED in 65.6s //tensorflow/python/kernel_tests/control_flow:while_v2_test_cpu PASSED in 65.7s //tensorflow/python/kernel_tests/custom_ops:ackermann_test PASSED in 8.9s //tensorflow/python/kernel_tests/custom_ops:duplicate_op_test PASSED in 6.6s //tensorflow/python/kernel_tests/custom_ops:invalid_op_test PASSED in 6.9s //tensorflow/python/kernel_tests/data_structures:conditional_accumulator_test PASSED in 8.7s //tensorflow/python/kernel_tests/data_structures:dynamic_partition_op_test_2gpu PASSED in 12.6s
//tensorflow/python/kernel_tests/data_structures:dynamic_partition_op_test_cpu PASSED in 14.2s //tensorflow/python/kernel_tests/data_structures:dynamic_stitch_op_test_cpu PASSED in 8.6s //tensorflow/python/kernel_tests/data_structures:fifo_queue_test PASSED in 11.4s //tensorflow/python/kernel_tests/data_structures:list_ops_test_cpu PASSED in 26.2s //tensorflow/python/kernel_tests/data_structures:listdiff_op_test PASSED in 8.0s //tensorflow/python/kernel_tests/data_structures:lookup_ops_test PASSED in 21.8s //tensorflow/python/kernel_tests/data_structures:map_ops_test PASSED in 12.8s //tensorflow/python/kernel_tests/data_structures:padding_fifo_queue_test_cpu PASSED in 8.8s //tensorflow/python/kernel_tests/data_structures:priority_queue_test PASSED in 7.2s //tensorflow/python/kernel_tests/data_structures:stack_ops_test_cpu PASSED in 8.3s //tensorflow/python/kernel_tests/data_structures:stage_op_test_cpu PASSED in 9.0s //tensorflow/python/kernel_tests/distributions:bernoulli_test_cpu PASSED in 12.0s //tensorflow/python/kernel_tests/distributions:bijector_test_cpu PASSED in 8.4s //tensorflow/python/kernel_tests/distributions:categorical_test_cpu PASSED in 15.9s //tensorflow/python/kernel_tests/distributions:dirichlet_multinomial_test_cpu PASSED in 15.1s //tensorflow/python/kernel_tests/distributions:dirichlet_test_cpu PASSED in 14.6s //tensorflow/python/kernel_tests/distributions:exponential_test_cpu PASSED in 10.7s //tensorflow/python/kernel_tests/distributions:gamma_test_cpu PASSED in 41.0s //tensorflow/python/kernel_tests/distributions:identity_bijector_test_cpu PASSED in 8.6s //tensorflow/python/kernel_tests/distributions:kullback_leibler_test_cpu PASSED in 8.3s //tensorflow/python/kernel_tests/distributions:laplace_test_cpu PASSED in 33.6s //tensorflow/python/kernel_tests/distributions:multinomial_test_cpu PASSED in 9.1s //tensorflow/python/kernel_tests/distributions:normal_test_cpu PASSED in 21.9s
//tensorflow/python/kernel_tests/distributions:special_math_test_cpu PASSED in 22.2s //tensorflow/python/kernel_tests/distributions:uniform_test_cpu PASSED in 8.9s //tensorflow/python/kernel_tests/image_ops:attention_ops_test PASSED in 9.5s //tensorflow/python/kernel_tests/image_ops:decode_bmp_op_test PASSED in 8.3s //tensorflow/python/kernel_tests/image_ops:decode_compressed_op_test PASSED in 6.9s //tensorflow/python/kernel_tests/image_ops:decode_image_op_test PASSED in 7.5s //tensorflow/python/kernel_tests/image_ops:decode_jpeg_op_test PASSED in 7.1s //tensorflow/python/kernel_tests/image_ops:decode_png_op_test PASSED in 7.5s //tensorflow/python/kernel_tests/image_ops:decode_raw_op_test PASSED in 7.2s //tensorflow/python/kernel_tests/image_ops:draw_bounding_box_op_test_cpu PASSED in 7.2s //tensorflow/python/kernel_tests/image_ops:extract_image_patches_op_test_cpu PASSED in 10.9s //tensorflow/python/kernel_tests/image_ops:extract_volume_patches_op_test_cpu PASSED in 9.1s //tensorflow/python/kernel_tests/io_ops:checkpoint_ops_test PASSED in 8.7s //tensorflow/python/kernel_tests/io_ops:decode_csv_op_test PASSED in 8.4s //tensorflow/python/kernel_tests/io_ops:io_ops_test PASSED in 7.7s //tensorflow/python/kernel_tests/io_ops:parse_single_example_op_test PASSED in 11.6s //tensorflow/python/kernel_tests/io_ops:parsing_ops_test PASSED in 26.9s //tensorflow/python/kernel_tests/io_ops:reader_ops_test PASSED in 9.5s //tensorflow/python/kernel_tests/io_ops:record_input_test PASSED in 25.0s //tensorflow/python/kernel_tests/io_ops:save_restore_ops_test PASSED in 9.3s //tensorflow/python/kernel_tests/linalg:determinant_op_test_cpu PASSED in 7.5s //tensorflow/python/kernel_tests/linalg:linear_operator_addition_test_cpu PASSED in 9.0s //tensorflow/python/kernel_tests/linalg:linear_operator_algebra_test_cpu PASSED in 10.8s //tensorflow/python/kernel_tests/linalg:linear_operator_test_cpu PASSED in 10.3s //tensorflow/python/kernel_tests/linalg:lu_op_test_cpu PASSED in 9.4s 
//tensorflow/python/kernel_tests/linalg:matrix_inverse_op_test_cpu PASSED in 7.7s //tensorflow/python/kernel_tests/linalg:matrix_logarithm_op_test PASSED in 63.3s //tensorflow/python/kernel_tests/linalg:matrix_solve_ls_op_test_cpu PASSED in 69.4s //tensorflow/python/kernel_tests/linalg:matrix_solve_op_test_cpu PASSED in 17.3s //tensorflow/python/kernel_tests/linalg:matrix_square_root_op_test_cpu PASSED in 7.4s //tensorflow/python/kernel_tests/linalg:slicing_test_cpu PASSED in 12.8s //tensorflow/python/kernel_tests/linalg/sparse:conjugate_gradient_test_cpu PASSED in 12.8s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_test_cpu PASSED in 9.1s //tensorflow/python/kernel_tests/math_ops:aggregate_ops_test_cpu PASSED in 8.8s //tensorflow/python/kernel_tests/math_ops:argmax_op_test_cpu PASSED in 8.4s //tensorflow/python/kernel_tests/math_ops:banded_triangular_solve_op_test_cpu PASSED in 10.8s //tensorflow/python/kernel_tests/math_ops:basic_gpu_test_cpu PASSED in 8.3s //tensorflow/python/kernel_tests/math_ops:bincount_op_test_cpu PASSED in 10.6s //tensorflow/python/kernel_tests/math_ops:bucketize_op_test_cpu PASSED in 8.6s //tensorflow/python/kernel_tests/math_ops:clip_ops_test PASSED in 8.2s //tensorflow/python/kernel_tests/math_ops:confusion_matrix_test PASSED in 9.6s //tensorflow/python/kernel_tests/math_ops:cross_grad_test_cpu PASSED in 7.2s //tensorflow/python/kernel_tests/math_ops:cumulative_logsumexp_test_cpu PASSED in 9.1s //tensorflow/python/kernel_tests/math_ops:in_topk_op_test_cpu PASSED in 7.7s //tensorflow/python/kernel_tests/math_ops:reduce_benchmark_test_cpu PASSED in 6.7s //tensorflow/python/kernel_tests/math_ops:segment_reduction_ops_d9m_test_cpu PASSED in 7.1s //tensorflow/python/kernel_tests/math_ops:sets_test PASSED in 25.4s //tensorflow/python/kernel_tests/math_ops:topk_op_test_cpu PASSED in 11.0s //tensorflow/python/kernel_tests/math_ops:zero_division_test_cpu PASSED in 6.7s //tensorflow/python/kernel_tests/nn_ops:betainc_op_test_cpu PASSED in 9.7s
//tensorflow/python/kernel_tests/nn_ops:bias_op_test_cpu PASSED in 176.3s //tensorflow/python/kernel_tests/nn_ops:conv1d_test_cpu PASSED in 8.1s //tensorflow/python/kernel_tests/nn_ops:conv1d_transpose_test_cpu PASSED in 9.1s //tensorflow/python/kernel_tests/nn_ops:conv2d_transpose_test_cpu PASSED in 9.0s //tensorflow/python/kernel_tests/nn_ops:conv3d_backprop_filter_v2_grad_test_cpu PASSED in 31.8s //tensorflow/python/kernel_tests/nn_ops:conv3d_transpose_test_cpu PASSED in 10.0s //tensorflow/python/kernel_tests/nn_ops:ctc_decoder_ops_test PASSED in 9.5s //tensorflow/python/kernel_tests/nn_ops:ctc_loss_op_test_cpu PASSED in 66.0s //tensorflow/python/kernel_tests/nn_ops:cudnn_d9m_test_cpu PASSED in 6.7s //tensorflow/python/kernel_tests/nn_ops:cudnn_deterministic_ops_test_cpu PASSED in 7.6s //tensorflow/python/kernel_tests/nn_ops:losses_test PASSED in 41.4s //tensorflow/python/kernel_tests/nn_ops:lrn_op_test_cpu PASSED in 9.6s //tensorflow/python/kernel_tests/nn_ops:morphological_ops_test_cpu PASSED in 13.5s //tensorflow/python/kernel_tests/nn_ops:nth_element_op_test_cpu PASSED in 9.1s //tensorflow/python/kernel_tests/nn_ops:pool_test_cpu PASSED in 36.7s //tensorflow/python/kernel_tests/nn_ops:pooling_ops_3d_test_cpu PASSED in 19.1s //tensorflow/python/kernel_tests/nn_ops:relu_op_test_cpu PASSED in 10.2s //tensorflow/python/kernel_tests/nn_ops:softmax_op_test_cpu PASSED in 9.2s //tensorflow/python/kernel_tests/nn_ops:softplus_op_test_cpu PASSED in 6.8s //tensorflow/python/kernel_tests/nn_ops:softsign_op_test_cpu PASSED in 7.7s //tensorflow/python/kernel_tests/nn_ops:xent_op_d9m_test_cpu PASSED in 154.7s //tensorflow/python/kernel_tests/nn_ops:xent_op_test_cpu PASSED in 9.4s //tensorflow/python/kernel_tests/proto:descriptor_source_test PASSED in 9.2s //tensorflow/python/kernel_tests/proto:encode_proto_op_test PASSED in 7.9s //tensorflow/python/kernel_tests/quantization_ops:quantization_ops_test PASSED in 8.2s
//tensorflow/python/kernel_tests/random:candidate_sampler_ops_test PASSED in 10.0s //tensorflow/python/kernel_tests/random:multinomial_op_test_cpu PASSED in 7.5s //tensorflow/python/kernel_tests/random:parameterized_truncated_normal_op_test_cpu PASSED in 14.1s //tensorflow/python/kernel_tests/random:random_crop_test_cpu PASSED in 9.0s //tensorflow/python/kernel_tests/random:random_grad_test_cpu PASSED in 7.6s //tensorflow/python/kernel_tests/random:random_ops_test_cpu PASSED in 19.0s //tensorflow/python/kernel_tests/random:random_poisson_test_cpu PASSED in 8.6s //tensorflow/python/kernel_tests/random:random_shuffle_queue_test PASSED in 6.9s //tensorflow/python/kernel_tests/random:stateful_random_ops_test_cpu PASSED in 16.9s //tensorflow/python/kernel_tests/signal:mel_ops_test_cpu PASSED in 13.6s //tensorflow/python/kernel_tests/signal:mfcc_ops_test_cpu PASSED in 8.6s //tensorflow/python/kernel_tests/signal:reconstruction_ops_test_cpu PASSED in 12.7s //tensorflow/python/kernel_tests/signal:shape_ops_test_cpu PASSED in 18.7s //tensorflow/python/kernel_tests/sparse_ops:sparse_add_op_test PASSED in 10.2s //tensorflow/python/kernel_tests/sparse_ops:sparse_concat_op_test PASSED in 9.3s //tensorflow/python/kernel_tests/sparse_ops:sparse_conditional_accumulator_test PASSED in 8.2s //tensorflow/python/kernel_tests/sparse_ops:sparse_cross_op_test PASSED in 13.4s //tensorflow/python/kernel_tests/sparse_ops:sparse_matmul_op_test_cpu PASSED in 50.2s //tensorflow/python/kernel_tests/sparse_ops:sparse_reorder_op_test PASSED in 9.5s //tensorflow/python/kernel_tests/sparse_ops:sparse_reshape_op_test PASSED in 9.1s //tensorflow/python/kernel_tests/sparse_ops:sparse_serialization_ops_test PASSED in 9.1s //tensorflow/python/kernel_tests/sparse_ops:sparse_slice_op_test PASSED in 8.8s //tensorflow/python/kernel_tests/sparse_ops:sparse_split_op_test_cpu PASSED in 7.9s //tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_grad_test_cpu PASSED in 16.3s 
//tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_op_d9m_test_cpu PASSED in 36.2s //tensorflow/python/kernel_tests/sparse_ops:sparse_tensor_dense_matmul_op_test_cpu PASSED in 29.7s //tensorflow/python/kernel_tests/sparse_ops:sparse_tensors_map_ops_test PASSED in 12.9s //tensorflow/python/kernel_tests/sparse_ops:sparse_to_dense_op_py_test_cpu PASSED in 8.2s //tensorflow/python/kernel_tests/sparse_ops:sparse_xent_op_d9m_test_cpu PASSED in 98.0s //tensorflow/python/kernel_tests/sparse_ops:sparse_xent_op_test_cpu PASSED in 12.9s //tensorflow/python/kernel_tests/sparse_ops:sparsemask_op_test PASSED in 8.6s //tensorflow/python/kernel_tests/strings_ops:as_string_op_test PASSED in 8.8s //tensorflow/python/kernel_tests/strings_ops:base64_ops_test PASSED in 12.8s //tensorflow/python/kernel_tests/strings_ops:reduce_join_op_test_cpu PASSED in 8.9s //tensorflow/python/kernel_tests/strings_ops:regex_full_match_op_test PASSED in 15.3s //tensorflow/python/kernel_tests/strings_ops:regex_replace_op_test PASSED in 7.1s //tensorflow/python/kernel_tests/strings_ops:string_bytes_split_op_test PASSED in 7.2s //tensorflow/python/kernel_tests/strings_ops:string_format_op_test PASSED in 8.6s //tensorflow/python/kernel_tests/strings_ops:string_join_op_test PASSED in 9.8s //tensorflow/python/kernel_tests/strings_ops:string_length_op_test PASSED in 9.2s //tensorflow/python/kernel_tests/strings_ops:string_lower_op_test PASSED in 6.8s //tensorflow/python/kernel_tests/strings_ops:string_split_op_test PASSED in 11.3s //tensorflow/python/kernel_tests/strings_ops:string_strip_op_test PASSED in 6.5s //tensorflow/python/kernel_tests/strings_ops:string_to_hash_bucket_op_test_cpu PASSED in 6.8s //tensorflow/python/kernel_tests/strings_ops:string_to_number_op_test_cpu PASSED in 7.0s //tensorflow/python/kernel_tests/strings_ops:string_upper_op_test PASSED in 8.1s //tensorflow/python/kernel_tests/strings_ops:substr_op_test PASSED in 8.7s 
//tensorflow/python/kernel_tests/strings_ops:unicode_decode_op_test PASSED in 14.5s //tensorflow/python/kernel_tests/strings_ops:unicode_encode_op_test PASSED in 10.8s //tensorflow/python/kernel_tests/strings_ops:unicode_script_op_test PASSED in 7.4s //tensorflow/python/kernel_tests/strings_ops:unicode_transcode_op_test PASSED in 10.2s //tensorflow/python/kernel_tests/strings_ops:unsorted_segment_join_op_test_cpu PASSED in 9.5s //tensorflow/python/kernel_tests/summary_ops:summary_ops_test_cpu PASSED in 24.1s //tensorflow/python/kernel_tests/summary_ops:summary_v1_audio_op_test_cpu PASSED in 7.7s //tensorflow/python/kernel_tests/summary_ops:summary_v1_image_op_test_cpu PASSED in 7.7s //tensorflow/python/kernel_tests/summary_ops:summary_v1_ops_test PASSED in 7.4s //tensorflow/python/kernel_tests/summary_ops:summary_v1_tensor_op_test PASSED in 6.6s //tensorflow/python/kernel_tests/v1_compat_tests:array_ops_test_cpu PASSED in 26.7s //tensorflow/python/kernel_tests/v1_compat_tests:dense_update_ops_test_cpu PASSED in 15.3s //tensorflow/python/kernel_tests/v1_compat_tests:identity_op_py_test PASSED in 6.2s //tensorflow/python/kernel_tests/v1_compat_tests:scatter_nd_ops_test_cpu PASSED in 8.4s //tensorflow/python/kernel_tests/v1_compat_tests:session_ops_test_cpu PASSED in 9.3s //tensorflow/python/kernel_tests/v1_compat_tests:stack_op_test_cpu PASSED in 7.3s //tensorflow/python/kernel_tests/variables:dense_update_ops_no_tsan_test_cpu PASSED in 8.8s //tensorflow/python/kernel_tests/variables:dense_update_ops_test_cpu PASSED in 7.0s //tensorflow/python/kernel_tests/variables:partitioned_variables_test PASSED in 11.4s //tensorflow/python/kernel_tests/variables:resource_variable_ops_test_cpu PASSED in 62.8s //tensorflow/python/kernel_tests/variables:variable_ops_test_cpu PASSED in 8.3s //tensorflow/python/kernel_tests/variables:variable_scope_test PASSED in 36.5s //tensorflow/python/kernel_tests/variables:variables_test PASSED in 10.7s 
//tensorflow/python/lib/core:custom_float_test PASSED in 7.3s //tensorflow/python/lib/io:file_io_test PASSED in 10.2s //tensorflow/python/lib/io:tf_record_test PASSED in 10.5s //tensorflow/python/module:module_test PASSED in 9.0s //tensorflow/python/ops/losses:util_test PASSED in 7.3s //tensorflow/python/ops/memory_tests:custom_gradient_memory_test_cpu PASSED in 13.2s //tensorflow/python/ops/numpy_ops:np_array_ops_test_cpu PASSED in 74.7s //tensorflow/python/ops/numpy_ops:np_arrays_test_cpu PASSED in 9.7s //tensorflow/python/ops/numpy_ops:np_dtypes_test_cpu PASSED in 6.9s //tensorflow/python/ops/numpy_ops:np_interop_test_cpu PASSED in 38.9s //tensorflow/python/ops/numpy_ops:np_logic_test_cpu PASSED in 11.1s //tensorflow/python/ops/numpy_ops:np_math_ops_test_cpu PASSED in 22.4s //tensorflow/python/ops/numpy_ops:np_random_test_cpu PASSED in 68.1s //tensorflow/python/ops/numpy_ops:np_utils_test_cpu PASSED in 7.8s //tensorflow/python/ops/numpy_ops/integration_test:np_config_test_cpu PASSED in 18.2s //tensorflow/python/ops/numpy_ops/integration_test:public_symbol_test PASSED in 14.8s //tensorflow/python/ops/parallel_for:array_test_cpu PASSED in 44.4s //tensorflow/python/ops/parallel_for:gradients_test_cpu PASSED in 10.3s //tensorflow/python/ops/parallel_for:xla_control_flow_ops_test_cpu PASSED in 53.2s //tensorflow/python/ops/ragged:convert_to_tensor_or_ragged_tensor_op_test PASSED in 8.2s //tensorflow/python/ops/ragged:ragged_batch_gather_op_test PASSED in 46.1s //tensorflow/python/ops/ragged:ragged_bitcast_op_test PASSED in 6.6s //tensorflow/python/ops/ragged:ragged_boolean_mask_op_test PASSED in 13.2s //tensorflow/python/ops/ragged:ragged_concat_op_test PASSED in 9.2s //tensorflow/python/ops/ragged:ragged_const_op_test PASSED in 8.1s //tensorflow/python/ops/ragged:ragged_constant_value_op_test PASSED in 7.3s //tensorflow/python/ops/ragged:ragged_cross_op_test PASSED in 22.2s //tensorflow/python/ops/ragged:ragged_dispatch_test PASSED in 124.0s 
//tensorflow/python/ops/ragged:ragged_dynamic_partition_op_test_cpu PASSED in 19.4s
//tensorflow/python/ops/ragged:ragged_eager_test PASSED in 6.5s
//tensorflow/python/ops/ragged:ragged_expand_dims_op_test PASSED in 6.9s
//tensorflow/python/ops/ragged:ragged_factory_ops_test_cpu PASSED in 15.4s
//tensorflow/python/ops/ragged:ragged_from_sparse_op_test PASSED in 6.8s
//tensorflow/python/ops/ragged:ragged_from_tensor_op_test PASSED in 18.3s
//tensorflow/python/ops/ragged:ragged_gather_nd_op_test PASSED in 7.9s
//tensorflow/python/ops/ragged:ragged_map_flat_values_op_test PASSED in 9.5s
//tensorflow/python/ops/ragged:ragged_map_fn_op_test PASSED in 14.5s
//tensorflow/python/ops/ragged:ragged_math_ops_test PASSED in 11.6s
//tensorflow/python/ops/ragged:ragged_matmul_op_test PASSED in 33.9s
//tensorflow/python/ops/ragged:ragged_merge_dims_op_test PASSED in 32.3s
//tensorflow/python/ops/ragged:ragged_one_hot_op_test PASSED in 11.8s
//tensorflow/python/ops/ragged:ragged_operators_test PASSED in 18.7s
//tensorflow/python/ops/ragged:ragged_placeholder_op_test PASSED in 6.2s
//tensorflow/python/ops/ragged:ragged_print_op_test PASSED in 12.8s
//tensorflow/python/ops/ragged:ragged_range_op_test PASSED in 7.7s
//tensorflow/python/ops/ragged:ragged_rank_op_test PASSED in 6.9s
//tensorflow/python/ops/ragged:ragged_reduce_op_test PASSED in 43.6s
//tensorflow/python/ops/ragged:ragged_resize_image_op_test PASSED in 14.8s
//tensorflow/python/ops/ragged:ragged_reverse_op_test PASSED in 7.2s
//tensorflow/python/ops/ragged:ragged_row_lengths_op_test PASSED in 7.1s
//tensorflow/python/ops/ragged:ragged_row_splits_to_segment_ids_op_test PASSED in 7.2s
//tensorflow/python/ops/ragged:ragged_segment_ids_to_row_splits_op_test PASSED in 7.3s
//tensorflow/python/ops/ragged:ragged_segment_op_test PASSED in 15.5s
//tensorflow/python/ops/ragged:ragged_size_op_test PASSED in 7.3s
//tensorflow/python/ops/ragged:ragged_split_op_test PASSED in 51.7s
//tensorflow/python/ops/ragged:ragged_squeeze_op_test PASSED in 16.7s
//tensorflow/python/ops/ragged:ragged_stack_op_test PASSED in 10.1s
//tensorflow/python/ops/ragged:ragged_tensor_bounding_shape_op_test PASSED in 9.9s
//tensorflow/python/ops/ragged:ragged_tensor_shape_test PASSED in 59.4s
//tensorflow/python/ops/ragged:ragged_tile_op_test PASSED in 38.2s
//tensorflow/python/ops/ragged:ragged_to_sparse_op_test PASSED in 7.2s
//tensorflow/python/ops/ragged:ragged_to_tensor_op_test PASSED in 69.7s
//tensorflow/python/ops/ragged:ragged_util_test PASSED in 22.1s
//tensorflow/python/ops/ragged:ragged_where_op_test PASSED in 30.4s
//tensorflow/python/ops/ragged:row_partition_test PASSED in 22.6s
//tensorflow/python/ops/ragged:string_ngrams_op_test PASSED in 6.8s
//tensorflow/python/ops/ragged:strings_reduce_join_op_test PASSED in 9.7s
//tensorflow/python/ops/structured:structured_array_ops_test PASSED in 40.3s
//tensorflow/python/ops/structured:structured_tensor_slice_test PASSED in 56.6s
//tensorflow/python/ops/structured:structured_tensor_spec_test PASSED in 10.7s
//tensorflow/python/ops/structured:structured_tensor_test PASSED in 39.6s
//tensorflow/python/ops/v1_compat_tests:gradient_checker_test_cpu PASSED in 10.1s
//tensorflow/python/platform:benchmark_test PASSED in 8.5s
//tensorflow/python/platform:build_info_test PASSED in 6.6s
//tensorflow/python/platform:resource_loader_test PASSED in 1.9s
//tensorflow/python/profiler:pprof_profiler_test PASSED in 7.5s
//tensorflow/python/profiler:profile_context_test_cpu PASSED in 43.0s
//tensorflow/python/profiler:profiler_client_test_cpu PASSED in 9.0s
//tensorflow/python/profiler:profiler_test_cpu PASSED in 19.3s
//tensorflow/python/profiler:profiler_v2_test_cpu PASSED in 7.1s
//tensorflow/python/profiler:profiler_wrapper_test PASSED in 6.5s
//tensorflow/python/profiler:tfprof_logger_test PASSED in 6.6s
//tensorflow/python/profiler/integration_test:profiler_api_test_cpu PASSED in 28.0s
//tensorflow/python/profiler/internal:flops_registry_test PASSED in 6.5s
//tensorflow/python/profiler/internal:print_model_analysis_test PASSED in 7.4s
//tensorflow/python/profiler/internal:run_metadata_test_cpu PASSED in 12.7s
//tensorflow/python/saved_model:fingerprinting_test PASSED in 10.1s
//tensorflow/python/saved_model:keras_injection_test PASSED in 15.4s
//tensorflow/python/saved_model:load_v1_in_v2_test PASSED in 24.0s
//tensorflow/python/saved_model:loader_test PASSED in 10.5s
//tensorflow/python/saved_model:method_name_updater_test PASSED in 6.6s
//tensorflow/python/saved_model:metrics_test PASSED in 9.1s
//tensorflow/python/saved_model:nested_structure_coder_test PASSED in 7.4s
//tensorflow/python/saved_model:pywrap_saved_model_fingerprinting_test PASSED in 7.9s
//tensorflow/python/saved_model:pywrap_saved_model_metrics_test PASSED in 7.2s
//tensorflow/python/saved_model:revived_types_test PASSED in 27.3s
//tensorflow/python/saved_model:save_context_test PASSED in 6.9s
//tensorflow/python/saved_model:save_test PASSED in 22.9s
//tensorflow/python/saved_model:saved_model_test PASSED in 16.8s
//tensorflow/python/saved_model:signature_def_utils_test PASSED in 7.2s
//tensorflow/python/saved_model:simple_save_test PASSED in 8.7s
//tensorflow/python/saved_model:tracing_utils_test PASSED in 9.0s
//tensorflow/python/saved_model:utils_test PASSED in 26.9s
//tensorflow/python/saved_model/model_utils:export_output_test PASSED in 7.1s
//tensorflow/python/saved_model/model_utils:export_test PASSED in 10.2s
//tensorflow/python/saved_model/model_utils:mode_keys_test PASSED in 6.5s
//tensorflow/python/saved_model/registration:registration_saving_test PASSED in 14.0s
//tensorflow/python/saved_model/registration:registration_test PASSED in 7.5s
//tensorflow/python/saved_model/registration:tf_registration_test PASSED in 13.7s
//tensorflow/python/summary:plugin_asset_test PASSED in 7.5s
//tensorflow/python/summary:summary_iterator_test PASSED in 26.0s
//tensorflow/python/summary:summary_test PASSED in 7.8s
//tensorflow/python/summary:summary_v2_test PASSED in 6.9s
//tensorflow/python/summary/writer:writer_test PASSED in 20.2s
//tensorflow/python/tools:aot_compiled_test PASSED in 21.9s
//tensorflow/python/tools:freeze_graph_test PASSED in 24.0s
//tensorflow/python/tools:optimize_for_inference_test PASSED in 12.6s
//tensorflow/python/tools:print_selective_registration_header_test PASSED in 14.7s
//tensorflow/python/tools:saved_model_cli_test PASSED in 25.4s
//tensorflow/python/tools:saved_model_utils_test PASSED in 6.8s
//tensorflow/python/tools:strip_unused_test PASSED in 6.4s
//tensorflow/python/tools/api/generator:create_python_api_test PASSED in 14.7s
//tensorflow/python/tools/api/generator:output_init_files_test PASSED in 14.9s
//tensorflow/python/tools/api/generator:tensorflow_doc_srcs_test PASSED in 13.6s
//tensorflow/python/tpu:bfloat16_test PASSED in 14.8s
//tensorflow/python/tpu:feature_column_test PASSED in 11.6s
//tensorflow/python/tpu:topology_test PASSED in 7.0s
//tensorflow/python/tpu:tpu_embedding_for_serving_test PASSED in 9.8s
//tensorflow/python/tpu:tpu_embedding_v2_utils_test PASSED in 8.1s
//tensorflow/python/tpu:tpu_infeed_test PASSED in 13.2s
//tensorflow/python/tpu:tpu_sharding_test PASSED in 8.1s
//tensorflow/python/tpu:tpu_test_wrapper_test PASSED in 6.3s
//tensorflow/python/tpu/client:client_py_test PASSED in 8.9s
//tensorflow/python/trackable:autotrackable_test PASSED in 8.3s
//tensorflow/python/trackable:base_delegate_test PASSED in 11.3s
//tensorflow/python/trackable:base_test PASSED in 9.0s
//tensorflow/python/trackable:data_structures_test PASSED in 12.1s
//tensorflow/python/trackable:python_state_test PASSED in 7.6s
//tensorflow/python/trackable:resource_test PASSED in 7.0s
//tensorflow/python/trackable:trackable_utils_test PASSED in 18.3s
//tensorflow/python/training:adadelta_test_cpu PASSED in 14.5s
//tensorflow/python/training:adagrad_da_test_cpu PASSED in 27.5s
//tensorflow/python/training:adagrad_test_cpu PASSED in 19.7s
//tensorflow/python/training:adam_test_cpu PASSED in 22.5s
//tensorflow/python/training:basic_loops_test_cpu PASSED in 8.2s
//tensorflow/python/training:basic_session_run_hooks_test PASSED in 19.3s
//tensorflow/python/training:checkpoint_ops_test PASSED in 7.6s
//tensorflow/python/training:coordinator_test_cpu PASSED in 34.1s
//tensorflow/python/training:device_setter_test_cpu PASSED in 9.7s
//tensorflow/python/training:ftrl_test_cpu PASSED in 14.6s
//tensorflow/python/training:gradient_descent_test_cpu PASSED in 9.8s
//tensorflow/python/training:input_test PASSED in 41.2s
//tensorflow/python/training:momentum_test_cpu PASSED in 12.5s
//tensorflow/python/training:monitored_session_test PASSED in 28.0s
//tensorflow/python/training:moving_averages_test_cpu PASSED in 15.7s
//tensorflow/python/training:optimizer_test_cpu PASSED in 12.2s
//tensorflow/python/training:proximal_adagrad_test_cpu PASSED in 8.6s
//tensorflow/python/training:proximal_gradient_descent_test_cpu PASSED in 9.7s
//tensorflow/python/training:quantize_training_test_cpu PASSED in 7.7s
//tensorflow/python/training:queue_runner_test_cpu PASSED in 6.7s
//tensorflow/python/training:rmsprop_test_cpu PASSED in 41.7s
//tensorflow/python/training:saver_large_partitioned_variable_test PASSED in 15.2s
//tensorflow/python/training:saver_test_2gpu PASSED in 44.6s
//tensorflow/python/training:saver_test_cpu PASSED in 37.7s
//tensorflow/python/training:server_lib_multiple_containers_test PASSED in 8.7s
//tensorflow/python/training:server_lib_same_variables_clear_container_test PASSED in 11.8s
//tensorflow/python/training:server_lib_same_variables_clear_test PASSED in 10.5s
//tensorflow/python/training:server_lib_same_variables_no_clear_test PASSED in 7.5s
//tensorflow/python/training:server_lib_sparse_job_test PASSED in 9.0s
//tensorflow/python/training:server_lib_test PASSED in 15.6s
//tensorflow/python/training:session_manager_test_cpu PASSED in 78.6s
//tensorflow/python/training:slot_creator_test_cpu PASSED in 7.8s
//tensorflow/python/training:supervisor_test PASSED in 14.3s
//tensorflow/python/training:training_ops_mlir_test_cpu PASSED in 8.4s
//tensorflow/python/training:training_ops_test_cpu PASSED in 6.9s
//tensorflow/python/training:training_util_test PASSED in 7.5s
//tensorflow/python/training:warm_starting_util_test PASSED in 25.8s
//tensorflow/python/training/experimental:loss_scale_optimizer_test PASSED in 23.1s
//tensorflow/python/training/experimental:loss_scale_test PASSED in 32.8s
//tensorflow/python/training/experimental:mixed_precision_test_cpu PASSED in 7.1s
//tensorflow/python/training/saving:saveable_object_util_test PASSED in 7.5s
//tensorflow/python/util:compat_test PASSED in 7.9s
//tensorflow/python/util:decorator_utils_test PASSED in 7.1s
//tensorflow/python/util:deprecation_test PASSED in 7.7s
//tensorflow/python/util:dispatch_test PASSED in 9.6s
//tensorflow/python/util:example_parser_configuration_test PASSED in 6.2s
//tensorflow/python/util:fast_module_type_test PASSED in 8.7s
//tensorflow/python/util:function_parameter_canonicalizer_test PASSED in 6.6s
//tensorflow/python/util:function_utils_test PASSED in 6.9s
//tensorflow/python/util:keyword_args_test PASSED in 7.1s
//tensorflow/python/util:lock_util_test PASSED in 8.9s
//tensorflow/python/util:module_wrapper_test PASSED in 7.7s
//tensorflow/python/util:nest_test PASSED in 13.3s
//tensorflow/python/util:object_identity_test PASSED in 8.6s
//tensorflow/python/util:serialization_test PASSED in 6.9s
//tensorflow/python/util:tf_contextlib_test PASSED in 7.9s
//tensorflow/python/util:tf_decorator_test PASSED in 7.0s
//tensorflow/python/util:tf_export_test PASSED in 7.5s
//tensorflow/python/util:tf_inspect_test PASSED in 7.8s
//tensorflow/python/util:tf_should_use_test PASSED in 6.8s
//tensorflow/python/util:tf_stack_test PASSED in 8.3s
//tensorflow/python/util:traceback_utils_test PASSED in 9.6s
//tensorflow/python/util:type_annotations_test PASSED in 6.9s
//tensorflow/python/util:variable_utils_test PASSED in 7.1s
//tensorflow/python/util:vlog_test PASSED in 25.2s
//tensorflow/tools/api/tests:module_test PASSED in 15.2s
//tensorflow/tools/benchmark:benchmark_model_test PASSED in 1.7s
//tensorflow/tools/common:public_api_test PASSED in 1.7s
//tensorflow/tools/common:traverse_test PASSED in 1.6s
//tensorflow/tools/compatibility:all_renames_v2_test PASSED in 5.7s
//tensorflow/tools/compatibility:ast_edits_test PASSED in 6.7s
//tensorflow/tools/compatibility:test_file_v1_0 PASSED in 12.9s
//tensorflow/tools/compatibility:test_file_v2_0 PASSED in 14.0s
//tensorflow/tools/compatibility:tf_upgrade_test PASSED in 7.5s
//tensorflow/tools/compatibility:tf_upgrade_v2_safety_test PASSED in 6.7s
//tensorflow/tools/docs:tf_doctest_test PASSED in 1.4s
//tensorflow/tools/graph_transforms:file_utils_test PASSED in 1.3s
//tensorflow/tools/graph_transforms:transform_graph_test PASSED in 1.9s
//tensorflow/tools/graph_transforms:transform_utils_test PASSED in 2.3s
//tensorflow/tools/graph_transforms:transforms_test PASSED in 4.0s
//tensorflow/tools/proto_text:gen_proto_text_functions_lib_test PASSED in 0.2s
//tensorflow/tools/tensorflow_builder/compat_checker:compat_checker_test PASSED in 0.5s
//tensorflow/tsl/c:tsl_status_helper_test PASSED in 0.2s
//tensorflow/tsl/c:tsl_status_test PASSED in 0.4s
//tensorflow/tsl/concurrency:async_value_ref_test PASSED in 0.2s
//tensorflow/tsl/concurrency:async_value_test PASSED in 0.4s
//tensorflow/tsl/concurrency:concurrent_vector_test PASSED in 0.8s
//tensorflow/tsl/cuda:cudnn_version_test PASSED in 0.1s
//tensorflow/tsl/distributed_runtime/coordination:coordination_service_agent_test PASSED in 12.9s
//tensorflow/tsl/distributed_runtime/coordination:coordination_service_error_util_test PASSED in 0.7s
//tensorflow/tsl/distributed_runtime/coordination:coordination_service_recoverable_job_test PASSED in 1.8s
//tensorflow/tsl/distributed_runtime/preemption:preemption_notifier_test PASSED in 6.2s
//tensorflow/tsl/distributed_runtime/preemption:preemption_sync_manager_test PASSED in 5.7s
//tensorflow/tsl/distributed_runtime/rpc:grpc_channel_test PASSED in 0.2s
//tensorflow/tsl/distributed_runtime/rpc:grpc_util_test PASSED in 0.8s
//tensorflow/tsl/framework:cancellation_test PASSED in 1.2s
//tensorflow/tsl/framework/convolution:spatial_convolutions_test PASSED in 0.1s
//tensorflow/tsl/lib/gtl:tsl_lib_gtl_tests PASSED in 0.7s
//tensorflow/tsl/lib/hash:crc32c_test PASSED in 0.2s
//tensorflow/tsl/lib/histogram:histogram_test PASSED in 0.5s
//tensorflow/tsl/lib/io:buffered_inputstream_test PASSED in 0.3s
//tensorflow/tsl/lib/io:cache_test PASSED in 0.1s
//tensorflow/tsl/lib/io:inputbuffer_test PASSED in 1.2s
//tensorflow/tsl/lib/io:inputstream_interface_test PASSED in 0.7s
//tensorflow/tsl/lib/io:random_inputstream_test PASSED in 0.1s
//tensorflow/tsl/lib/io:record_reader_writer_test PASSED in 0.1s
//tensorflow/tsl/lib/io:recordio_test PASSED in 0.5s
//tensorflow/tsl/lib/io:table_test PASSED in 4.2s
//tensorflow/tsl/lib/io:zlib_buffers_test PASSED in 6.2s
//tensorflow/tsl/lib/io/snappy:snappy_test PASSED in 0.4s
//tensorflow/tsl/lib/math:math_util_test PASSED in 0.1s
//tensorflow/tsl/lib/random:distribution_sampler_test PASSED in 0.1s
//tensorflow/tsl/lib/random:philox_random_test PASSED in 0.1s
//tensorflow/tsl/lib/random:random_distributions_test PASSED in 21.8s
//tensorflow/tsl/lib/random:simple_philox_test PASSED in 0.4s
//tensorflow/tsl/lib/random:weighted_picker_test PASSED in 12.7s
//tensorflow/tsl/platform:ctstring_test PASSED in 0.1s
//tensorflow/tsl/platform:denormal_test PASSED in 0.6s
//tensorflow/tsl/platform:errors_test PASSED in 0.1s
//tensorflow/tsl/platform:fingerprint_test PASSED in 0.1s
//tensorflow/tsl/platform:float8_test PASSED in 0.6s
//tensorflow/tsl/platform:hash_test PASSED in 0.1s
//tensorflow/tsl/platform:integral_types_test PASSED in 0.1s
//tensorflow/tsl/platform:intrusive_ptr_test PASSED in 0.4s
//tensorflow/tsl/platform:logging_test PASSED in 24.2s
//tensorflow/tsl/platform:mutex_test PASSED in 0.6s
//tensorflow/tsl/platform:net_test PASSED in 0.1s
//tensorflow/tsl/platform:numbers_test PASSED in 0.1s
//tensorflow/tsl/platform:path_test PASSED in 0.1s
//tensorflow/tsl/platform:port_test PASSED in 8.3s
//tensorflow/tsl/platform:random_test PASSED in 1.6s
//tensorflow/tsl/platform:refcount_test PASSED in 0.1s
//tensorflow/tsl/platform:retrying_file_system_test PASSED in 0.1s
//tensorflow/tsl/platform:retrying_utils_test PASSED in 0.6s
//tensorflow/tsl/platform:scanner_test PASSED in 0.1s
//tensorflow/tsl/platform:setround_test PASSED in 0.2s
//tensorflow/tsl/platform:stacktrace_handler_test PASSED in 2.2s
//tensorflow/tsl/platform:stacktrace_test PASSED in 0.2s
//tensorflow/tsl/platform:status_matchers_test PASSED in 0.4s
//tensorflow/tsl/platform:status_test PASSED in 0.1s
//tensorflow/tsl/platform:statusor_test PASSED in 18.0s
//tensorflow/tsl/platform:str_util_test PASSED in 0.1s
//tensorflow/tsl/platform:strcat_test PASSED in 0.1s
//tensorflow/tsl/platform:stringpiece_test PASSED in 0.1s
//tensorflow/tsl/platform:stringprintf_test PASSED in 0.2s
//tensorflow/tsl/platform:subprocess_test PASSED in 0.6s
//tensorflow/tsl/platform:tstring_test PASSED in 0.1s
//tensorflow/tsl/platform:unbounded_work_queue_test PASSED in 0.5s
//tensorflow/tsl/platform/cloud:compute_engine_metadata_client_test PASSED in 0.7s
//tensorflow/tsl/platform/cloud:compute_engine_zone_provider_test PASSED in 0.1s
//tensorflow/tsl/platform/cloud:curl_http_request_test PASSED in 8.3s
//tensorflow/tsl/platform/cloud:expiring_lru_cache_test PASSED in 0.1s
//tensorflow/tsl/platform/cloud:gcs_dns_cache_test PASSED in 0.7s
//tensorflow/tsl/platform/cloud:gcs_file_system_test PASSED in 4.6s
//tensorflow/tsl/platform/cloud:gcs_throttle_test PASSED in 0.1s
//tensorflow/tsl/platform/cloud:google_auth_provider_test PASSED in 0.3s
//tensorflow/tsl/platform/cloud:oauth_client_test PASSED in 0.1s
//tensorflow/tsl/platform/cloud:ram_file_block_cache_test PASSED in 2.5s
//tensorflow/tsl/platform/cloud:time_util_test PASSED in 0.1s
//tensorflow/tsl/profiler/backends/cpu:traceme_recorder_test PASSED in 0.2s
//tensorflow/tsl/profiler/convert:trace_events_to_json_test PASSED in 0.1s
//tensorflow/tsl/profiler/convert:xla_op_utils_test PASSED in 0.5s
//tensorflow/tsl/profiler/convert:xplane_to_trace_events_test PASSED in 0.6s
//tensorflow/tsl/profiler/lib:profiler_factory_test PASSED in 0.1s
//tensorflow/tsl/profiler/lib:profiler_lock_test PASSED in 0.1s
//tensorflow/tsl/profiler/lib:scoped_annotation_test PASSED in 0.4s
//tensorflow/tsl/profiler/lib:traceme_encode_test PASSED in 0.1s
//tensorflow/tsl/profiler/rpc/client:profiler_client_test PASSED in 3.5s
//tensorflow/tsl/profiler/rpc/client:remote_profiler_session_manager_test PASSED in 4.3s
//tensorflow/tsl/profiler/utils:buffer_pool_test PASSED in 0.1s
//tensorflow/tsl/profiler/utils:group_events_test PASSED in 0.3s
//tensorflow/tsl/profiler/utils:parse_annotation_test PASSED in 0.3s
//tensorflow/tsl/profiler/utils:preprocess_xplane_test PASSED in 0.5s
//tensorflow/tsl/profiler/utils:tf_op_utils_test PASSED in 0.2s
//tensorflow/tsl/profiler/utils:timespan_test PASSED in 0.1s
//tensorflow/tsl/profiler/utils:tpu_xplane_utils_test PASSED in 0.7s
//tensorflow/tsl/profiler/utils:xplane_builder_test PASSED in 0.4s
//tensorflow/tsl/profiler/utils:xplane_utils_test PASSED in 0.1s
//tensorflow/tsl/util:device_name_utils_test PASSED in 0.1s
//tensorflow/tsl/util:stats_calculator_test PASSED in 0.1s
//tensorflow/compiler/tests:complex_div_test_cpu PASSED in 6.4s
  Stats over 2 runs: max = 6.4s, min = 5.7s, avg = 6.1s, dev = 0.4s
//tensorflow/compiler/tests:complex_div_test_cpu_mlir_bridge_test PASSED in 6.5s
  Stats over 2 runs: max = 6.5s, min = 6.0s, avg = 6.3s, dev = 0.2s
//tensorflow/compiler/xla/tests:conditional_test_cpu PASSED in 9.8s
  Stats over 2 runs: max = 9.8s, min = 9.6s, avg = 9.7s, dev = 0.1s
//tensorflow/python:control_flow_ops_test_cpu PASSED in 27.8s
  Stats over 2 runs: max = 27.8s, min = 22.3s, avg = 25.0s, dev = 2.7s
//tensorflow/python/data/experimental/kernel_tests/optimization:optimization_test PASSED in 20.8s
  Stats over 2 runs: max = 20.8s, min = 14.2s, avg = 17.5s, dev = 3.3s
//tensorflow/python/data/experimental/kernel_tests/service:metadata_test PASSED in 19.8s
  Stats over 2 runs: max = 19.8s, min = 18.5s, avg = 19.1s, dev = 0.7s
//tensorflow/python/data/kernel_tests:padded_batch_test PASSED in 41.2s
  Stats over 2 runs: max = 41.2s, min = 37.9s, avg = 39.5s, dev = 1.7s
//tensorflow/python/data/kernel_tests:repeat_test PASSED in 40.6s
  Stats over 2 runs: max = 40.6s, min = 38.7s, avg = 39.7s, dev = 1.0s
//tensorflow/python/data/kernel_tests:window_test PASSED in 55.6s
  Stats over 2 runs: max = 55.6s, min = 37.7s, avg = 46.6s, dev = 9.0s
//tensorflow/python/distribute:strategy_common_test_2gpu PASSED in 26.6s
  Stats over 2 runs: max = 26.6s, min = 18.5s, avg = 22.5s, dev = 4.0s
//tensorflow/python/distribute:strategy_common_test_cpu PASSED in 30.6s
  Stats over 2 runs: max = 30.6s, min = 24.5s, avg = 27.5s, dev = 3.1s
//tensorflow/python/distribute:strategy_common_test_xla_2gpu PASSED in 13.7s
  Stats over 2 runs: max = 13.7s, min = 12.8s, avg = 13.2s, dev = 0.4s
//tensorflow/python/kernel_tests/array_ops:scatter_nd_ops_test_cpu PASSED in 12.8s
  Stats over 2 runs: max = 12.8s, min = 12.6s, avg = 12.7s, dev = 0.1s
//tensorflow/python/kernel_tests/array_ops:scatter_ops_test_cpu PASSED in 19.8s
  Stats over 2 runs: max = 19.8s, min = 18.6s, avg = 19.2s, dev = 0.6s
//tensorflow/python/kernel_tests/control_flow:functional_ops_test_cpu PASSED in 15.9s
  Stats over 2 runs: max = 15.9s, min = 14.7s, avg = 15.3s, dev = 0.6s
//tensorflow/python/kernel_tests/control_flow:map_fn_test_cpu PASSED in 9.2s
  Stats over 2 runs: max = 9.2s, min = 8.3s, avg = 8.8s, dev = 0.4s
//tensorflow/python/kernel_tests/nn_ops:bias_op_d9m_test_cpu PASSED in 112.0s
  Stats over 2 runs: max = 112.0s, min = 42.3s, avg = 77.2s, dev = 34.8s
//tensorflow/python/kernel_tests/nn_ops:conv2d_backprop_filter_grad_test_cpu PASSED in 123.7s
  Stats over 2 runs: max = 123.7s, min = 7.4s, avg = 65.6s, dev = 58.1s
//tensorflow/core/grappler/clusters:single_machine_test FLAKY, failed in 1 out of 2 in 900.0s
  Stats over 2 runs: max = 900.0s, min = 22.6s, avg = 461.3s, dev = 438.7s
  /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/core/grappler/clusters/single_machine_test/test_attempts/attempt_1.log
//tensorflow/compiler/tests:spacetobatch_op_test_cpu PASSED in 8.5s
  Stats over 3 runs: max = 8.5s, min = 7.8s, avg = 8.1s, dev = 0.3s
//tensorflow/compiler/tests:spacetobatch_op_test_cpu_mlir_bridge_test PASSED in 13.1s
  Stats over 3 runs: max = 13.1s, min = 12.0s, avg = 12.4s, dev = 0.5s
//tensorflow/compiler/xla/tests:triangular_solve_test_cpu PASSED in 70.2s
  Stats over 3 runs: max = 70.2s, min = 67.6s, avg = 68.6s, dev = 1.1s
//tensorflow/core/data/service:thread_safe_buffer_test PASSED in 0.7s
  Stats over 3 runs: max = 0.7s, min = 0.3s, avg = 0.4s, dev = 0.2s
//tensorflow/python/data/experimental/kernel_tests/service:multi_process_cluster_test PASSED in 20.1s
  Stats over 3 runs: max = 20.1s, min = 13.8s, avg = 17.8s, dev = 2.9s
//tensorflow/python/data/kernel_tests:unique_test PASSED in 14.8s
  Stats over 3 runs: max = 14.8s, min = 11.8s, avg = 12.9s, dev = 1.4s
//tensorflow/python/kernel_tests/array_ops:gather_op_test_cpu PASSED in 58.9s
  Stats over 3 runs: max = 58.9s, min = 32.1s, avg = 41.5s, dev = 12.4s
//tensorflow/python/kernel_tests/array_ops:weights_broadcast_test PASSED in 11.5s
  Stats over 3 runs: max = 11.5s, min = 10.5s, avg = 10.9s, dev = 0.4s
//tensorflow/python/kernel_tests/distributions:util_test_cpu PASSED in 14.4s
  Stats over 3 runs: max = 14.4s, min = 10.2s, avg = 12.2s, dev = 1.7s
//tensorflow/python/kernel_tests/linalg:matrix_triangular_solve_op_test_cpu PASSED in 364.6s
  Stats over 3 runs: max = 364.6s, min = 8.9s, avg = 127.9s, dev = 167.4s
//tensorflow/python/kernel_tests/random:multinomial_op_big_test_cpu PASSED in 15.8s
  Stats over 3 runs: max = 15.8s, min = 12.0s, avg = 13.4s, dev = 1.7s
//tensorflow/compiler/tests:ternary_ops_test_cpu PASSED in 13.0s
  Stats over 4 runs: max = 13.0s, min = 9.5s, avg = 11.2s, dev = 1.7s
//tensorflow/compiler/tests:ternary_ops_test_cpu_mlir_bridge_test PASSED in 18.9s
  Stats over 4 runs: max = 18.9s, min = 12.4s, avg = 14.7s, dev = 2.5s
//tensorflow/compiler/tests:unary_ops_test_cpu PASSED in 42.7s
  Stats over 4 runs: max = 42.7s, min = 8.6s, avg = 26.8s, dev = 13.5s
//tensorflow/compiler/tests:unary_ops_test_cpu_mlir_bridge_test PASSED in 40.7s
  Stats over 4 runs: max = 40.7s, min = 7.8s, avg = 26.5s, dev = 13.6s
//tensorflow/compiler/xla/tests:dynamic_ops_test_cpu PASSED in 11.5s
  Stats over 4 runs: max = 11.5s, min = 9.5s, avg = 10.3s, dev = 0.8s
//tensorflow/core/kernels:example_parsing_ops_test PASSED in 1.0s
  Stats over 4 runs: max = 1.0s, min = 0.7s, avg = 0.8s, dev = 0.1s
//tensorflow/python:nn_batchnorm_test_cpu PASSED in 24.8s
  Stats over 4 runs: max = 24.8s, min = 9.6s, avg = 17.3s, dev = 5.6s
//tensorflow/python:nn_fused_batchnorm_d9m_test_cpu PASSED in 17.4s
  Stats over 4 runs: max = 17.4s, min = 15.8s, avg = 16.5s, dev = 0.6s
//tensorflow/python/data/experimental/kernel_tests:auto_shard_dataset_test PASSED in 37.7s
  Stats over 4 runs: max = 37.7s, min = 24.7s, avg = 31.0s, dev = 4.6s
//tensorflow/python/data/experimental/kernel_tests:map_and_batch_test PASSED in 34.1s
  Stats over 4 runs: max = 34.1s, min = 22.5s, avg = 26.0s, dev = 4.8s
//tensorflow/python/data/experimental/kernel_tests:parse_example_dataset_test PASSED in 37.1s
  Stats over 4 runs: max = 37.1s, min = 12.1s, avg = 23.8s, dev = 11.2s
//tensorflow/python/data/experimental/kernel_tests:rebatch_dataset_test PASSED in 24.7s
  Stats over 4 runs: max = 24.7s, min = 11.0s, avg = 15.7s, dev = 5.4s
//tensorflow/python/data/experimental/kernel_tests:sql_dataset_test PASSED in 58.0s
  Stats over 4 runs: max = 58.0s, min = 44.0s, avg = 51.5s, dev = 5.4s
//tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_ft_test PASSED in 9.7s
  Stats over 4 runs: max = 9.7s, min = 7.3s, avg = 8.4s, dev = 1.1s
//tensorflow/python/data/kernel_tests:batch_test PASSED in 38.6s
  Stats over 4 runs: max = 38.6s, min = 25.1s, avg = 29.3s, dev = 5.5s
//tensorflow/python/data/kernel_tests:fixed_length_record_dataset_test PASSED in 18.2s
  Stats over 4 runs: max = 18.2s, min = 7.6s, avg = 12.8s, dev = 4.3s
//tensorflow/python/data/kernel_tests:from_generator_test PASSED in 24.1s
  Stats over 4 runs: max = 24.1s, min = 14.8s, avg = 19.1s, dev = 3.6s
//tensorflow/python/data/kernel_tests:group_by_window_test PASSED in 21.8s
  Stats over 4 runs: max = 21.8s, min = 7.9s, avg = 13.5s, dev = 5.9s
//tensorflow/python/data/kernel_tests:ragged_batch_test PASSED in 31.1s
  Stats over 4 runs: max = 31.1s, min = 26.0s, avg = 27.8s, dev = 2.0s
//tensorflow/python/data/kernel_tests:shuffle_test PASSED in 35.9s
  Stats over 4 runs: max = 35.9s, min = 26.5s, avg = 31.0s, dev = 3.7s
//tensorflow/python/data/kernel_tests:skip_test PASSED in 23.5s
  Stats over 4 runs: max = 23.5s, min = 18.3s, avg = 20.9s, dev = 2.2s
//tensorflow/python/data/kernel_tests:take_test PASSED in 24.3s
  Stats over 4 runs: max = 24.3s, min = 21.1s, avg = 22.8s, dev = 1.1s
//tensorflow/python/data/kernel_tests:take_while_test PASSED in 29.3s
  Stats over 4 runs: max = 29.3s, min = 26.9s, avg = 28.1s, dev = 0.9s
//tensorflow/python/data/kernel_tests:text_line_dataset_test PASSED in 23.6s
  Stats over 4 runs: max = 23.6s, min = 16.7s, avg = 20.6s, dev = 3.0s
//tensorflow/python/data/kernel_tests:zip_test PASSED in 14.5s
  Stats over 4 runs: max = 14.5s, min = 11.0s, avg = 12.1s, dev = 1.4s
//tensorflow/python/debug/lib:dumping_callback_test_cpu PASSED in 13.5s
  Stats over 4 runs: max = 13.5s, min = 12.7s, avg = 13.2s, dev = 0.3s
//tensorflow/python/distribute:cross_device_ops_test_2gpu PASSED in 29.4s
  Stats over 4 runs: max = 29.4s, min = 19.2s, avg = 23.3s, dev = 4.0s
//tensorflow/python/distribute:cross_device_ops_test_cpu PASSED in 29.1s
  Stats over 4 runs: max = 29.1s, min = 19.9s, avg = 24.1s, dev = 3.8s
//tensorflow/python/distribute:strategy_gather_test_2gpu PASSED in 26.6s
  Stats over 4 runs: max = 26.6s, min = 15.0s, avg = 20.7s, dev = 5.2s
//tensorflow/python/distribute:strategy_gather_test_cpu PASSED in 26.6s
  Stats over 4 runs: max = 26.6s, min = 15.0s, avg = 20.8s, dev = 5.5s
//tensorflow/python/distribute:strategy_gather_test_xla_2gpu PASSED in 18.6s
  Stats over 4 runs: max = 18.6s, min = 9.7s, avg = 14.3s, dev = 4.3s
//tensorflow/python/framework:convert_to_constants_test PASSED in 22.8s
  Stats over 4 runs: max = 22.8s, min = 15.3s, avg = 18.2s, dev = 2.8s
//tensorflow/python/kernel_tests:collective_ops_test_2gpu PASSED in 43.7s
  Stats over 4 runs: max = 43.7s, min = 38.3s, avg = 41.5s, dev = 2.0s
//tensorflow/python/kernel_tests:collective_ops_test_cpu PASSED in 34.7s
  Stats over 4 runs: max = 34.7s, min = 33.3s, avg = 33.8s, dev = 0.6s
//tensorflow/python/kernel_tests/array_ops:concat_op_test_cpu PASSED in 13.3s
  Stats over 4 runs: max = 13.3s, min = 12.0s, avg = 12.6s, dev = 0.5s
//tensorflow/python/kernel_tests/array_ops:init_ops_test_cpu PASSED in 75.7s
  Stats over 4 runs: max = 75.7s, min = 26.2s, avg = 48.2s, dev = 19.8s
//tensorflow/python/kernel_tests/array_ops:split_op_test_cpu PASSED in 26.7s
  Stats over 4 runs: max = 26.7s, min = 7.9s, avg = 14.5s, dev = 7.6s
//tensorflow/python/kernel_tests/linalg:einsum_op_test_cpu PASSED in 104.3s
  Stats over 4 runs: max = 104.3s, min = 16.4s, avg = 51.2s, dev = 35.2s
//tensorflow/python/kernel_tests/linalg:linear_operator_lower_triangular_test_cpu PASSED in 27.8s
  Stats over 4 runs: max = 27.8s, min = 22.0s, avg = 24.7s, dev = 2.1s
//tensorflow/python/kernel_tests/random:random_gamma_test_cpu PASSED in 81.7s
  Stats over 4 runs: max = 81.7s, min = 7.6s, avg = 39.6s, dev = 32.6s
//tensorflow/python/kernel_tests/signal:window_ops_test_cpu PASSED in 14.7s
  Stats over 4 runs: max = 14.7s, min = 13.8s, avg = 14.4s, dev = 0.3s
//tensorflow/python/ops/ragged:ragged_gather_op_test PASSED in 77.6s
  Stats over 4 runs: max = 77.6s, min = 15.8s, avg = 46.2s, dev = 22.2s
//tensorflow/python/ops/ragged:ragged_getitem_test PASSED in 45.2s
  Stats over 4 runs: max = 45.2s, min = 43.2s, avg = 43.9s, dev = 0.8s
//tensorflow/tools/docs:tf_doctest PASSED in 68.0s
  Stats over 4 runs: max = 68.0s, min = 36.3s, avg = 46.5s, dev = 12.6s
//tensorflow/compiler/tests:async_comp_test_cpu PASSED in 7.0s
  Stats over 5 runs: max = 7.0s, min = 6.8s, avg = 6.9s, dev = 0.1s
//tensorflow/compiler/tests:conv3d_test_cpu PASSED in 11.6s
  Stats over 5 runs: max = 11.6s, min = 5.8s, avg = 8.3s, dev = 2.6s
//tensorflow/compiler/tests:conv3d_test_cpu_mlir_bridge_test PASSED in 14.2s
  Stats over 5 runs: max = 14.2s, min = 6.4s, avg = 9.7s, dev = 3.4s
//tensorflow/compiler/tests:depthwise_conv_op_test_cpu PASSED in 14.9s
  Stats over 5 runs: max = 14.9s, min = 9.0s, avg = 11.6s, dev = 2.3s
//tensorflow/compiler/tests:depthwise_conv_op_test_cpu_mlir_bridge_test PASSED in 16.1s
  Stats over 5 runs: max = 16.1s, min = 8.0s, avg = 11.1s, dev = 3.1s
//tensorflow/compiler/tests:fused_batchnorm_test_cpu PASSED in 9.2s
  Stats over 5 runs: max = 9.2s, min = 7.3s, avg = 7.8s, dev = 0.7s
//tensorflow/compiler/tests:fused_batchnorm_test_cpu_mlir_bridge_test PASSED in 7.5s
  Stats over 5 runs: max = 7.5s, min = 6.7s, avg = 7.0s, dev = 0.3s
//tensorflow/compiler/tests:image_ops_jit_compile_test_cpu PASSED in 7.7s
  Stats over 5 runs: max = 7.7s, min = 6.7s, avg = 7.0s, dev = 0.4s
//tensorflow/compiler/tests:reduce_ops_test_cpu PASSED in 11.5s
  Stats over 5 runs: max = 11.5s, min = 8.9s, avg = 10.0s, dev = 0.9s
//tensorflow/compiler/tests:reduce_ops_test_cpu_mlir_bridge_test PASSED in 15.0s
  Stats over 5 runs: max = 15.0s, min = 12.8s, avg = 14.0s, dev = 0.8s
//tensorflow/compiler/tests:repeat_op_test_cpu PASSED in 7.5s
  Stats over 5 runs: max = 7.5s, min = 6.5s, avg = 6.8s, dev = 0.4s
//tensorflow/compiler/tests:repeat_op_test_cpu_mlir_bridge_test PASSED in 7.9s
  Stats over 5 runs: max = 7.9s, min = 7.0s, avg = 7.3s, dev = 0.3s
//tensorflow/compiler/tests:special_math_test_cpu PASSED in 104.9s
  Stats over 5 runs: max = 104.9s, min = 15.2s, avg = 47.3s, dev = 31.2s
//tensorflow/compiler/tests:special_math_test_cpu_mlir_bridge_test PASSED in 105.2s
  Stats over 5 runs: max = 105.2s, min = 14.7s, avg = 47.6s, dev = 31.1s
//tensorflow/compiler/xla/client/lib:self_adjoint_eig_test_cpu PASSED in 32.3s
  Stats over 5 runs: max = 32.3s, min = 13.8s, avg = 24.3s, dev = 7.7s
//tensorflow/core/grappler/optimizers:constant_folding_test PASSED in 4.2s
  Stats over 5 runs: max = 4.2s, min = 2.9s, avg = 3.5s, dev = 0.5s
//tensorflow/dtensor/python/tests:layout_propagation_test_cpu PASSED in 12.4s
  Stats over 5 runs: max = 12.4s, min = 10.7s, avg = 11.4s, dev = 0.6s
//tensorflow/python/distribute:mirrored_strategy_test_2gpu PASSED in 12.4s
  Stats over 5 runs: max = 12.4s, min = 10.4s, avg = 11.3s, dev = 0.7s
//tensorflow/python/distribute:mirrored_strategy_test_cpu PASSED in 12.5s
  Stats over 5 runs: max = 12.5s, min = 10.4s, avg = 11.3s, dev = 0.7s
//tensorflow/python/distribute:moving_averages_test_2gpu PASSED in 26.1s
  Stats over 5 runs: max = 26.1s, min = 17.6s, avg = 21.6s, dev = 2.9s
//tensorflow/python/distribute:moving_averages_test_cpu PASSED in 21.7s
  Stats over 5 runs: max = 21.7s, min = 14.8s, avg = 17.9s, dev = 3.0s
//tensorflow/python/distribute:vars_test_2gpu PASSED in 15.1s
  Stats over 5 runs: max = 15.1s, min = 12.0s, avg = 13.4s, dev = 1.0s
//tensorflow/python/distribute:vars_test_cpu PASSED in 15.0s
  Stats over 5 runs: max = 15.0s, min = 13.9s, avg = 14.4s, dev = 0.4s
//tensorflow/python/eager:device_placement_test_cpu PASSED in 8.9s Stats over 5 runs: max = 8.9s, min = 7.1s, avg = 8.3s, dev = 0.7s //tensorflow/python/eager:forwardprop_test_cpu PASSED in 121.8s Stats over 5 runs: max = 121.8s, min = 13.4s, avg = 48.8s, dev = 37.8s //tensorflow/python/eager/polymorphic_function:gradients_test_cpu PASSED in 13.7s Stats over 5 runs: max = 13.7s, min = 9.1s, avg = 11.2s, dev = 2.0s //tensorflow/python/kernel_tests/linalg:cholesky_op_test_cpu PASSED in 75.1s Stats over 5 runs: max = 75.1s, min = 38.1s, avg = 50.7s, dev = 13.3s //tensorflow/python/kernel_tests/linalg:linear_operator_adjoint_test_cpu PASSED in 23.5s Stats over 5 runs: max = 23.5s, min = 22.4s, avg = 23.0s, dev = 0.4s //tensorflow/python/kernel_tests/linalg:linear_operator_composition_test_cpu PASSED in 58.1s Stats over 5 runs: max = 58.1s, min = 51.6s, avg = 54.8s, dev = 2.6s //tensorflow/python/kernel_tests/linalg:linear_operator_diag_test_cpu PASSED in 21.1s Stats over 5 runs: max = 21.1s, min = 20.1s, avg = 20.6s, dev = 0.4s //tensorflow/python/kernel_tests/linalg:linear_operator_full_matrix_test_cpu PASSED in 26.3s Stats over 5 runs: max = 26.3s, min = 23.9s, avg = 24.9s, dev = 0.9s //tensorflow/python/kernel_tests/linalg:linear_operator_householder_test_cpu PASSED in 25.3s Stats over 5 runs: max = 25.3s, min = 24.4s, avg = 24.9s, dev = 0.4s //tensorflow/python/kernel_tests/linalg:linear_operator_identity_test_cpu PASSED in 32.7s Stats over 5 runs: max = 32.7s, min = 26.2s, avg = 29.6s, dev = 2.3s //tensorflow/python/kernel_tests/linalg:linear_operator_inversion_test_cpu PASSED in 42.9s Stats over 5 runs: max = 42.9s, min = 36.8s, avg = 40.8s, dev = 2.4s //tensorflow/python/kernel_tests/linalg:linear_operator_permutation_test_cpu PASSED in 19.4s Stats over 5 runs: max = 19.4s, min = 17.0s, avg = 18.0s, dev = 1.0s //tensorflow/python/kernel_tests/linalg:linear_operator_toeplitz_test_cpu PASSED in 15.4s Stats over 5 runs: max = 15.4s, min = 12.8s, avg = 13.6s, dev = 
0.9s //tensorflow/python/kernel_tests/linalg:linear_operator_tridiag_test_cpu PASSED in 69.7s Stats over 5 runs: max = 69.7s, min = 65.6s, avg = 67.0s, dev = 1.5s //tensorflow/python/kernel_tests/linalg:linear_operator_util_test_cpu PASSED in 8.4s Stats over 5 runs: max = 8.4s, min = 8.0s, avg = 8.2s, dev = 0.2s //tensorflow/python/kernel_tests/linalg:linear_operator_zeros_test_cpu PASSED in 18.2s Stats over 5 runs: max = 18.2s, min = 15.4s, avg = 16.5s, dev = 1.0s //tensorflow/python/kernel_tests/nn_ops:fractional_avg_pool_op_test PASSED in 17.7s Stats over 5 runs: max = 17.7s, min = 6.9s, avg = 9.9s, dev = 3.9s //tensorflow/python/kernel_tests/nn_ops:fractional_max_pool_op_test PASSED in 14.2s Stats over 5 runs: max = 14.2s, min = 6.6s, avg = 8.6s, dev = 2.8s //tensorflow/python/kernel_tests/sparse_ops:sparse_ops_test_cpu PASSED in 28.5s Stats over 5 runs: max = 28.5s, min = 6.8s, avg = 12.1s, dev = 8.2s //tensorflow/python/ops/parallel_for:math_test_cpu PASSED in 90.0s Stats over 5 runs: max = 90.0s, min = 23.9s, avg = 47.6s, dev = 24.1s //tensorflow/compiler/tests:scan_ops_test_cpu PASSED in 14.1s Stats over 6 runs: max = 14.1s, min = 10.5s, avg = 12.5s, dev = 1.2s //tensorflow/compiler/tests:scan_ops_test_cpu_mlir_bridge_test PASSED in 17.2s Stats over 6 runs: max = 17.2s, min = 12.1s, avg = 14.9s, dev = 1.5s //tensorflow/python:accumulate_n_benchmark_cpu PASSED in 6.6s Stats over 6 runs: max = 6.6s, min = 4.6s, avg = 5.9s, dev = 0.7s //tensorflow/python/data/experimental/kernel_tests:make_batched_features_dataset_test PASSED in 36.2s Stats over 6 runs: max = 36.2s, min = 13.5s, avg = 24.3s, dev = 10.2s //tensorflow/python/kernel_tests/array_ops:diag_op_test_cpu PASSED in 71.9s Stats over 6 runs: max = 71.9s, min = 9.3s, avg = 22.0s, dev = 22.4s //tensorflow/python/kernel_tests/math_ops:reduction_ops_test_cpu PASSED in 38.0s Stats over 6 runs: max = 38.0s, min = 20.7s, avg = 31.4s, dev = 5.5s //tensorflow/python/distribute/experimental/rpc:rpc_ops_test PASSED 
in 12.4s Stats over 7 runs: max = 12.4s, min = 7.5s, avg = 9.1s, dev = 1.9s //tensorflow/compiler/tests:matrix_diag_ops_test_cpu PASSED in 61.6s Stats over 8 runs: max = 61.6s, min = 4.8s, avg = 24.8s, dev = 19.7s //tensorflow/compiler/tests:matrix_diag_ops_test_cpu_mlir_bridge_test PASSED in 86.4s Stats over 8 runs: max = 86.4s, min = 5.7s, avg = 29.1s, dev = 26.7s //tensorflow/python/data/experimental/kernel_tests:csv_dataset_test PASSED in 36.0s Stats over 8 runs: max = 36.0s, min = 7.6s, avg = 18.8s, dev = 10.2s //tensorflow/python/data/experimental/kernel_tests:parallel_interleave_test PASSED in 27.4s Stats over 8 runs: max = 27.4s, min = 11.5s, avg = 18.9s, dev = 5.0s //tensorflow/python/data/experimental/kernel_tests/service:coordinated_read_ft_test PASSED in 49.3s Stats over 8 runs: max = 49.3s, min = 15.3s, avg = 30.4s, dev = 13.6s //tensorflow/python/data/experimental/kernel_tests/service:coordinated_read_test PASSED in 66.2s Stats over 8 runs: max = 66.2s, min = 15.2s, avg = 29.5s, dev = 15.4s //tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_test PASSED in 21.2s Stats over 8 runs: max = 21.2s, min = 6.3s, avg = 11.8s, dev = 5.6s //tensorflow/python/data/experimental/kernel_tests/service:fault_tolerance_test PASSED in 30.6s Stats over 8 runs: max = 30.6s, min = 16.1s, avg = 20.2s, dev = 4.7s //tensorflow/python/data/kernel_tests:filter_test PASSED in 14.0s Stats over 8 runs: max = 14.0s, min = 10.3s, avg = 12.4s, dev = 1.0s //tensorflow/python/data/kernel_tests:flat_map_test PASSED in 29.4s Stats over 8 runs: max = 29.4s, min = 15.0s, avg = 21.0s, dev = 5.0s //tensorflow/python/data/kernel_tests:shard_test PASSED in 22.2s Stats over 8 runs: max = 22.2s, min = 17.0s, avg = 20.2s, dev = 2.1s //tensorflow/python/data/kernel_tests:tf_record_dataset_test PASSED in 28.7s Stats over 8 runs: max = 28.7s, min = 17.2s, avg = 24.2s, dev = 3.3s //tensorflow/python/distribute/failure_handling:failure_handler_test PASSED in 58.0s Stats 
over 8 runs: max = 58.0s, min = 21.8s, avg = 39.5s, dev = 11.4s //tensorflow/python/kernel_tests/linalg:linalg_ops_test_cpu PASSED in 51.1s Stats over 8 runs: max = 51.1s, min = 29.4s, avg = 41.0s, dev = 7.6s //tensorflow/python/kernel_tests/linalg:linear_operator_block_diag_test_cpu PASSED in 56.3s Stats over 8 runs: max = 56.3s, min = 42.2s, avg = 50.7s, dev = 5.4s //tensorflow/python/kernel_tests/linalg:linear_operator_block_lower_triangular_test_cpu PASSED in 73.0s Stats over 8 runs: max = 73.0s, min = 50.2s, avg = 58.5s, dev = 7.8s //tensorflow/python/kernel_tests/nn_ops:depthwise_conv_op_d9m_test_cpu PASSED in 64.3s Stats over 8 runs: max = 64.3s, min = 6.6s, avg = 16.6s, dev = 19.3s //tensorflow/python/kernel_tests/nn_ops:depthwise_conv_op_test_cpu PASSED in 8.9s Stats over 8 runs: max = 8.9s, min = 6.5s, avg = 6.9s, dev = 0.8s //tensorflow/python/kernel_tests/signal:fft_ops_test_cpu PASSED in 21.9s Stats over 8 runs: max = 21.9s, min = 9.4s, avg = 14.4s, dev = 5.0s //tensorflow/python/ops/ragged:dynamic_ragged_shape_test PASSED in 45.9s Stats over 8 runs: max = 45.9s, min = 27.3s, avg = 34.5s, dev = 6.7s //tensorflow/python/ops/ragged:ragged_tensor_test PASSED in 25.6s Stats over 8 runs: max = 25.6s, min = 11.2s, avg = 16.0s, dev = 4.1s //tensorflow/dtensor/python/tests:input_util_test FLAKY, failed in 1 out of 9 in 23.0s Stats over 9 runs: max = 23.0s, min = 12.8s, avg = 19.0s, dev = 2.9s /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/dtensor/python/tests/input_util_test/shard_1_of_8/test_attempts/attempt_1.log //tensorflow/python/distribute/failure_handling:gce_failure_handler_test FLAKY, failed in 1 out of 9 in 121.2s Stats over 9 runs: max = 121.2s, min = 13.7s, avg = 50.1s, dev = 42.4s 
/home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/distribute/failure_handling/gce_failure_handler_test/shard_7_of_8/test_attempts/attempt_1.log //tensorflow/compiler/tests:bincount_op_test_cpu PASSED in 6.4s Stats over 10 runs: max = 6.4s, min = 3.7s, avg = 4.9s, dev = 0.8s //tensorflow/compiler/tests:conv2d_test_cpu PASSED in 7.0s Stats over 10 runs: max = 7.0s, min = 6.3s, avg = 6.6s, dev = 0.3s //tensorflow/compiler/tests:conv2d_test_cpu_mlir_bridge_test PASSED in 8.2s Stats over 10 runs: max = 8.2s, min = 5.4s, avg = 7.1s, dev = 0.9s //tensorflow/compiler/tests:image_ops_test_cpu PASSED in 16.3s Stats over 10 runs: max = 16.3s, min = 11.2s, avg = 14.1s, dev = 1.6s //tensorflow/compiler/tests:random_ops_test_cpu PASSED in 19.8s Stats over 10 runs: max = 19.8s, min = 14.0s, avg = 16.9s, dev = 1.8s //tensorflow/compiler/tests:random_ops_test_cpu_mlir_bridge_test PASSED in 31.0s Stats over 10 runs: max = 31.0s, min = 9.0s, avg = 16.0s, dev = 5.9s //tensorflow/compiler/tests:stateless_random_ops_test_cpu PASSED in 97.0s Stats over 10 runs: max = 97.0s, min = 39.5s, avg = 68.3s, dev = 19.6s //tensorflow/compiler/tests:stateless_random_ops_test_cpu_mlir_bridge_test PASSED in 67.9s Stats over 10 runs: max = 67.9s, min = 32.6s, avg = 51.3s, dev = 11.9s //tensorflow/compiler/xla/client/lib:svd_test_cpu PASSED in 100.8s Stats over 10 runs: max = 100.8s, min = 10.6s, avg = 35.4s, dev = 32.8s //tensorflow/compiler/xla/client/lib:tridiagonal_test_cpu PASSED in 12.1s Stats over 10 runs: max = 12.1s, min = 7.9s, avg = 10.0s, dev = 1.0s //tensorflow/compiler/xla/service/cpu:cpu_runtime_test PASSED in 14.1s Stats over 10 runs: max = 14.1s, min = 1.3s, avg = 9.9s, dev = 4.4s //tensorflow/python:special_math_ops_test_cpu PASSED in 51.5s Stats over 10 runs: max = 51.5s, min = 8.2s, avg = 15.6s, dev = 12.3s //tensorflow/python/data/kernel_tests:rejection_resample_test PASSED in 
13.7s Stats over 10 runs: max = 13.7s, min = 4.8s, avg = 8.4s, dev = 2.5s //tensorflow/python/distribute:input_lib_test_2gpu PASSED in 40.8s Stats over 10 runs: max = 40.8s, min = 26.6s, avg = 32.4s, dev = 4.5s //tensorflow/python/distribute:input_lib_test_cpu PASSED in 30.8s Stats over 10 runs: max = 30.8s, min = 20.5s, avg = 25.5s, dev = 3.3s //tensorflow/python/distribute:input_lib_type_spec_test_2gpu PASSED in 17.4s Stats over 10 runs: max = 17.4s, min = 6.3s, avg = 11.8s, dev = 3.8s //tensorflow/python/distribute:input_lib_type_spec_test_cpu PASSED in 18.0s Stats over 10 runs: max = 18.0s, min = 7.9s, avg = 12.7s, dev = 3.4s //tensorflow/python/framework:config_vgpu_test_2gpu PASSED in 8.4s Stats over 10 runs: max = 8.4s, min = 4.7s, avg = 5.7s, dev = 1.1s //tensorflow/python/framework:config_vgpu_test_cpu PASSED in 7.2s Stats over 10 runs: max = 7.2s, min = 5.8s, avg = 6.5s, dev = 0.5s //tensorflow/python/framework:function_test_cpu PASSED in 60.8s Stats over 10 runs: max = 60.8s, min = 7.0s, avg = 13.5s, dev = 15.9s //tensorflow/python/grappler:cluster_test_cpu PASSED in 7.2s Stats over 10 runs: max = 7.2s, min = 5.1s, avg = 6.4s, dev = 0.6s //tensorflow/python/kernel_tests/array_ops:array_ops_test_cpu PASSED in 18.1s Stats over 10 runs: max = 18.1s, min = 9.8s, avg = 12.8s, dev = 2.3s //tensorflow/python/kernel_tests/array_ops:inplace_ops_test_cpu PASSED in 10.5s Stats over 10 runs: max = 10.5s, min = 5.9s, avg = 8.3s, dev = 1.4s //tensorflow/python/kernel_tests/data_structures:tensor_array_ops_test_cpu PASSED in 13.4s Stats over 10 runs: max = 13.4s, min = 8.2s, avg = 10.1s, dev = 1.5s //tensorflow/python/kernel_tests/linalg:linear_operator_kronecker_test_cpu PASSED in 43.1s Stats over 10 runs: max = 43.1s, min = 27.6s, avg = 34.9s, dev = 6.6s //tensorflow/python/kernel_tests/linalg:linear_operator_low_rank_update_test_cpu PASSED in 100.5s Stats over 10 runs: max = 100.5s, min = 90.0s, avg = 93.5s, dev = 3.5s 
//tensorflow/python/kernel_tests/linalg:tridiagonal_matmul_op_test_cpu PASSED in 124.8s Stats over 10 runs: max = 124.8s, min = 3.7s, avg = 18.3s, dev = 35.5s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_ops_test_cpu PASSED in 51.0s Stats over 10 runs: max = 51.0s, min = 12.3s, avg = 30.7s, dev = 12.8s //tensorflow/python/kernel_tests/math_ops:segment_reduction_ops_test_cpu PASSED in 27.6s Stats over 10 runs: max = 27.6s, min = 7.6s, avg = 15.6s, dev = 7.8s //tensorflow/python/kernel_tests/nn_ops:rnn_test_cpu PASSED in 12.1s Stats over 10 runs: max = 12.1s, min = 7.9s, avg = 10.1s, dev = 1.3s //tensorflow/python/kernel_tests/random:random_index_shuffle_test PASSED in 8.1s Stats over 10 runs: max = 8.1s, min = 6.5s, avg = 7.3s, dev = 0.6s //tensorflow/python/kernel_tests/random:stateless_random_ops_test_cpu PASSED in 154.4s Stats over 10 runs: max = 154.4s, min = 17.5s, avg = 84.1s, dev = 63.6s //tensorflow/python/ops/ragged:ragged_tensor_supported_values_test PASSED in 18.1s Stats over 10 runs: max = 18.1s, min = 14.7s, avg = 16.1s, dev = 1.3s //tensorflow/python/saved_model:load_test_cpu PASSED in 50.4s Stats over 10 runs: max = 50.4s, min = 24.0s, avg = 28.7s, dev = 7.4s //tensorflow/compiler/tests:fft_test_cpu PASSED in 27.1s Stats over 12 runs: max = 27.1s, min = 13.4s, avg = 18.5s, dev = 4.9s //tensorflow/compiler/xla/service:triangular_solve_expander_test PASSED in 5.4s Stats over 12 runs: max = 5.4s, min = 2.7s, avg = 4.1s, dev = 0.9s //tensorflow/python/data/experimental/kernel_tests:group_by_reducer_test PASSED in 19.5s Stats over 12 runs: max = 19.5s, min = 6.3s, avg = 11.2s, dev = 4.4s //tensorflow/python/data/kernel_tests:choose_from_datasets_test PASSED in 15.8s Stats over 12 runs: max = 15.8s, min = 6.9s, avg = 10.8s, dev = 2.7s //tensorflow/python/data/kernel_tests:memory_cleanup_test_cpu PASSED in 9.6s Stats over 12 runs: max = 9.6s, min = 3.6s, avg = 7.0s, dev = 1.5s //tensorflow/python/distribute:multi_process_runner_test_2gpu 
PASSED in 223.5s Stats over 12 runs: max = 223.5s, min = 12.2s, avg = 50.5s, dev = 58.3s //tensorflow/python/distribute:multi_process_runner_test_cpu PASSED in 228.4s Stats over 12 runs: max = 228.4s, min = 11.6s, avg = 50.9s, dev = 59.7s //tensorflow/python/kernel_tests/nn_ops:pooling_ops_test_cpu FAILED in 3 out of 12 in 24.6s Stats over 12 runs: max = 24.6s, min = 3.7s, avg = 10.9s, dev = 6.5s /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu/shard_8_of_10/test.log /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu/shard_8_of_10/test_attempts/attempt_1.log /home/buildslave/.cache/bazel/_bazel_buildslave/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/python/kernel_tests/nn_ops/pooling_ops_test_cpu/shard_8_of_10/test_attempts/attempt_2.log //tensorflow/python/eager/polymorphic_function:polymorphic_function_test_cpu PASSED in 22.4s Stats over 15 runs: max = 22.4s, min = 13.0s, avg = 16.8s, dev = 3.0s //tensorflow/python/kernel_tests/linalg:linear_operator_circulant_test_cpu PASSED in 64.3s Stats over 15 runs: max = 64.3s, min = 54.7s, avg = 59.8s, dev = 2.8s //tensorflow/python/kernel_tests/nn_ops:rnn_cell_test_cpu PASSED in 44.7s Stats over 15 runs: max = 44.7s, min = 6.0s, avg = 13.4s, dev = 9.3s //tensorflow/python:image_ops_test_cpu PASSED in 29.7s Stats over 16 runs: max = 29.7s, min = 9.4s, avg = 19.7s, dev = 4.6s //tensorflow/python/data/experimental/kernel_tests/service:dynamic_sharding_test PASSED in 25.6s Stats over 16 runs: max = 25.6s, min = 12.4s, avg = 18.5s, dev = 3.8s //tensorflow/python/data/experimental/kernel_tests/service:worker_tags_test PASSED in 36.7s Stats over 16 runs: max = 36.7s, min = 4.2s, avg = 19.9s, 
dev = 12.5s //tensorflow/python/data/kernel_tests:snapshot_test PASSED in 28.1s Stats over 16 runs: max = 28.1s, min = 11.2s, avg = 18.3s, dev = 5.0s //tensorflow/python/kernel_tests/control_flow:control_flow_ops_py_test_cpu PASSED in 32.0s Stats over 16 runs: max = 32.0s, min = 7.5s, avg = 11.0s, dev = 5.6s //tensorflow/python/kernel_tests/linalg:matrix_exponential_op_test PASSED in 13.1s Stats over 16 runs: max = 13.1s, min = 7.0s, avg = 8.8s, dev = 1.5s //tensorflow/python/kernel_tests/signal:dct_ops_test_cpu PASSED in 11.6s Stats over 16 runs: max = 11.6s, min = 6.1s, avg = 8.5s, dev = 2.0s //tensorflow/python/ops/parallel_for:control_flow_ops_test_cpu PASSED in 59.7s Stats over 16 runs: max = 59.7s, min = 13.1s, avg = 21.3s, dev = 10.6s //tensorflow/python/data/experimental/kernel_tests/service:distributed_save_ft_test PASSED in 10.3s Stats over 17 runs: max = 10.3s, min = 5.8s, avg = 7.0s, dev = 1.1s //tensorflow/python/data/kernel_tests:map_test PASSED in 46.2s Stats over 19 runs: max = 46.2s, min = 15.9s, avg = 28.0s, dev = 8.2s //tensorflow/compiler/tests:pooling_ops_3d_test_cpu PASSED in 6.9s Stats over 20 runs: max = 6.9s, min = 3.2s, avg = 4.4s, dev = 1.0s //tensorflow/compiler/tests:pooling_ops_3d_test_cpu_mlir_bridge_test PASSED in 6.6s Stats over 20 runs: max = 6.6s, min = 3.2s, avg = 4.5s, dev = 1.0s //tensorflow/compiler/tests:pooling_ops_test_cpu PASSED in 11.2s Stats over 20 runs: max = 11.2s, min = 3.1s, avg = 5.0s, dev = 1.8s //tensorflow/compiler/tests:pooling_ops_test_cpu_mlir_bridge_test PASSED in 10.6s Stats over 20 runs: max = 10.6s, min = 3.6s, avg = 6.0s, dev = 1.7s //tensorflow/compiler/xla/tests:convolution_dimension_numbers_test_cpu PASSED in 12.2s Stats over 20 runs: max = 12.2s, min = 7.2s, avg = 8.9s, dev = 1.4s //tensorflow/compiler/xla/tests:dot_operation_single_threaded_runtime_test_cpu PASSED in 17.3s Stats over 20 runs: max = 17.3s, min = 11.6s, avg = 14.5s, dev = 1.6s //tensorflow/compiler/xla/tests:dot_operation_test_cpu 
PASSED in 15.9s Stats over 20 runs: max = 15.9s, min = 11.8s, avg = 13.1s, dev = 1.1s //tensorflow/compiler/xla/tests:prng_test_cpu PASSED in 12.5s Stats over 20 runs: max = 12.5s, min = 6.7s, avg = 8.9s, dev = 1.5s //tensorflow/compiler/xla/tests:reduce_window_test_cpu PASSED in 44.1s Stats over 20 runs: max = 44.1s, min = 7.6s, avg = 17.8s, dev = 12.0s //tensorflow/python/autograph/tests:loop_control_flow_test PASSED in 18.7s Stats over 20 runs: max = 18.7s, min = 13.6s, avg = 16.9s, dev = 1.2s //tensorflow/python/kernel_tests:metrics_test PASSED in 45.8s Stats over 20 runs: max = 45.8s, min = 7.8s, avg = 22.3s, dev = 10.7s //tensorflow/python/kernel_tests/array_ops:matrix_band_part_op_test_cpu PASSED in 7.7s Stats over 20 runs: max = 7.7s, min = 2.6s, avg = 5.5s, dev = 1.2s //tensorflow/python/kernel_tests/data_structures:barrier_ops_test PASSED in 14.4s Stats over 20 runs: max = 14.4s, min = 3.6s, avg = 7.0s, dev = 2.8s //tensorflow/python/kernel_tests/linalg:eig_op_test PASSED in 61.1s Stats over 20 runs: max = 61.1s, min = 6.7s, avg = 18.3s, dev = 16.0s //tensorflow/python/kernel_tests/linalg:linalg_grad_test_cpu PASSED in 177.5s Stats over 20 runs: max = 177.5s, min = 35.6s, avg = 84.8s, dev = 38.5s //tensorflow/python/kernel_tests/linalg:norm_op_test_cpu PASSED in 7.5s Stats over 20 runs: max = 7.5s, min = 4.2s, avg = 5.6s, dev = 0.9s //tensorflow/python/kernel_tests/linalg:normalize_op_test_cpu PASSED in 15.6s Stats over 20 runs: max = 15.6s, min = 5.1s, avg = 10.9s, dev = 2.5s //tensorflow/python/kernel_tests/linalg:qr_op_test_cpu PASSED in 229.3s Stats over 20 runs: max = 229.3s, min = 33.1s, avg = 105.4s, dev = 54.7s //tensorflow/python/kernel_tests/linalg:self_adjoint_eig_op_test_cpu PASSED in 23.3s Stats over 20 runs: max = 23.3s, min = 3.8s, avg = 10.9s, dev = 6.0s //tensorflow/python/kernel_tests/math_ops:batch_matmul_op_test_cpu PASSED in 30.4s Stats over 20 runs: max = 30.4s, min = 6.1s, avg = 15.9s, dev = 7.9s 
//tensorflow/python/kernel_tests/math_ops:matmul_op_test_cpu PASSED in 23.0s Stats over 20 runs: max = 23.0s, min = 14.4s, avg = 18.6s, dev = 2.2s //tensorflow/python/kernel_tests/math_ops:tensordot_op_test_cpu PASSED in 61.7s Stats over 20 runs: max = 61.7s, min = 5.9s, avg = 27.1s, dev = 19.9s //tensorflow/python/kernel_tests/nn_ops:embedding_ops_test_cpu PASSED in 19.4s Stats over 20 runs: max = 19.4s, min = 10.4s, avg = 12.7s, dev = 1.9s //tensorflow/python/data/experimental/kernel_tests/service:local_workers_test PASSED in 27.8s Stats over 24 runs: max = 27.8s, min = 9.3s, avg = 19.4s, dev = 4.2s //tensorflow/python/data/kernel_tests:interleave_test PASSED in 21.6s Stats over 24 runs: max = 21.6s, min = 9.3s, avg = 13.5s, dev = 3.5s //tensorflow/python/data/kernel_tests:sample_from_datasets_test PASSED in 17.6s Stats over 24 runs: max = 17.6s, min = 4.0s, avg = 9.0s, dev = 4.1s //tensorflow/compiler/xla/tests:array_elementwise_ops_test_cpu PASSED in 14.3s Stats over 25 runs: max = 14.3s, min = 6.8s, avg = 9.9s, dev = 2.0s //tensorflow/compiler/xla/tests:select_and_scatter_test_cpu PASSED in 41.3s Stats over 25 runs: max = 41.3s, min = 8.3s, avg = 14.1s, dev = 8.0s //tensorflow/compiler/xla/tests:convolution_variants_test_cpu PASSED in 12.3s Stats over 30 runs: max = 12.3s, min = 6.6s, avg = 8.9s, dev = 1.4s //tensorflow/compiler/xla/tests:iota_test_cpu PASSED in 17.4s Stats over 30 runs: max = 17.4s, min = 12.3s, avg = 13.8s, dev = 1.0s //tensorflow/compiler/xla/tests:params_test_cpu PASSED in 11.5s Stats over 30 runs: max = 11.5s, min = 7.5s, avg = 8.8s, dev = 0.9s //tensorflow/compiler/xla/tests:reshape_test_cpu PASSED in 11.6s Stats over 30 runs: max = 11.6s, min = 7.1s, avg = 8.8s, dev = 1.1s //tensorflow/python/kernel_tests/nn_ops:conv_ops_3d_test_cpu PASSED in 42.9s Stats over 30 runs: max = 42.9s, min = 2.9s, avg = 11.2s, dev = 8.7s //tensorflow/compiler/xla/tests:reduce_test_cpu PASSED in 10.2s Stats over 31 runs: max = 10.2s, min = 6.9s, avg = 8.3s, 
dev = 0.9s //tensorflow/compiler/xla/tests:scalar_computations_test_cpu PASSED in 11.6s Stats over 32 runs: max = 11.6s, min = 8.0s, avg = 9.1s, dev = 0.8s //tensorflow/python/data/experimental/kernel_tests/service:auto_shard_test PASSED in 22.7s Stats over 32 runs: max = 22.7s, min = 5.3s, avg = 15.2s, dev = 4.7s //tensorflow/python/data/experimental/kernel_tests/service:data_service_ops_test PASSED in 26.9s Stats over 32 runs: max = 26.9s, min = 9.8s, avg = 18.2s, dev = 5.1s //tensorflow/compiler/xla/tests:batch_normalization_test_cpu PASSED in 9.9s Stats over 40 runs: max = 9.9s, min = 7.2s, avg = 8.7s, dev = 0.7s //tensorflow/compiler/xla/tests:bfloat16_test_cpu PASSED in 12.5s Stats over 40 runs: max = 12.5s, min = 10.3s, avg = 11.3s, dev = 0.7s //tensorflow/compiler/xla/tests:conv_depthwise_backprop_filter_test_cpu PASSED in 9.6s Stats over 40 runs: max = 9.6s, min = 7.2s, avg = 8.2s, dev = 0.7s //tensorflow/compiler/xla/tests:slice_test_cpu PASSED in 14.0s Stats over 40 runs: max = 14.0s, min = 9.8s, avg = 11.4s, dev = 0.9s //tensorflow/compiler/mlir/quantization/tensorflow/python:quantize_model_test PASSED in 48.4s Stats over 50 runs: max = 48.4s, min = 20.1s, avg = 30.1s, dev = 8.2s //tensorflow/compiler/tests:sort_ops_test_cpu PASSED in 39.0s Stats over 50 runs: max = 39.0s, min = 2.8s, avg = 12.1s, dev = 8.2s //tensorflow/compiler/tests:sort_ops_test_cpu_mlir_bridge_test PASSED in 43.3s Stats over 50 runs: max = 43.3s, min = 2.6s, avg = 10.7s, dev = 8.7s //tensorflow/compiler/xla/tests:conv_depthwise_test_cpu PASSED in 9.4s Stats over 50 runs: max = 9.4s, min = 6.6s, avg = 7.8s, dev = 0.7s //tensorflow/compiler/xla/tests:convolution_test_1d_no_vmodule_cpu PASSED in 13.8s Stats over 50 runs: max = 13.8s, min = 9.9s, avg = 12.0s, dev = 1.0s //tensorflow/compiler/xla/tests:convolution_test_cpu PASSED in 19.4s Stats over 50 runs: max = 19.4s, min = 9.1s, avg = 13.6s, dev = 2.4s 
//tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_dense_mat_mul_grad_test_cpu PASSED in 13.3s Stats over 50 runs: max = 13.3s, min = 4.4s, avg = 7.2s, dev = 2.3s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_grad_test_cpu PASSED in 7.2s Stats over 50 runs: max = 7.2s, min = 2.7s, avg = 4.0s, dev = 1.3s //tensorflow/python/kernel_tests/linalg/sparse:csr_sparse_matrix_sparse_mat_mul_grad_test_cpu PASSED in 7.3s Stats over 50 runs: max = 7.3s, min = 2.8s, avg = 4.2s, dev = 1.1s //tensorflow/python/kernel_tests/math_ops:cwise_ops_binary_test_cpu PASSED in 40.4s Stats over 50 runs: max = 40.4s, min = 11.1s, avg = 23.0s, dev = 7.8s //tensorflow/python/kernel_tests/math_ops:cwise_ops_test_cpu PASSED in 14.9s Stats over 50 runs: max = 14.9s, min = 3.5s, avg = 5.9s, dev = 1.8s //tensorflow/python/kernel_tests/math_ops:cwise_ops_unary_test_cpu PASSED in 12.2s Stats over 50 runs: max = 12.2s, min = 2.8s, avg = 4.9s, dev = 2.4s Executed 3644 out of 3644 tests: 3643 tests pass and 1 fails locally. There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.