Console Output
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   ee5bd74cfec316736bf6abc03f22955f88d53e24
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-01 15:16:10
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
check diff failed 5-th time, retry later
table partition_table2.t2 not exists for 1-th check, retry later

  0     0    0     0    0     0      0      0 --:--:--  0:00:17 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:18 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:20 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:21 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:22 --:--:--     0
{"level":"warn","ts":"2024-05-07T09:51:02.901421+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0011d8000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-07T09:51:02.90381+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f061c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-07T09:51:02.998417+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f3e380/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"warn","ts":1715046663.0499544,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0025d68c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1715046663.0500152,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1715046663.0911312,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00236b880/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"info","ts":1715046663.0911927,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
{"level":"warn","ts":1715046663.1908581,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002e33dc0/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
{"level":"info","ts":1715046663.1909368,"caller":"v3@v3.5.10/client.go:210","msg":"Auto sync endpoints failed.","error":"context deadline exceeded"}
check diff failed 6-th time, retry later
Starting Upstream TiDB...
Release Version: v7.5.1-51-gdbd8ea2700
Edition: Community
Git Commit Hash: dbd8ea2700febe87bb6dfcc3dd7faf555c0094b0
Git Branch: release-7.5
UTC Build Time: 2024-05-06 16:47:24
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-51-gdbd8ea2700
Edition: Community
Git Commit Hash: dbd8ea2700febe87bb6dfcc3dd7faf555c0094b0
Git Branch: release-7.5
UTC Build Time: 2024-05-06 16:47:24
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
table partition_table2.t2 not exists for 2-th check, retry later
check diff failed 7-th time, retry later
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
table partition_table2.t2 not exists for 3-th check, retry later
check diff failed 8-th time, retry later
____________________________________
*************************** 1. row ***************************
  primary_ts: 449589126818430977
secondary_ts: 449589129489416205
*************************** 2. row ***************************
  primary_ts: 449589134682750976
secondary_ts: 449589134981595137
*************************** 3. row ***************************
  primary_ts: 449589142547070976
secondary_ts: 449589142845915137
*************************** 4. row ***************************
  primary_ts: 449589150411390976
secondary_ts: 449589150946164737
*************************** 5. row ***************************
  primary_ts: 449589158275710976
secondary_ts: 449589158600769537
*************************** 6. row ***************************
  primary_ts: 449589166140030976
secondary_ts: 449589166464827393
*************************** 7. row ***************************
  primary_ts: 449589174004350976
secondary_ts: 449589174067265538
*************************** 8. row ***************************
  primary_ts: 449589181868670976
secondary_ts: 449589182062395393
*************************** 9. row ***************************
  primary_ts: 449589189732990976
secondary_ts: 449589189953191941
skip invalid syncpoint primary_ts: 449589126818430977, first_ddl_ts: 449589130200350747
check diff successfully
check diff successfully
check diff successfully
check diff successfully
check diff successfully
check diff successfully
check diff successfully
check diff successfully
check diff successfully
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Tue May  7 09:51:04 CST 2024] <<<<<< run test case syncpoint success! >>>>>>
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
table partition_table2.t2 not exists for 4-th check, retry later
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d42fd5cf00005	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-mysql-test-363-n08jm, pid:6598, start at 2024-05-07 09:51:08.097675536 +0800 CST m=+5.082108832	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240507-09:53:08.104 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240507-09:51:08.092 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240507-09:41:08.092 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d42fd5cf00005	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-mysql-test-363-n08jm, pid:6598, start at 2024-05-07 09:51:08.097675536 +0800 CST m=+5.082108832	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240507-09:53:08.104 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240507-09:51:08.092 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240507-09:41:08.092 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d42fd5cd8000e	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-mysql-test-363-n08jm, pid:6683, start at 2024-05-07 09:51:08.100502271 +0800 CST m=+5.033246555	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240507-09:53:08.109 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240507-09:51:08.086 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240507-09:41:08.086 +0800	All versions after safe point can be accessed. (DO NOT EDIT)

  0     0    0     0    0     0      0      0 --:--:--  0:00:23 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:24 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:25 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:26 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:27 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:28 --:--:--     0
{"level":"warn","ts":"2024-05-07T09:51:08.902918+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0011d8000/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-07T09:51:08.904942+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f061c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
{"level":"warn","ts":"2024-05-07T09:51:08.999372+0800","logger":"etcd-client","caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f3e380/127.0.0.1:2479","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2479: connect: connection refused\""}
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-12-g9002cc34d
Edition:         Community
Git Commit Hash: 9002cc34d3b593a718b6c5260ba18f30a45ab314
Git Branch:      HEAD
UTC Build Time:  2024-04-18 07:24:48
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-04-18 07:28:40
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/force_replicate_table/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/force_replicate_table/tiflash/log/error.log
arg matches is ArgMatches { args: {"config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/force_replicate_table/tiflash-proxy.toml"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-12-g9002cc34d"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["9002cc34d3b593a718b6c5260ba18f30a45ab314"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/force_replicate_table/tiflash/db/proxy"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/force_replicate_table/tiflash/log/proxy.log"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
check diff failed 9-th time, retry later
table partition_table2.t2 not exists for 5-th check, retry later

  0     0    0     0    0     0      0      0 --:--:--  0:00:29 --:--:--     0
100   135  100   135    0     0      4      0  0:00:33  0:00:30  0:00:03    27
100   135  100   135    0     0      4      0  0:00:33  0:00:30  0:00:03    33
+ synced_status='{
    "error_msg": "[CDC:ErrPDEtcdAPIError]etcd api call error: context deadline exceeded",
    "error_code": "CDC:ErrPDEtcdAPIError"
}'
++ jq -r .error_code
++ echo '{' '"error_msg":' '"[CDC:ErrPDEtcdAPIError]etcd' api call error: context deadline 'exceeded",' '"error_code":' '"CDC:ErrPDEtcdAPIError"' '}'
+ error_code=CDC:ErrPDEtcdAPIError
+ cleanup_process cdc.test
wait process cdc.test exit for 1-th time...
check diff failed 10-th time, retry later
[Tue May  7 09:51:11 CST 2024] <<<<<< START cdc server in force_replicate_table case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.force_replicate_table.80708072.out server --log-file /tmp/tidb_cdc_test/force_replicate_table/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/force_replicate_table/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
table partition_table2.t2 not exists for 6-th check, retry later
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
+ stop_tidb_cluster
check diff failed 11-th time, retry later
table partition_table2.t2 not exists for 7-th check, retry later
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Tue, 07 May 2024 01:51:14 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/6ee33f04-4a12-4ee1-a48a-f7c870f3c2a8
	{"id":"6ee33f04-4a12-4ee1-a48a-f7c870f3c2a8","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50bf2d6af1
	6ee33f04-4a12-4ee1-a48a-f7c870f3c2a8

/tidb/cdc/default/default/upstream/7366069309297098705
	{"id":7366069309297098705,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/6ee33f04-4a12-4ee1-a48a-f7c870f3c2a8
	{"id":"6ee33f04-4a12-4ee1-a48a-f7c870f3c2a8","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50bf2d6af1
	6ee33f04-4a12-4ee1-a48a-f7c870f3c2a8

/tidb/cdc/default/default/upstream/7366069309297098705
	{"id":7366069309297098705,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/6ee33f04-4a12-4ee1-a48a-f7c870f3c2a8
	{"id":"6ee33f04-4a12-4ee1-a48a-f7c870f3c2a8","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50bf2d6af1
	6ee33f04-4a12-4ee1-a48a-f7c870f3c2a8

/tidb/cdc/default/default/upstream/7366069309297098705
	{"id":7366069309297098705,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
Create changefeed successfully!
ID: eee367d1-67cf-42f5-ba5d-a2f4b57432bb
Info: {"upstream_id":7366069309297098705,"namespace":"default","id":"eee367d1-67cf-42f5-ba5d-a2f4b57432bb","sink_uri":"mysql://normal:xxxxx@127.0.0.1:3306/?safe-mode=true","create_time":"2024-05-07T09:51:14.729436696+08:00","start_ts":449589194625646593,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":true,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.1-26-g93530e277","resolved_ts":449589194625646593,"checkpoint_ts":449589194625646593,"checkpoint_time":"2024-05-07 09:51:11.393"}
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/hang_sink_suicide/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:51:14 CST 2024] <<<<<< run test case hang_sink_suicide success! >>>>>>
table partition_table2.t2 exists
check diff failed 12-th time, retry later
table force_replicate_table.t0 not exists for 1-th check, retry later
check diff failed 13-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/server_config_compatibility/run.sh using Sink-Type: mysql... <<=================
The 1 times to try to start tidb cluster...
table force_replicate_table.t0 exists
table force_replicate_table.t1 exists
table force_replicate_table.t2 not exists for 1-th check, retry later
check diff failed 14-th time, retry later
+ run_case_with_unavailable_tikv conf/changefeed.toml
+ rm -rf /tmp/tidb_cdc_test/synced_status
+ mkdir -p /tmp/tidb_cdc_test/synced_status
+ start_tidb_cluster --workdir /tmp/tidb_cdc_test/synced_status
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
The 1 times to try to start tidb cluster...
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
table force_replicate_table.t2 exists
table force_replicate_table.t3 not exists for 1-th check, retry later
start tidb cluster in /tmp/tidb_cdc_test/server_config_compatibility
Starting Upstream PD...
Release Version: v7.5.1-6-g78f4254e3
Edition: Community
Git Commit Hash: 78f4254e3f5adb48e3e1e2489065f5ccf6cf1815
Git Branch: release-7.5
UTC Build Time:  2024-04-30 02:49:46
Starting Downstream PD...
Release Version: v7.5.1-6-g78f4254e3
Edition: Community
Git Commit Hash: 78f4254e3f5adb48e3e1e2489065f5ccf6cf1815
Git Branch: release-7.5
UTC Build Time:  2024-04-30 02:49:46
Verifying upstream PD is started...
check diff failed 15-th time, retry later
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Tue May  7 09:51:21 CST 2024] <<<<<< START cdc server in consistent_partition_table case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ [[ no != \n\o ]]
+ GO_FAILPOINTS='github.com/pingcap/tiflow/cdc/sink/dmlsink/txn/mysql/MySQLSinkHangLongTime=return(true);github.com/pingcap/tiflow/cdc/sink/ddlsink/mysql/MySQLSinkExecDDLDelay=return(true)'
+ (( i = 0 ))
+ (( i <= 50 ))
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.consistent_partition_table.2047520477.out server --log-file /tmp/tidb_cdc_test/consistent_partition_table/cdcpartition_table.server2.log --log-level debug --data-dir /tmp/tidb_cdc_test/consistent_partition_table/cdc_datapartition_table.server2 --cluster-id default
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
table force_replicate_table.t3 exists
table force_replicate_table.t4 not exists for 1-th check, retry later
chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
start tidb cluster in /tmp/tidb_cdc_test/synced_status
Starting Upstream PD...
Release Version: v7.5.1-6-g78f4254e3
Edition: Community
Git Commit Hash: 78f4254e3f5adb48e3e1e2489065f5ccf6cf1815
Git Branch: release-7.5
UTC Build Time:  2024-04-30 02:49:46
Starting Downstream PD...
Release Version: v7.5.1-6-g78f4254e3
Edition: Community
Git Commit Hash: 78f4254e3f5adb48e3e1e2489065f5ccf6cf1815
Git Branch: release-7.5
UTC Build Time:  2024-04-30 02:49:46
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   ee5bd74cfec316736bf6abc03f22955f88d53e24
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-01 15:16:10
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   ee5bd74cfec316736bf6abc03f22955f88d53e24
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-01 15:16:10
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
check diff failed 16-th time, retry later
table force_replicate_table.t4 exists
table force_replicate_table.t5 not exists for 1-th check, retry later
Starting Upstream TiDB...
Release Version: v7.5.1-51-gdbd8ea2700
Edition: Community
Git Commit Hash: dbd8ea2700febe87bb6dfcc3dd7faf555c0094b0
Git Branch: release-7.5
UTC Build Time: 2024-05-06 16:47:24
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-51-gdbd8ea2700
Edition: Community
Git Commit Hash: dbd8ea2700febe87bb6dfcc3dd7faf555c0094b0
Git Branch: release-7.5
UTC Build Time: 2024-05-06 16:47:24
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Tue, 07 May 2024 01:51:25 GMT
< Content-Type: text/plain; charset=utf-8
< Transfer-Encoding: chunked
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:

changefeedID: default/fbe3e360-db05-4e47-90e7-32892da81a44
{UpstreamID:7366069244847187894 Namespace:default ID:fbe3e360-db05-4e47-90e7-32892da81a44 SinkURI:mysql://normal:123456@127.0.0.1:3306/ CreateTime:2024-05-07 09:50:57.760779194 +0800 CST StartTs:449589191016185860 TargetTs:0 AdminJobType:noop Engine:unified SortDir: Config:0xc0034cc900 State:normal Error:<nil> Warning:<nil> CreatorVersion:v7.5.1-26-g93530e277 Epoch:449589191042400262}
{CheckpointTs:449589196678496259 MinTableBarrierTs:449589197989216267 AdminJobType:noop}
span: {table_id:108,start_key:7480000000000000ff6c5f720000000000fa,end_key:7480000000000000ff6c5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:109,start_key:7480000000000000ff6d5f720000000000fa,end_key:7480000000000000ff6d5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:117,start_key:7480000000000000ff755f720000000000fa,end_key:7480000000000000ff755f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:114,start_key:7480000000000000ff725f720000000000fa,end_key:7480000000000000ff725f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:105,start_key:7480000000000000ff695f720000000000fa,end_key:7480000000000000ff695f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:112,start_key:7480000000000000ff705f720000000000fa,end_key:7480000000000000ff705f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:106,start_key:7480000000000000ff6a5f720000000000fa,end_key:7480000000000000ff6a5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:119,start_key:7480000000000000ff775f720000000000fa,end_key:7480000000000000ff775f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:113,start_key:7480000000000000ff715f720000000000fa,end_key:7480000000000000ff715f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:116,start_key:7480000000000000ff745f720000000000fa,end_key:7480000000000000ff745f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:107,start_key:7480000000000000ff6b5f720000000000fa,end_key:7480000000000000ff6b5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:123,start_key:7480000000000000ff7b5f720000000000fa,end_key:7480000000000000ff7b5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/61fe0a1d-8324-49d9-8af0-b91d198fe7bd
	{"id":"61fe0a1d-8324-49d9-8af0-b91d198fe7bd","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50beebb8c9
	61fe0a1d-8324-49d9-8af0-b91d198fe7bd

/tidb/cdc/default/default/changefeed/info/fbe3e360-db05-4e47-90e7-32892da81a44
	{"upstream-id":7366069244847187894,"namespace":"default","changefeed-id":"fbe3e360-db05-4e47-90e7-32892da81a44","sink-uri":"mysql://normal:123456@127.0.0.1:3306/","create-time":"2024-05-07T09:50:57.760779194+08:00","start-ts":449589191016185860,"target-ts":0,"admin-job-type":0,"sort-engine":"","sort-dir":"","config":{"memory-quota":1073741824,"case-sensitive":false,"force-replicate":false,"check-gc-safe-point":true,"enable-sync-point":false,"ignore-ineligible-table":false,"bdr-mode":false,"sync-point-interval":600000000000,"sync-point-retention":86400000000000,"filter":{"rules":["*.*"],"ignore-txn-start-ts":null,"event-filters":null},"mounter":{"worker-num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include-commit-ts":false,"binary-encoding-method":"base64"},"encoder-concurrency":32,"terminator":"\r\n","date-separator":"day","enable-partition-separator":true,"enable-kafka-sink-v2":false,"only-output-updated-columns":false,"delete-only-output-handle-key-columns":false,"advance-timeout-in-sec":150,"send-bootstrap-interval-in-sec":120,"send-bootstrap-in-msg-count":10000,"send-bootstrap-to-all-partition":true,"open":{"output-old-value":true}},"consistent":{"level":"eventual","max-log-size":64,"flush-interval":2000,"meta-flush-interval":200,"encoding-worker-num":16,"flush-worker-num":8,"storage":"file:///tmp/tidb_cdc_test/consistent_partition_table/redo","use-file-backend":false,"compression":"","memory-usage":{"memory-quota-percentage":50,"event-cache-percentage":0}},"scheduler":{"enable-table-across-nodes":false,"region-threshold":100000,"write-key-threshold":0,"region-per-span":0},"integrity":{"integrity-check-level":"none","corruption-handle-level":"warn"},"changefeed-error-stuck-duration":1800000000000,"sql-mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced-status":{"synced-check-interval":300,"checkpoint-interval":15}},"state":"normal","error":null,"warning":null,"creator-version":"v7.5.1-26-g93530e277","epoch":449589191042400262}

/tidb/cdc/default/default/changefeed/status/fbe3e360-db05-4e47-90e7-32892da81a44
	{"checkpoint-ts":449589196678496259,"min-table-barrier-ts":449589197989216267,"admin-job-type":0}

/tidb/cdc/default/default/task/position/61fe0a1d-8324-49d9-8af0-b91d198fe7bd/fbe3e360-db05-4e47-90e7-32892da81a44
	{"checkpoint-ts":0,"resolved-ts":0,"count":0,"error":null,"warning":null}

/tidb/cdc/default/default/upstream/7366069244847187894
	{"id":7366069244847187894,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:

changefeedID: default/fbe3e360-db05-4e47-90e7-32892da81a44
{UpstreamID:7366069244847187894 Namespace:default ID:fbe3e360-db05-4e47-90e7-32892da81a44 SinkURI:mysql://normal:123456@127.0.0.1:3306/ CreateTime:2024-05-07 09:50:57.760779194 +0800 CST StartTs:449589191016185860 TargetTs:0 AdminJobType:noop Engine:unified SortDir: Config:0xc0034cc900 State:normal Error:<nil> Warning:<nil> CreatorVersion:v7.5.1-26-g93530e277 Epoch:449589191042400262}
{CheckpointTs:449589196678496259 MinTableBarrierTs:449589197989216267 AdminJobType:noop}
span: {table_id:108,start_key:7480000000000000ff6c5f720000000000fa,end_key:7480000000000000ff6c5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:109,start_key:7480000000000000ff6d5f720000000000fa,end_key:7480000000000000ff6d5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:117,start_key:7480000000000000ff755f720000000000fa,end_key:7480000000000000ff755f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:114,start_key:7480000000000000ff725f720000000000fa,end_key:7480000000000000ff725f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:105,start_key:7480000000000000ff695f720000000000fa,end_key:7480000000000000ff695f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:112,start_key:7480000000000000ff705f720000000000fa,end_key:7480000000000000ff705f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:106,start_key:7480000000000000ff6a5f720000000000fa,end_key:7480000000000000ff6a5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:119,start_key:7480000000000000ff775f720000000000fa,end_key:7480000000000000ff775f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:113,start_key:7480000000000000ff715f720000000000fa,end_key:7480000000000000ff715f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:116,start_key:7480000000000000ff745f720000000000fa,end_key:7480000000000000ff745f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:107,start_key:7480000000000000ff6b5f720000000000fa,end_key:7480000000000000ff6b5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:123,start_key:7480000000000000ff7b5f720000000000fa,end_key:7480000000000000ff7b5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/61fe0a1d-8324-49d9-8af0-b91d198fe7bd
	{"id":"61fe0a1d-8324-49d9-8af0-b91d198fe7bd","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50beebb8c9
	61fe0a1d-8324-49d9-8af0-b91d198fe7bd

/tidb/cdc/default/default/changefeed/info/fbe3e360-db05-4e47-90e7-32892da81a44
	{"upstream-id":7366069244847187894,"namespace":"default","changefeed-id":"fbe3e360-db05-4e47-90e7-32892da81a44","sink-uri":"mysql://normal:123456@127.0.0.1:3306/","create-time":"2024-05-07T09:50:57.760779194+08:00","start-ts":449589191016185860,"target-ts":0,"admin-job-type":0,"sort-engine":"","sort-dir":"","config":{"memory-quota":1073741824,"case-sensitive":false,"force-replicate":false,"check-gc-safe-point":true,"enable-sync-point":false,"ignore-ineligible-table":false,"bdr-mode":false,"sync-point-interval":600000000000,"sync-point-retention":86400000000000,"filter":{"rules":["*.*"],"ignore-txn-start-ts":null,"event-filters":null},"mounter":{"worker-num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include-commit-ts":false,"binary-encoding-method":"base64"},"encoder-concurrency":32,"terminator":"\r\n","date-separator":"day","enable-partition-separator":true,"enable-kafka-sink-v2":false,"only-output-updated-columns":false,"delete-only-output-handle-key-columns":false,"advance-timeout-in-sec":150,"send-bootstrap-interval-in-sec":120,"send-bootstrap-in-msg-count":10000,"send-bootstrap-to-all-partition":true,"open":{"output-old-value":true}},"consistent":{"level":"eventual","max-log-size":64,"flush-interval":2000,"meta-flush-interval":200,"encoding-worker-num":16,"flush-worker-num":8,"storage":"file:///tmp/tidb_cdc_test/consistent_partition_table/redo","use-file-backend":false,"compression":"","memory-usage":{"memory-quota-percentage":50,"event-cache-percentage":0}},"scheduler":{"enable-table-across-nodes":false,"region-threshold":100000,"write-key-threshold":0,"region-per-span":0},"integrity":{"integrity-check-level":"none","corruption-handle-level":"warn"},"changefeed-error-stuck-duration":1800000000000,"sql-mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced-status":{"synced-check-interval":300,"checkpoint-interval":15}},"state":"normal","error":null,"warning":null,"creator-version":"v7.5.1-26-g93530e277","epoch":449589191042400262}

/tidb/cdc/default/default/changefeed/status/fbe3e360-db05-4e47-90e7-32892da81a44
	{"checkpoint-ts":449589196678496259,"min-table-barrier-ts":449589197989216267,"admin-job-type":0}

/tidb/cdc/default/default/task/position/61fe0a1d-8324-49d9-8af0-b91d198fe7bd/fbe3e360-db05-4e47-90e7-32892da81a44
	{"checkpoint-ts":0,"resolved-ts":0,"count":0,"error":null,"warning":null}

/tidb/cdc/default/default/upstream/7366069244847187894
	{"id":7366069244847187894,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ echo '

*** owner info ***:



*** processors info ***:

changefeedID: default/fbe3e360-db05-4e47-90e7-32892da81a44
{UpstreamID:7366069244847187894 Namespace:default ID:fbe3e360-db05-4e47-90e7-32892da81a44 SinkURI:mysql://normal:123456@127.0.0.1:3306/ CreateTime:2024-05-07 09:50:57.760779194 +0800 CST StartTs:449589191016185860 TargetTs:0 AdminJobType:noop Engine:unified SortDir: Config:0xc0034cc900 State:normal Error:<nil> Warning:<nil> CreatorVersion:v7.5.1-26-g93530e277 Epoch:449589191042400262}
{CheckpointTs:449589196678496259 MinTableBarrierTs:449589197989216267 AdminJobType:noop}
span: {table_id:108,start_key:7480000000000000ff6c5f720000000000fa,end_key:7480000000000000ff6c5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:109,start_key:7480000000000000ff6d5f720000000000fa,end_key:7480000000000000ff6d5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:117,start_key:7480000000000000ff755f720000000000fa,end_key:7480000000000000ff755f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:114,start_key:7480000000000000ff725f720000000000fa,end_key:7480000000000000ff725f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:105,start_key:7480000000000000ff695f720000000000fa,end_key:7480000000000000ff695f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:112,start_key:7480000000000000ff705f720000000000fa,end_key:7480000000000000ff705f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:106,start_key:7480000000000000ff6a5f720000000000fa,end_key:7480000000000000ff6a5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:119,start_key:7480000000000000ff775f720000000000fa,end_key:7480000000000000ff775f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:113,start_key:7480000000000000ff715f720000000000fa,end_key:7480000000000000ff715f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:116,start_key:7480000000000000ff745f720000000000fa,end_key:7480000000000000ff745f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:107,start_key:7480000000000000ff6b5f720000000000fa,end_key:7480000000000000ff6b5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating
span: {table_id:123,start_key:7480000000000000ff7b5f720000000000fa,end_key:7480000000000000ff7b5f730000000000fa}, resolvedTs: 449589196678496259, checkpointTs: 449589196678496259, state: Replicating



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/61fe0a1d-8324-49d9-8af0-b91d198fe7bd
	{"id":"61fe0a1d-8324-49d9-8af0-b91d198fe7bd","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50beebb8c9
	61fe0a1d-8324-49d9-8af0-b91d198fe7bd

/tidb/cdc/default/default/changefeed/info/fbe3e360-db05-4e47-90e7-32892da81a44
	{"upstream-id":7366069244847187894,"namespace":"default","changefeed-id":"fbe3e360-db05-4e47-90e7-32892da81a44","sink-uri":"mysql://normal:123456@127.0.0.1:3306/","create-time":"2024-05-07T09:50:57.760779194+08:00","start-ts":449589191016185860,"target-ts":0,"admin-job-type":0,"sort-engine":"","sort-dir":"","config":{"memory-quota":1073741824,"case-sensitive":false,"force-replicate":false,"check-gc-safe-point":true,"enable-sync-point":false,"ignore-ineligible-table":false,"bdr-mode":false,"sync-point-interval":600000000000,"sync-point-retention":86400000000000,"filter":{"rules":["*.*"],"ignore-txn-start-ts":null,"event-filters":null},"mounter":{"worker-num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include-commit-ts":false,"binary-encoding-method":"base64"},"encoder-concurrency":32,"terminator":"\r\n","date-separator":"day","enable-partition-separator":true,"enable-kafka-sink-v2":false,"only-output-updated-columns":false,"delete-only-output-handle-key-columns":false,"advance-timeout-in-sec":150,"send-bootstrap-interval-in-sec":120,"send-bootstrap-in-msg-count":10000,"send-bootstrap-to-all-partition":true,"open":{"output-old-value":true}},"consistent":{"level":"eventual","max-log-size":64,"flush-interval":2000,"meta-flush-interval":200,"encoding-worker-num":16,"flush-worker-num":8,"storage":"file:///tmp/tidb_cdc_test/consistent_partition_table/redo","use-file-backend":false,"compression":"","memory-usage":{"memory-quota-percentage":50,"event-cache-percentage":0}},"scheduler":{"enable-table-across-nodes":false,"region-threshold":100000,"write-key-threshold":0,"region-per-span":0},"integrity":{"integrity-check-level":"none","corruption-handle-level":"warn"},"changefeed-error-stuck-duration":1800000000000,"sql-mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced-status":{"synced-check-interval":300,"checkpoint-interval":15}},"state":"normal","error":null,"warning":null,"creator-version":"v7.5.1-26-g93530e277","epoch":449589191042400262}

/tidb/cdc/default/default/changefeed/status/fbe3e360-db05-4e47-90e7-32892da81a44
	{"checkpoint-ts":449589196678496259,"min-table-barrier-ts":449589197989216267,"admin-job-type":0}

/tidb/cdc/default/default/task/position/61fe0a1d-8324-49d9-8af0-b91d198fe7bd/fbe3e360-db05-4e47-90e7-32892da81a44
	{"checkpoint-ts":0,"resolved-ts":0,"count":0,"error":null,"warning":null}

/tidb/cdc/default/default/upstream/7366069244847187894
	{"id":7366069244847187894,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ break
+ set +x
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   ee5bd74cfec316736bf6abc03f22955f88d53e24
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-01 15:16:10
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   ee5bd74cfec316736bf6abc03f22955f88d53e24
Git Commit Branch: release-7.5
UTC Build Time:    2024-05-01 15:16:10
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
check diff failed 17-th time, retry later
table force_replicate_table.t5 exists
table force_replicate_table.t6 not exists for 1-th check, retry later
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
Starting Upstream TiDB...
Release Version: v7.5.1-51-gdbd8ea2700
Edition: Community
Git Commit Hash: dbd8ea2700febe87bb6dfcc3dd7faf555c0094b0
Git Branch: release-7.5
UTC Build Time: 2024-05-06 16:47:24
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-51-gdbd8ea2700
Edition: Community
Git Commit Hash: dbd8ea2700febe87bb6dfcc3dd7faf555c0094b0
Git Branch: release-7.5
UTC Build Time: 2024-05-06 16:47:24
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
check diff failed 18-th time, retry later
table force_replicate_table.t6 not exists for 2-th check, retry later
check diff failed 19-th time, retry later
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
table force_replicate_table.t6 exists
check_data_subset force_replicate_table.t0 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t1 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t2 127.0.0.1 4000 127.0.0.1 3306
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d42feb6140012	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-mysql-test-363-v7jrh, pid:10296, start at 2024-05-07 09:51:30.211052017 +0800 CST m=+4.937328353	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240507-09:53:30.217 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240507-09:51:30.181 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240507-09:41:30.181 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d42feb6140012	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-mysql-test-363-v7jrh, pid:10296, start at 2024-05-07 09:51:30.211052017 +0800 CST m=+4.937328353	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240507-09:53:30.217 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240507-09:51:30.181 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240507-09:41:30.181 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d42feb8480013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-mysql-test-363-v7jrh, pid:10377, start at 2024-05-07 09:51:30.349291116 +0800 CST m=+5.002486947	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240507-09:53:30.355 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240507-09:51:30.322 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240507-09:41:30.322 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-12-g9002cc34d
Edition:         Community
Git Commit Hash: 9002cc34d3b593a718b6c5260ba18f30a45ab314
Git Branch:      HEAD
UTC Build Time:  2024-04-18 07:24:48
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-04-18 07:28:40
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/server_config_compatibility/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/server_config_compatibility/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-12-g9002cc34d"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["9002cc34d3b593a718b6c5260ba18f30a45ab314"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/server_config_compatibility/tiflash/log/proxy.log"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/server_config_compatibility/tiflash-proxy.toml"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/server_config_compatibility/tiflash/db/proxy"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
check diff failed 20-th time, retry later
run task successfully
check_data_subset force_replicate_table.t3 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t4 127.0.0.1 4000 127.0.0.1 3306
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.server_config_compatibility.cli.11687.out cli tso query --pd=http://127.0.0.1:2379
run task successfully
check_data_subset force_replicate_table.t5 127.0.0.1 4000 127.0.0.1 3306
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
check diff failed 21-th time, retry later
run task successfully
check_data_subset force_replicate_table.t6 127.0.0.1 4000 127.0.0.1 3306
id=19,a=NULL doesn't exist in downstream table force_replicate_table.t6
run task failed 1-th time, retry later
+ set +x
+ tso='449589200521003009
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449589200521003009 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d42fedcdc0012	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-mysql-test-363-m338b, pid:18259, start at 2024-05-07 09:51:32.691088435 +0800 CST m=+5.288085860	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240507-09:53:32.699 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240507-09:51:32.663 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240507-09:41:32.663 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63d42fedcc00014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-mysql-test-363-m338b, pid:18344, start at 2024-05-07 09:51:32.690950056 +0800 CST m=+5.219850309	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240507-09:53:32.701 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240507-09:51:32.705 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240507-09:41:32.705 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
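(Note: the variable dumps above are the contents of the mysql.tidb bootstrap/GC metadata table; the harness polls it until the GC leader rows appear to decide a TiDB instance is up. A minimal sketch of such a readiness check, assuming mysql client connectivity on the host/port shown; the retry bounds are illustrative:)

    # Poll mysql.tidb until the GC safe point row exists, i.e. TiDB
    # has finished bootstrapping and elected a GC worker leader.
    for i in $(seq 1 60); do
        if mysql -h 127.0.0.1 -P 4000 -u root -e \
            "SELECT VARIABLE_NAME, VARIABLE_VALUE, COMMENT FROM mysql.tidb;" \
            | grep -q tikv_gc_safe_point; then
            echo "TiDB is started"
            break
        fi
        sleep 1
    done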
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-12-g9002cc34d
Edition:         Community
Git Commit Hash: 9002cc34d3b593a718b6c5260ba18f30a45ab314
Git Branch:      HEAD
UTC Build Time:  2024-04-18 07:24:48
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-04-18 07:28:40
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/synced_status/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/synced_status/tiflash/log/error.log
arg matches is ArgMatches { args: {"advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/db/proxy"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash-proxy.toml"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["9002cc34d3b593a718b6c5260ba18f30a45ab314"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-12-g9002cc34d"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/synced_status/tiflash/log/proxy.log"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
check diff failed 22-th time, retry later
[Tue May  7 09:51:36 CST 2024] <<<<<< START cdc server in server_config_compatibility case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.server_config_compatibility.1173811740.out server --log-file /tmp/tidb_cdc_test/server_config_compatibility/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/server_config_compatibility/cdc_data --cluster-id default --config /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/server_config_compatibility/conf/server.toml
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
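(Note: the trace above is one iteration of the CDC server readiness loop: curl the /debug/info endpoint and retry until the body mentions "etcd info" rather than a connection failure. A compact sketch of the same loop, using the endpoint and marker strings shown in the trace; the loop shape is a reconstruction:)

    # Retry up to 50 times; the CDC server counts as ready once
    # /debug/info returns a body containing "etcd info".
    for ((i = 0; i <= 50; i++)); do
        res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info)
        if echo "$res" | grep -q 'etcd info'; then
            break
        fi
        if [ $i -eq 50 ]; then
            echo "failed to start cdc server" && exit 1
        fi
        sleep 3
    done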
check_data_subset force_replicate_table.t6 127.0.0.1 4000 127.0.0.1 3306
+ cd /tmp/tidb_cdc_test/synced_status
++ run_cdc_cli_tso_query 127.0.0.1 2379
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.19595.out cli tso query --pd=http://127.0.0.1:2379
check diff failed 23-th time, retry later
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Tue, 07 May 2024 01:51:39 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/ec215306-7400-4174-aad4-3b6586d6a948
	{"id":"ec215306-7400-4174-aad4-3b6586d6a948","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50bf88640b
	ec215306-7400-4174-aad4-3b6586d6a948

/tidb/cdc/default/default/upstream/7366069412839062543
	{"id":7366069412839062543,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/ec215306-7400-4174-aad4-3b6586d6a948
	{"id":"ec215306-7400-4174-aad4-3b6586d6a948","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50bf88640b
	ec215306-7400-4174-aad4-3b6586d6a948

/tidb/cdc/default/default/upstream/7366069412839062543
	{"id":7366069412839062543,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/ec215306-7400-4174-aad4-3b6586d6a948
	{"id":"ec215306-7400-4174-aad4-3b6586d6a948","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50bf88640b
run task successfully
	ec215306-7400-4174-aad4-3b6586d6a948

/tidb/cdc/default/default/upstream/7366069412839062543
	{"id":7366069412839062543,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.server_config_compatibility.cli.11788.out cli changefeed create --start-ts=449589200521003009 --sink-uri=mysql+ssl://normal:123456@127.0.0.1:3306/
+ set +x
+ tso='449589201603919873
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449589201603919873 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
+ start_ts=449589201603919873
+ run_cdc_server --workdir /tmp/tidb_cdc_test/synced_status --binary cdc.test
[Tue May  7 09:51:39 CST 2024] <<<<<< START cdc server in synced_status case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.1962819630.out server --log-file /tmp/tidb_cdc_test/synced_status/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/synced_status/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
Create changefeed successfully!
ID: f7293460-d844-490d-90c7-498c358d5a5a
Info: {"upstream_id":7366069412839062543,"namespace":"default","id":"f7293460-d844-490d-90c7-498c358d5a5a","sink_uri":"mysql+ssl://normal:xxxxx@127.0.0.1:3306/","create_time":"2024-05-07T09:51:39.853874662+08:00","start_ts":449589200521003009,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v7.5.1-26-g93530e277","resolved_ts":449589200521003009,"checkpoint_ts":449589200521003009,"checkpoint_time":"2024-05-07 09:51:33.882"}
PASS
coverage: 2.4% of statements in github.com/pingcap/tiflow/...
wait process cdc.test exit for 1-th time...
check diff failed 24-th time, retry later
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Tue May  7 09:51:41 CST 2024] <<<<<< run test case force_replicate_table success! >>>>>>
+ set +x
TEST FAILED: OUTPUT DOES NOT CONTAIN 'id: 1'
____________________________________
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
check data failed 1-th time, retry later
check diff failed 25-th time, retry later
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Tue, 07 May 2024 01:51:42 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/c04ffc73-7ad7-42a2-a269-803dd44e3c95
	{"id":"c04ffc73-7ad7-42a2-a269-803dd44e3c95","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50bf90b6f5
	c04ffc73-7ad7-42a2-a269-803dd44e3c95

/tidb/cdc/default/default/upstream/7366069420963665426
	{"id":7366069420963665426,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/c04ffc73-7ad7-42a2-a269-803dd44e3c95
	{"id":"c04ffc73-7ad7-42a2-a269-803dd44e3c95","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50bf90b6f5
	c04ffc73-7ad7-42a2-a269-803dd44e3c95

/tidb/cdc/default/default/upstream/7366069420963665426
	{"id":7366069420963665426,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/c04ffc73-7ad7-42a2-a269-803dd44e3c95
	{"id":"c04ffc73-7ad7-42a2-a269-803dd44e3c95","address":"127.0.0.1:8300","version":"v7.5.1-26-g93530e277"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f50bf90b6f5
	c04ffc73-7ad7-42a2-a269-803dd44e3c95

/tidb/cdc/default/default/upstream/7366069420963665426
	{"id":7366069420963665426,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
+ config_path=conf/changefeed.toml
+ SINK_URI='mysql://root@127.0.0.1:3306/?max-txn-row=1'
+ run_cdc_cli changefeed create --start-ts=449589201603919873 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/synced_status/conf/changefeed.toml
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.synced_status.cli.19674.out cli changefeed create --start-ts=449589201603919873 '--sink-uri=mysql://root@127.0.0.1:3306/?max-txn-row=1' --changefeed-id=test-1 --config=/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/synced_status/conf/changefeed.toml
Create changefeed successfully!
ID: test-1
Info: {"upstream_id":7366069420963665426,"namespace":"default","id":"test-1","sink_uri":"mysql://root@127.0.0.1:3306/?max-txn-row=1","create_time":"2024-05-07T09:51:42.964429104+08:00","start_ts":449589201603919873,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64"},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50,"event_cache_percentage":0}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"sql_mode":"ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION","synced_status":{"synced_check_interval":120,"checkpoint_interval":20}},"state":"normal","creator_version":"v7.5.1-26-g93530e277","resolved_ts":449589201603919873,"checkpoint_ts":449589201603919873,"checkpoint_time":"2024-05-07 09:51:38.013"}
PASS
coverage: 2.4% of statements in github.com/pingcap/tiflow/...
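(Note: the changefeed above is created from the TSO captured earlier, a MySQL sink URI, and the test's TOML config. A minimal sketch of the same call, with the flags taken from the trace; $start_ts is the value extracted in the TSO step:)

    # Create a changefeed that replicates from start_ts into the
    # downstream MySQL, applying the test's changefeed.toml overrides.
    cdc.test cli changefeed create \
        --start-ts=$start_ts \
        --sink-uri="mysql://root@127.0.0.1:3306/?max-txn-row=1" \
        --changefeed-id=test-1 \
        --config=conf/changefeed.toml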
TEST FAILED: OUTPUT DOES NOT CONTAIN 'id: 1'
____________________________________
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
check data failed 2-th time, retry later
check diff failed 26-th time, retry later
+ set +x
+ run_sql 'USE TEST;Create table t1(a int primary key, b int);insert into t1 values(1,2);insert into t1 values(2,3);'
+ check_table_exists test.t1 127.0.0.1 3306
table test.t1 not exists for 1-th check, retry later
check data successfully
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
check diff failed 27-th time, retry later
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Tue May  7 09:51:47 CST 2024] <<<<<< run test case server_config_compatibility success! >>>>>>
table test.t1 exists
+ sleep 5
check diff failed 28-th time, retry later
check diff failed 29-th time, retry later
check diff failed 30-th time, retry later
+ kill_tikv
++ ps aux
++ grep tikv-server
++ grep /tmp/tidb_cdc_test/synced_status
+ info='jenkins    17642 16.8  0.4 3763688 1616788 ?     Sl   09:51   0:04 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20160 --status-addr 127.0.0.1:20181 --log-file /tmp/tidb_cdc_test/synced_status/tikv1.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv1
jenkins    17643 16.9  0.4 3765224 1638448 ?     Sl   09:51   0:04 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20161 --status-addr 127.0.0.1:20182 --log-file /tmp/tidb_cdc_test/synced_status/tikv2.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv2
jenkins    17644 24.8  0.4 3819496 1686648 ?     Sl   09:51   0:06 tikv-server --pd 127.0.0.1:2379 -A 127.0.0.1:20162 --status-addr 127.0.0.1:20183 --log-file /tmp/tidb_cdc_test/synced_status/tikv3.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv3
jenkins    17646 23.5  0.4 3807716 1675168 ?     Sl   09:51   0:06 tikv-server --pd 127.0.0.1:2479 -A 127.0.0.1:21160 --status-addr 127.0.0.1:21180 --log-file /tmp/tidb_cdc_test/synced_status/tikv_down.log --log-level debug -C /tmp/tidb_cdc_test/synced_status/tikv-config.toml -s /tmp/tidb_cdc_test/synced_status/tikv_down'
++ ps aux
++ grep tikv-server
++ grep /tmp/tidb_cdc_test/synced_status
++ awk '{print $2}'
++ xargs kill -9
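(Note: the kill_tikv step above finds every tikv-server whose command line references this test's working directory and kills it, so the synced-status API can be exercised with the upstream down. A sketch of that pipeline, directly mirroring the trace:)

    # Kill only the tikv-server processes belonging to this test case,
    # identified by the test workdir on their command line. Transient
    # grep processes can match their own pattern too; the harness
    # tolerates the resulting no-op kills.
    ps aux | grep tikv-server | grep /tmp/tidb_cdc_test/synced_status \
        | awk '{print $2}' | xargs kill -9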
++ curl -X GET http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   243  100   243    0     0   2497      0 --:--:-- --:--:-- --:--:--  2505
+ synced_status='{"synced":false,"sink_checkpoint_ts":"2024-05-07 09:51:51.513","puller_resolved_ts":"1970-01-01 08:00:00.000","last_synced_ts":"2024-05-07 09:51:45.013","now_ts":"2024-05-07 09:51:52.000","info":"The data syncing is not finished, please wait"}'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-07' '09:51:51.513","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2024-05-07' '09:51:45.013","now_ts":"2024-05-07' '09:51:52.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq .synced
+ status=false
+ '[' false '!=' false ']'
++ echo '{"synced":false,"sink_checkpoint_ts":"2024-05-07' '09:51:51.513","puller_resolved_ts":"1970-01-01' '08:00:00.000","last_synced_ts":"2024-05-07' '09:51:45.013","now_ts":"2024-05-07' '09:51:52.000","info":"The' data syncing is not finished, please 'wait"}'
++ jq -r .info
+ info='The data syncing is not finished, please wait'
+ target_message='The data syncing is not finished, please wait'
+ '[' 'The data syncing is not finished, please wait' '!=' 'The data syncing is not finished, please wait' ']'
+ sleep 130
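(Note: the block above queries the v2 synced-status API and asserts on two fields: .synced must be false right after the TiKV kill, and .info must carry the "not finished" message. A sketch of that assertion, assuming jq is available as the trace shows:)

    # Fetch the synced status for changefeed test-1 and verify that
    # replication is still reported as in progress.
    synced_status=$(curl -s -X GET \
        http://127.0.0.1:8300/api/v2/changefeeds/test-1/synced)
    status=$(echo "$synced_status" | jq .synced)
    info=$(echo "$synced_status" | jq -r .info)
    if [ "$status" != "false" ]; then
        echo "unexpected synced status: $status" && exit 1
    fi
    if [ "$info" != "The data syncing is not finished, please wait" ]; then
        echo "unexpected info message: $info" && exit 1
    fi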
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-release-7.5-pull_cdc_integration_mysql_test-363/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
check diff failed 31-th time, retry later
check diff failed 32-th time, retry later
check diff failed 33-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/kafka_big_messages/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:51:57 CST 2024] <<<<<< run test case kafka_big_messages success! >>>>>>
check diff failed 34-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/kafka_compression/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:52:00 CST 2024] <<<<<< run test case kafka_compression success! >>>>>>
check diff failed 35-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/kafka_messages/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:52:03 CST 2024] <<<<<< run test case kafka_messages success! >>>>>>
check diff failed 36-th time, retry later
check diff failed 37-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/kafka_sink_error_resume/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:52:07 CST 2024] <<<<<< run test case kafka_sink_error_resume success! >>>>>>
check diff failed 38-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/mq_sink_lost_callback/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:52:10 CST 2024] <<<<<< run test case mq_sink_lost_callback success! >>>>>>
check diff failed 39-th time, retry later
check diff failed 40-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/mq_sink_dispatcher/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:52:13 CST 2024] <<<<<< run test case mq_sink_dispatcher success! >>>>>>
check diff failed 41-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/kafka_column_selector/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:52:16 CST 2024] <<<<<< run test case kafka_column_selector success! >>>>>>
check diff failed 42-th time, retry later
check diff failed 43-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/kafka_column_selector_avro/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:52:19 CST 2024] <<<<<< run test case kafka_column_selector_avro success! >>>>>>
check diff failed 44-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/lossy_ddl/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:52:22 CST 2024] <<<<<< run test case lossy_ddl success! >>>>>>
check diff failed 45-th time, retry later
check diff failed 46-th time, retry later
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/storage_csv_update/run.sh using Sink-Type: mysql... <<=================
[Tue May  7 09:52:25 CST 2024] <<<<<< run test case storage_csv_update success! >>>>>>
check diff failed 47-th time, retry later
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-release-7.5-pull_cdc_integration_mysql_test-363/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
check diff failed 48-th time, retry later
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
check diff failed 49-th time, retry later
check diff failed 50-th time, retry later
check diff failed 51-th time, retry later
check diff failed 52-th time, retry later
check diff failed 53-th time, retry later
check diff failed 54-th time, retry later
check diff failed 55-th time, retry later
check diff failed 56-th time, retry later
check diff failed 57-th time, retry later
check diff failed 58-th time, retry later
check diff failed 59-th time, retry later
check diff failed 60-th time, retry later
check diff failed at last
There is something error when initialize diff, please check log info in /tmp/tidb_cdc_test/sequence/sync_diff/output/sync_diff.log

[2024/05/07 09:52:54.135 +08:00] [INFO] [printer.go:46] ["Welcome to sync_diff_inspector"] ["Release Version"=v7.4.0] ["Git Commit Hash"=d671b0840063bc2532941f02e02e12627402844c] ["Git Branch"=heads/refs/tags/v7.4.0] ["UTC Build Time"="2023-09-22 03:51:56"] ["Go Version"=go1.21.1]
[2024/05/07 09:52:54.136 +08:00] [INFO] [main.go:101] [config="{\"check-thread-count\":4,\"split-thread-count\":5,\"export-fix-sql\":true,\"check-struct-only\":false,\"dm-addr\":\"\",\"dm-task\":\"\",\"data-sources\":{\"mysql1\":{\"host\":\"127.0.0.1\",\"port\":4000,\"user\":\"root\",\"password\":\"******\",\"sql-mode\":\"\",\"snapshot\":\"\",\"security\":null,\"route-rules\":null,\"Router\":{\"Selector\":{}},\"Conn\":null},\"tidb0\":{\"host\":\"127.0.0.1\",\"port\":3306,\"user\":\"root\",\"password\":\"******\",\"sql-mode\":\"\",\"snapshot\":\"\",\"security\":null,\"route-rules\":null,\"Router\":{\"Selector\":{}},\"Conn\":null}},\"routes\":null,\"table-configs\":null,\"task\":{\"source-instances\":[\"mysql1\"],\"source-routes\":null,\"target-instance\":\"tidb0\",\"target-check-tables\":[\"sequence_test.t1\"],\"target-configs\":null,\"output-dir\":\"/tmp/tidb_cdc_test/sequence/sync_diff/output\",\"SourceInstances\":[{\"host\":\"127.0.0.1\",\"port\":4000,\"user\":\"root\",\"password\":\"******\",\"sql-mode\":\"\",\"snapshot\":\"\",\"security\":null,\"route-rules\":null,\"Router\":{\"Selector\":{}},\"Conn\":null}],\"TargetInstance\":{\"host\":\"127.0.0.1\",\"port\":3306,\"user\":\"root\",\"password\":\"******\",\"sql-mode\":\"\",\"snapshot\":\"\",\"security\":null,\"route-rules\":null,\"Router\":{\"Selector\":{}},\"Conn\":null},\"TargetTableConfigs\":null,\"TargetCheckTables\":[{}],\"FixDir\":\"/tmp/tidb_cdc_test/sequence/sync_diff/output/fix-on-tidb0\",\"CheckpointDir\":\"/tmp/tidb_cdc_test/sequence/sync_diff/output/checkpoint\",\"HashFile\":\"\"},\"ConfigFile\":\"/home/jenkins/agent/workspace/pingcap/tiflow/release-7.5/pull_cdc_integration_mysql_test/tiflow/tests/integration_tests/sequence/conf/diff_config.toml\",\"PrintVersion\":false}"]
[2024/05/07 09:52:54.136 +08:00] [DEBUG] [diff.go:842] ["set tidb cfg"]
[2024/05/07 09:52:54.139 +08:00] [DEBUG] [common.go:386] ["query tables"] [query="SHOW FULL TABLES IN `sequence_test` WHERE Table_Type = 'BASE TABLE';"]
[2024/05/07 09:52:54.139 +08:00] [DEBUG] [common.go:386] ["query tables"] [query="SHOW FULL TABLES IN `test` WHERE Table_Type = 'BASE TABLE';"]
[2024/05/07 09:52:54.140 +08:00] [DEBUG] [source.go:326] ["match target table"] [table=`sequence_test`.`t1`]
[2024/05/07 09:52:54.141 +08:00] [FATAL] [main.go:120] ["failed to initialize diff process"] [error="get table sequence_test.t1's information error line 3 column 31 near \"nextval(`sequence_test`.`seq0`)),\n  PRIMARY KEY (`id`) /*T![clustered_index] NONCLUSTERED */\n) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin\" \ngithub.com/pingcap/errors.AddStack\n\t/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20221009092201-b66cddb77c32/errors.go:174\ngithub.com/pingcap/errors.Trace\n\t/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20221009092201-b66cddb77c32/juju_adaptor.go:15\ngithub.com/pingcap/tidb/parser.(*Parser).ParseSQL\n\t/go/pkg/mod/github.com/pingcap/tidb/parser@v0.0.0-20230823131104-05aa17143df8/yy_parser.go:170\ngithub.com/pingcap/tidb/parser.(*Parser).ParseOneStmt\n\t/go/pkg/mod/github.com/pingcap/tidb/parser@v0.0.0-20230823131104-05aa17143df8/yy_parser.go:191\ngithub.com/pingcap/tidb-tools/pkg/dbutil.getTableInfoBySQL\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/pkg/dbutil/table.go:149\ngithub.com/pingcap/tidb-tools/pkg/dbutil.GetTableInfoBySQLWithSessionContext\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/pkg/dbutil/table.go:140\ngithub.com/pingcap/tidb-tools/pkg/dbutil.GetTableInfoWithVersion\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/pkg/dbutil/table.go:121\ngithub.com/pingcap/tidb-tools/sync_diff_inspector/source.initTables\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/source/source.go:328\ngithub.com/pingcap/tidb-tools/sync_diff_inspector/source.NewSources\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/source/source.go:121\nmain.(*Diff).init\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/diff.go:137\nmain.NewDiff\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/diff.go:95\nmain.checkSyncState\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/main.go:117\nmain.main\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/main.go:104\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:267\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1650"] [errorVerbose="get table sequence_test.t1's information error line 3 column 31 near \"nextval(`sequence_test`.`seq0`)),\n  PRIMARY KEY (`id`) /*T![clustered_index] NONCLUSTERED */\n) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin\" 
\ngithub.com/pingcap/errors.AddStack\n\t/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20221009092201-b66cddb77c32/errors.go:174\ngithub.com/pingcap/errors.Trace\n\t/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20221009092201-b66cddb77c32/juju_adaptor.go:15\ngithub.com/pingcap/tidb/parser.(*Parser).ParseSQL\n\t/go/pkg/mod/github.com/pingcap/tidb/parser@v0.0.0-20230823131104-05aa17143df8/yy_parser.go:170\ngithub.com/pingcap/tidb/parser.(*Parser).ParseOneStmt\n\t/go/pkg/mod/github.com/pingcap/tidb/parser@v0.0.0-20230823131104-05aa17143df8/yy_parser.go:191\ngithub.com/pingcap/tidb-tools/pkg/dbutil.getTableInfoBySQL\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/pkg/dbutil/table.go:149\ngithub.com/pingcap/tidb-tools/pkg/dbutil.GetTableInfoBySQLWithSessionContext\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/pkg/dbutil/table.go:140\ngithub.com/pingcap/tidb-tools/pkg/dbutil.GetTableInfoWithVersion\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/pkg/dbutil/table.go:121\ngithub.com/pingcap/tidb-tools/sync_diff_inspector/source.initTables\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/source/source.go:328\ngithub.com/pingcap/tidb-tools/sync_diff_inspector/source.NewSources\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/source/source.go:121\nmain.(*Diff).init\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/diff.go:137\nmain.NewDiff\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/diff.go:95\nmain.checkSyncState\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/main.go:117\nmain.main\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/main.go:104\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:267\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1650\ngithub.com/pingcap/tidb-tools/sync_diff_inspector/source.initTables\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/source/source.go:330\ngithub.com/pingcap/tidb-tools/sync_diff_inspector/source.NewSources\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/source/source.go:121\nmain.(*Diff).init\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/diff.go:137\nmain.NewDiff\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/diff.go:95\nmain.checkSyncState\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/main.go:117\nmain.main\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/main.go:104\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:267\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1650"] [stack="main.checkSyncState\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/main.go:120\nmain.main\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb-tools/sync_diff_inspector/main.go:104\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:267"]
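(Note: the fatal error above is the root cause of the 60 failed diff attempts: sync_diff_inspector reads the downstream table's CREATE TABLE text and feeds it to its bundled TiDB parser, which chokes on a column default of nextval() for a sequence-backed table. A hedged reconstruction of the kind of DDL that triggers it, inferred from the fragment quoted in the error; the real t1 definition may differ, and the connection parameters are assumptions:)

    # Reconstructed from the parser error: a nextval() column default
    # is what breaks sync_diff_inspector's table-info parsing.
    mysql -h 127.0.0.1 -P 4000 -u root -e "
        CREATE DATABASE IF NOT EXISTS sequence_test;
        CREATE SEQUENCE sequence_test.seq0;
        CREATE TABLE sequence_test.t1 (
            id INT DEFAULT nextval(sequence_test.seq0),
            PRIMARY KEY (id) NONCLUSTERED
        );"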

[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
Post stage
[Pipeline] sh
+ ls /tmp/tidb_cdc_test/
availability
cov.availability.26402642.out
cov.availability.27892791.out
cov.availability.29632965.out
cov.availability.30753077.out
cov.availability.32003202.out
cov.availability.35963598.out
cov.availability.38313833.out
cov.availability.39393941.out
cov.availability.40854087.out
cov.availability.45894591.out
cov.availability.47104712.out
cov.availability.53005302.out
cov.availability.54805482.out
cov.availability.55895591.out
cov.availability.58055807.out
cov.availability.59905992.out
cov.availability.cli.2581.out
cov.availability.cli.2686.out
cov.http_proxies.cli.12208.out
cov.http_proxies.cli.12296.out
cov.sequence.cli.14692.out
http_proxies
sequence
sql_res.availability.txt
sql_res.sequence.txt
++ find /tmp/tidb_cdc_test/ -type f -name '*.log'
+ tar -cvzf log-G18.tar.gz /tmp/tidb_cdc_test/http_proxies/test_proxy.log /tmp/tidb_cdc_test/http_proxies/pd1.log /tmp/tidb_cdc_test/http_proxies/tidb_other.log /tmp/tidb_cdc_test/http_proxies/tikv3.log /tmp/tidb_cdc_test/http_proxies/cdc.log /tmp/tidb_cdc_test/http_proxies/tidb_down.log /tmp/tidb_cdc_test/http_proxies/tikv1.log /tmp/tidb_cdc_test/http_proxies/tidb-slow.log /tmp/tidb_cdc_test/http_proxies/tikv2.log /tmp/tidb_cdc_test/http_proxies/down_pd.log /tmp/tidb_cdc_test/http_proxies/stdout.log /tmp/tidb_cdc_test/http_proxies/tidb.log /tmp/tidb_cdc_test/http_proxies/tikv_down.log /tmp/tidb_cdc_test/sequence/sync_diff/output/sync_diff.log /tmp/tidb_cdc_test/sequence/pd1.log /tmp/tidb_cdc_test/sequence/tikv_down/db/000005.log /tmp/tidb_cdc_test/sequence/tidb_other.log /tmp/tidb_cdc_test/sequence/pd1/region-meta/000001.log /tmp/tidb_cdc_test/sequence/pd1/hot-region/000001.log /tmp/tidb_cdc_test/sequence/tikv1/db/000005.log /tmp/tidb_cdc_test/sequence/tiflash/log/server.log /tmp/tidb_cdc_test/sequence/tiflash/log/error.log /tmp/tidb_cdc_test/sequence/tiflash/log/proxy.log /tmp/tidb_cdc_test/sequence/tiflash/db/proxy/db/000005.log /tmp/tidb_cdc_test/sequence/down_pd/region-meta/000001.log /tmp/tidb_cdc_test/sequence/down_pd/hot-region/000001.log /tmp/tidb_cdc_test/sequence/tikv3.log /tmp/tidb_cdc_test/sequence/cdc.log /tmp/tidb_cdc_test/sequence/sync_diff_inspector.log /tmp/tidb_cdc_test/sequence/tidb_down.log /tmp/tidb_cdc_test/sequence/tikv1.log /tmp/tidb_cdc_test/sequence/tikv3/db/000005.log /tmp/tidb_cdc_test/sequence/tikv2/db/000005.log /tmp/tidb_cdc_test/sequence/tidb-slow.log /tmp/tidb_cdc_test/sequence/tikv2.log /tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0005/000002.log /tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0002/000002.log /tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0007/000002.log /tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0006/000002.log /tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0004/000002.log /tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0000/000002.log /tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0001/000002.log /tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0003/000002.log /tmp/tidb_cdc_test/sequence/down_pd.log /tmp/tidb_cdc_test/sequence/stdout.log /tmp/tidb_cdc_test/sequence/tidb.log /tmp/tidb_cdc_test/sequence/tikv_down.log /tmp/tidb_cdc_test/availability/stdouttest_owner_retryable_error.server1.log /tmp/tidb_cdc_test/availability/stdouttest_gap_between_watch_capture.server1.log /tmp/tidb_cdc_test/availability/stdouttest_owner_cleanup_stale_tasks.server1.log /tmp/tidb_cdc_test/availability/cdctest_owner_cleanup_stale_tasks.server3.log /tmp/tidb_cdc_test/availability/pd1.log /tmp/tidb_cdc_test/availability/cdctest_hang_up_owner.server2.log /tmp/tidb_cdc_test/availability/cdctest_hang_up_capture.server2.log /tmp/tidb_cdc_test/availability/cdctest_kill_owner.server2.log /tmp/tidb_cdc_test/availability/tidb_other.log /tmp/tidb_cdc_test/availability/cdctest_expire_capture.server1.log /tmp/tidb_cdc_test/availability/cdctest_owner_cleanup_stale_tasks.server1.log /tmp/tidb_cdc_test/availability/cdctest_hang_up_capture.server1.log /tmp/tidb_cdc_test/availability/cdctest_gap_between_watch_capture.server2.log /tmp/tidb_cdc_test/availability/cdctest_kill_owner.server1.log /tmp/tidb_cdc_test/availability/cdctest_owner_retryable_error.server2.log /tmp/tidb_cdc_test/availability/cdctest_owner_retryable_error.server1.log /tmp/tidb_cdc_test/availability/stdouttest_kill_owner.server1.log /tmp/tidb_cdc_test/availability/tikv3.log 
/tmp/tidb_cdc_test/availability/stdouttest_kill_capture.server1.log /tmp/tidb_cdc_test/availability/stdouttest_gap_between_watch_capture.server2.log /tmp/tidb_cdc_test/availability/tidb_down.log /tmp/tidb_cdc_test/availability/stdouttest_stop_processor.log /tmp/tidb_cdc_test/availability/tikv1.log /tmp/tidb_cdc_test/availability/stdouttest_hang_up_capture.server2.log /tmp/tidb_cdc_test/availability/tidb-slow.log /tmp/tidb_cdc_test/availability/tikv2.log /tmp/tidb_cdc_test/availability/stdouttest_hang_up_capture.server1.log /tmp/tidb_cdc_test/availability/stdouttest_owner_retryable_error.server2.log /tmp/tidb_cdc_test/availability/stdouttest_expire_owner.server1.log /tmp/tidb_cdc_test/availability/down_pd.log /tmp/tidb_cdc_test/availability/stdouttest_kill_owner.server2.log /tmp/tidb_cdc_test/availability/stdouttest_expire_capture.server1.log /tmp/tidb_cdc_test/availability/cdctest_stop_processor.log /tmp/tidb_cdc_test/availability/cdctest_hang_up_owner.server1.log /tmp/tidb_cdc_test/availability/stdouttest_hang_up_owner.server2.log /tmp/tidb_cdc_test/availability/tidb.log /tmp/tidb_cdc_test/availability/cdctest_kill_capture.server1.log /tmp/tidb_cdc_test/availability/cdctest_kill_capture.server2.log /tmp/tidb_cdc_test/availability/stdouttest_owner_cleanup_stale_tasks.server3.log /tmp/tidb_cdc_test/availability/tikv_down.log /tmp/tidb_cdc_test/availability/stdouttest_kill_capture.server2.log /tmp/tidb_cdc_test/availability/stdouttest_hang_up_owner.server1.log /tmp/tidb_cdc_test/availability/cdctest_owner_cleanup_stale_tasks.server2.log /tmp/tidb_cdc_test/availability/cdctest_expire_owner.server1.log /tmp/tidb_cdc_test/availability/cdctest_gap_between_watch_capture.server1.log /tmp/tidb_cdc_test/availability/stdouttest_owner_cleanup_stale_tasks.server2.log
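(Note: the post stage above gathers every *.log file under the test root into a single artifact archive for Jenkins. A one-line sketch of the same collection, as the trace shows, with find feeding tar through command substitution:)

    # Bundle all test logs into one archive; tar strips the leading /
    # from member names, as logged below.
    tar -cvzf log-G18.tar.gz $(find /tmp/tidb_cdc_test/ -type f -name '*.log')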
tar: Removing leading `/' from member names
/tmp/tidb_cdc_test/http_proxies/test_proxy.log
/tmp/tidb_cdc_test/http_proxies/pd1.log
/tmp/tidb_cdc_test/http_proxies/tidb_other.log
/tmp/tidb_cdc_test/http_proxies/tikv3.log
/tmp/tidb_cdc_test/http_proxies/cdc.log
/tmp/tidb_cdc_test/http_proxies/tidb_down.log
/tmp/tidb_cdc_test/http_proxies/tikv1.log
/tmp/tidb_cdc_test/http_proxies/tidb-slow.log
/tmp/tidb_cdc_test/http_proxies/tikv2.log
/tmp/tidb_cdc_test/http_proxies/down_pd.log
/tmp/tidb_cdc_test/http_proxies/stdout.log
/tmp/tidb_cdc_test/http_proxies/tidb.log
/tmp/tidb_cdc_test/http_proxies/tikv_down.log
/tmp/tidb_cdc_test/sequence/sync_diff/output/sync_diff.log
/tmp/tidb_cdc_test/sequence/pd1.log
/tmp/tidb_cdc_test/sequence/tikv_down/db/000005.log
/tmp/tidb_cdc_test/sequence/tidb_other.log
/tmp/tidb_cdc_test/sequence/pd1/region-meta/000001.log
/tmp/tidb_cdc_test/sequence/pd1/hot-region/000001.log
/tmp/tidb_cdc_test/sequence/tikv1/db/000005.log
/tmp/tidb_cdc_test/sequence/tiflash/log/server.log
/tmp/tidb_cdc_test/sequence/tiflash/log/error.log
/tmp/tidb_cdc_test/sequence/tiflash/log/proxy.log
/tmp/tidb_cdc_test/sequence/tiflash/db/proxy/db/000005.log
/tmp/tidb_cdc_test/sequence/down_pd/region-meta/000001.log
/tmp/tidb_cdc_test/sequence/down_pd/hot-region/000001.log
/tmp/tidb_cdc_test/sequence/tikv3.log
/tmp/tidb_cdc_test/sequence/cdc.log
/tmp/tidb_cdc_test/sequence/sync_diff_inspector.log
/tmp/tidb_cdc_test/sequence/tidb_down.log
/tmp/tidb_cdc_test/sequence/tikv1.log
/tmp/tidb_cdc_test/sequence/tikv3/db/000005.log
/tmp/tidb_cdc_test/sequence/tikv2/db/000005.log
/tmp/tidb_cdc_test/sequence/tidb-slow.log
/tmp/tidb_cdc_test/sequence/tikv2.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0005/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0002/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0007/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0006/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0004/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0000/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0001/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0003/000002.log
/tmp/tidb_cdc_test/sequence/down_pd.log
/tmp/tidb_cdc_test/sequence/stdout.log
/tmp/tidb_cdc_test/sequence/tidb.log
/tmp/tidb_cdc_test/sequence/tikv_down.log
/tmp/tidb_cdc_test/availability/stdouttest_owner_retryable_error.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_gap_between_watch_capture.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_owner_cleanup_stale_tasks.server1.log
/tmp/tidb_cdc_test/availability/cdctest_owner_cleanup_stale_tasks.server3.log
/tmp/tidb_cdc_test/availability/pd1.log
/tmp/tidb_cdc_test/availability/cdctest_hang_up_owner.server2.log
/tmp/tidb_cdc_test/availability/cdctest_hang_up_capture.server2.log
/tmp/tidb_cdc_test/availability/cdctest_kill_owner.server2.log
/tmp/tidb_cdc_test/availability/tidb_other.log
/tmp/tidb_cdc_test/availability/cdctest_expire_capture.server1.log
/tmp/tidb_cdc_test/availability/cdctest_owner_cleanup_stale_tasks.server1.log
/tmp/tidb_cdc_test/availability/cdctest_hang_up_capture.server1.log
/tmp/tidb_cdc_test/availability/cdctest_gap_between_watch_capture.server2.log
/tmp/tidb_cdc_test/availability/cdctest_kill_owner.server1.log
/tmp/tidb_cdc_test/availability/cdctest_owner_retryable_error.server2.log
/tmp/tidb_cdc_test/availability/cdctest_owner_retryable_error.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_kill_owner.server1.log
/tmp/tidb_cdc_test/availability/tikv3.log
/tmp/tidb_cdc_test/availability/stdouttest_kill_capture.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_gap_between_watch_capture.server2.log
/tmp/tidb_cdc_test/availability/tidb_down.log
/tmp/tidb_cdc_test/availability/stdouttest_stop_processor.log
/tmp/tidb_cdc_test/availability/tikv1.log
/tmp/tidb_cdc_test/availability/stdouttest_hang_up_capture.server2.log
/tmp/tidb_cdc_test/availability/tidb-slow.log
/tmp/tidb_cdc_test/availability/tikv2.log
/tmp/tidb_cdc_test/availability/stdouttest_hang_up_capture.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_owner_retryable_error.server2.log
/tmp/tidb_cdc_test/availability/stdouttest_expire_owner.server1.log
/tmp/tidb_cdc_test/availability/down_pd.log
/tmp/tidb_cdc_test/availability/stdouttest_kill_owner.server2.log
/tmp/tidb_cdc_test/availability/stdouttest_expire_capture.server1.log
/tmp/tidb_cdc_test/availability/cdctest_stop_processor.log
/tmp/tidb_cdc_test/availability/cdctest_hang_up_owner.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_hang_up_owner.server2.log
/tmp/tidb_cdc_test/availability/tidb.log
/tmp/tidb_cdc_test/availability/cdctest_kill_capture.server1.log
/tmp/tidb_cdc_test/availability/cdctest_kill_capture.server2.log
/tmp/tidb_cdc_test/availability/stdouttest_owner_cleanup_stale_tasks.server3.log
/tmp/tidb_cdc_test/availability/tikv_down.log
/tmp/tidb_cdc_test/availability/stdouttest_kill_capture.server2.log
/tmp/tidb_cdc_test/availability/stdouttest_hang_up_owner.server1.log
/tmp/tidb_cdc_test/availability/cdctest_owner_cleanup_stale_tasks.server2.log
/tmp/tidb_cdc_test/availability/cdctest_expire_owner.server1.log
/tmp/tidb_cdc_test/availability/cdctest_gap_between_watch_capture.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_owner_cleanup_stale_tasks.server2.log
+ ls -alh log-G18.tar.gz
-rw-r--r--. 1 jenkins jenkins 9.6M May  7 09:53 log-G18.tar.gz
[Pipeline] archiveArtifacts
Archiving artifacts
Recording fingerprints
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G18'
Sending interrupt signal to process
Killing processes
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
kill finished with exit code 0
++ stop_tidb_cluster
script returned exit code 143
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G09'
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)

script returned exit code 143
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G02'
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE