Console Output

Skipping 2,194 KB of earlier console output...
tar: Removing leading `/' from member names
/tmp/tidb_cdc_test/http_proxies/tikv2.log
/tmp/tidb_cdc_test/http_proxies/tidb-slow.log
/tmp/tidb_cdc_test/http_proxies/stdout.log
/tmp/tidb_cdc_test/http_proxies/tikv_down.log
wait process cdc.test exit for 2-th time...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   3478895c2a700e4824bb41940260b6b28013275e
Git Commit Branch: release-7.5
UTC Build Time:    2024-04-28 08:20:54
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   3478895c2a700e4824bb41940260b6b28013275e
Git Commit Branch: release-7.5
UTC Build Time:    2024-04-28 08:20:54
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
/tmp/tidb_cdc_test/http_proxies/tidb_down.log
/tmp/tidb_cdc_test/http_proxies/down_pd.log
/tmp/tidb_cdc_test/http_proxies/pd1.log
/tmp/tidb_cdc_test/http_proxies/tikv3.log
/tmp/tidb_cdc_test/http_proxies/tidb.log
/tmp/tidb_cdc_test/http_proxies/test_proxy.log
/tmp/tidb_cdc_test/http_proxies/tikv1.log
/tmp/tidb_cdc_test/http_proxies/tidb_other.log
/tmp/tidb_cdc_test/http_proxies/cdc.log
/tmp/tidb_cdc_test/sequence/tikv2.log
/tmp/tidb_cdc_test/sequence/down_pd/region-meta/000001.log
/tmp/tidb_cdc_test/sequence/down_pd/hot-region/000001.log
/tmp/tidb_cdc_test/sequence/tidb-slow.log
/tmp/tidb_cdc_test/sequence/stdout.log
/tmp/tidb_cdc_test/sequence/tikv3/db/000005.log
table owner_remove_table_error.finished_mark exists
check diff successfully
/tmp/tidb_cdc_test/sequence/tikv_down.log
/tmp/tidb_cdc_test/sequence/tidb_down.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0001/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0007/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0006/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0004/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0005/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0000/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0002/000002.log
/tmp/tidb_cdc_test/sequence/cdc_data/tmp/sorter/0003/000002.log
/tmp/tidb_cdc_test/sequence/down_pd.log
/tmp/tidb_cdc_test/sequence/sync_diff/output/sync_diff.log
/tmp/tidb_cdc_test/sequence/pd1.log
/tmp/tidb_cdc_test/sequence/tikv2/db/000005.log
/tmp/tidb_cdc_test/sequence/tikv3.log
table big_txn.usertable1 exists
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Tue Apr 30 11:01:52 CST 2024] <<<<<< run test case autorandom success! >>>>>>
/tmp/tidb_cdc_test/sequence/sync_diff_inspector.log
/tmp/tidb_cdc_test/sequence/tidb.log
/tmp/tidb_cdc_test/sequence/tikv_down/db/000005.log
wait process cdc.test exit for 1-th time...
table testSync.usertable exists
table testSync.simple1 exists
table testSync.simple2 exists
/tmp/tidb_cdc_test/sequence/tikv1/db/000005.log
/tmp/tidb_cdc_test/sequence/tikv1.log
/tmp/tidb_cdc_test/sequence/tidb_other.log
/tmp/tidb_cdc_test/sequence/cdc.log
/tmp/tidb_cdc_test/sequence/tiflash/log/error.log
/tmp/tidb_cdc_test/sequence/tiflash/log/server.log
/tmp/tidb_cdc_test/sequence/tiflash/log/proxy.log
/tmp/tidb_cdc_test/sequence/tiflash/db/proxy/db/000005.log
/tmp/tidb_cdc_test/sequence/pd1/region-meta/000001.log
/tmp/tidb_cdc_test/sequence/pd1/hot-region/000001.log
/tmp/tidb_cdc_test/availability/cdctest_hang_up_capture.server1.log
/tmp/tidb_cdc_test/availability/tikv2.log
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63cb3ce2cb80014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-mysql-test-350-djbfm, pid:6794, start at 2024-04-30 11:01:49.535052314 +0800 CST m=+5.396382285	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240430-11:03:49.542 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240430-11:01:49.536 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240430-10:51:49.536 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	179	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63cb3ce2f040009	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:ap-tiflow-release-7-5-pull-cdc-integration-mysql-test-350-djbfm, pid:6876, start at 2024-04-30 11:01:49.644217194 +0800 CST m=+5.450623601	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240430-11:03:49.652 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240430-11:01:49.633 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240430-10:51:49.633 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
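
The bootstrap and GC variables printed above come from the mysql.tidb system table of the upstream and downstream clusters. As a hedged aside, a minimal sketch of listing them by hand, assuming the test's usual endpoints (the downstream port is an assumption):

# Hedged sketch: list the same bootstrap/GC variables from mysql.tidb.
mysql -h 127.0.0.1 -P 4000 -u root -e 'SELECT * FROM mysql.tidb;'   # upstream TiDB
mysql -h 127.0.0.1 -P 3306 -u root -e 'SELECT * FROM mysql.tidb;'   # downstream TiDB (port is an assumption)
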
Starting Upstream TiFlash...
TiFlash
Release Version: v7.5.1-12-g9002cc34d
Edition:         Community
Git Commit Hash: 9002cc34d3b593a718b6c5260ba18f30a45ab314
Git Branch:      HEAD
UTC Build Time:  2024-04-18 07:24:48
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO

Raft Proxy
Git Commit Hash:   521fd9dbc55e58646045d88f91c3c35db50b5981
Git Commit Branch: HEAD
UTC Build Time:    2024-04-18 07:28:40
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:    portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/multi_capture/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/multi_capture/tiflash/log/error.log
arg matches is ArgMatches { args: {"config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/multi_capture/tiflash-proxy.toml"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v7.5.1-12-g9002cc34d"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/multi_capture/tiflash/log/proxy.log"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/multi_capture/tiflash/db/proxy"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["9002cc34d3b593a718b6c5260ba18f30a45ab314"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
start tidb cluster in /tmp/tidb_cdc_test/processor_stop_delay
Starting Upstream PD...
Release Version: v7.5.1-5-g584533652
Edition: Community
Git Commit Hash: 58453365285465cd90bc4472cff2bad7ce4d764b
Git Branch: release-7.5
UTC Build Time:  2024-04-03 10:04:14
Starting Downstream PD...
Release Version: v7.5.1-5-g584533652
Edition: Community
Git Commit Hash: 58453365285465cd90bc4472cff2bad7ce4d764b
Git Branch: release-7.5
UTC Build Time:  2024-04-03 10:04:14
Verifying upstream PD is started...
/tmp/tidb_cdc_test/availability/tidb-slow.log
/tmp/tidb_cdc_test/availability/tikv_down.log
wait process cdc.test exit for 2-th time...
+ set +x
+ tso='449431761365762049
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449431761365762049 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
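
The trace above captures a start TSO from PD through the cdc CLI, then strips the trailing PASS/coverage lines emitted by the instrumented test binary by taking only the first whitespace-separated field. A minimal sketch of the same extraction, assuming a cdc binary on PATH and PD at 127.0.0.1:2379:

# Hedged sketch of the TSO capture seen in the trace above.
tso_output=$(cdc cli tso query --pd=http://127.0.0.1:2379)
# Unquoted echo deliberately collapses the multi-line output onto one line,
# so awk's first field is the TSO and the PASS/coverage text is dropped.
start_ts=$(echo $tso_output | awk -F ' ' '{print $1}')
echo "start_ts=$start_ts"
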
[Tue Apr 30 11:01:52 CST 2024] <<<<<< START cdc server in capture_session_done_during_task case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info'
+ [[ no != \n\o ]]
+ GO_FAILPOINTS='github.com/pingcap/tiflow/cdc/processor/processorManagerHandleNewChangefeedDelay=sleep(2000)'
+ (( i = 0 ))
+ (( i <= 50 ))
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.capture_session_done_during_task.2167521677.out server --log-file /tmp/tidb_cdc_test/capture_session_done_during_task/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/capture_session_done_during_task/cdc_data --cluster-id default --addr 127.0.0.1:8300 --pd http://127.0.0.1:2379
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
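
The trace above corresponds to a readiness loop: the cdc server is started in the background, then the script polls its /debug/info endpoint until the response contains "etcd info", sleeping 3 seconds between attempts and giving up after 50 tries. A minimal sketch of such a loop, with the endpoint, limits, and grep patterns copied from the trace:

# Hedged sketch of the readiness wait loop traced above.
for i in $(seq 0 50); do
    res=$(curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info) || true
    if ! echo "$res" | grep -q 'failed to get info:' \
        && echo "$res" | grep -q 'etcd info'; then
        break                                  # server is serving etcd metadata
    fi
    if [ "$i" -eq 50 ]; then
        echo "cdc server did not become ready" >&2
        exit 1
    fi
    sleep 3
done
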
table test.finish_mark not exists for 7-th check, retry later
/tmp/tidb_cdc_test/availability/stdouttest_gap_between_watch_capture.server2.log
/tmp/tidb_cdc_test/availability/tidb_down.log
/tmp/tidb_cdc_test/availability/cdctest_owner_retryable_error.server2.log
/tmp/tidb_cdc_test/availability/stdouttest_owner_cleanup_stale_tasks.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_hang_up_capture.server2.log
/tmp/tidb_cdc_test/availability/cdctest_gap_between_watch_capture.server2.log
/tmp/tidb_cdc_test/availability/cdctest_owner_retryable_error.server1.log
/tmp/tidb_cdc_test/availability/cdctest_kill_capture.server2.log
/tmp/tidb_cdc_test/availability/stdouttest_hang_up_owner.server2.log
/tmp/tidb_cdc_test/availability/cdctest_kill_owner.server2.log
/tmp/tidb_cdc_test/availability/cdctest_stop_processor.log
/tmp/tidb_cdc_test/availability/stdouttest_gap_between_watch_capture.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_kill_owner.server1.log
/tmp/tidb_cdc_test/availability/down_pd.log
/tmp/tidb_cdc_test/availability/pd1.log
/tmp/tidb_cdc_test/availability/tikv3.log
/tmp/tidb_cdc_test/availability/stdouttest_stop_processor.log
/tmp/tidb_cdc_test/availability/tidb.log
/tmp/tidb_cdc_test/availability/cdctest_owner_cleanup_stale_tasks.server2.log
/tmp/tidb_cdc_test/availability/stdouttest_expire_owner.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_owner_cleanup_stale_tasks.server3.log
/tmp/tidb_cdc_test/availability/stdouttest_kill_owner.server2.log
/tmp/tidb_cdc_test/availability/tikv1.log
wait process cdc.test exit for 3-th time...
/tmp/tidb_cdc_test/availability/cdctest_hang_up_owner.server2.log
/tmp/tidb_cdc_test/availability/tidb_other.log
/tmp/tidb_cdc_test/availability/stdouttest_hang_up_capture.server1.log
/tmp/tidb_cdc_test/availability/cdctest_owner_cleanup_stale_tasks.server3.log
/tmp/tidb_cdc_test/availability/stdouttest_owner_retryable_error.server1.log
/tmp/tidb_cdc_test/availability/cdctest_expire_owner.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_owner_cleanup_stale_tasks.server2.log
/tmp/tidb_cdc_test/availability/cdctest_hang_up_owner.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_kill_capture.server1.log
/tmp/tidb_cdc_test/availability/cdctest_expire_capture.server1.log
/tmp/tidb_cdc_test/availability/cdctest_kill_capture.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_hang_up_owner.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_owner_retryable_error.server2.log
/tmp/tidb_cdc_test/availability/stdouttest_expire_capture.server1.log
/tmp/tidb_cdc_test/availability/cdctest_owner_cleanup_stale_tasks.server1.log
/tmp/tidb_cdc_test/availability/stdouttest_kill_capture.server2.log
/tmp/tidb_cdc_test/availability/cdctest_gap_between_watch_capture.server1.log
/tmp/tidb_cdc_test/availability/cdctest_hang_up_capture.server2.log
/tmp/tidb_cdc_test/availability/cdctest_kill_owner.server1.log
+ ls -alh log-G18.tar.gz
-rw-r--r-- 1 jenkins jenkins 10M Apr 30 11:01 log-G18.tar.gz
[Pipeline] archiveArtifacts
Archiving artifacts
cdc.test: no process found
wait process cdc.test exit for 4-th time...
process cdc.test already exit
[Tue Apr 30 11:01:53 CST 2024] <<<<<< run test case owner_remove_table_error success! >>>>>>
Starting Upstream TiDB...
Release Version: v7.5.1-46-g3df1fe2cb9
Edition: Community
Git Commit Hash: 3df1fe2cb94fcc572aaaf15efed0a26269743a0d
Git Branch: release-7.5
UTC Build Time: 2024-04-29 09:35:42
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v7.5.1-46-g3df1fe2cb9
Edition: Community
Git Commit Hash: 3df1fe2cb94fcc572aaaf15efed0a26269743a0d
Git Branch: release-7.5
UTC Build Time: 2024-04-29 09:35:42
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
+ pd_host=127.0.0.1
+ pd_port=2379
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.multi_capture.cli.8188.out cli tso query --pd=http://127.0.0.1:2379
Recording fingerprints
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
table test.finish_mark not exists for 8-th check, retry later
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G18'
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
Sending interrupt signal to process
Killing processes
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
> GET /debug/info HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Tue, 30 Apr 2024 03:01:55 GMT
< Content-Length: 613
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/abd9768f-00c8-4879-afd6-c7bf550ef689
	{"id":"abd9768f-00c8-4879-afd6-c7bf550ef689","address":"127.0.0.1:8300","version":"v7.5.1-23-gbf8c40c1c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f2cf3587f09
	abd9768f-00c8-4879-afd6-c7bf550ef689

/tidb/cdc/default/default/upstream/7363489926507455560
	{"id":7363489926507455560,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/abd9768f-00c8-4879-afd6-c7bf550ef689
	{"id":"abd9768f-00c8-4879-afd6-c7bf550ef689","address":"127.0.0.1:8300","version":"v7.5.1-23-gbf8c40c1c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f2cf3587f09
	abd9768f-00c8-4879-afd6-c7bf550ef689

/tidb/cdc/default/default/upstream/7363489926507455560
	{"id":7363489926507455560,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/abd9768f-00c8-4879-afd6-c7bf550ef689
	{"id":"abd9768f-00c8-4879-afd6-c7bf550ef689","address":"127.0.0.1:8300","version":"v7.5.1-23-gbf8c40c1c"}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f2cf3587f09
	abd9768f-00c8-4879-afd6-c7bf550ef689

/tidb/cdc/default/default/upstream/7363489926507455560
	{"id":7363489926507455560,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
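
The keys echoed above are TiCDC metadata stored in the etcd embedded in PD. A hedged sketch of inspecting the same keys directly, assuming an etcdctl binary (v3 API) is available on the node:

# Hedged sketch: dump TiCDC metadata straight from the PD-embedded etcd.
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 get /tidb/cdc --prefix
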
+ set +x
+ tso='449431762314461185
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449431762314461185 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
***************** properties *****************
"updateproportion"="0"
"readallfields"="true"
"mysql.port"="4000"
"mysql.db"="multi_capture_1"
"threadcount"="2"
"workload"="core"
"mysql.user"="root"
"dotransactions"="false"
"scanproportion"="0"
"insertproportion"="0"
"requestdistribution"="uniform"
"mysql.host"="127.0.0.1"
"readproportion"="0"
"operationcount"="0"
"recordcount"="10"
**********************************************
Run finished, takes 12.90326ms
INSERT - Takes(s): 0.0, Count: 10, OPS: 1591.8, Avg(us): 2484, Min(us): 1225, Max(us): 6526, 95th(us): 7000, 99th(us): 7000
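
The properties block above is printed by a go-ycsb load against the upstream cluster. As a hedged sketch, an equivalent invocation might look like the following; the property names mirror the block above, while the workload file path is purely illustrative:

# Hedged sketch of an equivalent go-ycsb load for multi_capture_1.
go-ycsb load mysql -P workloads/core \
    -p mysql.host=127.0.0.1 -p mysql.port=4000 \
    -p mysql.user=root -p mysql.db=multi_capture_1 \
    -p recordcount=10 -p operationcount=0 -p threadcount=2 \
    -p workload=core -p requestdistribution=uniform \
    -p readallfields=true -p dotransactions=false \
    -p readproportion=0 -p updateproportion=0 \
    -p insertproportion=0 -p scanproportion=0
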
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
script returned exit code 143
***************** properties *****************
"threadcount"="2"
"insertproportion"="0"
"readproportion"="0"
"dotransactions"="false"
"updateproportion"="0"
"workload"="core"
"requestdistribution"="uniform"
"mysql.port"="4000"
"readallfields"="true"
"mysql.host"="127.0.0.1"
"mysql.db"="multi_capture_2"
"scanproportion"="0"
"recordcount"="10"
"operationcount"="0"
"mysql.user"="root"
**********************************************
Run finished, takes 9.735565ms
INSERT - Takes(s): 0.0, Count: 10, OPS: 1950.9, Avg(us): 1865, Min(us): 1035, Max(us): 4507, 95th(us): 5000, 99th(us): 5000
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   3478895c2a700e4824bb41940260b6b28013275e
Git Commit Branch: release-7.5
UTC Build Time:    2024-04-28 08:20:54
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   7.5.2
Edition:           Community
Git Commit Hash:   3478895c2a700e4824bb41940260b6b28013275e
Git Commit Branch: release-7.5
UTC Build Time:    2024-04-28 08:20:54
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Enable Features:   pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure
Profile:           dist_release
***************** properties *****************
"operationcount"="0"
"mysql.port"="4000"
"recordcount"="10"
"readproportion"="0"
"dotransactions"="false"
"updateproportion"="0"
"requestdistribution"="uniform"
"mysql.host"="127.0.0.1"
"mysql.db"="multi_capture_3"
"scanproportion"="0"
"insertproportion"="0"
"mysql.user"="root"
"workload"="core"
"threadcount"="2"
"readallfields"="true"
**********************************************
Run finished, takes 10.608239ms
INSERT - Takes(s): 0.0, Count: 10, OPS: 1761.0, Avg(us): 2035, Min(us): 1246, Max(us): 4876, 95th(us): 5000, 99th(us): 5000
***************** properties *****************
"workload"="core"
"readallfields"="true"
"dotransactions"="false"
"operationcount"="0"
"mysql.user"="root"
"updateproportion"="0"
"requestdistribution"="uniform"
"readproportion"="0"
"mysql.db"="multi_capture_4"
"mysql.host"="127.0.0.1"
"insertproportion"="0"
"mysql.port"="4000"
"threadcount"="2"
"scanproportion"="0"
"recordcount"="10"
**********************************************
Run finished, takes 10.251834ms
INSERT - Takes(s): 0.0, Count: 10, OPS: 1843.8, Avg(us): 1960, Min(us): 1142, Max(us): 4745, 95th(us): 5000, 99th(us): 5000
[Tue Apr 30 11:01:57 CST 2024] <<<<<< START cdc server in multi_capture case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8301/debug/info'
+ [[ no != \n\o ]]
+ GO_FAILPOINTS=
+ (( i = 0 ))
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.multi_capture.83148316.out server --log-file /tmp/tidb_cdc_test/multi_capture/cdc1.log --log-level debug --data-dir /tmp/tidb_cdc_test/multi_capture/cdc_data1 --cluster-id default --addr 127.0.0.1:8301
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8301/debug/info
* About to connect() to 127.0.0.1 port 8301 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8301; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
lease 22318f2cf3587f09 revoked
table test.finish_mark not exists for 9-th check, retry later
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
script returned exit code 143
script returned exit code 143
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
table capture_session_done_during_task.t exists
check diff failed 1-th time, retry later
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
script returned exit code 143
table test.finish_mark not exists for 10-th check, retry later
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
script returned exit code 143
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
{"level":"warn","ts":1714446119.6603773,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0024a9a40/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
script returned exit code 143
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
check diff failed 2-th time, retry later
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
script returned exit code 143
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
table test.finish_mark not exists for 11-th check, retry later
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
{"level":"warn","ts":1714446121.4239173,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001451500/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
{"level":"warn","ts":1714446121.4252596,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002412540/127.0.0.1:2479","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
script returned exit code 143
kill finished with exit code 0
script returned exit code 143
{"level":"warn","ts":1714446121.1552837,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0024d81c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Unavailable desc = closing transport due to: connection error: desc = \"error reading from server: EOF\", received prior goaway: code: NO_ERROR, debug data: \"graceful_stop\""}
script returned exit code 143
script returned exit code 143
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // cache
[Pipeline] // cache
[Pipeline] // cache
[Pipeline] // cache
[Pipeline] // cache
[Pipeline] // cache
[Pipeline] // cache
{"level":"warn","ts":1714446119.3898268,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0025301c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: read tcp 127.0.0.1:33692->127.0.0.1:2379: read: connection reset by peer"}
script returned exit code 143
[Pipeline] // cache
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // cache
[Pipeline] }
[Pipeline] // cache
[Pipeline] }
[Pipeline] // cache
[Pipeline] }
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // dir
{"level":"warn","ts":1714446118.0694842,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00222d500/127.0.0.1:2379","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
{"level":"warn","ts":1714446120.0691679,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00222d500/127.0.0.1:2379","attempt":1,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
script returned exit code 143
[Pipeline] // dir
[Pipeline] // dir
[Pipeline] // dir
script returned exit code 143
[Pipeline] // dir
[Pipeline] // dir
[Pipeline] // dir
[Pipeline] // dir
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
++ stop_tidb_cluster
{"level":"warn","ts":1714446121.9305751,"caller":"v3@v3.5.10/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0024aaa80/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\""}
script returned exit code 143
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // cache
[Pipeline] }
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // withCredentials
[Pipeline] // withCredentials
[Pipeline] // withCredentials
[Pipeline] // withCredentials
[Pipeline] // withCredentials
[Pipeline] // withCredentials
[Pipeline] // withCredentials
[Pipeline] // withCredentials
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // cache
[Pipeline] }
[Pipeline] // cache
[Pipeline] }
[Pipeline] // cache
[Pipeline] // timeout
[Pipeline] // timeout
[Pipeline] // timeout
[Pipeline] // timeout
[Pipeline] // timeout
[Pipeline] // timeout
[Pipeline] // timeout
[Pipeline] // timeout
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
script returned exit code 143
[Pipeline] // dir
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] // container
[Pipeline] // container
[Pipeline] // container
[Pipeline] // container
[Pipeline] // container
[Pipeline] // container
[Pipeline] // container
[Pipeline] // container
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // cache
[Pipeline] // timeout
[Pipeline] }
[Pipeline] }
[Pipeline] // timeout
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // dir
[Pipeline] // stage
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] // container
[Pipeline] }
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // container
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // timeout
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] // node
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // node
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G00'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G01'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G02'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G03'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G06'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G08'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G12'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G20'
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G21'
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G14'
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G11'
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] // container
[Pipeline] }
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G07'
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G05'
[Pipeline] // stage
[Pipeline] // node
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G10'
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G09'
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G04'
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE