Aborted

Console Output

Skipping 140 KB of earlier log output...
    image: "quay.io/debezium/connect:2.4"
    name: "connect"
    resources:
      requests:
        cpu: "2000m"
        memory: "4Gi"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-agent.apps.svc.cluster.local:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c"
    - name: "JENKINS_NAME"
      value: "pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins.apps.svc.cluster.local:8080/jenkins/"
    image: "jenkins/inbound-agent:3206.vb_15dcf73f6a_9-2"
    name: "jnlp"
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  restartPolicy: "Never"
  securityContext:
    fsGroup: 1000
  volumes:
  - emptyDir: {}
    name: "volume-0"
  - emptyDir:
      medium: ""
    name: "workspace-volume"

Running on pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c in /home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy
[Pipeline] {
[Pipeline] checkout
The recommended git tool is: git
Obtained pipelines/pingcap/ticdc/latest/pod-pull_cdc_kafka_integration_heavy.yaml from git https://github.com/PingCAP-QE/ci.git
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
No credentials specified
Warning: JENKINS-30600: special launcher org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1@5f9818db; decorates RemoteLauncher[hudson.remoting.Channel@499dbe87:JNLP4-connect connection from 10.233.67.18/10.233.67.18:44136] will be ignored (a typical symptom is the Git executable not being run inside a designated container)
Cloning the remote Git repository
Using shallow clone with depth 1
Cloning repository https://github.com/PingCAP-QE/ci.git
Obtained pipelines/pingcap/ticdc/latest/pod-pull_cdc_kafka_integration_heavy.yaml from git https://github.com/PingCAP-QE/ci.git
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Created Pod: kubernetes jenkins-tiflow/pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-w14dx--pflj6
 > git init /home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy # timeout=10
Fetching upstream changes from https://github.com/PingCAP-QE/ci.git
 > git --version # timeout=10
 > git --version # 'git version 2.39.2'
 > git fetch --tags --force --progress --depth=1 -- https://github.com/PingCAP-QE/ci.git +refs/heads/*:refs/remotes/origin/* # timeout=5
Created Pod: kubernetes jenkins-tiflow/pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-755gs--w461b
 > git config remote.origin.url https://github.com/PingCAP-QE/ci.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
Checking out Revision 22c9b1f535a4301a2efcd8c8a6e9c6032aff7eb3 (origin/main)
Still waiting to schedule task
‘pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-ms2p4--0hzh6’ is offline
Commit message: "fix(monitoring): update base_ref to release-9.0-beta.2 (#3612)"
[Pipeline] withEnv
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] timeout
Timeout set to expire in 45 min
[Pipeline] {
[Pipeline] dir
Running in /home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc
[Pipeline] {
[Pipeline] cache
 > git rev-parse origin/main^{commit} # timeout=10
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 22c9b1f535a4301a2efcd8c8a6e9c6032aff7eb3 # timeout=10
Still waiting to schedule task
‘pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th’ is offline
Still waiting to schedule task
‘pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-553hm--4bcs4’ is offline
Still waiting to schedule task
‘pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-bhkj0--8kvhb’ is offline
Still waiting to schedule task
All nodes of label ‘pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-99tv6’ are offline
Still waiting to schedule task
‘pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-jqdbj--b4vwr’ is offline
Still waiting to schedule task
All nodes of label ‘pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-hbh6k’ are offline
Still waiting to schedule task
All nodes of label ‘pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-d609r’ are offline
Still waiting to schedule task
All nodes of label ‘pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-j7ll7’ are offline
Still waiting to schedule task
‘pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-gltvh--qfr2f’ is offline
Still waiting to schedule task
‘pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-2g9jw--2w8w4’ is offline
Created Pod: kubernetes jenkins-tiflow/pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-99tv6--z2wjc
Created Pod: kubernetes jenkins-tiflow/pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-hbh6k--x5d8s
Created Pod: kubernetes jenkins-tiflow/pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-d609r--mblk1
Created Pod: kubernetes jenkins-tiflow/pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-j7ll7--smbnb
Still waiting to schedule task
All nodes of label ‘pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-rjcqt’ are offline
Still waiting to schedule task
‘pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-755gs--w461b’ is offline
Cache restored successfully (ws/jenkins-pingcap-ticdc-pull_cdc_kafka_integration_heavy-586/ticdc)
4110049280 bytes in 8.51 secs (482879747 bytes/sec)
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] timeout
Timeout set to expire in 6 min 0 sec
[Pipeline] {
[Pipeline] sh
Still waiting to schedule task
All nodes of label ‘pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-rqtlz’ are offline
+ echo Waiting for zookeeper to be ready...
Waiting for zookeeper to be ready...
+ nc -z localhost 2181
+ echo Waiting for kafka to be ready...
Waiting for kafka to be ready...
+ nc -z localhost 9092
+ echo Waiting for kafka-broker to be ready...
Waiting for kafka-broker to be ready...
+ echo dump
+ nc localhost 2181
+ grep brokers
+ awk {$1=$1;print}
+ grep -F -w /brokers/ids/1
/brokers/ids/1
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // container
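The `+`-prefixed trace above probes ZooKeeper's `dump` four-letter command and filters the result through `grep`/`awk` to confirm that broker 1 has registered the `/brokers/ids/1` znode. The filter chain can be shown in isolation by substituting canned `dump` output for the `nc localhost 2181` call (the sample text below is invented for illustration; only the pipeline itself comes from the trace):

```shell
# Stand-in for "echo dump | nc localhost 2181": a canned ZooKeeper
# session dump (invented sample, for illustration only).
sample_dump='SessionTracker dump:
	/brokers/ids/1
	/brokers/topics/big-message-test'

# Same filter chain as in the trace: keep broker lines, let awk rebuild
# each line ($1=$1) to strip the leading tab, then match the exact
# znode path as a whole word.
echo "$sample_dump" |
  grep brokers |
  awk '{$1=$1;print}' |
  grep -F -w /brokers/ids/1
```

When broker 1 is registered, the pipeline prints the znode path and exits 0, which is what the readiness step keys on.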
[Pipeline] sh
Still waiting to schedule task
‘pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-w14dx--pflj6’ is offline
+ ./tests/integration_tests/run_heavy_it_in_ci.sh kafka G01
Warning: Found 29 test cases that are not covered by any group:
consistent_partition_table
consistent_replicate_ddl
consistent_replicate_gbk
consistent_replicate_nfs
consistent_replicate_storage_file
consistent_replicate_storage_file_large_value
consistent_replicate_storage_s3
csv_storage_partition_table
csv_storage_update_pk_clustered
csv_storage_update_pk_nonclustered
ds_memory_control
event_filter
kafka_column_selector
kafka_column_selector_avro
kafka_simple_basic
kafka_simple_basic_avro
kafka_simple_claim_check
kafka_simple_claim_check_avro
kafka_simple_handle_key_only
kafka_simple_handle_key_only_avro
kill_owner_with_ddl
mq_sink_error_resume
multi_capture
overwrite_resume_with_syncpoint
owner_resign
sequence
storage_csv_update
synced_status
synced_status_with_redo

These test cases need to be added to appropriate groups in:
1. run_light_it_in_ci.sh - for light test cases
2. run_heavy_it_in_ci.sh - for heavy test cases

Choose the file based on the test case's resource requirements.

Sink Type: kafka
Group Name: G01
Group Number (parsed): 01
Run cases: canal_json_basic canal_json_claim_check canal_json_content_compatible
GIT_COMMIT=22c9b1f535a4301a2efcd8c8a6e9c6032aff7eb3
RUN_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/display/redirect
JENKINS_URL=https://do.pingcap.net/jenkins/
EXECUTOR_NUMBER=0
JOB_SPEC={"type":"presubmit","job":"pingcap/ticdc/pull_cdc_kafka_integration_heavy","buildid":"1938535815512592384","prowjobid":"3c9c78c1-b3f4-4132-bd31-3e5fdf6f6ccb","refs":{"org":"pingcap","repo":"ticdc","repo_link":"https://github.com/pingcap/ticdc","base_ref":"master","base_sha":"16f5ddeb62dee025dec646059b2ed6852032b13c","base_link":"https://github.com/pingcap/ticdc/commit/16f5ddeb62dee025dec646059b2ed6852032b13c","pulls":[{"number":1481,"author":"wk989898","sha":"5953e071ff4af33b590617461ef6432bb58792c9","title":"test: fix incorrect failpoint injection","head_ref":"test-failpoints","link":"https://github.com/pingcap/ticdc/pull/1481","commit_link":"https://github.com/pingcap/ticdc/pull/1481/commits/5953e071ff4af33b590617461ef6432bb58792c9","author_link":"https://github.com/wk989898"}]}}
TZ=Asia/Shanghai
BUILD_ID=1938535815512592384
POD_LABEL=pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-h8q79
HOSTNAME=pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c
OLDPWD=/home/jenkins
RUN_CHANGES_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/display/redirect?page=changes
TICDC_NEWARCH=true
JENKINS_NODE_COOKIE=a3f4553a-eaea-445b-bfb9-320435085dae
GOPROXY=http://goproxy.apps.svc,https://proxy.golang.org,direct
JAVA_HOME=/usr/lib/jvm/jre-openjdk
JOB_BASE_NAME=pull_cdc_kafka_integration_heavy
WORKSPACE_TMP=/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy@tmp
KUBERNETES_PORT_443_TCP_PROTO=tcp
TINI_VERSION=v0.19.0
CI=true
GIT_URL=https://github.com/PingCAP-QE/ci.git
CLASSPATH=
KUBERNETES_PORT_443_TCP_ADDR=10.233.0.1
FILE_SERVER_URL=http://fileserver.pingcap.net
POD_CONTAINER=golang
GOPATH=/go
BUILD_NUMBER=586
USE_BAZEL_VERSION=6.5.0
KUBERNETES_PORT=tcp://10.233.0.1:443
WORKSPACE=/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy
PWD=/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc
HUDSON_URL=https://do.pingcap.net/jenkins/
HOME=/home/jenkins
NODE_NAME=pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c
HUDSON_SERVER_COOKIE=83ef27fe9acccc92
JENKINS_HOME=/var/jenkins_home
JOB_NAME=pingcap/ticdc/pull_cdc_kafka_integration_heavy
RUN_TESTS_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/display/redirect?page=tests
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PORT=443
GIT_PREVIOUS_COMMIT=22c9b1f535a4301a2efcd8c8a6e9c6032aff7eb3
HUDSON_HOME=/var/jenkins_home
JOB_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/display/redirect
PROW_JOB_ID=3c9c78c1-b3f4-4132-bd31-3e5fdf6f6ccb
TEST_GROUP=G01
KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
RUN_ARTIFACTS_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/display/redirect?page=artifacts
BUILD_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/
TERM=xterm
STAGE_NAME=Test
JOB_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/
BUILD_DISPLAY_NAME=#586
SHLVL=5
GIT_BRANCH=origin/main
BUILD_TAG=jenkins-pingcap-ticdc-pull_cdc_kafka_integration_heavy-586
KUBERNETES_SERVICE_PORT=443
NODE_LABELS=pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-h8q79
PATH=/go/bin:/root/.cargo/bin:/usr/local/go/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle-7.4.2/bin:/opt/apache-maven-3.8.8/bin:/usr/local/pulsar/bin:/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/_utils:/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/../bin:/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/../../bin:/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/../../scripts/bin
GIT_PREVIOUS_SUCCESSFUL_COMMIT=80306c02d4f08ed55c2124bd228aa4d83baaed7b
KUBERNETES_SERVICE_HOST=10.233.0.1
JENKINS_SERVER_COOKIE=durable-d03363c5fa1b9a2e3f8e7a484ee240aba02820b698594a35e70e52d9d5461467
_=/usr/bin/env
find: '/tmp/tidb_cdc_test/*/*': No such file or directory
=================>> Running test /home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/canal_json_basic/run.sh using Sink-Type: kafka... <<=================
The 1 times to try to start tidb cluster...
start tidb cluster in /tmp/tidb_cdc_test/canal_json_basic
Starting Upstream PD...
Release Version: v9.0.0-beta.2.pre-1-ga764d4520
Edition: Community
Kernel Type: Classic
Git Commit Hash: a764d4520776166fb2320543da7e20e48e3ac8b8
Git Branch: master
UTC Build Time:  2025-06-27 08:51:17
Starting Downstream PD...
Release Version: v9.0.0-beta.2.pre-1-ga764d4520
Edition: Community
Kernel Type: Classic
Git Commit Hash: a764d4520776166fb2320543da7e20e48e3ac8b8
Git Branch: master
UTC Build Time:  2025-06-27 08:51:17
Verifying upstream PD is started...
Created Pod: kubernetes jenkins-tiflow/pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-rjcqt--q4cs3
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   9.0.0-beta.2
Edition:           Community
Git Commit Hash:   2029ff019ce689acc113f3bd886b08f861da0fbf
Git Commit Branch: release-9.0-beta.2
UTC Build Time:    2025-06-27 06:48:16
Rust Version:      rustc 1.87.0-nightly (96cfc7558 2025-02-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   9.0.0-beta.2
Edition:           Community
Git Commit Hash:   2029ff019ce689acc113f3bd886b08f861da0fbf
Git Commit Branch: release-9.0-beta.2
UTC Build Time:    2025-06-27 06:48:16
Rust Version:      rustc 1.87.0-nightly (96cfc7558 2025-02-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v9.0.0-beta.2.pre-1-g339f07ae8f
Edition: Community
Git Commit Hash: 339f07ae8f2ba406abc7f4757ffcce918cc86a56
Git Branch: release-9.0-beta.2
UTC Build Time: 2025-06-27 06:54:00
GoVersion: go1.23.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Kernel Type: Classic
Starting Downstream TiDB...
Release Version: v9.0.0-beta.2.pre-1-g339f07ae8f
Edition: Community
Git Commit Hash: 339f07ae8f2ba406abc7f4757ffcce918cc86a56
Git Branch: release-9.0-beta.2
UTC Build Time: 2025-06-27 06:54:00
GoVersion: go1.23.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Kernel Type: Classic
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	250	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	4	DDL Table Version. Do not delete.
cluster_id	7520568675698247393	TiDB Cluster ID.
tikv_gc_leader_uuid	65ec36d2ffc004a	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c, pid:1352, start at 2025-06-27 18:07:31.34399075 +0800 CST m=+2.437555938	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250627-18:09:31.360 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250627-18:07:31.327 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250627-17:57:31.327 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	250	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	4	DDL Table Version. Do not delete.
cluster_id	7520568675698247393	TiDB Cluster ID.
tikv_gc_leader_uuid	65ec36d2ffc004a	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c, pid:1352, start at 2025-06-27 18:07:31.34399075 +0800 CST m=+2.437555938	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250627-18:09:31.360 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250627-18:07:31.327 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250627-17:57:31.327 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	250	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	4	DDL Table Version. Do not delete.
cluster_id	7520568667161268917	TiDB Cluster ID.
tikv_gc_leader_uuid	65ec36d32680039	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c, pid:1427, start at 2025-06-27 18:07:31.495045013 +0800 CST m=+2.514055120	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250627-18:09:31.508 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250627-18:07:31.482 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250627-17:57:31.482 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v9.0.0-beta.2.pre-1-g9e37b7f5fa
Edition:         Community
Git Commit Hash: 9e37b7f5fa985c783d65bc443e0c5942ecb45959
Git Branch:      HEAD
UTC Build Time:  2025-06-27 05:03:32
Enable Features: jemalloc sm4(GmSSL) mem-profiling avx2 avx512 unwind thinlto hnsw.l2=skylake hnsw.cosine=skylake vec.l2=skylake vec.cos=skylake
Profile:         RELWITHDEBINFO
Compiler:        clang++ 17.0.6

Raft Proxy
Git Commit Hash:   3e325b2b9b30e396c846bfeb97a690d070b84087
Git Commit Branch: HEAD
UTC Build Time:    ""   
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   external-jemalloc portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine openssl-vendored
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/canal_json_basic/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/canal_json_basic/tiflash/log/error.log
arg matches is ArgMatches { args: {"advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "data-dir": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/canal_json_basic/tiflash/db/proxy"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [18], vals: ["127.0.0.1:2379"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "addr": MatchedArg { occurs: 1, indices: [22], vals: ["127.0.0.1:9000"] }, "log-file": MatchedArg { occurs: 1, indices: [20], vals: ["/tmp/tidb_cdc_test/canal_json_basic/tiflash/log/proxy.log"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [12], vals: ["9e37b7f5fa985c783d65bc443e0c5942ecb45959"] }, "config": MatchedArg { occurs: 1, indices: [10], vals: ["/tmp/tidb_cdc_test/canal_json_basic/tiflash-proxy.toml"] }, "engine-label": MatchedArg { occurs: 1, indices: [16], vals: ["tiflash"] }, "engine-version": MatchedArg { occurs: 1, indices: [14], vals: ["v9.0.0-beta.2.pre-1-g9e37b7f5fa"] }, "memory-limit-ratio": MatchedArg { occurs: 1, indices: [6], vals: ["0.800000"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
[Fri Jun 27 18:07:35 CST 2025] <<<<<< START cdc server in canal_json_basic case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.canal_json_basic.28382840.out server --log-file /tmp/tidb_cdc_test/canal_json_basic/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/canal_json_basic/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
*   Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8300 failed: Connection refused
* Failed to connect to 127.0.0.1 port 8300: Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
Created Pod: kubernetes jenkins-tiflow/pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-rqtlz--hchw5
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
* Server auth using Basic with user 'ticdc'
> GET /debug/info HTTP/1.1
> Host: 127.0.0.1:8300
> Authorization: Basic dGljZGM6dGljZGNfc2VjcmV0
> User-Agent: curl/7.61.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 27 Jun 2025 10:07:38 GMT
< Content-Length: 625
< Content-Type: text/plain; charset=utf-8
< 
{ [625 bytes data]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/e444c87b-97de-4bd7-ac53-b7d341fb0f66
	{"id":"e444c87b-97de-4bd7-ac53-b7d341fb0f66","address":"127.0.0.1:8300","version":"v9.0.0-beta.2.pre-3-g5953e071","git-hash":"5953e071ff4af33b590617461ef6432bb58792c9","deploy-path":"/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/bin/cdc.test","start-timestamp":1751018855,"is-new-arch":true}

/tidb/cdc/default/__cdc_meta__/log_coordinator/223197b0db2c88d8
	e444c87b-97de-4bd7-ac53-b7d341fb0f66

/tidb/cdc/default/__cdc_meta__/owner/223197b0db2c88d8
	e444c87b-97de-4bd7-ac53-b7d341fb0f66'
+ echo '

*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/e444c87b-97de-4bd7-ac53-b7d341fb0f66
	{"id":"e444c87b-97de-4bd7-ac53-b7d341fb0f66","address":"127.0.0.1:8300","version":"v9.0.0-beta.2.pre-3-g5953e071","git-hash":"5953e071ff4af33b590617461ef6432bb58792c9","deploy-path":"/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/bin/cdc.test","start-timestamp":1751018855,"is-new-arch":true}

/tidb/cdc/default/__cdc_meta__/log_coordinator/223197b0db2c88d8
	e444c87b-97de-4bd7-ac53-b7d341fb0f66

/tidb/cdc/default/__cdc_meta__/owner/223197b0db2c88d8
	e444c87b-97de-4bd7-ac53-b7d341fb0f66'
+ grep -q 'failed to get info:'
+ echo '

*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/e444c87b-97de-4bd7-ac53-b7d341fb0f66
	{"id":"e444c87b-97de-4bd7-ac53-b7d341fb0f66","address":"127.0.0.1:8300","version":"v9.0.0-beta.2.pre-3-g5953e071","git-hash":"5953e071ff4af33b590617461ef6432bb58792c9","deploy-path":"/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/bin/cdc.test","start-timestamp":1751018855,"is-new-arch":true}

/tidb/cdc/default/__cdc_meta__/log_coordinator/223197b0db2c88d8
	e444c87b-97de-4bd7-ac53-b7d341fb0f66

/tidb/cdc/default/__cdc_meta__/owner/223197b0db2c88d8
	e444c87b-97de-4bd7-ac53-b7d341fb0f66'
+ grep -q 'etcd info'
+ break
+ set +x
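The xtrace above shows the CDC server health check: up to 50 attempts, each curling the debug endpoint and grepping the response for either the failure marker (`failed to get info:`) or the success marker (`etcd info`), sleeping 3 seconds between tries. Its shape can be sketched as a generic retry helper; `retry_probe` and its injectable probe command are illustrative names, not part of the actual test harness, which inlines the curl/grep calls directly:

```shell
# Generic retry loop with the same shape as the health check in the
# trace: run a probe command until it succeeds or attempts run out.
# retry_probe and its parameters are illustrative, not the real script.
retry_probe() {
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0        # probe succeeded (e.g. response contained "etcd info")
    fi
    i=$((i + 1))
    sleep "$delay"    # the real loop sleeps 3 seconds between tries
  done
  return 1            # gave up after exhausting all attempts
}
```

In the real script the probe is the `curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info` call followed by the two `grep -q` checks.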
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.canal_json_basic.cli.2923.out cli changefeed create '--sink-uri=kafka://127.0.0.1:9092/ticdc-canal-json-basic-13182?protocol=canal-json&enable-tidb-extension=true'
Create changefeed successfully!
ID: 38406045483731826742772963984297113016
Info: {"upstream_id":7520568675698247393,"id":"38406045483731826742772963984297113016","namespace":"default","sink_uri":"kafka://127.0.0.1:9092/ticdc-canal-json-basic-13182?protocol=canal-json\u0026enable-tidb-extension=true","create_time":"2025-06-27T18:07:38.742187731+08:00","start_ts":459019087689285643,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"enable_table_monitor":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64","output_old_value":false,"output_handle_key":false,"output_field_header":false},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"send-all-bootstrap-at-start":false,"debezium_disable_schema":false,"debezium":{"output_old_value":true},"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"region-count-per-span":100,"write_key_threshold":0,"split_number_per_node":1,"scheduling-task-per-node":20},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v9.0.0-beta.2.pre-3-g5953e071","resolved_ts":459019087689285643,"checkpoint_ts":459019087689285643,"checkpoint_time":"2025-06-27 18:07:38.678","gid":{"low":3840604548373182674,"high":2772963984297113016}}
PASS
coverage: 2.5% of statements in github.com/pingcap/ticdc/...
+ set +x
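The changefeed's `--sink-uri` above packs the Kafka broker address, the topic, and the encoder options into a single URI. A sketch of the assembly, using the values from this run (the `13182` topic suffix is the per-run random suffix from this log, and the variable names are illustrative):

```shell
# Assemble a TiCDC Kafka sink URI like the one passed to
# "cli changefeed create" above. Values are the ones from this run;
# the variable names are illustrative.
broker="127.0.0.1:9092"
topic="ticdc-canal-json-basic-13182"
options="protocol=canal-json&enable-tidb-extension=true"
sink_uri="kafka://${broker}/${topic}?${options}"
echo "$sink_uri"
```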
[Fri Jun 27 18:07:45 CST 2025] <<<<<< START kafka consumer in canal_json_basic case >>>>>>
table test.finish_mark not exists for 1-th check, retry later
table test.finish_mark not exists for 2-th check, retry later
table test.finish_mark not exists for 3-th check, retry later
table test.finish_mark exists
check diff failed 1-th time, retry later
check diff successfully
table test.finish_mark exists
check diff failed 1-th time, retry later
check diff failed 2-th time, retry later
check diff failed 3-th time, retry later
check diff failed 4-th time, retry later
check diff failed 5-th time, retry later
check diff failed 6-th time, retry later
check diff successfully
cdc.test: no process found
wait process cdc.test exit for 1-th time...
process cdc.test already exit
log files: /tmp/tidb_cdc_test/canal_json_basic/stdout.log
no DATA RACE found
[Fri Jun 27 18:08:10 CST 2025] <<<<<< run test case canal_json_basic success! >>>>>>
Agent pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th is provisioned from template pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-p84lx-z7xtg
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://jenkins.apps.svc.cluster.local:8080/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/"
    runUrl: "job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/"
  labels:
    jenkins/jenkins-jenkins-agent: "true"
    jenkins/label-digest: "8c56fc77352e9dfb6e4c6a29ea1cdcb8afa6ee56"
    jenkins/label: "pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-p84lx"
  name: "pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th"
  namespace: "jenkins-tiflow"
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "kubernetes.io/arch"
            operator: "In"
            values:
            - "amd64"
  containers:
  - image: "wurstmeister/zookeeper"
    imagePullPolicy: "IfNotPresent"
    name: "zookeeper"
    resources:
      limits:
        cpu: "1000m"
        memory: "4Gi"
      requests:
        cpu: "1000m"
        memory: "4Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - args:
    - "cat"
    image: "hub.pingcap.net/jenkins/rocky8_golang-1.23:tini"
    imagePullPolicy: "Always"
    name: "golang"
    resources:
      limits:
        cpu: "12"
        memory: "16Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "KAFKA_CREATE_TOPICS"
      value: "big-message-test:1:1"
    - name: "KAFKA_BROKER_ID"
      value: "1"
    - name: "KAFKA_SSL_KEYSTORE_PASSWORD"
      value: "test1234"
    - name: "KAFKA_ZOOKEEPER_CONNECT"
      value: "localhost:2181"
    - name: "KAFKA_MESSAGE_MAX_BYTES"
      value: "11534336"
    - name: "KAFKA_REPLICA_FETCH_MAX_BYTES"
      value: "11534336"
    - name: "KAFKA_ADVERTISED_LISTENERS"
      value: "SSL://127.0.0.1:9093,PLAINTEXT://127.0.0.1:9092"
    - name: "ZK"
      value: "zk"
    - name: "KAFKA_SSL_KEYSTORE_LOCATION"
      value: "/tmp/kafka.server.keystore.jks"
    - name: "KAFKA_SSL_KEY_PASSWORD"
      value: "test1234"
    - name: "KAFKA_SSL_TRUSTSTORE_PASSWORD"
      value: "test1234"
    - name: "KAFKA_LISTENERS"
      value: "SSL://127.0.0.1:9093,PLAINTEXT://127.0.0.1:9092"
    - name: "KAFKA_SSL_TRUSTSTORE_LOCATION"
      value: "/tmp/kafka.server.truststore.jks"
    - name: "RACK_COMMAND"
      value: "curl -sfL https://github.com/pingcap/tiflow/raw/6e62afcfecc4e3965d8818784327d4bf2600d9fa/tests/_certificates/kafka.server.keystore.jks\
        \ -o /tmp/kafka.server.keystore.jks && curl -sfL https://github.com/pingcap/tiflow/raw/6e62afcfecc4e3965d8818784327d4bf2600d9fa/tests/_certificates/kafka.server.truststore.jks\
        \ -o /tmp/kafka.server.truststore.jks"
    image: "wurstmeister/kafka:2.12-2.4.1"
    imagePullPolicy: "IfNotPresent"
    name: "kafka"
    resources:
      limits:
        cpu: "2000m"
        memory: "6Gi"
      requests:
        cpu: "2000m"
        memory: "6Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "KAFKA_SERVER"
      value: "127.0.0.1:9092"
    - name: "ZOOKEEPER_SERVER"
      value: "127.0.0.1:2181"
    - name: "DOWNSTREAM_DB_HOST"
      value: "127.0.0.1"
    - name: "USE_FLAT_MESSAGE"
      value: "true"
    - name: "DOWNSTREAM_DB_PORT"
      value: "3306"
    - name: "DB_NAME"
      value: "test"
    image: "rustinliu/ticdc-canal-json-adapter:latest"
    imagePullPolicy: "IfNotPresent"
    name: "canal-adapter"
    resources:
      requests:
        cpu: "2000m"
        memory: "4Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - image: "hub.pingcap.net/jenkins/network-multitool"
    name: "net-tool"
    resources:
      limits:
        memory: "128Mi"
        cpu: "600m"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "MYSQL_ROOT_PASSWORD"
      value: ""
    - name: "MYSQL_USER"
      value: "mysqluser"
    - name: "MYSQL_PASSWORD"
      value: "mysqlpw"
    - name: "MYSQL_ALLOW_EMPTY_PASSWORD"
      value: "yes"
    - name: "MYSQL_TCP_PORT"
      value: "3310"
    image: "quay.io/debezium/example-mysql:2.4"
    imagePullPolicy: "IfNotPresent"
    name: "mysql"
    resources:
      requests:
        cpu: "2000m"
        memory: "4Gi"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "BOOTSTRAP_SERVERS"
      value: "127.0.0.1:9092"
    - name: "GROUP_ID"
      value: "1"
    - name: "CONFIG_STORAGE_TOPIC"
      value: "my_connect_configs"
    - name: "OFFSET_STORAGE_TOPIC"
      value: "my_connect_offsets"
    - name: "STATUS_STORAGE_TOPIC"
      value: "my_connect_statuses"
    - name: "LANG"
      value: "C.UTF-8"
    image: "quay.io/debezium/connect:2.4"
    name: "connect"
    resources:
      requests:
        cpu: "2000m"
        memory: "4Gi"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-agent.apps.svc.cluster.local:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th"
    - name: "JENKINS_NAME"
      value: "pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins.apps.svc.cluster.local:8080/jenkins/"
    image: "jenkins/inbound-agent:3206.vb_15dcf73f6a_9-2"
    name: "jnlp"
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  restartPolicy: "Never"
  securityContext:
    fsGroup: 1000
  volumes:
  - emptyDir: {}
    name: "volume-0"
  - emptyDir:
      medium: ""
    name: "workspace-volume"

Running on pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th in /home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy
[Pipeline] {
[Pipeline] checkout
The recommended git tool is: git
No credentials specified
Warning: JENKINS-30600: special launcher org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1@6723051b; decorates RemoteLauncher[hudson.remoting.Channel@15129771:JNLP4-connect connection from 10.233.90.130/10.233.90.130:39010] will be ignored (a typical symptom is the Git executable not being run inside a designated container)
Cloning the remote Git repository
Using shallow clone with depth 1
Cloning repository https://github.com/PingCAP-QE/ci.git
 > git init /home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy # timeout=10
Fetching upstream changes from https://github.com/PingCAP-QE/ci.git
 > git --version # timeout=10
 > git --version # 'git version 2.39.2'
 > git fetch --tags --force --progress --depth=1 -- https://github.com/PingCAP-QE/ci.git +refs/heads/*:refs/remotes/origin/* # timeout=5
Avoid second fetch
Checking out Revision 22c9b1f535a4301a2efcd8c8a6e9c6032aff7eb3 (origin/main)
 > git config remote.origin.url https://github.com/PingCAP-QE/ci.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse origin/main^{commit} # timeout=10
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 22c9b1f535a4301a2efcd8c8a6e9c6032aff7eb3 # timeout=10
Commit message: "fix(monitoring): update base_ref to release-9.0-beta.2 (#3612)"
[Pipeline] withEnv
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] timeout
Timeout set to expire in 45 min
[Pipeline] {
[Pipeline] dir
Running in /home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc
[Pipeline] {
[Pipeline] cache
=================>> Running test /home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/canal_json_claim_check/run.sh using Sink-Type: kafka... <<=================
Attempt 1 to start tidb cluster...
start tidb cluster in /tmp/tidb_cdc_test/canal_json_claim_check
Starting Upstream PD...
Release Version: v9.0.0-beta.2.pre-1-ga764d4520
Edition: Community
Kernel Type: Classic
Git Commit Hash: a764d4520776166fb2320543da7e20e48e3ac8b8
Git Branch: master
UTC Build Time:  2025-06-27 08:51:17
Starting Downstream PD...
Release Version: v9.0.0-beta.2.pre-1-ga764d4520
Edition: Community
Kernel Type: Classic
Git Commit Hash: a764d4520776166fb2320543da7e20e48e3ac8b8
Git Branch: master
UTC Build Time:  2025-06-27 08:51:17
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   9.0.0-beta.2
Edition:           Community
Git Commit Hash:   2029ff019ce689acc113f3bd886b08f861da0fbf
Git Commit Branch: release-9.0-beta.2
UTC Build Time:    2025-06-27 06:48:16
Rust Version:      rustc 1.87.0-nightly (96cfc7558 2025-02-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   9.0.0-beta.2
Edition:           Community
Git Commit Hash:   2029ff019ce689acc113f3bd886b08f861da0fbf
Git Commit Branch: release-9.0-beta.2
UTC Build Time:    2025-06-27 06:48:16
Rust Version:      rustc 1.87.0-nightly (96cfc7558 2025-02-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Cache restored successfully (ws/jenkins-pingcap-ticdc-pull_cdc_kafka_integration_heavy-586/ticdc)
4110049280 bytes in 8.14 secs (504651662 bytes/sec)
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] timeout
Timeout set to expire in 6 min 0 sec
[Pipeline] {
[Pipeline] sh
+ echo Waiting for zookeeper to be ready...
Waiting for zookeeper to be ready...
+ nc -z localhost 2181
+ echo Waiting for kafka to be ready...
Waiting for kafka to be ready...
+ nc -z localhost 9092
+ echo Waiting for kafka-broker to be ready...
Waiting for kafka-broker to be ready...
+ echo dump
+ nc localhost 2181
+ grep brokers
+ awk {$1=$1;print}
+ grep -F -w /brokers/ids/1
/brokers/ids/1
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // container
[Pipeline] sh
Starting Upstream TiDB...
Release Version: v9.0.0-beta.2.pre-1-g339f07ae8f
Edition: Community
Git Commit Hash: 339f07ae8f2ba406abc7f4757ffcce918cc86a56
Git Branch: release-9.0-beta.2
UTC Build Time: 2025-06-27 06:54:00
GoVersion: go1.23.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Kernel Type: Classic
Starting Downstream TiDB...
Release Version: v9.0.0-beta.2.pre-1-g339f07ae8f
Edition: Community
Git Commit Hash: 339f07ae8f2ba406abc7f4757ffcce918cc86a56
Git Branch: release-9.0-beta.2
UTC Build Time: 2025-06-27 06:54:00
GoVersion: go1.23.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Kernel Type: Classic
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
+ ./tests/integration_tests/run_heavy_it_in_ci.sh kafka G02
Warning: Found 29 test cases that are not covered by any group:
consistent_partition_table
consistent_replicate_ddl
consistent_replicate_gbk
consistent_replicate_nfs
consistent_replicate_storage_file
consistent_replicate_storage_file_large_value
consistent_replicate_storage_s3
csv_storage_partition_table
csv_storage_update_pk_clustered
csv_storage_update_pk_nonclustered
ds_memory_control
event_filter
kafka_column_selector
kafka_column_selector_avro
kafka_simple_basic
kafka_simple_basic_avro
kafka_simple_claim_check
kafka_simple_claim_check_avro
kafka_simple_handle_key_only
kafka_simple_handle_key_only_avro
kill_owner_with_ddl
mq_sink_error_resume
multi_capture
overwrite_resume_with_syncpoint
owner_resign
sequence
storage_csv_update
synced_status
synced_status_with_redo

These test cases need to be added to appropriate groups in:
1. run_light_it_in_ci.sh - for light test cases
2. run_heavy_it_in_ci.sh - for heavy test cases

Choose the file based on the test case's resource requirements.

Sink Type: kafka
Group Name: G02
Group Number (parsed): 02
Run cases: canal_json_handle_key_only canal_json_storage_basic canal_json_storage_partition_table
GIT_COMMIT=22c9b1f535a4301a2efcd8c8a6e9c6032aff7eb3
RUN_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/display/redirect
JENKINS_URL=https://do.pingcap.net/jenkins/
EXECUTOR_NUMBER=0
JOB_SPEC={"type":"presubmit","job":"pingcap/ticdc/pull_cdc_kafka_integration_heavy","buildid":"1938535815512592384","prowjobid":"3c9c78c1-b3f4-4132-bd31-3e5fdf6f6ccb","refs":{"org":"pingcap","repo":"ticdc","repo_link":"https://github.com/pingcap/ticdc","base_ref":"master","base_sha":"16f5ddeb62dee025dec646059b2ed6852032b13c","base_link":"https://github.com/pingcap/ticdc/commit/16f5ddeb62dee025dec646059b2ed6852032b13c","pulls":[{"number":1481,"author":"wk989898","sha":"5953e071ff4af33b590617461ef6432bb58792c9","title":"test: fix incorrect failpoint injection","head_ref":"test-failpoints","link":"https://github.com/pingcap/ticdc/pull/1481","commit_link":"https://github.com/pingcap/ticdc/pull/1481/commits/5953e071ff4af33b590617461ef6432bb58792c9","author_link":"https://github.com/wk989898"}]}}
TZ=Asia/Shanghai
BUILD_ID=1938535815512592384
POD_LABEL=pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-p84lx
HOSTNAME=pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th
OLDPWD=/home/jenkins
RUN_CHANGES_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/display/redirect?page=changes
TICDC_NEWARCH=true
JENKINS_NODE_COOKIE=dee6fbc3-2ac7-4953-82fd-a3d005133bad
GOPROXY=http://goproxy.apps.svc,https://proxy.golang.org,direct
JAVA_HOME=/usr/lib/jvm/jre-openjdk
JOB_BASE_NAME=pull_cdc_kafka_integration_heavy
WORKSPACE_TMP=/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy@tmp
KUBERNETES_PORT_443_TCP_PROTO=tcp
TINI_VERSION=v0.19.0
CI=true
GIT_URL=https://github.com/PingCAP-QE/ci.git
CLASSPATH=
KUBERNETES_PORT_443_TCP_ADDR=10.233.0.1
FILE_SERVER_URL=http://fileserver.pingcap.net
POD_CONTAINER=golang
GOPATH=/go
BUILD_NUMBER=586
USE_BAZEL_VERSION=6.5.0
KUBERNETES_PORT=tcp://10.233.0.1:443
WORKSPACE=/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy
PWD=/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc
HUDSON_URL=https://do.pingcap.net/jenkins/
HOME=/home/jenkins
NODE_NAME=pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th
HUDSON_SERVER_COOKIE=83ef27fe9acccc92
JENKINS_HOME=/var/jenkins_home
JOB_NAME=pingcap/ticdc/pull_cdc_kafka_integration_heavy
RUN_TESTS_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/display/redirect?page=tests
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PORT=443
GIT_PREVIOUS_COMMIT=22c9b1f535a4301a2efcd8c8a6e9c6032aff7eb3
HUDSON_HOME=/var/jenkins_home
JOB_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/display/redirect
PROW_JOB_ID=3c9c78c1-b3f4-4132-bd31-3e5fdf6f6ccb
TEST_GROUP=G02
KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
RUN_ARTIFACTS_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/display/redirect?page=artifacts
BUILD_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/586/
TERM=xterm
STAGE_NAME=Test
JOB_URL=https://do.pingcap.net/jenkins/job/pingcap/job/ticdc/job/pull_cdc_kafka_integration_heavy/
BUILD_DISPLAY_NAME=#586
SHLVL=5
GIT_BRANCH=origin/main
BUILD_TAG=jenkins-pingcap-ticdc-pull_cdc_kafka_integration_heavy-586
KUBERNETES_SERVICE_PORT=443
NODE_LABELS=pingcap_ticdc_pull_cdc_kafka_integration_heavy_586-p84lx pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th
PATH=/go/bin:/root/.cargo/bin:/usr/local/go/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle-7.4.2/bin:/opt/apache-maven-3.8.8/bin:/usr/local/pulsar/bin:/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/_utils:/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/../bin:/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/../../bin:/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/../../scripts/bin
GIT_PREVIOUS_SUCCESSFUL_COMMIT=80306c02d4f08ed55c2124bd228aa4d83baaed7b
KUBERNETES_SERVICE_HOST=10.233.0.1
JENKINS_SERVER_COOKIE=durable-d03363c5fa1b9a2e3f8e7a484ee240aba02820b698594a35e70e52d9d5461467
_=/usr/bin/env
find: '/tmp/tidb_cdc_test/*/*': No such file or directory
=================>> Running test /home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/canal_json_handle_key_only/run.sh using Sink-Type: kafka... <<=================
Attempt 1 to start tidb cluster...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	250	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	4	DDL Table Version. Do not delete.
cluster_id	7520568940281688499	TiDB Cluster ID.
tikv_gc_leader_uuid	65ec370fb940050	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c, pid:4337, start at 2025-06-27 18:08:33.530933008 +0800 CST m=+2.583902416	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250627-18:10:33.544 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250627-18:08:33.509 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250627-17:58:33.509 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	250	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	4	DDL Table Version. Do not delete.
cluster_id	7520568940281688499	TiDB Cluster ID.
tikv_gc_leader_uuid	65ec370fb940050	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c, pid:4337, start at 2025-06-27 18:08:33.530933008 +0800 CST m=+2.583902416	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250627-18:10:33.544 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250627-18:08:33.509 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250627-17:58:33.509 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	250	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	4	DDL Table Version. Do not delete.
cluster_id	7520568939824411565	TiDB Cluster ID.
tikv_gc_leader_uuid	65ec370f9280065	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-h8q79--pzt7c, pid:4406, start at 2025-06-27 18:08:33.403566351 +0800 CST m=+2.393737508	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250627-18:10:33.417 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250627-18:08:33.404 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250627-17:58:33.404 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v9.0.0-beta.2.pre-1-g9e37b7f5fa
Edition:         Community
Git Commit Hash: 9e37b7f5fa985c783d65bc443e0c5942ecb45959
Git Branch:      HEAD
UTC Build Time:  2025-06-27 05:03:32
Enable Features: jemalloc sm4(GmSSL) mem-profiling avx2 avx512 unwind thinlto hnsw.l2=skylake hnsw.cosine=skylake vec.l2=skylake vec.cos=skylake
Profile:         RELWITHDEBINFO
Compiler:        clang++ 17.0.6

Raft Proxy
Git Commit Hash:   3e325b2b9b30e396c846bfeb97a690d070b84087
Git Commit Branch: HEAD
UTC Build Time:    ""   
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   external-jemalloc portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine openssl-vendored
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/canal_json_claim_check/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/canal_json_claim_check/tiflash/log/error.log
arg matches is ArgMatches { args: {"advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "memory-limit-ratio": MatchedArg { occurs: 1, indices: [6], vals: ["0.800000"] }, "data-dir": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/canal_json_claim_check/tiflash/db/proxy"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [12], vals: ["9e37b7f5fa985c783d65bc443e0c5942ecb45959"] }, "engine-label": MatchedArg { occurs: 1, indices: [16], vals: ["tiflash"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [18], vals: ["127.0.0.1:2379"] }, "addr": MatchedArg { occurs: 1, indices: [22], vals: ["127.0.0.1:9000"] }, "engine-version": MatchedArg { occurs: 1, indices: [14], vals: ["v9.0.0-beta.2.pre-1-g9e37b7f5fa"] }, "log-file": MatchedArg { occurs: 1, indices: [20], vals: ["/tmp/tidb_cdc_test/canal_json_claim_check/tiflash/log/proxy.log"] }, "config": MatchedArg { occurs: 1, indices: [10], vals: ["/tmp/tidb_cdc_test/canal_json_claim_check/tiflash-proxy.toml"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
start tidb cluster in /tmp/tidb_cdc_test/canal_json_handle_key_only
Starting Upstream PD...
Release Version: v9.0.0-beta.2.pre-1-ga764d4520
Edition: Community
Kernel Type: Classic
Git Commit Hash: a764d4520776166fb2320543da7e20e48e3ac8b8
Git Branch: master
UTC Build Time:  2025-06-27 08:51:17
Starting Downstream PD...
Release Version: v9.0.0-beta.2.pre-1-ga764d4520
Edition: Community
Kernel Type: Classic
Git Commit Hash: a764d4520776166fb2320543da7e20e48e3ac8b8
Git Branch: master
UTC Build Time:  2025-06-27 08:51:17
Verifying upstream PD is started...
+ pd_host=127.0.0.1
+ pd_port=2379
+ is_tls=false
+ '[' false == true ']'
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
++ grep -v 'Command to ticdc'
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.canal_json_claim_check.cli.5811.out cli tso query --pd=http://127.0.0.1:2379
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   9.0.0-beta.2
Edition:           Community
Git Commit Hash:   2029ff019ce689acc113f3bd886b08f861da0fbf
Git Commit Branch: release-9.0-beta.2
UTC Build Time:    2025-06-27 06:48:16
Rust Version:      rustc 1.87.0-nightly (96cfc7558 2025-02-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   9.0.0-beta.2
Edition:           Community
Git Commit Hash:   2029ff019ce689acc113f3bd886b08f861da0fbf
Git Commit Branch: release-9.0-beta.2
UTC Build Time:    2025-06-27 06:48:16
Rust Version:      rustc 1.87.0-nightly (96cfc7558 2025-02-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
+ set +x
+ tso='459019103163908097
PASS
coverage: 1.9% of statements in github.com/pingcap/ticdc/...'
+ echo 459019103163908097 PASS coverage: 1.9% of statements in github.com/pingcap/ticdc/...
+ awk -F ' ' '{print $1}'
+ set +x
[Fri Jun 27 18:08:39 CST 2025] <<<<<< START cdc server in canal_json_claim_check case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL'
+ [[ no != \n\o ]]
+ GO_FAILPOINTS=
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.canal_json_claim_check.58375839.out server --log-file /tmp/tidb_cdc_test/canal_json_claim_check/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/canal_json_claim_check/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
*   Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8300 failed: Connection refused
* Failed to connect to 127.0.0.1 port 8300: Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
Starting Upstream TiDB...
Release Version: v9.0.0-beta.2.pre-1-g339f07ae8f
Edition: Community
Git Commit Hash: 339f07ae8f2ba406abc7f4757ffcce918cc86a56
Git Branch: release-9.0-beta.2
UTC Build Time: 2025-06-27 06:54:00
GoVersion: go1.23.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Kernel Type: Classic
Starting Downstream TiDB...
Release Version: v9.0.0-beta.2.pre-1-g339f07ae8f
Edition: Community
Git Commit Hash: 339f07ae8f2ba406abc7f4757ffcce918cc86a56
Git Branch: release-9.0-beta.2
UTC Build Time: 2025-06-27 06:54:00
GoVersion: go1.23.10
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Kernel Type: Classic
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
* Server auth using Basic with user 'ticdc'
> GET /debug/info HTTP/1.1
> Host: 127.0.0.1:8300
> Authorization: Basic dGljZGM6dGljZGNfc2VjcmV0
> User-Agent: curl/7.61.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 27 Jun 2025 10:08:42 GMT
< Content-Length: 625
< Content-Type: text/plain; charset=utf-8
< 
{ [625 bytes data]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/7792b218-a102-4a4b-8d8d-58acd782252d
	{"id":"7792b218-a102-4a4b-8d8d-58acd782252d","address":"127.0.0.1:8300","version":"v9.0.0-beta.2.pre-3-g5953e071","git-hash":"5953e071ff4af33b590617461ef6432bb58792c9","deploy-path":"/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/bin/cdc.test","start-timestamp":1751018919,"is-new-arch":true}

/tidb/cdc/default/__cdc_meta__/log_coordinator/223197b0dc1ed7e6
	7792b218-a102-4a4b-8d8d-58acd782252d

/tidb/cdc/default/__cdc_meta__/owner/223197b0dc1ed7e6
	7792b218-a102-4a4b-8d8d-58acd782252d'
+ echo '

*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/7792b218-a102-4a4b-8d8d-58acd782252d
	{"id":"7792b218-a102-4a4b-8d8d-58acd782252d","address":"127.0.0.1:8300","version":"v9.0.0-beta.2.pre-3-g5953e071","git-hash":"5953e071ff4af33b590617461ef6432bb58792c9","deploy-path":"/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/bin/cdc.test","start-timestamp":1751018919,"is-new-arch":true}

/tidb/cdc/default/__cdc_meta__/log_coordinator/223197b0dc1ed7e6
	7792b218-a102-4a4b-8d8d-58acd782252d

/tidb/cdc/default/__cdc_meta__/owner/223197b0dc1ed7e6
	7792b218-a102-4a4b-8d8d-58acd782252d'
+ grep -q 'failed to get info:'
+ echo '

*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/7792b218-a102-4a4b-8d8d-58acd782252d
	{"id":"7792b218-a102-4a4b-8d8d-58acd782252d","address":"127.0.0.1:8300","version":"v9.0.0-beta.2.pre-3-g5953e071","git-hash":"5953e071ff4af33b590617461ef6432bb58792c9","deploy-path":"/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/bin/cdc.test","start-timestamp":1751018919,"is-new-arch":true}

/tidb/cdc/default/__cdc_meta__/log_coordinator/223197b0dc1ed7e6
	7792b218-a102-4a4b-8d8d-58acd782252d

/tidb/cdc/default/__cdc_meta__/owner/223197b0dc1ed7e6
	7792b218-a102-4a4b-8d8d-58acd782252d'
+ grep -q 'etcd info'
+ break
+ set +x
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.canal_json_claim_check.cli.5934.out cli changefeed create --start-ts=459019103163908097 '--sink-uri=kafka://127.0.0.1:9092/canal-json-claim-check?protocol=canal-json&enable-tidb-extension=true&max-message-bytes=1000&kafka-version=2.4.1' --config=/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/canal_json_claim_check/conf/changefeed.toml
Create changefeed successfully!
ID: 23589279870678460910495534678040248754
Info: {"upstream_id":7520568940281688499,"id":"23589279870678460910495534678040248754","namespace":"default","sink_uri":"kafka://127.0.0.1:9092/canal-json-claim-check?protocol=canal-json\u0026enable-tidb-extension=true\u0026max-message-bytes=1000\u0026kafka-version=2.4.1","create_time":"2025-06-27T18:08:42.511425429+08:00","start_ts":459019103163908097,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"enable_table_monitor":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64","output_old_value":false,"output_handle_key":false,"output_field_header":false},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"kafka_config":{"large_message_handle":{"large_message_handle_option":"claim-check","large_message_handle_compression":"snappy","claim_check_storage_uri":"file:///tmp/canal-json-claim-check","claim_check_raw_value":false}},"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"send-all-bootstrap-at-start":false,"debezium_disable_schema":false,"debezium":{"output_old_value":true},"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"region-count-per-span":100,"write_key_threshold":0,"split_number_per_node":1,"scheduling-task-per-node":20},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v9.0.0-beta.2.pre-3-g5953e071","resolved_ts":459019103163908097,"checkpoint_ts":459019103163908097,"checkpoint_time":"2025-06-27 18:08:37.709","gid":{"low":235892798706784609,"high":10495534678040248754}}
PASS
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
coverage: 2.6% of statements in github.com/pingcap/ticdc/...
+ set +x
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	250	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	4	DDL Table Version. Do not delete.
cluster_id	7520568985631410453	TiDB Cluster ID.
tikv_gc_leader_uuid	65ec3718d240051	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th, pid:1439, start at 2025-06-27 18:08:42.863135215 +0800 CST m=+2.362561344	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250627-18:10:42.876 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250627-18:08:42.875 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250627-17:58:42.875 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	250	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	4	DDL Table Version. Do not delete.
cluster_id	7520568985857146142	TiDB Cluster ID.
tikv_gc_leader_uuid	65ec3718d340051	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-ticdc-pull-cdc-kafka-integration-heavy-586-p84lx--w96th, pid:1527, start at 2025-06-27 18:08:42.852567026 +0800 CST m=+2.264661958	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20250627-18:10:42.866 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20250627-18:08:42.879 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20250627-17:58:42.879 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v9.0.0-beta.2.pre-1-g9e37b7f5fa
Edition:         Community
Git Commit Hash: 9e37b7f5fa985c783d65bc443e0c5942ecb45959
Git Branch:      HEAD
UTC Build Time:  2025-06-27 05:03:32
Enable Features: jemalloc sm4(GmSSL) mem-profiling avx2 avx512 unwind thinlto hnsw.l2=skylake hnsw.cosine=skylake vec.l2=skylake vec.cos=skylake
Profile:         RELWITHDEBINFO
Compiler:        clang++ 17.0.6

Raft Proxy
Git Commit Hash:   3e325b2b9b30e396c846bfeb97a690d070b84087
Git Commit Branch: HEAD
UTC Build Time:    ""   
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   external-jemalloc portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine openssl-vendored
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/canal_json_handle_key_only/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/canal_json_handle_key_only/tiflash/log/error.log
arg matches is ArgMatches { args: {"log-file": MatchedArg { occurs: 1, indices: [20], vals: ["/tmp/tidb_cdc_test/canal_json_handle_key_only/tiflash/log/proxy.log"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [12], vals: ["9e37b7f5fa985c783d65bc443e0c5942ecb45959"] }, "config": MatchedArg { occurs: 1, indices: [10], vals: ["/tmp/tidb_cdc_test/canal_json_handle_key_only/tiflash-proxy.toml"] }, "engine-label": MatchedArg { occurs: 1, indices: [16], vals: ["tiflash"] }, "addr": MatchedArg { occurs: 1, indices: [22], vals: ["127.0.0.1:9000"] }, "memory-limit-ratio": MatchedArg { occurs: 1, indices: [6], vals: ["0.800000"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [18], vals: ["127.0.0.1:2379"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-version": MatchedArg { occurs: 1, indices: [14], vals: ["v9.0.0-beta.2.pre-1-g9e37b7f5fa"] }, "data-dir": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/canal_json_handle_key_only/tiflash/db/proxy"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
table test.finish_mark not exists for 1-th check, retry later
+ pd_host=127.0.0.1
+ pd_port=2379
+ is_tls=false
+ '[' false == true ']'
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
++ grep -v 'Command to ticdc'
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.canal_json_handle_key_only.cli.2875.out cli tso query --pd=http://127.0.0.1:2379
table test.finish_mark exists
check diff failed 1-th time, retry later
+ set +x
+ tso='459019105671577601
PASS
coverage: 1.9% of statements in github.com/pingcap/ticdc/...'
+ echo 459019105671577601 PASS coverage: 1.9% of statements in github.com/pingcap/ticdc/...
+ awk -F ' ' '{print $1}'
+ set +x
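The trace above captures the multi-line `tso query` output and pipes it through `awk -F ' ' '{print $1}'` to keep only the first whitespace-separated token (the TSO itself), discarding the trailing `PASS` and coverage lines the test binary appends. The same extraction as a standalone helper (the function name is ours):

```shell
# Keep only the first whitespace-separated token of the cli output,
# dropping the "PASS" / coverage noise appended by the test binary.
extract_tso() {
    echo "$1" | awk -F ' ' '{print $1; exit}'
}
```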
[Fri Jun 27 18:08:48 CST 2025] <<<<<< START cdc server in canal_json_handle_key_only case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL'
+ GO_FAILPOINTS=
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.canal_json_handle_key_only.29102912.out server --log-file /tmp/tidb_cdc_test/canal_json_handle_key_only/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/canal_json_handle_key_only/cdc_data --cluster-id default
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
*   Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8300 failed: Connection refused
* Failed to connect to 127.0.0.1 port 8300: Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
check diff successfully
cdc.test: no process found
wait process cdc.test exit for 1-th time...
process cdc.test already exit
log files: /tmp/tidb_cdc_test/canal_json_claim_check/stdout.log
no DATA RACE found
[Fri Jun 27 18:08:50 CST 2025] <<<<<< run test case canal_json_claim_check success! >>>>>>
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
* Server auth using Basic with user 'ticdc'
> GET /debug/info HTTP/1.1
> Host: 127.0.0.1:8300
> Authorization: Basic dGljZGM6dGljZGNfc2VjcmV0
> User-Agent: curl/7.61.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 27 Jun 2025 10:08:51 GMT
< Content-Length: 625
< Content-Type: text/plain; charset=utf-8
< 
{ [625 bytes data]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/35e7f1d1-9e00-4bbc-b03a-8764852698fd
	{"id":"35e7f1d1-9e00-4bbc-b03a-8764852698fd","address":"127.0.0.1:8300","version":"v9.0.0-beta.2.pre-3-g5953e071","git-hash":"5953e071ff4af33b590617461ef6432bb58792c9","deploy-path":"/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/bin/cdc.test","start-timestamp":1751018929,"is-new-arch":true}

/tidb/cdc/default/__cdc_meta__/log_coordinator/223197b0dc462ae7
	35e7f1d1-9e00-4bbc-b03a-8764852698fd

/tidb/cdc/default/__cdc_meta__/owner/223197b0dc462ae7
	35e7f1d1-9e00-4bbc-b03a-8764852698fd'
+ echo '

*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/35e7f1d1-9e00-4bbc-b03a-8764852698fd
	{"id":"35e7f1d1-9e00-4bbc-b03a-8764852698fd","address":"127.0.0.1:8300","version":"v9.0.0-beta.2.pre-3-g5953e071","git-hash":"5953e071ff4af33b590617461ef6432bb58792c9","deploy-path":"/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/bin/cdc.test","start-timestamp":1751018929,"is-new-arch":true}

/tidb/cdc/default/__cdc_meta__/log_coordinator/223197b0dc462ae7
	35e7f1d1-9e00-4bbc-b03a-8764852698fd

/tidb/cdc/default/__cdc_meta__/owner/223197b0dc462ae7
	35e7f1d1-9e00-4bbc-b03a-8764852698fd'
+ grep -q 'failed to get info:'
+ echo '

*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/35e7f1d1-9e00-4bbc-b03a-8764852698fd
	{"id":"35e7f1d1-9e00-4bbc-b03a-8764852698fd","address":"127.0.0.1:8300","version":"v9.0.0-beta.2.pre-3-g5953e071","git-hash":"5953e071ff4af33b590617461ef6432bb58792c9","deploy-path":"/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/bin/cdc.test","start-timestamp":1751018929,"is-new-arch":true}

/tidb/cdc/default/__cdc_meta__/log_coordinator/223197b0dc462ae7
	35e7f1d1-9e00-4bbc-b03a-8764852698fd

/tidb/cdc/default/__cdc_meta__/owner/223197b0dc462ae7
	35e7f1d1-9e00-4bbc-b03a-8764852698fd'
+ grep -q 'etcd info'
+ break
+ set +x
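The readiness probe traced above polls `http://127.0.0.1:8300/debug/info` up to 50 times, sleeping 3 seconds between attempts, until the response contains `etcd info`. A sketch of the same retry pattern (the function name and the `FETCH_CMD` override are ours; the defaults mirror the logged loop):

```shell
# Poll a URL until the response body contains a marker string, mirroring
# the readiness loop in the log: up to $max_tries retries with $interval
# seconds between attempts. FETCH_CMD can override the fetcher (useful
# for testing); it defaults to the curl invocation used in the log.
wait_for_marker() {
    local url="$1" marker="$2" max_tries="${3:-50}" interval="${4:-3}"
    local fetch="${FETCH_CMD:-curl -vsL --max-time 20}"
    local i res
    for ((i = 0; i <= max_tries; i++)); do
        res=$($fetch "$url" 2>/dev/null)
        if echo "$res" | grep -q "$marker"; then
            return 0          # marker seen: server is up
        fi
        if ((i == max_tries)); then
            return 1          # exhausted retries
        fi
        sleep "$interval"
    done
}
```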
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.canal_json_handle_key_only.cli.2989.out cli changefeed create --start-ts=459019105671577601 '--sink-uri=kafka://127.0.0.1:9092/canal-json-handle-key-only-17527?protocol=canal-json&enable-tidb-extension=true&max-message-bytes=1000&kafka-version=2.4.1' --config=/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/canal_json_handle_key_only/conf/changefeed.toml
Create changefeed successfully!
ID: 137829349239684536522257349078894675372
Info: {"upstream_id":7520568985631410453,"id":"137829349239684536522257349078894675372","namespace":"default","sink_uri":"kafka://127.0.0.1:9092/canal-json-handle-key-only-17527?protocol=canal-json\u0026enable-tidb-extension=true\u0026max-message-bytes=1000\u0026kafka-version=2.4.1","create_time":"2025-06-27T18:08:52.0674304+08:00","start_ts":459019105671577601,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"enable_table_monitor":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64","output_old_value":false,"output_handle_key":false,"output_field_header":false},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"kafka_config":{"large_message_handle":{"large_message_handle_option":"handle-key-only","large_message_handle_compression":"snappy","claim_check_storage_uri":"","claim_check_raw_value":false}},"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"send-all-bootstrap-at-start":false,"debezium_disable_schema":false,"debezium":{"output_old_value":true},"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"region-count-per-span":100,"write_key_threshold":0,"split_number_per_node":1,"scheduling-task-per-node":20},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v9.0.0-beta.2.pre-3-g5953e071","resolved_ts":459019105671577601,"checkpoint_ts":459019105671577601,"checkpoint_time":"2025-06-27 18:08:47.275","gid":{"low":13782934923968453652,"high":2257349078894675372}}
PASS
coverage: 2.6% of statements in github.com/pingcap/ticdc/...
+ set +x
table test.finish_mark not exists for 1-th check, retry later
table test.finish_mark exists
check diff failed 1-th time, retry later
check diff failed 2-th time, retry later
check diff successfully
cdc.test: no process found
wait process cdc.test exit for 1-th time...
process cdc.test already exit
log files: /tmp/tidb_cdc_test/canal_json_handle_key_only/stdout.log
no DATA RACE found
[Fri Jun 27 18:09:00 CST 2025] <<<<<< run test case canal_json_handle_key_only success! >>>>>>
/home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/canal_json_claim_check/run.sh: line 1:  5966 Killed                  cdc_kafka_consumer --upstream-uri $SINK_URI --downstream-uri="mysql://root@127.0.0.1:3306/?safe-mode=true&batch-dml-enable=false&enable-ddl-ts=false" --upstream-tidb-dsn="root@tcp(${UP_TIDB_HOST}:${UP_TIDB_PORT})/?" --config="$CUR/conf/changefeed.toml" 2>&1
=================>> Running test /home/jenkins/agent/workspace/pingcap/ticdc/pull_cdc_kafka_integration_heavy/ticdc/tests/integration_tests/canal_json_content_compatible/run.sh using Sink-Type: kafka... <<=================
Attempt 1 to start the tidb cluster...
start tidb cluster in /tmp/tidb_cdc_test/canal_json_content_compatible
Starting Upstream PD...
Release Version: v9.0.0-beta.2.pre-1-ga764d4520
Edition: Community
Kernel Type: Classic
Git Commit Hash: a764d4520776166fb2320543da7e20e48e3ac8b8
Git Branch: master
UTC Build Time:  2025-06-27 08:51:17
Starting Downstream PD...
Release Version: v9.0.0-beta.2.pre-1-ga764d4520
Edition: Community
Kernel Type: Classic
Git Commit Hash: a764d4520776166fb2320543da7e20e48e3ac8b8
Git Branch: master
UTC Build Time:  2025-06-27 08:51:17
Verifying upstream PD is started...
Aborted by Jenkins Admin
Sending interrupt signal to process
Killing processes
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
kill finished with exit code 0
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] // node
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
script returned exit code 143
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
script returned exit code 143
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] // cache
[Pipeline] // cache
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] // dir
[Pipeline] // dir
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G00'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G03'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G04'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G05'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G06'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G07'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G08'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G09'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G10'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G11'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G12'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G13'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G14'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G15'
[Pipeline] }
[Pipeline] }
[Pipeline] // timeout
[Pipeline] // timeout
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] }
[Pipeline] }
[Pipeline] // container
[Pipeline] // container
[Pipeline] }
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] }
[Pipeline] // node
[Pipeline] // node
[Pipeline] }
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G01'
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G02'
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: accc84f2-75d0-47e1-ba07-b961c6221da8
Failed in branch Matrix - TEST_GROUP = 'G00'
Finished: ABORTED