
Console Output

Skipping 2,235 KB..
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e15b126dc0006	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-kafka-test-2019-jr4g1-fkx17, pid:4380, start at 2024-05-17 15:20:17.596830471 +0800 CST m=+5.310743012	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-15:22:17.604 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-15:20:17.591 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-15:10:17.591 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	198	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e15b126d00014	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-kafka-test-2019-jr4g1-fkx17, pid:4465, start at 2024-05-17 15:20:17.611003478 +0800 CST m=+5.265911240	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-15:22:17.617 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-15:20:17.588 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-15:10:17.588 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
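The rows above are tab-separated `VARIABLE_NAME  VARIABLE_VALUE  COMMENT` triples dumped from the `mysql.tidb` table. As a hedged sketch (the helper name `tidb_var` is mine, not part of the test scripts), one value can be pulled out of such a dump like this:

```shell
# Print the value column for a given variable name from a
# tab-separated mysql.tidb dump read on stdin.
tidb_var() {
  # $1: variable name to look up
  awk -F'\t' -v k="$1" '$1 == k { print $2 }'
}
```

For the dump above, `tidb_var tikv_gc_life_time` would print `10m0s`.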
Starting Upstream TiFlash...
TiFlash
Release Version: v8.2.0-alpha-28-g961f9dded
Edition:         Community
Git Commit Hash: 961f9dded38b814bb41b33c691ab58f3f090a0d9
Git Branch:      HEAD
UTC Build Time:  2024-05-15 03:49:49
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO
Compiler:        clang++ 13.0.0

Raft Proxy
Git Commit Hash:   7dc50b4eb06124e31f03adb06c20ff7ab61c5f79
Git Commit Branch: HEAD
UTC Build Time:    2024-05-15 03:53:57
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   external-jemalloc portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/force_replicate_table/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/force_replicate_table/tiflash/log/error.log
arg matches is ArgMatches { args: {"advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/force_replicate_table/tiflash/log/proxy.log"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/force_replicate_table/tiflash-proxy.toml"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v8.2.0-alpha-28-g961f9dded"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/force_replicate_table/tiflash/db/proxy"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["961f9dded38b814bb41b33c691ab58f3f090a0d9"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
[Fri May 17 15:20:21 CST 2024] <<<<<< START cdc server in force_replicate_table case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL'
+ [[ no != \n\o ]]
+ GO_FAILPOINTS=
+ (( i = 0 ))
+ (( i <= 50 ))
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.force_replicate_table.58915893.out server --log-file /tmp/tidb_cdc_test/force_replicate_table/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/force_replicate_table/cdc_data --cluster-id default
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
* Server auth using Basic with user 'ticdc'
> GET /debug/info HTTP/1.1
> Authorization: Basic dGljZGM6dGljZGNfc2VjcmV0
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 17 May 2024 07:20:24 GMT
< Content-Length: 815
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/d495426a-dc1c-459a-b8ef-aabbe6cd2689
	{"id":"d495426a-dc1c-459a-b8ef-aabbe6cd2689","address":"127.0.0.1:8300","version":"v8.2.0-alpha-64-g930a58d61","git-hash":"930a58d6174d069a6bdcc46f935bffb2fddccfd4","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/bin/cdc.test","start-timestamp":1715930421}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f856c2112c8
	d495426a-dc1c-459a-b8ef-aabbe6cd2689

/tidb/cdc/default/default/upstream/7369864990434743110
	{"id":7369864990434743110,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
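The `res` payload just captured is the etcd dump served by `/debug/info`. A hedged stdin helper for pulling the capture's advertised address out of such a dump (the function name and sed pattern are illustrative, not TiCDC tooling):

```shell
# Read a /debug/info etcd dump on stdin and print the capture address.
capture_addr() {
  # keep the capture key line plus the JSON line that follows it,
  # then pull the "address" field out of the JSON with sed.
  grep -A1 '__cdc_meta__/capture/' \
    | sed -n 's/.*"address":"\([^"]*\)".*/\1/p'
}
```

Piping the dump above through it would yield `127.0.0.1:8300`.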
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/d495426a-dc1c-459a-b8ef-aabbe6cd2689
	{"id":"d495426a-dc1c-459a-b8ef-aabbe6cd2689","address":"127.0.0.1:8300","version":"v8.2.0-alpha-64-g930a58d61","git-hash":"930a58d6174d069a6bdcc46f935bffb2fddccfd4","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/bin/cdc.test","start-timestamp":1715930421}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f856c2112c8
	d495426a-dc1c-459a-b8ef-aabbe6cd2689

/tidb/cdc/default/default/upstream/7369864990434743110
	{"id":7369864990434743110,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/d495426a-dc1c-459a-b8ef-aabbe6cd2689
	{"id":"d495426a-dc1c-459a-b8ef-aabbe6cd2689","address":"127.0.0.1:8300","version":"v8.2.0-alpha-64-g930a58d61","git-hash":"930a58d6174d069a6bdcc46f935bffb2fddccfd4","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/bin/cdc.test","start-timestamp":1715930421}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f856c2112c8
	d495426a-dc1c-459a-b8ef-aabbe6cd2689

/tidb/cdc/default/default/upstream/7369864990434743110
	{"id":7369864990434743110,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
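The `+`-prefixed trace above is the harness polling the CDC server's `/debug/info` endpoint until it answers with `etcd info`, up to 50 attempts with a 3-second pause. As a hedged sketch (not the actual `_utils` helper; `retry_until` and its wiring are illustrative), the pattern condenses to:

```shell
# Generic retry helper: run a command until it succeeds, up to a
# maximum number of attempts, sleeping between tries.
retry_until() {
  # retry_until <max_attempts> <sleep_secs> <cmd> [args...]
  local max=$1 pause=$2; shift 2
  local i=0
  while [ "$i" -lt "$max" ]; do
    if "$@"; then
      return 0            # probed condition reported success
    fi
    i=$((i + 1))
    [ "$i" -lt "$max" ] && sleep "$pause"
  done
  return 1                # exhausted all attempts
}

# In the log, the probed condition is roughly:
#   curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info \
#     --user ticdc:ticdc_secret 2>&1 | grep -q 'etcd info'
# wired in as: retry_until 50 3 cdc_info_ready
```

The `break` in the trace corresponds to the early `return 0` once the grep matches.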
Create changefeed successfully!
ID: a0242a1f-3b2a-4ada-9df8-1137be99edf5
Info: {"upstream_id":7369864990434743110,"namespace":"default","id":"a0242a1f-3b2a-4ada-9df8-1137be99edf5","sink_uri":"kafka://127.0.0.1:9092/ticdc-force_replicate_table-test-8783?protocol=open-protocol\u0026partition-num=4\u0026kafka-version=2.4.1\u0026max-message-bytes=10485760","create_time":"2024-05-17T15:20:24.828099746+08:00","start_ts":449820864398229509,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":true,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"enable_table_monitor":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"open-protocol","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64","output_old_value":false,"output_handle_key":false},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"debezium_disable_schema":false,"debezium":{"output_old_value":true},"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v8.2.0-alpha-64-g930a58d61","resolved_ts":449820864398229509,"checkpoint_ts":
449820864398229509,"checkpoint_time":"2024-05-17 15:20:21.441"}
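The `sink_uri` in the changefeed info encodes the Kafka topic plus producer options (`protocol`, `partition-num`, `kafka-version`, `max-message-bytes`) as URI query parameters. A small illustrative extractor (the helper name and sed pattern are assumptions, not TiCDC tooling):

```shell
# Print the Kafka topic component of a kafka:// sink URI read from stdin,
# i.e. the path segment between the host:port and the '?' query string.
sink_topic() {
  sed -n 's|.*kafka://[^/]*/\([^?"]*\).*|\1|p'
}
```

Applied to the URI above, this would print `ticdc-force_replicate_table-test-8783`.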
[Fri May 17 15:20:24 CST 2024] <<<<<< START kafka consumer in force_replicate_table case >>>>>>
consumer replica config found: /home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/force_replicate_table/conf/changefeed.toml
table force_replicate_table.t0 not exists for 1-th check, retry later
table force_replicate_table.t0 exists
table force_replicate_table.t1 not exists for 1-th check, retry later
table force_replicate_table.t1 exists
table force_replicate_table.t2 exists
table force_replicate_table.t3 not exists for 1-th check, retry later
table force_replicate_table.t3 exists
table force_replicate_table.t4 not exists for 1-th check, retry later
table force_replicate_table.t4 exists
table force_replicate_table.t5 not exists for 1-th check, retry later
table force_replicate_table.t5 exists
table force_replicate_table.t6 not exists for 1-th check, retry later
table force_replicate_table.t6 not exists for 2-th check, retry later
table force_replicate_table.t6 exists
check_data_subset force_replicate_table.t0 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t1 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t2 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t3 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t4 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t5 127.0.0.1 4000 127.0.0.1 3306
run task successfully
check_data_subset force_replicate_table.t6 127.0.0.1 4000 127.0.0.1 3306
id=19,a=NULL doesn't exist in downstream table force_replicate_table.t6
run task failed 1-th time, retry later
check_data_subset force_replicate_table.t6 127.0.0.1 4000 127.0.0.1 3306
run task successfully
wait process cdc.test exit for 1-th time...
wait process cdc.test exit for 2-th time...
cdc.test: no process found
wait process cdc.test exit for 3-th time...
process cdc.test already exit
[Fri May 17 15:20:49 CST 2024] <<<<<< run test case force_replicate_table success! >>>>>>
<<< Run all test success >>>
[Pipeline] }
Cache not saved (ws/jenkins-pingcap-tiflow-pull_cdc_integration_kafka_test-2019/tiflow-cdc already exists)
[Pipeline] // cache
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Created Pod: kubernetes jenkins-tiflow/pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-m8pmj
Created Pod: kubernetes jenkins-tiflow/pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-v034d
ERROR: Failed to launch pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-nbcc8
io.fabric8.kubernetes.client.KubernetesClientTimeoutException: Timed out waiting for [1000000] milliseconds for [Pod] with name:[pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-nbcc8] in namespace [jenkins-tiflow].
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilCondition(BaseOperation.java:939)
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilReady(BaseOperation.java:921)
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilReady(BaseOperation.java:97)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:185)
	at hudson.slaves.SlaveComputer.lambda$_connect$0(SlaveComputer.java:297)
	at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
	at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:80)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
ERROR: Failed to launch pingcap-tiflow-pull-cdc-integration-kafka-test-2019-jr4g1-kch2h
io.fabric8.kubernetes.client.KubernetesClientTimeoutException: Timed out waiting for [1000000] milliseconds for [Pod] with name:[pingcap-tiflow-pull-cdc-integration-kafka-test-2019-jr4g1-kch2h] in namespace [jenkins-tiflow].
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilCondition(BaseOperation.java:939)
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilReady(BaseOperation.java:921)
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilReady(BaseOperation.java:97)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:185)
	at hudson.slaves.SlaveComputer.lambda$_connect$0(SlaveComputer.java:297)
	at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
	at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:80)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
ERROR: Failed to launch pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-qj51w
io.fabric8.kubernetes.client.KubernetesClientTimeoutException: Timed out waiting for [1000000] milliseconds for [Pod] with name:[pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-qj51w] in namespace [jenkins-tiflow].
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilCondition(BaseOperation.java:939)
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilReady(BaseOperation.java:921)
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.waitUntilReady(BaseOperation.java:97)
	at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:185)
	at hudson.slaves.SlaveComputer.lambda$_connect$0(SlaveComputer.java:297)
	at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
	at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:80)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Created Pod: kubernetes jenkins-tiflow/pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h
Created Pod: kubernetes jenkins-tiflow/pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44
Agent pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h is provisioned from template pingcap_tiflow_pull_cdc_integration_kafka_test_2019-3p6pn-zvn60
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://jenkins.apps.svc.cluster.local:8080/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/"
    runUrl: "job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/"
  labels:
    jenkins/jenkins-jenkins-agent: "true"
    jenkins/label-digest: "2a4f5ea3feb6732f205873234a006f93aeb49750"
    jenkins/label: "pingcap_tiflow_pull_cdc_integration_kafka_test_2019-3p6pn"
  name: "pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h"
  namespace: "jenkins-tiflow"
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "kubernetes.io/arch"
            operator: "In"
            values:
            - "amd64"
  containers:
  - image: "wurstmeister/zookeeper"
    imagePullPolicy: "IfNotPresent"
    name: "zookeeper"
    resources:
      limits:
        cpu: "2000m"
        memory: "4Gi"
      requests:
        cpu: "2000m"
        memory: "4Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - args:
    - "cat"
    image: "hub.pingcap.net/jenkins/golang-tini:1.21"
    imagePullPolicy: "Always"
    name: "golang"
    resources:
      limits:
        cpu: "12"
        memory: "32Gi"
      requests:
        cpu: "12"
        memory: "32Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "KAFKA_CREATE_TOPICS"
      value: "big-message-test:1:1"
    - name: "KAFKA_BROKER_ID"
      value: "1"
    - name: "KAFKA_SSL_KEYSTORE_PASSWORD"
      value: "test1234"
    - name: "KAFKA_ZOOKEEPER_CONNECT"
      value: "localhost:2181"
    - name: "KAFKA_MESSAGE_MAX_BYTES"
      value: "11534336"
    - name: "KAFKA_REPLICA_FETCH_MAX_BYTES"
      value: "11534336"
    - name: "KAFKA_ADVERTISED_LISTENERS"
      value: "SSL://127.0.0.1:9093,PLAINTEXT://127.0.0.1:9092"
    - name: "ZK"
      value: "zk"
    - name: "KAFKA_SSL_KEYSTORE_LOCATION"
      value: "/tmp/kafka.server.keystore.jks"
    - name: "KAFKA_SSL_KEY_PASSWORD"
      value: "test1234"
    - name: "KAFKA_SSL_TRUSTSTORE_PASSWORD"
      value: "test1234"
    - name: "KAFKA_LISTENERS"
      value: "SSL://127.0.0.1:9093,PLAINTEXT://127.0.0.1:9092"
    - name: "KAFKA_SSL_TRUSTSTORE_LOCATION"
      value: "/tmp/kafka.server.truststore.jks"
    - name: "RACK_COMMAND"
      value: "curl -sfL https://github.com/pingcap/tiflow/raw/6e62afcfecc4e3965d8818784327d4bf2600d9fa/tests/_certificates/kafka.server.keystore.jks\
        \ -o /tmp/kafka.server.keystore.jks && curl -sfL https://github.com/pingcap/tiflow/raw/6e62afcfecc4e3965d8818784327d4bf2600d9fa/tests/_certificates/kafka.server.truststore.jks\
        \ -o /tmp/kafka.server.truststore.jks"
    image: "wurstmeister/kafka:2.12-2.4.1"
    imagePullPolicy: "IfNotPresent"
    name: "kafka"
    resources:
      limits:
        cpu: "4000m"
        memory: "6Gi"
      requests:
        cpu: "4000m"
        memory: "6Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "KAFKA_SERVER"
      value: "127.0.0.1:9092"
    - name: "ZOOKEEPER_SERVER"
      value: "127.0.0.1:2181"
    - name: "DOWNSTREAM_DB_HOST"
      value: "127.0.0.1"
    - name: "USE_FLAT_MESSAGE"
      value: "true"
    - name: "DOWNSTREAM_DB_PORT"
      value: "3306"
    - name: "DB_NAME"
      value: "test"
    image: "rustinliu/ticdc-canal-json-adapter:latest"
    imagePullPolicy: "IfNotPresent"
    name: "canal-adapter"
    resources:
      requests:
        cpu: "200m"
        memory: "4Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - image: "hub.pingcap.net/jenkins/network-multitool"
    name: "net-tool"
    resources:
      limits:
        memory: "128Mi"
        cpu: "100m"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - image: "hub.pingcap.net/jenkins/python3-requests:latest"
    name: "report"
    resources:
      requests:
        cpu: "200m"
        memory: "4Gi"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "MYSQL_ROOT_PASSWORD"
      value: ""
    - name: "MYSQL_USER"
      value: "mysqluser"
    - name: "MYSQL_PASSWORD"
      value: "mysqlpw"
    - name: "MYSQL_ALLOW_EMPTY_PASSWORD"
      value: "yes"
    - name: "MYSQL_TCP_PORT"
      value: "3310"
    image: "quay.io/debezium/example-mysql:2.4"
    imagePullPolicy: "IfNotPresent"
    name: "mysql"
    resources:
      requests:
        cpu: "200m"
        memory: "4Gi"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "BOOTSTRAP_SERVERS"
      value: "127.0.0.1:9092"
    - name: "GROUP_ID"
      value: "1"
    - name: "CONFIG_STORAGE_TOPIC"
      value: "my_connect_configs"
    - name: "OFFSET_STORAGE_TOPIC"
      value: "my_connect_offsets"
    - name: "STATUS_STORAGE_TOPIC"
      value: "my_connect_statuses"
    image: "quay.io/debezium/connect:2.4"
    name: "connect"
    resources:
      requests:
        cpu: "200m"
        memory: "4Gi"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-agent.apps.svc.cluster.local:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h"
    - name: "JENKINS_NAME"
      value: "pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins.apps.svc.cluster.local:8080/jenkins/"
    image: "jenkins/inbound-agent:3206.vb_15dcf73f6a_9-2"
    name: "jnlp"
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  restartPolicy: "Never"
  securityContext:
    fsGroup: 1000
  volumes:
  - emptyDir: {}
    name: "volume-0"
  - emptyDir:
      medium: ""
    name: "workspace-volume"

Running on pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h in /home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test
[Pipeline] {
[Pipeline] checkout
The recommended git tool is: git
No credentials specified
Warning: JENKINS-30600: special launcher org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1@50ecb16b; decorates RemoteLauncher[hudson.remoting.Channel@4d17481c:JNLP4-connect connection from 10.233.88.208/10.233.88.208:39530] will be ignored (a typical symptom is the Git executable not being run inside a designated container)
Cloning the remote Git repository
Using shallow clone with depth 1
Cloning repository https://github.com/PingCAP-QE/ci.git
 > git init /home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test # timeout=10
Fetching upstream changes from https://github.com/PingCAP-QE/ci.git
 > git --version # timeout=10
 > git --version # 'git version 2.39.2'
 > git fetch --tags --force --progress --depth=1 -- https://github.com/PingCAP-QE/ci.git +refs/heads/*:refs/remotes/origin/* # timeout=5
Avoid second fetch
Checking out Revision 73d1a8209bddec16d8d58403efcd7a20d12cf867 (origin/main)
 > git config remote.origin.url https://github.com/PingCAP-QE/ci.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse origin/main^{commit} # timeout=10
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 73d1a8209bddec16d8d58403efcd7a20d12cf867 # timeout=10
Commit message: "update utf go build image (#2965)"
[Pipeline] withEnv
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] timeout
Timeout set to expire in 45 min
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $TICDC_COVERALLS_TOKEN or $TICDC_CODECOV_TOKEN
[Pipeline] {
[Pipeline] dir
Running in /home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow
[Pipeline] {
[Pipeline] cache
Cache restored successfully (ws/jenkins-pingcap-tiflow-pull_cdc_integration_kafka_test-2019/tiflow-cdc)
3672599040 bytes in 21.86 secs (168003541 bytes/sec)
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] timeout
Timeout set to expire in 6 min 0 sec
[Pipeline] {
[Pipeline] sh
+ echo Waiting for zookeeper to be ready...
Waiting for zookeeper to be ready...
+ nc -z localhost 2181
+ echo Waiting for kafka to be ready...
Waiting for kafka to be ready...
+ nc -z localhost 9092
+ echo Waiting for kafka-broker to be ready...
Waiting for kafka-broker to be ready...
+ echo dump
+ nc localhost 2181
+ grep brokers
+ awk {$1=$1;print}
+ grep -F -w /brokers/ids/1
/brokers/ids/1
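The readiness probe above sends ZooKeeper's four-letter `dump` command over `nc` and greps for the broker registration znode. The post-processing can be factored as a stdin filter (the function name `broker_znodes` is illustrative, not part of the scripts):

```shell
# Filter a ZooKeeper "dump" response (stdin) down to broker znodes,
# trimming leading whitespace the same way the awk step in the log does.
broker_znodes() {
  grep brokers | awk '{$1=$1;print}'
}

# Full probe as traced in the log (requires a live ZooKeeper):
#   echo dump | nc localhost 2181 | broker_znodes | grep -F -w /brokers/ids/1
```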
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // container
[Pipeline] sh
+ rm -rf /tmp/tidb_cdc_test
+ mkdir -p /tmp/tidb_cdc_test
+ chmod +x ./tests/integration_tests/run_group.sh
+ ./tests/integration_tests/run_group.sh kafka G10
Run cases: default_value simple cdc_server_tips event_filter sql_mode
PROW_JOB_ID=e42b4292-6bbd-42dc-a985-065b6f50e601
JENKINS_NODE_COOKIE=a0cc8b01-e8bf-4acd-bbdd-1468ee896d97
BUILD_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/
GOLANG_VERSION=1.21.0
HOSTNAME=pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h
HUDSON_SERVER_COOKIE=83ef27fe9acccc92
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT=tcp://10.233.0.1:443
TERM=xterm
STAGE_NAME=Test
BUILD_TAG=jenkins-pingcap-tiflow-pull_cdc_integration_kafka_test-2019
KUBERNETES_SERVICE_PORT=443
GIT_PREVIOUS_COMMIT=73d1a8209bddec16d8d58403efcd7a20d12cf867
JOB_SPEC={"type":"batch","job":"pingcap/tiflow/pull_cdc_integration_kafka_test","buildid":"1791354743763767300","prowjobid":"e42b4292-6bbd-42dc-a985-065b6f50e601","refs":{"org":"pingcap","repo":"tiflow","base_ref":"master","base_sha":"e75248dd6dc9c2bfb61941a33ef5f9e5f47dfd41","pulls":[{"number":11048,"author":"wk989898","sha":"fb8c1243233c8682a868e1405e7f59b995ccd641","title":"ticdc: fix detecting kafka version "},{"number":11099,"author":"hicqu","sha":"d2621b6f95d22360b304264b4bdeddf18d1832c5","title":"cdc: adjust sorter options to avoid Seek CPU usage exploding"}]}}
KUBERNETES_SERVICE_HOST=10.233.0.1
WORKSPACE=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test
JOB_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/
RUN_CHANGES_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/display/redirect?page=changes
RUN_ARTIFACTS_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/display/redirect?page=artifacts
FILE_SERVER_URL=http://fileserver.pingcap.net
JENKINS_HOME=/var/jenkins_home
GIT_COMMIT=73d1a8209bddec16d8d58403efcd7a20d12cf867
PATH=/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/_utils:/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/../bin:/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/../../bin:/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/../../scripts/bin
RUN_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/display/redirect
GOPROXY=http://goproxy.apps.svc,https://proxy.golang.org,direct
POD_CONTAINER=golang
PWD=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow
HUDSON_URL=https://do.pingcap.net/jenkins/
TICDC_COVERALLS_TOKEN=****
JOB_NAME=pingcap/tiflow/pull_cdc_integration_kafka_test
TZ=Asia/Shanghai
BUILD_DISPLAY_NAME=#2019
TEST_GROUP=G10
JENKINS_URL=https://do.pingcap.net/jenkins/
BUILD_ID=1791354743763767300
TICDC_CODECOV_TOKEN=****
GOLANG_DOWNLOAD_SHA256=d0398903a16ba2232b389fb31032ddf57cac34efda306a0eebac34f0965a0742
JOB_BASE_NAME=pull_cdc_integration_kafka_test
GIT_PREVIOUS_SUCCESSFUL_COMMIT=73d1a8209bddec16d8d58403efcd7a20d12cf867
RUN_TESTS_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/display/redirect?page=tests
SHLVL=5
HOME=/home/jenkins
POD_LABEL=pingcap_tiflow_pull_cdc_integration_kafka_test_2019-3p6pn
GOROOT=/usr/local/go
GIT_BRANCH=origin/main
KUBERNETES_PORT_443_TCP_PROTO=tcp
TINI_VERSION=v0.19.0
CI=true
KUBERNETES_SERVICE_PORT_HTTPS=443
WORKSPACE_TMP=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test@tmp
EXECUTOR_NUMBER=0
JENKINS_SERVER_COOKIE=durable-8f1d433d6527e85b4c28b432b07fa0c56dd3090c9176381d29a5b4531676247a
NODE_LABELS=pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h pingcap_tiflow_pull_cdc_integration_kafka_test_2019-3p6pn
GIT_URL=https://github.com/PingCAP-QE/ci.git
HUDSON_HOME=/var/jenkins_home
CLASSPATH=
NODE_NAME=pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h
GOPATH=/go
JOB_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/display/redirect
BUILD_NUMBER=2019
KUBERNETES_PORT_443_TCP_ADDR=10.233.0.1
KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
GOLANG_DOWNLOAD_URL=https://dl.google.com/go/go1.21.0.linux-amd64.tar.gz
_=/usr/bin/env
find: '/tmp/tidb_cdc_test/*/*': No such file or directory
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/default_value/run.sh using Sink-Type: kafka... <<=================
The 1 times to try to start tidb cluster...
start tidb cluster in /tmp/tidb_cdc_test/default_value
Starting Upstream PD...
Release Version: v8.2.0-alpha-34-g644e904ff
Edition: Community
Git Commit Hash: 644e904ffb32c98c620ece220ca6363f58e4af23
Git Branch: master
UTC Build Time:  2024-05-17 04:39:03
Starting Downstream PD...
Release Version: v8.2.0-alpha-34-g644e904ff
Edition: Community
Git Commit Hash: 644e904ffb32c98c620ece220ca6363f58e4af23
Git Branch: master
UTC Build Time:  2024-05-17 04:39:03
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   8.2.0-alpha
Edition:           Community
Git Commit Hash:   3c2cbcf24e326ea917736538b05605c8143b6e1d
Git Commit Branch: master
UTC Build Time:    2024-05-16 09:05:19
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   8.2.0-alpha
Edition:           Community
Git Commit Hash:   3c2cbcf24e326ea917736538b05605c8143b6e1d
Git Commit Branch: master
UTC Build Time:    2024-05-16 09:05:19
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v8.2.0-alpha-182-g1c4a9c6434
Edition: Community
Git Commit Hash: 1c4a9c643406bb8fbcbbdf039ca167d5373dd134
Git Branch: master
UTC Build Time: 2024-05-17 05:17:02
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v8.2.0-alpha-182-g1c4a9c6434
Edition: Community
Git Commit Hash: 1c4a9c643406bb8fbcbbdf039ca167d5373dd134
Git Branch: master
UTC Build Time: 2024-05-17 05:17:02
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	198	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e15d82f2c0012	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h, pid:1430, start at 2024-05-17 15:30:57.120129508 +0800 CST m=+5.211738888	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-15:32:57.126 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-15:30:57.099 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-15:20:57.099 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	198	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e15d82f240016	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-kafka-test-2019-3p6pn-b0k3h, pid:1515, start at 2024-05-17 15:30:57.138863346 +0800 CST m=+5.174231848	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-15:32:57.145 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-15:30:57.146 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-15:20:57.146 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v8.2.0-alpha-28-g961f9dded
Edition:         Community
Git Commit Hash: 961f9dded38b814bb41b33c691ab58f3f090a0d9
Git Branch:      HEAD
UTC Build Time:  2024-05-15 03:49:49
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO
Compiler:        clang++ 13.0.0

Raft Proxy
Git Commit Hash:   7dc50b4eb06124e31f03adb06c20ff7ab61c5f79
Git Commit Branch: HEAD
UTC Build Time:    2024-05-15 03:53:57
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   external-jemalloc portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/default_value/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/default_value/tiflash/log/error.log
arg matches is ArgMatches { args: {"engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/default_value/tiflash-proxy.toml"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v8.2.0-alpha-28-g961f9dded"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/default_value/tiflash/log/proxy.log"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["961f9dded38b814bb41b33c691ab58f3f090a0d9"] }, "data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/default_value/tiflash/db/proxy"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
+ pd_host=127.0.0.1
+ pd_port=2379
+ is_tls=false
+ '[' false == true ']'
++ run_cdc_cli tso query --pd=http://127.0.0.1:2379
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.default_value.cli.2920.out cli tso query --pd=http://127.0.0.1:2379
+ set +x
+ tso='449821032460845058
PASS
coverage: 1.8% of statements in github.com/pingcap/tiflow/...'
+ echo 449821032460845058 PASS coverage: 1.8% of statements in github.com/pingcap/tiflow/...
+ awk -F ' ' '{print $1}'
+ set +x
[Fri May 17 15:31:04 CST 2024] <<<<<< START cdc server in default_value case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ GO_FAILPOINTS=
+ '[' -z '' ']'
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.default_value.29572959.out server --log-file /tmp/tidb_cdc_test/default_value/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/default_value/cdc_data --cluster-id default
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL'
+ [[ no != \n\o ]]
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ echo ''
+ grep -q 'failed to get info:'
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
* Server auth using Basic with user 'ticdc'
> GET /debug/info HTTP/1.1
> Authorization: Basic dGljZGM6dGljZGNfc2VjcmV0
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 17 May 2024 07:31:07 GMT
< Content-Length: 815
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/6309c749-6a0a-462b-8d34-7280f3ae9f78
	{"id":"6309c749-6a0a-462b-8d34-7280f3ae9f78","address":"127.0.0.1:8300","version":"v8.2.0-alpha-64-g930a58d61","git-hash":"930a58d6174d069a6bdcc46f935bffb2fddccfd4","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/bin/cdc.test","start-timestamp":1715931064}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f8575e389d5
	6309c749-6a0a-462b-8d34-7280f3ae9f78

/tidb/cdc/default/default/upstream/7369867736192941355
	{"id":7369867736192941355,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/6309c749-6a0a-462b-8d34-7280f3ae9f78
	{"id":"6309c749-6a0a-462b-8d34-7280f3ae9f78","address":"127.0.0.1:8300","version":"v8.2.0-alpha-64-g930a58d61","git-hash":"930a58d6174d069a6bdcc46f935bffb2fddccfd4","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/bin/cdc.test","start-timestamp":1715931064}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f8575e389d5
	6309c749-6a0a-462b-8d34-7280f3ae9f78

/tidb/cdc/default/default/upstream/7369867736192941355
	{"id":7369867736192941355,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/6309c749-6a0a-462b-8d34-7280f3ae9f78
	{"id":"6309c749-6a0a-462b-8d34-7280f3ae9f78","address":"127.0.0.1:8300","version":"v8.2.0-alpha-64-g930a58d61","git-hash":"930a58d6174d069a6bdcc46f935bffb2fddccfd4","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/bin/cdc.test","start-timestamp":1715931064}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f8575e389d5
	6309c749-6a0a-462b-8d34-7280f3ae9f78

/tidb/cdc/default/default/upstream/7369867736192941355
	{"id":7369867736192941355,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.default_value.cli.3012.out cli changefeed create --start-ts=449821032460845058 '--sink-uri=kafka://127.0.0.1:9092/ticdc-default-value-test-23870?protocol=open-protocol&partition-num=4&kafka-version=2.4.1&max-message-bytes=10485760'
Create changefeed successfully!
ID: efaacc51-edc7-43b0-a833-5bc930fc7586
Info: {"upstream_id":7369867736192941355,"namespace":"default","id":"efaacc51-edc7-43b0-a833-5bc930fc7586","sink_uri":"kafka://127.0.0.1:9092/ticdc-default-value-test-23870?protocol=open-protocol\u0026partition-num=4\u0026kafka-version=2.4.1\u0026max-message-bytes=10485760","create_time":"2024-05-17T15:31:08.590503158+08:00","start_ts":449821032460845058,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"enable_table_monitor":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"open-protocol","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64","output_old_value":false,"output_handle_key":false},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"debezium_disable_schema":false,"debezium":{"output_old_value":true},"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v8.2.0-alpha-64-g930a58d61","resolved_ts":449821032460845058,"checkpoint_ts":449821032460845058,"checkpoint_time":"2024-05-17 15:31:02.549"}
PASS
coverage: 2.4% of statements in github.com/pingcap/tiflow/...
+ set +x
[Fri May 17 15:31:10 CST 2024] <<<<<< START kafka consumer in default_value case >>>>>>
go: downloading go.uber.org/zap v1.27.0
go: downloading github.com/google/uuid v1.6.0
go: downloading github.com/pingcap/errors v0.11.5-0.20240318064555-6bd07397691f
go: downloading github.com/pingcap/log v1.1.1-0.20240314023424-862ccc32f18d
go: downloading github.com/BurntSushi/toml v1.3.2
go: downloading github.com/pingcap/tidb-tools v0.0.0-20240508055508-ee5de104059e
go: downloading github.com/pingcap/tidb v1.1.0-beta.0.20240428083427-66ba419636ce
go: downloading golang.org/x/time v0.5.0
go: downloading github.com/pingcap/failpoint v0.0.0-20220801062533-2eaa32854a6c
go: downloading golang.org/x/sync v0.7.0
go: downloading google.golang.org/grpc v1.62.1
go: downloading github.com/go-sql-driver/mysql v1.7.1
go: downloading gopkg.in/natefinch/lumberjack.v2 v2.2.1
go: downloading go.uber.org/atomic v1.11.0
go: downloading go.uber.org/multierr v1.11.0
go: downloading github.com/pingcap/tidb/pkg/parser v0.0.0-20240428083427-66ba419636ce
go: downloading github.com/coreos/go-semver v0.3.1
go: downloading google.golang.org/genproto/googleapis/rpc v0.0.0-20240401170217-c3f982113cda
go: downloading google.golang.org/protobuf v1.33.0
go: downloading golang.org/x/net v0.24.0
go: downloading golang.org/x/sys v0.19.0
go: downloading github.com/golang/protobuf v1.5.4
go: downloading google.golang.org/genproto v0.0.0-20240401170217-c3f982113cda
go: downloading golang.org/x/text v0.14.0
go: downloading github.com/cznic/mathutil v0.0.0-20181122101859-297441e03548
go: downloading golang.org/x/exp v0.0.0-20240409090435-93d18d7e34b8
go: downloading github.com/opentracing/opentracing-go v1.2.0
go: downloading github.com/grpc-ecosystem/go-grpc-middleware v1.4.0
go: downloading github.com/pingcap/tipb v0.0.0-20240318032315-55a7867ddd50
go: downloading go.etcd.io/etcd/client/v3 v3.5.12
go: downloading github.com/pingcap/kvproto v0.0.0-20240227073058-929ab83f9754
go: downloading github.com/ngaut/pools v0.0.0-20180318154953-b7bc8c42aac7
go: downloading github.com/tiancaiamao/gp v0.0.0-20221230034425-4025bc8a4d4a
go: downloading github.com/tikv/pd/client v0.0.0-20240322051414-fb9e2d561b6e
go: downloading github.com/tidwall/btree v1.7.0
go: downloading github.com/pingcap/sysutil v1.0.1-0.20240311050922-ae81ee01f3a5
go: downloading github.com/gorilla/mux v1.8.0
go: downloading github.com/uber/jaeger-client-go v2.30.0+incompatible
go: downloading github.com/tikv/client-go/v2 v2.0.8-0.20240424052342-0229f4077f0c
go: downloading github.com/danjacques/gofslock v0.0.0-20240212154529-d899e02bfe22
go: downloading github.com/cockroachdb/errors v1.11.1
go: downloading github.com/shirou/gopsutil/v3 v3.24.2
go: downloading github.com/coocood/freecache v1.2.1
go: downloading github.com/jellydator/ttlcache/v3 v3.0.1
go: downloading github.com/prometheus/client_golang v1.19.0
go: downloading github.com/google/btree v1.1.2
go: downloading github.com/influxdata/tdigest v0.0.1
go: downloading github.com/docker/go-units v0.5.0
go: downloading github.com/golang/snappy v0.0.4
go: downloading github.com/spf13/pflag v1.0.5
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/stretchr/testify v1.9.0
go: downloading github.com/opentracing/basictracer-go v1.1.0
go: downloading github.com/twmb/murmur3 v1.1.6
go: downloading github.com/dolthub/swiss v0.2.1
go: downloading github.com/gogo/protobuf v1.3.2
go: downloading golang.org/x/tools v0.20.0
go: downloading github.com/yangkeao/ldap/v3 v3.4.5-0.20230421065457-369a3bab1117
go: downloading github.com/otiai10/copy v1.2.0
go: downloading github.com/prometheus/client_model v0.6.1
go: downloading go.etcd.io/etcd/api/v3 v3.5.12
go: downloading cloud.google.com/go/storage v1.39.1
go: downloading github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.1
go: downloading github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.5.1
go: downloading github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0
go: downloading github.com/tikv/pd v1.1.0-beta.0.20240407022249-7179657d129b
go: downloading github.com/aliyun/alibaba-cloud-sdk-go v1.61.1581
go: downloading github.com/aws/aws-sdk-go v1.50.0
go: downloading github.com/go-resty/resty/v2 v2.11.0
go: downloading github.com/klauspost/compress v1.17.8
go: downloading github.com/ks3sdklib/aws-sdk-go v1.2.9
go: downloading cloud.google.com/go v0.112.2
go: downloading golang.org/x/oauth2 v0.18.0
go: downloading google.golang.org/api v0.170.0
go: downloading github.com/ngaut/sync2 v0.0.0-20141008032647-7a24ed77b2ef
go: downloading github.com/cespare/xxhash/v2 v2.3.0
go: downloading go.uber.org/mock v0.4.0
go: downloading github.com/cockroachdb/pebble v1.1.0
go: downloading github.com/jfcg/sorty/v2 v2.1.0
go: downloading github.com/carlmjohnson/flagext v0.21.0
go: downloading github.com/dgraph-io/ristretto v0.1.1
go: downloading github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2
go: downloading github.com/dolthub/maphash v0.1.0
go: downloading github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec
go: downloading github.com/go-asn1-ber/asn1-ber v1.5.4
go: downloading github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
go: downloading github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358
go: downloading github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading go.etcd.io/etcd/client/pkg/v3 v3.5.12
go: downloading github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.1
go: downloading github.com/AzureAD/microsoft-authentication-library-for-go v1.2.1
go: downloading golang.org/x/crypto v0.22.0
go: downloading github.com/beorn7/perks v1.0.1
go: downloading github.com/prometheus/common v0.53.0
go: downloading github.com/prometheus/procfs v0.13.0
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/uber/jaeger-lib v2.4.1+incompatible
go: downloading github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b
go: downloading github.com/cockroachdb/redact v1.1.5
go: downloading github.com/getsentry/sentry-go v0.27.0
go: downloading github.com/joho/sqltocsv v0.0.0-20210428211105-a6d6801d59df
go: downloading github.com/jedib0t/go-pretty/v6 v6.2.2
go: downloading github.com/tklauser/go-sysconf v0.3.12
go: downloading github.com/lestrrat-go/jwx/v2 v2.0.21
go: downloading github.com/cloudfoundry/gosigar v1.3.6
go: downloading github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13
go: downloading github.com/spkg/bom v1.0.0
go: downloading github.com/xitongsys/parquet-go v1.6.0
go: downloading github.com/jfcg/sixb v1.3.8
go: downloading github.com/google/pprof v0.0.0-20240117000934-35fc243c5815
go: downloading github.com/pingcap/badger v1.5.1-0.20230103063557-828f39b09b6d
go: downloading github.com/wangjohn/quickselect v0.0.0-20161129230411-ed8402a42d5f
go: downloading google.golang.org/genproto/googleapis/api v0.0.0-20240401170217-c3f982113cda
go: downloading github.com/kr/pretty v0.3.1
go: downloading github.com/pingcap/goleveldb v0.0.0-20191226122134-f82aafb29989
go: downloading cloud.google.com/go/compute/metadata v0.2.3
go: downloading github.com/coreos/go-systemd/v22 v22.5.0
go: downloading github.com/cheggaaa/pb/v3 v3.0.8
go: downloading github.com/robfig/cron/v3 v3.0.1
go: downloading cloud.google.com/go/compute v1.25.1
go: downloading cloud.google.com/go/iam v1.1.7
go: downloading github.com/robfig/cron v1.2.0
go: downloading github.com/googleapis/gax-go/v2 v2.12.3
go: downloading github.com/kylelemons/godebug v1.1.0
go: downloading github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c
go: downloading github.com/tklauser/numcpus v0.6.1
go: downloading github.com/kr/text v0.2.0
go: downloading github.com/rogpeppe/go-internal v1.12.0
go: downloading github.com/mattn/go-runewidth v0.0.15
go: downloading github.com/apache/thrift v0.16.0
go: downloading github.com/VividCortex/ewma v1.2.0
go: downloading github.com/fatih/color v1.16.0
go: downloading github.com/mattn/go-colorable v0.1.13
go: downloading github.com/mattn/go-isatty v0.0.20
go: downloading go.opencensus.io v0.23.1-0.20220331163232-052120675fac
go: downloading go.opentelemetry.io/otel v1.24.0
go: downloading go.opentelemetry.io/otel/trace v1.24.0
go: downloading github.com/dustin/go-humanize v1.0.1
go: downloading github.com/golang/glog v1.2.0
go: downloading github.com/golang-jwt/jwt/v5 v5.2.0
go: downloading github.com/lestrrat-go/blackmagic v1.0.2
go: downloading github.com/lestrrat-go/httprc v1.0.5
go: downloading github.com/lestrrat-go/iter v1.0.2
go: downloading github.com/lestrrat-go/option v1.0.1
go: downloading github.com/rivo/uniseg v0.4.7
go: downloading github.com/golang-jwt/jwt v3.2.2+incompatible
go: downloading github.com/lestrrat-go/httpcc v1.0.1
go: downloading github.com/ncw/directio v1.0.5
go: downloading github.com/coocood/rtutil v0.0.0-20190304133409-c84515f646f2
go: downloading github.com/coocood/bbloom v0.0.0-20190830030839-58deb6228d64
go: downloading github.com/klauspost/cpuid v1.3.1
go: downloading github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da
go: downloading github.com/go-logr/logr v1.4.1
go: downloading go.opentelemetry.io/otel/metric v1.24.0
go: downloading github.com/go-logr/stdr v1.2.2
go: downloading github.com/DataDog/zstd v1.5.5
go: downloading github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06
go: downloading github.com/google/s2a-go v0.1.7
go: downloading go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0
go: downloading github.com/googleapis/enterprise-certificate-proxy v0.3.2
go: downloading go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0
go: downloading github.com/felixge/httpsnoop v1.0.4
go: downloading github.com/jmespath/go-jmespath v0.4.0
go: downloading github.com/modern-go/reflect2 v1.0.2
go: downloading github.com/json-iterator/go v1.1.12
go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
Agent pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44 is provisioned from template pingcap_tiflow_pull_cdc_integration_kafka_test_2019-x4xxg-7rgw1
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://jenkins.apps.svc.cluster.local:8080/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/"
    runUrl: "job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/"
  labels:
    jenkins/jenkins-jenkins-agent: "true"
    jenkins/label-digest: "2841d2b7efc22b8f9e8beea85c9bb16876ab1454"
    jenkins/label: "pingcap_tiflow_pull_cdc_integration_kafka_test_2019-x4xxg"
  name: "pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44"
  namespace: "jenkins-tiflow"
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "kubernetes.io/arch"
            operator: "In"
            values:
            - "amd64"
  containers:
  - image: "wurstmeister/zookeeper"
    imagePullPolicy: "IfNotPresent"
    name: "zookeeper"
    resources:
      limits:
        cpu: "2000m"
        memory: "4Gi"
      requests:
        cpu: "2000m"
        memory: "4Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - args:
    - "cat"
    image: "hub.pingcap.net/jenkins/golang-tini:1.21"
    imagePullPolicy: "Always"
    name: "golang"
    resources:
      limits:
        cpu: "12"
        memory: "32Gi"
      requests:
        cpu: "12"
        memory: "32Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "KAFKA_CREATE_TOPICS"
      value: "big-message-test:1:1"
    - name: "KAFKA_BROKER_ID"
      value: "1"
    - name: "KAFKA_SSL_KEYSTORE_PASSWORD"
      value: "test1234"
    - name: "KAFKA_ZOOKEEPER_CONNECT"
      value: "localhost:2181"
    - name: "KAFKA_MESSAGE_MAX_BYTES"
      value: "11534336"
    - name: "KAFKA_REPLICA_FETCH_MAX_BYTES"
      value: "11534336"
    - name: "KAFKA_ADVERTISED_LISTENERS"
      value: "SSL://127.0.0.1:9093,PLAINTEXT://127.0.0.1:9092"
    - name: "ZK"
      value: "zk"
    - name: "KAFKA_SSL_KEYSTORE_LOCATION"
      value: "/tmp/kafka.server.keystore.jks"
    - name: "KAFKA_SSL_KEY_PASSWORD"
      value: "test1234"
    - name: "KAFKA_SSL_TRUSTSTORE_PASSWORD"
      value: "test1234"
    - name: "KAFKA_LISTENERS"
      value: "SSL://127.0.0.1:9093,PLAINTEXT://127.0.0.1:9092"
    - name: "KAFKA_SSL_TRUSTSTORE_LOCATION"
      value: "/tmp/kafka.server.truststore.jks"
    - name: "RACK_COMMAND"
      value: "curl -sfL https://github.com/pingcap/tiflow/raw/6e62afcfecc4e3965d8818784327d4bf2600d9fa/tests/_certificates/kafka.server.keystore.jks\
        \ -o /tmp/kafka.server.keystore.jks && curl -sfL https://github.com/pingcap/tiflow/raw/6e62afcfecc4e3965d8818784327d4bf2600d9fa/tests/_certificates/kafka.server.truststore.jks\
        \ -o /tmp/kafka.server.truststore.jks"
    image: "wurstmeister/kafka:2.12-2.4.1"
    imagePullPolicy: "IfNotPresent"
    name: "kafka"
    resources:
      limits:
        cpu: "4000m"
        memory: "6Gi"
      requests:
        cpu: "4000m"
        memory: "6Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "KAFKA_SERVER"
      value: "127.0.0.1:9092"
    - name: "ZOOKEEPER_SERVER"
      value: "127.0.0.1:2181"
    - name: "DOWNSTREAM_DB_HOST"
      value: "127.0.0.1"
    - name: "USE_FLAT_MESSAGE"
      value: "true"
    - name: "DOWNSTREAM_DB_PORT"
      value: "3306"
    - name: "DB_NAME"
      value: "test"
    image: "rustinliu/ticdc-canal-json-adapter:latest"
    imagePullPolicy: "IfNotPresent"
    name: "canal-adapter"
    resources:
      requests:
        cpu: "200m"
        memory: "4Gi"
    tty: true
    volumeMounts:
    - mountPath: "/tmp"
      name: "volume-0"
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - image: "hub.pingcap.net/jenkins/network-multitool"
    name: "net-tool"
    resources:
      limits:
        memory: "128Mi"
        cpu: "100m"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - image: "hub.pingcap.net/jenkins/python3-requests:latest"
    name: "report"
    resources:
      requests:
        cpu: "200m"
        memory: "4Gi"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "MYSQL_ROOT_PASSWORD"
      value: ""
    - name: "MYSQL_USER"
      value: "mysqluser"
    - name: "MYSQL_PASSWORD"
      value: "mysqlpw"
    - name: "MYSQL_ALLOW_EMPTY_PASSWORD"
      value: "yes"
    - name: "MYSQL_TCP_PORT"
      value: "3310"
    image: "quay.io/debezium/example-mysql:2.4"
    imagePullPolicy: "IfNotPresent"
    name: "mysql"
    resources:
      requests:
        cpu: "200m"
        memory: "4Gi"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "BOOTSTRAP_SERVERS"
      value: "127.0.0.1:9092"
    - name: "GROUP_ID"
      value: "1"
    - name: "CONFIG_STORAGE_TOPIC"
      value: "my_connect_configs"
    - name: "OFFSET_STORAGE_TOPIC"
      value: "my_connect_offsets"
    - name: "STATUS_STORAGE_TOPIC"
      value: "my_connect_statuses"
    image: "quay.io/debezium/connect:2.4"
    name: "connect"
    resources:
      requests:
        cpu: "200m"
        memory: "4Gi"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-agent.apps.svc.cluster.local:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44"
    - name: "JENKINS_NAME"
      value: "pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins.apps.svc.cluster.local:8080/jenkins/"
    image: "jenkins/inbound-agent:3206.vb_15dcf73f6a_9-2"
    name: "jnlp"
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  restartPolicy: "Never"
  securityContext:
    fsGroup: 1000
  volumes:
  - emptyDir: {}
    name: "volume-0"
  - emptyDir:
      medium: ""
    name: "workspace-volume"

Running on pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44 in /home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test
[Pipeline] {
[Pipeline] checkout
The recommended git tool is: git
No credentials specified
Warning: JENKINS-30600: special launcher org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1@12696d96; decorates RemoteLauncher[hudson.remoting.Channel@3ea86317:JNLP4-connect connection from 10.233.100.65/10.233.100.65:41036] will be ignored (a typical symptom is the Git executable not being run inside a designated container)
Cloning the remote Git repository
Using shallow clone with depth 1
Cloning repository https://github.com/PingCAP-QE/ci.git
 > git init /home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test # timeout=10
Fetching upstream changes from https://github.com/PingCAP-QE/ci.git
 > git --version # timeout=10
 > git --version # 'git version 2.39.2'
 > git fetch --tags --force --progress --depth=1 -- https://github.com/PingCAP-QE/ci.git +refs/heads/*:refs/remotes/origin/* # timeout=5
Avoid second fetch
Checking out Revision 73d1a8209bddec16d8d58403efcd7a20d12cf867 (origin/main)
 > git config remote.origin.url https://github.com/PingCAP-QE/ci.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse origin/main^{commit} # timeout=10
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 73d1a8209bddec16d8d58403efcd7a20d12cf867 # timeout=10
Commit message: "update utf go build image (#2965)"
[Pipeline] withEnv
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] timeout
Timeout set to expire in 45 min
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $TICDC_COVERALLS_TOKEN or $TICDC_CODECOV_TOKEN
[Pipeline] {
[Pipeline] dir
Running in /home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow
[Pipeline] {
[Pipeline] cache
Cache restored successfully (ws/jenkins-pingcap-tiflow-pull_cdc_integration_kafka_test-2019/tiflow-cdc)
3672599040 bytes in 12.11 secs (303291340 bytes/sec)
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] timeout
Timeout set to expire in 6 min 0 sec
[Pipeline] {
[Pipeline] sh
+ echo Waiting for zookeeper to be ready...
Waiting for zookeeper to be ready...
+ nc -z localhost 2181
+ echo Waiting for kafka to be ready...
Waiting for kafka to be ready...
+ nc -z localhost 9092
+ echo Waiting for kafka-broker to be ready...
Waiting for kafka-broker to be ready...
+ echo dump
+ nc localhost 2181
+ grep brokers
+ awk {$1=$1;print}
+ grep -F -w /brokers/ids/1
/brokers/ids/1
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // container
[Pipeline] sh
+ rm -rf /tmp/tidb_cdc_test
+ mkdir -p /tmp/tidb_cdc_test
+ chmod +x ./tests/integration_tests/run_group.sh
+ ./tests/integration_tests/run_group.sh kafka G09
Run cases: gc_safepoint changefeed_pause_resume cli_with_auth savepoint synced_status
PROW_JOB_ID=e42b4292-6bbd-42dc-a985-065b6f50e601
JENKINS_NODE_COOKIE=baae9655-0f80-40d7-b9b9-38d4ff64b8ba
BUILD_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/
GOLANG_VERSION=1.21.0
HOSTNAME=pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44
HUDSON_SERVER_COOKIE=83ef27fe9acccc92
KUBERNETES_PORT=tcp://10.233.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
TERM=xterm
STAGE_NAME=Test
BUILD_TAG=jenkins-pingcap-tiflow-pull_cdc_integration_kafka_test-2019
KUBERNETES_SERVICE_PORT=443
GIT_PREVIOUS_COMMIT=73d1a8209bddec16d8d58403efcd7a20d12cf867
JOB_SPEC={"type":"batch","job":"pingcap/tiflow/pull_cdc_integration_kafka_test","buildid":"1791354743763767300","prowjobid":"e42b4292-6bbd-42dc-a985-065b6f50e601","refs":{"org":"pingcap","repo":"tiflow","base_ref":"master","base_sha":"e75248dd6dc9c2bfb61941a33ef5f9e5f47dfd41","pulls":[{"number":11048,"author":"wk989898","sha":"fb8c1243233c8682a868e1405e7f59b995ccd641","title":"ticdc: fix detecting kafka version "},{"number":11099,"author":"hicqu","sha":"d2621b6f95d22360b304264b4bdeddf18d1832c5","title":"cdc: adjust sorter options to avoid Seek CPU usage exploding"}]}}
KUBERNETES_SERVICE_HOST=10.233.0.1
WORKSPACE=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test
JOB_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/
RUN_CHANGES_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/display/redirect?page=changes
RUN_ARTIFACTS_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/display/redirect?page=artifacts
FILE_SERVER_URL=http://fileserver.pingcap.net
JENKINS_HOME=/var/jenkins_home
GIT_COMMIT=73d1a8209bddec16d8d58403efcd7a20d12cf867
PATH=/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/_utils:/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/../bin:/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/../../bin:/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/../../scripts/bin
RUN_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/display/redirect
GOPROXY=http://goproxy.apps.svc,https://proxy.golang.org,direct
POD_CONTAINER=golang
PWD=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow
HUDSON_URL=https://do.pingcap.net/jenkins/
TICDC_COVERALLS_TOKEN=****
JOB_NAME=pingcap/tiflow/pull_cdc_integration_kafka_test
TZ=Asia/Shanghai
BUILD_DISPLAY_NAME=#2019
TEST_GROUP=G09
JENKINS_URL=https://do.pingcap.net/jenkins/
BUILD_ID=1791354743763767300
TICDC_CODECOV_TOKEN=****
GOLANG_DOWNLOAD_SHA256=d0398903a16ba2232b389fb31032ddf57cac34efda306a0eebac34f0965a0742
JOB_BASE_NAME=pull_cdc_integration_kafka_test
GIT_PREVIOUS_SUCCESSFUL_COMMIT=73d1a8209bddec16d8d58403efcd7a20d12cf867
RUN_TESTS_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/2019/display/redirect?page=tests
SHLVL=5
HOME=/home/jenkins
POD_LABEL=pingcap_tiflow_pull_cdc_integration_kafka_test_2019-x4xxg
GOROOT=/usr/local/go
GIT_BRANCH=origin/main
KUBERNETES_PORT_443_TCP_PROTO=tcp
TINI_VERSION=v0.19.0
CI=true
KUBERNETES_SERVICE_PORT_HTTPS=443
WORKSPACE_TMP=/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test@tmp
EXECUTOR_NUMBER=0
JENKINS_SERVER_COOKIE=durable-8f1d433d6527e85b4c28b432b07fa0c56dd3090c9176381d29a5b4531676247a
NODE_LABELS=pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44 pingcap_tiflow_pull_cdc_integration_kafka_test_2019-x4xxg
GIT_URL=https://github.com/PingCAP-QE/ci.git
HUDSON_HOME=/var/jenkins_home
CLASSPATH=
NODE_NAME=pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44
GOPATH=/go
JOB_DISPLAY_URL=https://do.pingcap.net/jenkins/job/pingcap/job/tiflow/job/pull_cdc_integration_kafka_test/display/redirect
BUILD_NUMBER=2019
KUBERNETES_PORT_443_TCP_ADDR=10.233.0.1
KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
GOLANG_DOWNLOAD_URL=https://dl.google.com/go/go1.21.0.linux-amd64.tar.gz
_=/usr/bin/env
find: '/tmp/tidb_cdc_test/*/*': No such file or directory
=================>> Running test /home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/tests/integration_tests/gc_safepoint/run.sh using Sink-Type: kafka... <<=================
The 1 times to try to start tidb cluster...
start tidb cluster in /tmp/tidb_cdc_test/gc_safepoint
Starting Upstream PD...
Release Version: v8.2.0-alpha-34-g644e904ff
Edition: Community
Git Commit Hash: 644e904ffb32c98c620ece220ca6363f58e4af23
Git Branch: master
UTC Build Time:  2024-05-17 04:39:03
Starting Downstream PD...
Release Version: v8.2.0-alpha-34-g644e904ff
Edition: Community
Git Commit Hash: 644e904ffb32c98c620ece220ca6363f58e4af23
Git Branch: master
UTC Build Time:  2024-05-17 04:39:03
Verifying upstream PD is started...
Verifying downstream PD is started...
Starting Upstream TiKV...
TiKV 
Release Version:   8.2.0-alpha
Edition:           Community
Git Commit Hash:   3c2cbcf24e326ea917736538b05605c8143b6e1d
Git Commit Branch: master
UTC Build Time:    2024-05-16 09:05:19
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Downstream TiKV...
TiKV 
Release Version:   8.2.0-alpha
Edition:           Community
Git Commit Hash:   3c2cbcf24e326ea917736538b05605c8143b6e1d
Git Commit Branch: master
UTC Build Time:    2024-05-16 09:05:19
Rust Version:      rustc 1.77.0-nightly (89e2160c4 2023-12-27)
Enable Features:   memory-engine pprof-fp jemalloc mem-profiling portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine trace-async-tasks openssl-vendored
Profile:           dist_release
Starting Upstream TiDB...
Release Version: v8.2.0-alpha-182-g1c4a9c6434
Edition: Community
Git Commit Hash: 1c4a9c643406bb8fbcbbdf039ca167d5373dd134
Git Branch: master
UTC Build Time: 2024-05-17 05:17:02
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Starting Downstream TiDB...
Release Version: v8.2.0-alpha-182-g1c4a9c6434
Edition: Community
Git Commit Hash: 1c4a9c643406bb8fbcbbdf039ca167d5373dd134
Git Branch: master
UTC Build Time: 2024-05-17 05:17:02
GoVersion: go1.21.6
Race Enabled: false
Check Table Before Drop: false
Store: unistore
Verifying Upstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	198	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e15dd3e48000c	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44, pid:1326, start at 2024-05-17 15:32:20.018335941 +0800 CST m=+7.633676640	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-15:34:20.034 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-15:32:20.037 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-15:22:20.037 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	198	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e15dd3e48000c	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44, pid:1326, start at 2024-05-17 15:32:20.018335941 +0800 CST m=+7.633676640	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-15:34:20.034 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-15:32:20.037 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-15:22:20.037 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Verifying Downstream TiDB is started...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)
VARIABLE_NAME	VARIABLE_VALUE	COMMENT
bootstrapped	True	Bootstrap flag. Do not delete.
tidb_server_version	198	Bootstrap version. Do not delete.
system_tz	Asia/Shanghai	TiDB Global System Timezone.
new_collation_enabled	True	If the new collations are enabled. Do not edit it.
ddl_table_version	3	DDL Table Version. Do not delete.
tikv_gc_leader_uuid	63e15dd64e40013	Current GC worker leader UUID. (DO NOT EDIT)
tikv_gc_leader_desc	host:pingcap-tiflow-pull-cdc-integration-kafka-test-2019-x4xxg-9vz44, pid:1372, start at 2024-05-17 15:32:22.502784022 +0800 CST m=+10.033557187	Host name and pid of current GC leader. (DO NOT EDIT)
tikv_gc_leader_lease	20240517-15:34:22.524 +0800	Current GC worker leader lease. (DO NOT EDIT)
tikv_gc_auto_concurrency	true	Let TiDB pick the concurrency automatically. If set false, tikv_gc_concurrency will be used
tikv_gc_enable	true	Current GC enable status
tikv_gc_run_interval	10m0s	GC run interval, at least 10m, in Go format.
tikv_gc_life_time	10m0s	All versions within life time will not be collected by GC, at least 10m, in Go format.
tikv_gc_last_run_time	20240517-15:32:22.507 +0800	The time when last GC starts. (DO NOT EDIT)
tikv_gc_safe_point	20240517-15:22:22.507 +0800	All versions after safe point can be accessed. (DO NOT EDIT)
Starting Upstream TiFlash...
TiFlash
Release Version: v8.2.0-alpha-28-g961f9dded
Edition:         Community
Git Commit Hash: 961f9dded38b814bb41b33c691ab58f3f090a0d9
Git Branch:      HEAD
UTC Build Time:  2024-05-15 03:49:49
Enable Features: jemalloc sm4(GmSSL) avx2 avx512 unwind thinlto
Profile:         RELWITHDEBINFO
Compiler:        clang++ 13.0.0

Raft Proxy
Git Commit Hash:   7dc50b4eb06124e31f03adb06c20ff7ab61c5f79
Git Commit Branch: HEAD
UTC Build Time:    2024-05-15 03:53:57
Rust Version:      rustc 1.67.0-nightly (96ddd32c4 2022-11-14)
Storage Engine:    tiflash
Prometheus Prefix: tiflash_proxy_
Profile:           release
Enable Features:   external-jemalloc portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored portable sse test-engine-kv-rocksdb test-engine-raft-raft-engine cloud-aws cloud-gcp cloud-azure openssl-vendored
Verifying Upstream TiFlash is started...
Logging trace to /tmp/tidb_cdc_test/gc_safepoint/tiflash/log/server.log
Logging errors to /tmp/tidb_cdc_test/gc_safepoint/tiflash/log/error.log
arg matches is ArgMatches { args: {"data-dir": MatchedArg { occurs: 1, indices: [6], vals: ["/tmp/tidb_cdc_test/gc_safepoint/tiflash/db/proxy"] }, "addr": MatchedArg { occurs: 1, indices: [20], vals: ["127.0.0.1:9000"] }, "advertise-addr": MatchedArg { occurs: 1, indices: [4], vals: ["127.0.0.1:9000"] }, "engine-addr": MatchedArg { occurs: 1, indices: [2], vals: ["127.0.0.1:9500"] }, "config": MatchedArg { occurs: 1, indices: [8], vals: ["/tmp/tidb_cdc_test/gc_safepoint/tiflash-proxy.toml"] }, "engine-label": MatchedArg { occurs: 1, indices: [14], vals: ["tiflash"] }, "engine-git-hash": MatchedArg { occurs: 1, indices: [10], vals: ["961f9dded38b814bb41b33c691ab58f3f090a0d9"] }, "engine-version": MatchedArg { occurs: 1, indices: [12], vals: ["v8.2.0-alpha-28-g961f9dded"] }, "pd-endpoints": MatchedArg { occurs: 1, indices: [16], vals: ["127.0.0.1:2379"] }, "log-file": MatchedArg { occurs: 1, indices: [18], vals: ["/tmp/tidb_cdc_test/gc_safepoint/tiflash/log/proxy.log"] }}, subcommand: None, usage: Some("USAGE:\n    TiFlash Proxy [FLAGS] [OPTIONS] --engine-git-hash <engine-git-hash> --engine-label <engine-label> --engine-version <engine-version>") }
[Fri May 17 15:32:26 CST 2024] <<<<<< START cdc server in gc_safepoint case >>>>>>
+ [[ '' == \t\r\u\e ]]
+ set +e
+ get_info_fail_msg='failed to get info:'
+ etcd_info_msg='etcd info'
+ '[' -z '' ']'
+ curl_status_cmd='curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL'
+ GO_FAILPOINTS='github.com/pingcap/tiflow/pkg/txnutil/gc/InjectGcSafepointUpdateInterval=return(500)'
+ [[ no != \n\o ]]
+ cdc.test -test.coverprofile=/tmp/tidb_cdc_test/cov.gc_safepoint.26812683.out server --log-file /tmp/tidb_cdc_test/gc_safepoint/cdc.log --log-level debug --data-dir /tmp/tidb_cdc_test/gc_safepoint/cdc_data --cluster-id default --addr 127.0.0.1:8300 --pd http://127.0.0.1:2379
+ (( i = 0 ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connection refused
* Failed connect to 127.0.0.1:8300; Connection refused
* Closing connection 0
+ res=
+ grep -q 'failed to get info:'
+ echo ''
+ echo ''
+ grep -q 'etcd info'
+ '[' 0 -eq 50 ']'
+ sleep 3
+ (( i++ ))
+ (( i <= 50 ))
++ curl -vsL --max-time 20 http://127.0.0.1:8300/debug/info --user ticdc:ticdc_secret -vsL
* About to connect() to 127.0.0.1 port 8300 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8300 (#0)
* Server auth using Basic with user 'ticdc'
> GET /debug/info HTTP/1.1
> Authorization: Basic dGljZGM6dGljZGNfc2VjcmV0
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8300
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 17 May 2024 07:32:29 GMT
< Content-Length: 815
< Content-Type: text/plain; charset=utf-8
< 
{ [data not shown]
* Connection #0 to host 127.0.0.1 left intact
+ res='

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/a2cd8739-0491-43a2-bca5-4a9c93e95756
	{"id":"a2cd8739-0491-43a2-bca5-4a9c93e95756","address":"127.0.0.1:8300","version":"v8.2.0-alpha-64-g930a58d61","git-hash":"930a58d6174d069a6bdcc46f935bffb2fddccfd4","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/bin/cdc.test","start-timestamp":1715931147}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f85771b93d8
	a2cd8739-0491-43a2-bca5-4a9c93e95756

/tidb/cdc/default/default/upstream/7369868077969485054
	{"id":7369868077969485054,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/a2cd8739-0491-43a2-bca5-4a9c93e95756
	{"id":"a2cd8739-0491-43a2-bca5-4a9c93e95756","address":"127.0.0.1:8300","version":"v8.2.0-alpha-64-g930a58d61","git-hash":"930a58d6174d069a6bdcc46f935bffb2fddccfd4","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/bin/cdc.test","start-timestamp":1715931147}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f85771b93d8
	a2cd8739-0491-43a2-bca5-4a9c93e95756

/tidb/cdc/default/default/upstream/7369868077969485054
	{"id":7369868077969485054,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'failed to get info:'
+ echo '

*** owner info ***:



*** processors info ***:



*** etcd info ***:

/tidb/cdc/default/__cdc_meta__/capture/a2cd8739-0491-43a2-bca5-4a9c93e95756
	{"id":"a2cd8739-0491-43a2-bca5-4a9c93e95756","address":"127.0.0.1:8300","version":"v8.2.0-alpha-64-g930a58d61","git-hash":"930a58d6174d069a6bdcc46f935bffb2fddccfd4","deploy-path":"/home/jenkins/agent/workspace/pingcap/tiflow/pull_cdc_integration_kafka_test/tiflow/bin/cdc.test","start-timestamp":1715931147}

/tidb/cdc/default/__cdc_meta__/meta/meta-version
	1

/tidb/cdc/default/__cdc_meta__/owner/22318f85771b93d8
	a2cd8739-0491-43a2-bca5-4a9c93e95756

/tidb/cdc/default/default/upstream/7369868077969485054
	{"id":7369868077969485054,"pd-endpoints":"http://127.0.0.1:2379,http://127.0.0.1:2379","key-path":"","cert-path":"","ca-path":"","cert-allowed-cn":null}'
+ grep -q 'etcd info'
+ break
+ set +x
[Fri May 17 15:32:30 CST 2024] <<<<<< START kafka consumer in gc_safepoint case >>>>>>
0
check diff failed 1-th time, retry later
check diff failed 2-th time, retry later
check diff successfully
check_safepoint_forward http://127.0.0.1:2379 7369868077969485054 449821056339017728 449821055316393988
run task successfully
check_changefeed_state http://127.0.0.1:2379 9f88fc4f-272c-40d7-b89f-7c2e23332004 stopped null
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=9f88fc4f-272c-40d7-b89f-7c2e23332004
+ expected_state=stopped
+ error_msg=null
+ tls_dir=null
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c 9f88fc4f-272c-40d7-b89f-7c2e23332004 -s
+ info='{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "9f88fc4f-272c-40d7-b89f-7c2e23332004",
  "state": "stopped",
  "checkpoint_tso": 449821056863043585,
  "checkpoint_time": "2024-05-17 15:32:35.636",
  "error": null
}'
+ echo '{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "9f88fc4f-272c-40d7-b89f-7c2e23332004",
  "state": "stopped",
  "checkpoint_tso": 449821056863043585,
  "checkpoint_time": "2024-05-17 15:32:35.636",
  "error": null
}'
{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "9f88fc4f-272c-40d7-b89f-7c2e23332004",
  "state": "stopped",
  "checkpoint_tso": 449821056863043585,
  "checkpoint_time": "2024-05-17 15:32:35.636",
  "error": null
}
++ jq -r .state
++ echo '{' '"upstream_id":' 7369868077969485054, '"namespace":' '"default",' '"id":' '"9f88fc4f-272c-40d7-b89f-7c2e23332004",' '"state":' '"stopped",' '"checkpoint_tso":' 449821056863043585, '"checkpoint_time":' '"2024-05-17' '15:32:35.636",' '"error":' null '}'
+ state=stopped
+ [[ ! stopped == \s\t\o\p\p\e\d ]]
++ echo '{' '"upstream_id":' 7369868077969485054, '"namespace":' '"default",' '"id":' '"9f88fc4f-272c-40d7-b89f-7c2e23332004",' '"state":' '"stopped",' '"checkpoint_tso":' 449821056863043585, '"checkpoint_time":' '"2024-05-17' '15:32:35.636",' '"error":' null '}'
++ jq -r .error.message
+ message=null
+ [[ ! null =~ null ]]
run task successfully
check_safepoint_equal http://127.0.0.1:2379 7369868077969485054
[2024/05/17 15:32:35.634 +08:00] [INFO] [main.go:99] ["running ddl test: 1 modifyColumnDefaultValueDDL2"]
[2024/05/17 15:32:35.634 +08:00] [INFO] [main.go:99] ["running ddl test: 0 modifyColumnDefaultValueDDL1"]
[2024/05/17 15:32:36.154 +08:00] [INFO] [main.go:835] ["running ddl test: testMultiDDLs1263b400_2576_4a5f_8e12_4d52af784152"]
[2024/05/17 15:32:36.161 +08:00] [INFO] [main.go:835] ["running ddl test: testMultiDDLs8c4bceb7_f35d_4e12_b22f_517e473456ed"]
[2024/05/17 15:32:36.163 +08:00] [INFO] [main.go:835] ["running ddl test: testMultiDDLsac11ded6_47bd_41cb_9c4a_1b47cf5a9e07"]
[2024/05/17 15:32:36.164 +08:00] [INFO] [main.go:835] ["running ddl test: testMultiDDLsf3761770_c11d_4f07_ac09_1ede08c43150"]
[2024/05/17 15:32:36.165 +08:00] [INFO] [main.go:835] ["running ddl test: testMultiDDLs7d78d104_09bb_4855_b7be_053912dee03a"]
[2024/05/17 15:32:36.166 +08:00] [INFO] [main.go:835] ["running ddl test: testMultiDDLs42ba5299_a800_40da_9934_7e8576a7dc12"]
[2024/05/17 15:32:36.168 +08:00] [INFO] [main.go:835] ["running ddl test: testMultiDDLs0811beed_f632_4d26_a56f_cd2a861f3edb"]
[2024/05/17 15:32:36.169 +08:00] [INFO] [main.go:835] ["running ddl test: testMultiDDLs532ed696_5269_422c_959d_62a36a7acf4b"]
[2024/05/17 15:32:36.195 +08:00] [INFO] [main.go:178] ["1 insert success: 100"]
[2024/05/17 15:32:36.195 +08:00] [INFO] [main.go:178] ["1 insert success: 100"]
[2024/05/17 15:32:36.279 +08:00] [INFO] [main.go:178] ["0 insert success: 100"]
[2024/05/17 15:32:36.279 +08:00] [INFO] [main.go:178] ["0 insert success: 100"]
[2024/05/17 15:32:36.714 +08:00] [INFO] [main.go:178] ["72 insert success: 100"]
[2024/05/17 15:32:36.716 +08:00] [INFO] [main.go:178] ["73 insert success: 100"]
[2024/05/17 15:32:36.725 +08:00] [INFO] [main.go:178] ["73 insert success: 100"]
[2024/05/17 15:32:36.727 +08:00] [INFO] [main.go:178] ["72 insert success: 100"]
[2024/05/17 15:32:36.727 +08:00] [INFO] [main.go:178] ["73 insert success: 100"]
[2024/05/17 15:32:36.728 +08:00] [INFO] [main.go:178] ["72 insert success: 100"]
[2024/05/17 15:32:36.728 +08:00] [INFO] [main.go:178] ["73 insert success: 100"]
[2024/05/17 15:32:36.729 +08:00] [INFO] [main.go:178] ["73 insert success: 100"]
[2024/05/17 15:32:36.731 +08:00] [INFO] [main.go:178] ["72 insert success: 100"]
[2024/05/17 15:32:36.732 +08:00] [INFO] [main.go:178] ["73 insert success: 100"]
[2024/05/17 15:32:36.732 +08:00] [INFO] [main.go:178] ["72 insert success: 100"]
[2024/05/17 15:32:36.733 +08:00] [INFO] [main.go:178] ["72 insert success: 100"]
[2024/05/17 15:32:36.735 +08:00] [INFO] [main.go:178] ["73 insert success: 100"]
[2024/05/17 15:32:36.735 +08:00] [INFO] [main.go:178] ["72 insert success: 100"]
[2024/05/17 15:32:36.739 +08:00] [INFO] [main.go:178] ["72 insert success: 100"]
[2024/05/17 15:32:36.751 +08:00] [INFO] [main.go:178] ["1 insert success: 200"]
[2024/05/17 15:32:36.753 +08:00] [INFO] [main.go:178] ["1 insert success: 200"]
[2024/05/17 15:32:36.776 +08:00] [INFO] [main.go:178] ["73 insert success: 100"]
[2024/05/17 15:32:36.932 +08:00] [INFO] [main.go:178] ["0 insert success: 200"]
[2024/05/17 15:32:36.935 +08:00] [INFO] [main.go:178] ["0 insert success: 200"]
[2024/05/17 15:32:36.937 +08:00] [INFO] [main.go:199] ["0 delete success: 100"]
[2024/05/17 15:32:36.940 +08:00] [INFO] [main.go:199] ["0 delete success: 100"]
[2024/05/17 15:32:37.251 +08:00] [INFO] [main.go:178] ["72 insert success: 200"]
[2024/05/17 15:32:37.315 +08:00] [INFO] [main.go:178] ["73 insert success: 200"]
[2024/05/17 15:32:37.319 +08:00] [INFO] [main.go:178] ["72 insert success: 200"]
[2024/05/17 15:32:37.320 +08:00] [INFO] [main.go:178] ["73 insert success: 200"]
[2024/05/17 15:32:37.321 +08:00] [INFO] [main.go:178] ["72 insert success: 200"]
[2024/05/17 15:32:37.324 +08:00] [INFO] [main.go:178] ["73 insert success: 200"]
[2024/05/17 15:32:37.324 +08:00] [INFO] [main.go:178] ["72 insert success: 200"]
[2024/05/17 15:32:37.325 +08:00] [INFO] [main.go:178] ["73 insert success: 200"]
[2024/05/17 15:32:37.325 +08:00] [INFO] [main.go:178] ["72 insert success: 200"]
[2024/05/17 15:32:37.327 +08:00] [INFO] [main.go:178] ["72 insert success: 200"]
[2024/05/17 15:32:37.329 +08:00] [INFO] [main.go:178] ["73 insert success: 200"]
[2024/05/17 15:32:37.330 +08:00] [INFO] [main.go:178] ["72 insert success: 200"]
[2024/05/17 15:32:37.332 +08:00] [INFO] [main.go:178] ["73 insert success: 200"]
[2024/05/17 15:32:37.333 +08:00] [INFO] [main.go:178] ["72 insert success: 200"]
[2024/05/17 15:32:37.334 +08:00] [INFO] [main.go:178] ["73 insert success: 200"]
[2024/05/17 15:32:37.348 +08:00] [INFO] [main.go:178] ["1 insert success: 300"]
[2024/05/17 15:32:37.350 +08:00] [INFO] [main.go:178] ["1 insert success: 300"]
[2024/05/17 15:32:37.369 +08:00] [INFO] [main.go:178] ["73 insert success: 200"]
[2024/05/17 15:32:37.618 +08:00] [INFO] [main.go:178] ["0 insert success: 300"]
[2024/05/17 15:32:37.625 +08:00] [INFO] [main.go:178] ["0 insert success: 300"]
[2024/05/17 15:32:37.841 +08:00] [INFO] [main.go:178] ["72 insert success: 300"]
[2024/05/17 15:32:37.849 +08:00] [INFO] [main.go:178] ["72 insert success: 300"]
[2024/05/17 15:32:37.851 +08:00] [INFO] [main.go:178] ["72 insert success: 300"]
[2024/05/17 15:32:37.911 +08:00] [INFO] [main.go:178] ["72 insert success: 300"]
[2024/05/17 15:32:37.911 +08:00] [INFO] [main.go:178] ["72 insert success: 300"]
[2024/05/17 15:32:37.911 +08:00] [INFO] [main.go:178] ["72 insert success: 300"]
[2024/05/17 15:32:37.917 +08:00] [INFO] [main.go:178] ["72 insert success: 300"]
[2024/05/17 15:32:37.917 +08:00] [INFO] [main.go:178] ["72 insert success: 300"]
[2024/05/17 15:32:37.921 +08:00] [INFO] [main.go:178] ["73 insert success: 300"]
[2024/05/17 15:32:37.922 +08:00] [INFO] [main.go:178] ["73 insert success: 300"]
[2024/05/17 15:32:37.922 +08:00] [INFO] [main.go:178] ["73 insert success: 300"]
[2024/05/17 15:32:37.926 +08:00] [INFO] [main.go:178] ["73 insert success: 300"]
[2024/05/17 15:32:37.927 +08:00] [INFO] [main.go:178] ["73 insert success: 300"]
[2024/05/17 15:32:37.932 +08:00] [INFO] [main.go:178] ["73 insert success: 300"]
[2024/05/17 15:32:37.934 +08:00] [INFO] [main.go:178] ["73 insert success: 300"]
[2024/05/17 15:32:37.946 +08:00] [INFO] [main.go:178] ["1 insert success: 400"]
[2024/05/17 15:32:37.958 +08:00] [INFO] [main.go:178] ["1 insert success: 400"]
[2024/05/17 15:32:37.966 +08:00] [INFO] [main.go:178] ["73 insert success: 300"]
[2024/05/17 15:32:38.247 +08:00] [INFO] [main.go:178] ["0 insert success: 400"]
[2024/05/17 15:32:38.251 +08:00] [INFO] [main.go:199] ["0 delete success: 200"]
[2024/05/17 15:32:38.311 +08:00] [INFO] [main.go:178] ["0 insert success: 400"]
[2024/05/17 15:32:38.316 +08:00] [INFO] [main.go:199] ["0 delete success: 200"]
[2024/05/17 15:32:38.319 +08:00] [INFO] [main.go:178] ["72 insert success: 400"]
[2024/05/17 15:32:38.329 +08:00] [INFO] [main.go:178] ["72 insert success: 400"]
[2024/05/17 15:32:38.330 +08:00] [INFO] [main.go:178] ["72 insert success: 400"]
[2024/05/17 15:32:38.444 +08:00] [INFO] [main.go:178] ["72 insert success: 400"]
[2024/05/17 15:32:38.446 +08:00] [INFO] [main.go:178] ["72 insert success: 400"]
[2024/05/17 15:32:38.447 +08:00] [INFO] [main.go:178] ["72 insert success: 400"]
[2024/05/17 15:32:38.450 +08:00] [INFO] [main.go:178] ["72 insert success: 400"]
[2024/05/17 15:32:38.455 +08:00] [INFO] [main.go:178] ["72 insert success: 400"]
[2024/05/17 15:32:38.525 +08:00] [INFO] [main.go:178] ["73 insert success: 400"]
[2024/05/17 15:32:38.527 +08:00] [INFO] [main.go:178] ["73 insert success: 400"]
[2024/05/17 15:32:38.529 +08:00] [INFO] [main.go:178] ["73 insert success: 400"]
[2024/05/17 15:32:38.530 +08:00] [INFO] [main.go:178] ["73 insert success: 400"]
[2024/05/17 15:32:38.534 +08:00] [INFO] [main.go:178] ["73 insert success: 400"]
[2024/05/17 15:32:38.537 +08:00] [INFO] [main.go:178] ["73 insert success: 400"]
[2024/05/17 15:32:38.538 +08:00] [INFO] [main.go:178] ["73 insert success: 400"]
[2024/05/17 15:32:38.545 +08:00] [INFO] [main.go:178] ["1 insert success: 500"]
[2024/05/17 15:32:38.556 +08:00] [INFO] [main.go:178] ["1 insert success: 500"]
[2024/05/17 15:32:38.564 +08:00] [INFO] [main.go:178] ["73 insert success: 400"]
[2024/05/17 15:32:38.854 +08:00] [INFO] [main.go:178] ["72 insert success: 500"]
[2024/05/17 15:32:38.919 +08:00] [INFO] [main.go:178] ["72 insert success: 500"]
[2024/05/17 15:32:38.920 +08:00] [INFO] [main.go:178] ["72 insert success: 500"]
[2024/05/17 15:32:39.032 +08:00] [INFO] [main.go:178] ["72 insert success: 500"]
[2024/05/17 15:32:39.035 +08:00] [INFO] [main.go:178] ["0 insert success: 500"]
[2024/05/17 15:32:39.036 +08:00] [INFO] [main.go:178] ["72 insert success: 500"]
[2024/05/17 15:32:39.038 +08:00] [INFO] [main.go:178] ["72 insert success: 500"]
[2024/05/17 15:32:39.041 +08:00] [INFO] [main.go:178] ["72 insert success: 500"]
[2024/05/17 15:32:39.052 +08:00] [INFO] [main.go:178] ["0 insert success: 500"]
[2024/05/17 15:32:39.119 +08:00] [INFO] [main.go:178] ["73 insert success: 500"]
[2024/05/17 15:32:39.120 +08:00] [INFO] [main.go:178] ["72 insert success: 500"]
[2024/05/17 15:32:39.123 +08:00] [INFO] [main.go:178] ["73 insert success: 500"]
[2024/05/17 15:32:39.126 +08:00] [INFO] [main.go:178] ["73 insert success: 500"]
[2024/05/17 15:32:39.130 +08:00] [INFO] [main.go:178] ["73 insert success: 500"]
[2024/05/17 15:32:39.131 +08:00] [INFO] [main.go:178] ["73 insert success: 500"]
[2024/05/17 15:32:39.133 +08:00] [INFO] [main.go:178] ["73 insert success: 500"]
[2024/05/17 15:32:39.133 +08:00] [INFO] [main.go:178] ["73 insert success: 500"]
[2024/05/17 15:32:39.143 +08:00] [INFO] [main.go:178] ["1 insert success: 600"]
[2024/05/17 15:32:39.155 +08:00] [INFO] [main.go:178] ["1 insert success: 600"]
[2024/05/17 15:32:39.156 +08:00] [INFO] [main.go:178] ["73 insert success: 500"]
[2024/05/17 15:32:39.421 +08:00] [INFO] [main.go:178] ["72 insert success: 600"]
[2024/05/17 15:32:39.432 +08:00] [INFO] [main.go:178] ["72 insert success: 600"]
[2024/05/17 15:32:39.433 +08:00] [INFO] [main.go:178] ["72 insert success: 600"]
[2024/05/17 15:32:39.548 +08:00] [INFO] [main.go:178] ["72 insert success: 600"]
[2024/05/17 15:32:39.553 +08:00] [INFO] [main.go:178] ["72 insert success: 600"]
[2024/05/17 15:32:39.558 +08:00] [INFO] [main.go:178] ["72 insert success: 600"]
[2024/05/17 15:32:39.614 +08:00] [INFO] [main.go:178] ["72 insert success: 600"]
[2024/05/17 15:32:39.662 +08:00] [INFO] [main.go:178] ["72 insert success: 600"]
[2024/05/17 15:32:39.717 +08:00] [INFO] [main.go:178] ["73 insert success: 600"]
[2024/05/17 15:32:39.720 +08:00] [INFO] [main.go:178] ["73 insert success: 600"]
[2024/05/17 15:32:39.724 +08:00] [INFO] [main.go:178] ["73 insert success: 600"]
[2024/05/17 15:32:39.731 +08:00] [INFO] [main.go:178] ["73 insert success: 600"]
[2024/05/17 15:32:39.731 +08:00] [INFO] [main.go:178] ["73 insert success: 600"]
[2024/05/17 15:32:39.732 +08:00] [INFO] [main.go:178] ["73 insert success: 600"]
[2024/05/17 15:32:39.734 +08:00] [INFO] [main.go:178] ["73 insert success: 600"]
[2024/05/17 15:32:39.739 +08:00] [INFO] [main.go:178] ["1 insert success: 700"]
[2024/05/17 15:32:39.752 +08:00] [INFO] [main.go:178] ["1 insert success: 700"]
[2024/05/17 15:32:39.754 +08:00] [INFO] [main.go:178] ["73 insert success: 600"]
[2024/05/17 15:32:39.759 +08:00] [INFO] [main.go:178] ["0 insert success: 600"]
[2024/05/17 15:32:39.763 +08:00] [INFO] [main.go:199] ["0 delete success: 300"]
[2024/05/17 15:32:39.814 +08:00] [INFO] [main.go:178] ["0 insert success: 600"]
[2024/05/17 15:32:39.818 +08:00] [INFO] [main.go:199] ["0 delete success: 300"]
[2024/05/17 15:32:39.926 +08:00] [INFO] [main.go:178] ["72 insert success: 700"]
[2024/05/17 15:32:39.935 +08:00] [INFO] [main.go:178] ["72 insert success: 700"]
[2024/05/17 15:32:39.936 +08:00] [INFO] [main.go:178] ["72 insert success: 700"]
run task successfully
[2024/05/17 15:32:40.056 +08:00] [INFO] [main.go:178] ["72 insert success: 700"]
[2024/05/17 15:32:40.115 +08:00] [INFO] [main.go:178] ["72 insert success: 700"]
[2024/05/17 15:32:40.120 +08:00] [INFO] [main.go:178] ["72 insert success: 700"]
[2024/05/17 15:32:40.121 +08:00] [INFO] [main.go:178] ["72 insert success: 700"]
[2024/05/17 15:32:40.240 +08:00] [INFO] [main.go:178] ["72 insert success: 700"]
[2024/05/17 15:32:40.253 +08:00] [INFO] [main.go:178] ["73 insert success: 700"]
[2024/05/17 15:32:40.256 +08:00] [INFO] [main.go:178] ["73 insert success: 700"]
check_changefeed_state http://127.0.0.1:2379 9f88fc4f-272c-40d7-b89f-7c2e23332004 normal null
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=9f88fc4f-272c-40d7-b89f-7c2e23332004
+ expected_state=normal
+ error_msg=null
+ tls_dir=null
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c 9f88fc4f-272c-40d7-b89f-7c2e23332004 -s
[2024/05/17 15:32:40.312 +08:00] [INFO] [main.go:178] ["73 insert success: 700"]
[2024/05/17 15:32:40.320 +08:00] [INFO] [main.go:178] ["73 insert success: 700"]
[2024/05/17 15:32:40.320 +08:00] [INFO] [main.go:178] ["73 insert success: 700"]
[2024/05/17 15:32:40.324 +08:00] [INFO] [main.go:178] ["73 insert success: 700"]
[2024/05/17 15:32:40.325 +08:00] [INFO] [main.go:178] ["73 insert success: 700"]
[2024/05/17 15:32:40.328 +08:00] [INFO] [main.go:178] ["1 insert success: 800"]
[2024/05/17 15:32:40.342 +08:00] [INFO] [main.go:178] ["73 insert success: 700"]
[2024/05/17 15:32:40.342 +08:00] [INFO] [main.go:178] ["1 insert success: 800"]
[2024/05/17 15:32:40.449 +08:00] [INFO] [main.go:178] ["72 insert success: 800"]
[2024/05/17 15:32:40.454 +08:00] [INFO] [main.go:178] ["72 insert success: 800"]
[2024/05/17 15:32:40.459 +08:00] [INFO] [main.go:178] ["72 insert success: 800"]
+ info='{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "9f88fc4f-272c-40d7-b89f-7c2e23332004",
  "state": "normal",
  "checkpoint_tso": 449821056863043585,
  "checkpoint_time": "2024-05-17 15:32:35.636",
  "error": null
}'
+ echo '{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "9f88fc4f-272c-40d7-b89f-7c2e23332004",
  "state": "normal",
  "checkpoint_tso": 449821056863043585,
  "checkpoint_time": "2024-05-17 15:32:35.636",
  "error": null
}'
{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "9f88fc4f-272c-40d7-b89f-7c2e23332004",
  "state": "normal",
  "checkpoint_tso": 449821056863043585,
  "checkpoint_time": "2024-05-17 15:32:35.636",
  "error": null
}
++ jq -r .state
++ echo '{' '"upstream_id":' 7369868077969485054, '"namespace":' '"default",' '"id":' '"9f88fc4f-272c-40d7-b89f-7c2e23332004",' '"state":' '"normal",' '"checkpoint_tso":' 449821056863043585, '"checkpoint_time":' '"2024-05-17' '15:32:35.636",' '"error":' null '}'
+ state=normal
+ [[ ! normal == \n\o\r\m\a\l ]]
++ echo '{' '"upstream_id":' 7369868077969485054, '"namespace":' '"default",' '"id":' '"9f88fc4f-272c-40d7-b89f-7c2e23332004",' '"state":' '"normal",' '"checkpoint_tso":' 449821056863043585, '"checkpoint_time":' '"2024-05-17' '15:32:35.636",' '"error":' null '}'
++ jq -r .error.message
+ message=null
+ [[ ! null =~ null ]]
run task successfully
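The `check_changefeed_state` trace above queries the changefeed with `cdc cli changefeed query`, extracts `.state` and `.error.message` with `jq`, and compares them against the expected values. A minimal self-contained sketch of that comparison logic follows; the helper's real source is not shown in this log, the JSON below is a canned sample of the queried output, and `sed` stands in for the `jq` calls so the sketch runs without external tools:

```shell
# Hedged sketch of the state-verification pattern seen in the trace.
# The canned JSON mirrors the `cdc cli changefeed query -s` output above;
# variable names follow the trace but the extraction is simplified.
info='{
  "id": "9f88fc4f-272c-40d7-b89f-7c2e23332004",
  "state": "normal",
  "error": null
}'

expected_state=normal
error_msg=null

# Pull the "state" value and the raw "error" field out of the JSON.
state=$(printf '%s\n' "$info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
message=$(printf '%s\n' "$info" | sed -n 's/.*"error": \(.*\)$/\1/p')

# The real helper uses [[ ! $state == $expected_state ]] and a regex
# match on the error message; plain equality is enough for this sketch.
if [ "$state" != "$expected_state" ]; then
    echo "unexpected state $state, expected $expected_state"
    exit 1
fi
if [ "$message" != "$error_msg" ]; then
    echo "unexpected error message $message, expected $error_msg"
    exit 1
fi
echo "run task successfully"
```

A non-`normal` state or a non-`null` error message makes the check exit non-zero, which is what fails the surrounding `run task` retry loop.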
check_safepoint_forward http://127.0.0.1:2379 7369868077969485054 449821056863043584 449821056863043585
[2024/05/17 15:32:40.522 +08:00] [INFO] [main.go:178] ["0 insert success: 700"]
[2024/05/17 15:32:40.537 +08:00] [INFO] [main.go:178] ["0 insert success: 700"]
[2024/05/17 15:32:41.107 +08:00] [INFO] [main.go:178] ["72 insert success: 800"]
[2024/05/17 15:32:41.113 +08:00] [INFO] [main.go:178] ["72 insert success: 800"]
[2024/05/17 15:32:41.117 +08:00] [INFO] [main.go:178] ["72 insert success: 800"]
[2024/05/17 15:32:41.236 +08:00] [INFO] [main.go:178] ["72 insert success: 800"]
[2024/05/17 15:32:41.255 +08:00] [INFO] [main.go:178] ["73 insert success: 800"]
[2024/05/17 15:32:41.669 +08:00] [INFO] [main.go:178] ["73 insert success: 800"]
[2024/05/17 15:32:41.680 +08:00] [INFO] [main.go:178] ["73 insert success: 800"]
[2024/05/17 15:32:41.685 +08:00] [INFO] [main.go:178] ["73 insert success: 800"]
[2024/05/17 15:32:41.687 +08:00] [INFO] [main.go:178] ["73 insert success: 800"]
[2024/05/17 15:32:41.692 +08:00] [INFO] [main.go:178] ["73 insert success: 800"]
[2024/05/17 15:32:41.692 +08:00] [INFO] [main.go:178] ["73 insert success: 800"]
[2024/05/17 15:32:41.694 +08:00] [INFO] [main.go:178] ["1 insert success: 900"]
[2024/05/17 15:32:41.697 +08:00] [INFO] [main.go:178] ["72 insert success: 800"]
[2024/05/17 15:32:41.705 +08:00] [INFO] [main.go:178] ["73 insert success: 800"]
[2024/05/17 15:32:41.706 +08:00] [INFO] [main.go:178] ["1 insert success: 900"]
[2024/05/17 15:32:41.753 +08:00] [INFO] [main.go:178] ["72 insert success: 900"]
[2024/05/17 15:32:41.754 +08:00] [INFO] [main.go:178] ["72 insert success: 900"]
[2024/05/17 15:32:41.757 +08:00] [INFO] [main.go:178] ["72 insert success: 900"]
[2024/05/17 15:32:41.843 +08:00] [INFO] [main.go:178] ["72 insert success: 900"]
[2024/05/17 15:32:41.850 +08:00] [INFO] [main.go:178] ["72 insert success: 900"]
[2024/05/17 15:32:41.850 +08:00] [INFO] [main.go:178] ["72 insert success: 900"]
[2024/05/17 15:32:41.858 +08:00] [INFO] [main.go:178] ["0 insert success: 800"]
[2024/05/17 15:32:41.859 +08:00] [INFO] [main.go:178] ["0 insert success: 800"]
[2024/05/17 15:32:41.861 +08:00] [INFO] [main.go:199] ["0 delete success: 400"]
run task successfully
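The `check_safepoint_forward` call above passes the upstream cluster ID plus an old and a new TSO (449821056863043584 and 449821056863043585, from the trace). Its source is not shown in this log, but since TSOs are 64-bit integers ordered by time, verifying that the service GC safepoint moved forward plausibly reduces to a numeric comparison, sketched here with the values from the trace:

```shell
# Hedged sketch of a safepoint-forward check; the actual helper queries
# PD for the registered service GC safepoint, which this sketch omits.
start_safepoint=449821056863043584
checkpoint_tso=449821056863043585

# "Forward" means the new safepoint is strictly greater than the old one.
if [ "$checkpoint_tso" -gt "$start_safepoint" ]; then
    echo "run task successfully"
else
    echo "safepoint did not advance past $start_safepoint"
    exit 1
fi
```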
check_changefeed_state http://127.0.0.1:2379 9f88fc4f-272c-40d7-b89f-7c2e23332004 stopped null
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=9f88fc4f-272c-40d7-b89f-7c2e23332004
+ expected_state=stopped
+ error_msg=null
+ tls_dir=null
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c 9f88fc4f-272c-40d7-b89f-7c2e23332004 -s
[2024/05/17 15:32:41.863 +08:00] [INFO] [main.go:199] ["0 delete success: 400"]
[2024/05/17 15:32:41.942 +08:00] [INFO] [main.go:178] ["72 insert success: 900"]
[2024/05/17 15:32:41.949 +08:00] [INFO] [main.go:178] ["73 insert success: 900"]
+ info='{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "9f88fc4f-272c-40d7-b89f-7c2e23332004",
  "state": "stopped",
  "checkpoint_tso": 449821056863043585,
  "checkpoint_time": "2024-05-17 15:32:35.636",
  "error": null
}'
+ echo '{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "9f88fc4f-272c-40d7-b89f-7c2e23332004",
  "state": "stopped",
  "checkpoint_tso": 449821056863043585,
  "checkpoint_time": "2024-05-17 15:32:35.636",
  "error": null
}'
{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "9f88fc4f-272c-40d7-b89f-7c2e23332004",
  "state": "stopped",
  "checkpoint_tso": 449821056863043585,
  "checkpoint_time": "2024-05-17 15:32:35.636",
  "error": null
}
++ echo '{' '"upstream_id":' 7369868077969485054, '"namespace":' '"default",' '"id":' '"9f88fc4f-272c-40d7-b89f-7c2e23332004",' '"state":' '"stopped",' '"checkpoint_tso":' 449821056863043585, '"checkpoint_time":' '"2024-05-17' '15:32:35.636",' '"error":' null '}'
++ jq -r .state
+ state=stopped
+ [[ ! stopped == \s\t\o\p\p\e\d ]]
++ echo '{' '"upstream_id":' 7369868077969485054, '"namespace":' '"default",' '"id":' '"9f88fc4f-272c-40d7-b89f-7c2e23332004",' '"state":' '"stopped",' '"checkpoint_tso":' 449821056863043585, '"checkpoint_time":' '"2024-05-17' '15:32:35.636",' '"error":' null '}'
++ jq -r .error.message
+ message=null
+ [[ ! null =~ null ]]
run task successfully
[2024/05/17 15:32:42.232 +08:00] [INFO] [main.go:178] ["73 insert success: 900"]
[2024/05/17 15:32:42.245 +08:00] [INFO] [main.go:178] ["73 insert success: 900"]
[2024/05/17 15:32:42.250 +08:00] [INFO] [main.go:178] ["72 insert success: 900"]
[2024/05/17 15:32:42.251 +08:00] [INFO] [main.go:178] ["73 insert success: 900"]
[2024/05/17 15:32:42.252 +08:00] [INFO] [main.go:178] ["1 insert success: 1000"]
[2024/05/17 15:32:42.252 +08:00] [INFO] [main.go:178] ["73 insert success: 900"]
[2024/05/17 15:32:42.252 +08:00] [INFO] [main.go:178] ["73 insert success: 900"]
[2024/05/17 15:32:42.253 +08:00] [INFO] [main.go:178] ["73 insert success: 900"]
[2024/05/17 15:32:42.313 +08:00] [INFO] [main.go:178] ["1 insert success: 1000"]
[2024/05/17 15:32:42.316 +08:00] [INFO] [main.go:178] ["73 insert success: 900"]
[2024/05/17 15:32:42.354 +08:00] [INFO] [main.go:178] ["72 insert success: 1000"]
[2024/05/17 15:32:42.358 +08:00] [INFO] [main.go:178] ["72 insert success: 1000"]
[2024/05/17 15:32:42.359 +08:00] [INFO] [main.go:178] ["72 insert success: 1000"]
check_changefeed_state http://127.0.0.1:2379 b7e3f270-501c-48ea-a633-8a5e1d269702 normal null
+ endpoints=http://127.0.0.1:2379
+ changefeed_id=b7e3f270-501c-48ea-a633-8a5e1d269702
+ expected_state=normal
+ error_msg=null
+ tls_dir=null
+ [[ http://127.0.0.1:2379 =~ https ]]
++ cdc cli changefeed query --pd=http://127.0.0.1:2379 -c b7e3f270-501c-48ea-a633-8a5e1d269702 -s
Cancelling nested steps due to timeout
Sending interrupt signal to process
Killing processes
[2024/05/17 15:32:42.425 +08:00] [INFO] [main.go:178] ["72 insert success: 1000"]
[2024/05/17 15:32:42.434 +08:00] [INFO] [main.go:178] ["72 insert success: 1000"]
[2024/05/17 15:32:42.435 +08:00] [INFO] [main.go:178] ["72 insert success: 1000"]
[2024/05/17 15:32:42.521 +08:00] [INFO] [main.go:178] ["73 insert success: 1000"]
[2024/05/17 15:32:42.522 +08:00] [INFO] [main.go:178] ["72 insert success: 1000"]
[2024/05/17 15:32:42.639 +08:00] [INFO] [main.go:178] ["0 insert success: 900"]
[2024/05/17 15:32:42.641 +08:00] [INFO] [main.go:178] ["0 insert success: 900"]
+ info='{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "b7e3f270-501c-48ea-a633-8a5e1d269702",
  "state": "normal",
  "checkpoint_tso": 449821058606563332,
  "checkpoint_time": "2024-05-17 15:32:42.287",
  "error": null
}'
+ echo '{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "b7e3f270-501c-48ea-a633-8a5e1d269702",
  "state": "normal",
  "checkpoint_tso": 449821058606563332,
  "checkpoint_time": "2024-05-17 15:32:42.287",
  "error": null
}'
{
  "upstream_id": 7369868077969485054,
  "namespace": "default",
  "id": "b7e3f270-501c-48ea-a633-8a5e1d269702",
  "state": "normal",
  "checkpoint_tso": 449821058606563332,
  "checkpoint_time": "2024-05-17 15:32:42.287",
  "error": null
}
++ echo '{' '"upstream_id":' 7369868077969485054, '"namespace":' '"default",' '"id":' '"b7e3f270-501c-48ea-a633-8a5e1d269702",' '"state":' '"normal",' '"checkpoint_tso":' 449821058606563332, '"checkpoint_time":' '"2024-05-17' '15:32:42.287",' '"error":' null '}'
++ jq -r .state
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
[2024/05/17 15:32:42.812 +08:00] [INFO] [main.go:178] ["73 insert success: 1000"]
script returned exit code 143
+ state=normal
+ [[ ! normal == \n\o\r\m\a\l ]]
++ echo '{' '"upstream_id":' 7369868077969485054, '"namespace":' '"default",' '"id":' '"b7e3f270-501c-48ea-a633-8a5e1d269702",' '"state":' '"normal",' '"checkpoint_tso":' 449821058606563332, '"checkpoint_time":' '"2024-05-17' '15:32:42.287",' '"error":' null '}'
++ jq -r .error.message
+ message=null
+ [[ ! null =~ null ]]
run task successfully
check_safepoint_equal http://127.0.0.1:2379 7369868077969485054
run task successfully
kill finished with exit code 0
Sending interrupt signal to process
Killing processes
script returned exit code 143
kill finished with exit code 0
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // cache
[Pipeline] }
[Pipeline] }
Cache not saved (inner-step execution failed)
[Pipeline] // dir
[Pipeline] // cache
[Pipeline] }
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] // dir
[Pipeline] }
[Pipeline] }
[Pipeline] // timeout
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] // timeout
[Pipeline] }
[Pipeline] }
[Pipeline] // container
[Pipeline] // stage
[Pipeline] }
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] // container
[Pipeline] }
[Pipeline] }
[Pipeline] // node
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] // node
[Pipeline] }
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] // podTemplate
[Pipeline] }
[Pipeline] }
[Pipeline] // stage
[Pipeline] // withEnv
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G10'
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch Matrix - TEST_GROUP = 'G09'
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Timeout has been exceeded
org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 1bf740b2-ee19-4d2f-a3cb-43951d313001
Failed in branch Matrix - TEST_GROUP = 'G10'
Finished: ABORTED