@Mayowa-Ojo
Last active July 6, 2022 16:32
temporal-planetscale
version: "3.5"
services:
temporal:
container_name: temporal
environment:
- DB=mysql
- DB_PORT=3306
- VISIBILITY_MYSQL_USER=${TEMPORAL_VISIBILITY_USER}
- VISIBILITY_MYSQL_PWD=${TEMPORAL_VISIBILITY_PASSWORD}
- VISIBILITY_MYSQL_SEEDS=${TEMPORAL_VISIBILITY_PSCALE_HOSTSTRING}
- MYSQL_USER=${TEMPORAL_USER}
- MYSQL_PWD=${TEMPORAL_PASSWORD}
- MYSQL_SEEDS=${TEMPORAL_PSCALE_HOSTSTRING}
- SQL_TLS=true
- SKIP_DB_CREATE=true
- SKIP_SCHEMA_SETUP=true
- SQL_TLS_ENABLED=true
- DYNAMIC_CONFIG_FILE_PATH=config/dynamicconfig/development-sql.yaml
image: temporalio/auto-setup:${TEMPORAL_VERSION}
networks:
- temporal-network
ports:
- 7233:7233
volumes:
- ./dynamicconfig:/etc/temporal/config/dynamicconfig
temporal-admin-tools:
container_name: temporal-admin-tools
depends_on:
- temporal
environment:
- TEMPORAL_CLI_ADDRESS=temporal:7233
image: temporalio/admin-tools:${TEMPORAL_VERSION}
networks:
- temporal-network
stdin_open: true
tty: true
temporal-web:
container_name: temporal-web
depends_on:
- temporal
environment:
- TEMPORAL_GRPC_ENDPOINT=temporal:7233
- TEMPORAL_PERMIT_WRITE_API=true
image: temporalio/web:${TEMPORAL_WEB_VERSION}
networks:
- temporal-network
ports:
- 8088:8088
networks:
temporal-network:
driver: bridge
name: temporal-network
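The compose file references several environment variables that docker-compose resolves from the shell or from a .env file in the same directory. A minimal .env sketch is shown below; the variable names come straight from the compose file, TEMPORAL_VERSION matches the server-version reported in the startup log further down (1.17.0), and every other value is a placeholder rather than the credentials actually used in this gist.

# .env (sketch — placeholder values, substitute real PlanetScale credentials)
TEMPORAL_VERSION=1.17.0
TEMPORAL_WEB_VERSION=<temporal-web-tag>
TEMPORAL_USER=<planetscale-username>
TEMPORAL_PASSWORD=<planetscale-password>
TEMPORAL_PSCALE_HOSTSTRING=<branch-host>.us-east-1.psdb.cloud
TEMPORAL_VISIBILITY_USER=<planetscale-username>
TEMPORAL_VISIBILITY_PASSWORD=<planetscale-password>
TEMPORAL_VISIBILITY_PSCALE_HOSTSTRING=<branch-host>.us-east-1.psdb.cloud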

temporal | {"level":"error","ts":"2022-07-04T20:25:39.553Z","msg":"Critical error processing task, retrying.","shard-id":4,"address":"172.25.0.2:7234","component":"visibility-queue-processor","wf-namespace-id":"32049b68-7872-4094-8e63-d0dd59896a83","wf-id":"temporal-sys-tq-scanner","wf-run-id":"d034be6e-fac2-4bb1-bdde-a5e36749c7e2","queue-task-id":1048580,"queue-task-visibility-timestamp":"2022-07-04T12:28:03.160Z","queue-task-type":"VisibilityStartExecution","queue-task":{"NamespaceID":"32049b68-7872-4094-8e63-d0dd59896a83","WorkflowID":"temporal-sys-tq-scanner","RunID":"d034be6e-fac2-4bb1-bdde-a5e36749c7e2","VisibilityTimestamp":"2022-07-04T12:28:03.160160056Z","TaskID":1048580,"Version":0},"wf-history-event-id":0,"error":"context deadline exceeded","operation-result":"OperationCritical","logging-call-at":"lazy_logger.go:68","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/log.(*lazyLogger).Error\n\t/home/builder/temporal/common/log/lazy_logger.go:68\ngo.temporal.io/server/service/history/queues.(*executableImpl).HandleErr.func1\n\t/home/builder/temporal/service/history/queues/executable.go:184\ngo.temporal.io/server/service/history/queues.(*executableImpl).HandleErr\n\t/home/builder/temporal/service/history/queues/executable.go:232\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:208\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}

temporal | {"level":"error","ts":"2022-07-04T20:25:42.025Z","msg":"Operation failed with internal error.","error":"GetWorkflowExecution: failed to get request cancel info. Error: Failed to get request cancel info. Error: context deadline exceeded","metric-scope":5,"logging-call-at":"persistenceMetricClients.go:1424","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence.(*metricEmitter).updateErrorMetric\n\t/home/builder/temporal/common/persistence/persistenceMetricClients.go:1424\ngo.temporal.io/server/common/persistence.(*executionPersistenceClient).GetWorkflowExecution\n\t/home/builder/temporal/common/persistence/persistenceMetricClients.go:241\ngo.temporal.io/server/service/history/shard.(*ContextImpl).GetWorkflowExecution\n\t/home/builder/temporal/service/history/shard/context_impl.go:840\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry.func1\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:462\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:467\ngo.temporal.io/server/service/history/workflow.(*ContextImpl).LoadWorkflowExecution\n\t/home/builder/temporal/service/history/workflow/context.go:274\ngo.temporal.io/server/service/history.LoadMutableStateForTask\n\t/home/builder/temporal/service/history/nDCTaskUtil.go:142\ngo.temporal.io/server/service/history.loadMutableStateForTimerTask\n\t/home/builder/temporal/service/history/nDCTaskUtil.go:123\ngo.temporal.io/server/service/history.(*timerQueueActiveTaskExecutor).executeActivityTimeoutTask\n\t/home/builder/temporal/service/history/timerQueueActiveTaskExecutor.go:189\ngo.temporal.io/server/service/history.(*timerQueueActiveTaskExecutor).Execute\n\t/home/builder/temporal/service/history/timerQueueActiveTaskExecutor.go:107\ngo.temporal.io/server/service/history/queues.(*executorWrapper).Execute\n\t/home/builder/temporal/service/history/queues/executor_wrapper.go:67\ngo.temporal.io/server/service/history/queues.(*executableImpl).Execute\n\t/home/builder/temporal/service/history/queues/executable.go:161\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:207\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}

temporal | {"level":"error","ts":"2022-07-04T20:25:42.025Z","msg":"Persistent fetch operation Failure","shard-id":3,"address":"172.25.0.2:7234","wf-namespace-id":"32049b68-7872-4094-8e63-d0dd59896a83","wf-id":"temporal-sys-add-search-attributes-workflow","wf-run-id":"4a509be1-ad39-42e2-b126-ae6e9d160c57","store-operation":"get-wf-execution","error":"context deadline exceeded","logging-call-at":"transaction_impl.go:489","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:489\ngo.temporal.io/server/service/history/workflow.(*ContextImpl).LoadWorkflowExecution\n\t/home/builder/temporal/service/history/workflow/context.go:274\ngo.temporal.io/server/service/history.LoadMutableStateForTask\n\t/home/builder/temporal/service/history/nDCTaskUtil.go:142\ngo.temporal.io/server/service/history.loadMutableStateForTimerTask\n\t/home/builder/temporal/service/history/nDCTaskUtil.go:123\ngo.temporal.io/server/service/history.(*timerQueueActiveTaskExecutor).executeActivityTimeoutTask\n\t/home/builder/temporal/service/history/timerQueueActiveTaskExecutor.go:189\ngo.temporal.io/server/service/history.(*timerQueueActiveTaskExecutor).Execute\n\t/home/builder/temporal/service/history/timerQueueActiveTaskExecutor.go:107\ngo.temporal.io/server/service/history/queues.(*executorWrapper).Execute\n\t/home/builder/temporal/service/history/queues/executor_wrapper.go:67\ngo.temporal.io/server/service/history/queues.(*executableImpl).Execute\n\t/home/builder/temporal/service/history/queues/executable.go:161\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:207\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}

temporal | {"level":"error","ts":"2022-07-04T20:25:42.025Z","msg":"Fail to process task","shard-id":3,"address":"172.25.0.2:7234","component":"timer-queue-processor","cluster-name":"active","wf-namespace-id":"32049b68-7872-4094-8e63-d0dd59896a83","wf-id":"temporal-sys-add-search-attributes-workflow","wf-run-id":"4a509be1-ad39-42e2-b126-ae6e9d160c57","queue-task-id":2097166,"queue-task-visibility-timestamp":"2022-07-04T20:22:43.578Z","queue-task-type":"ActivityTimeout","queue-task":{"NamespaceID":"32049b68-7872-4094-8e63-d0dd59896a83","WorkflowID":"temporal-sys-add-search-attributes-workflow","RunID":"4a509be1-ad39-42e2-b126-ae6e9d160c57","VisibilityTimestamp":"2022-07-04T20:22:43.578495508Z","TaskID":2097166,"TimeoutType":2,"EventID":5,"Attempt":1,"Version":0},"wf-history-event-id":5,"error":"context deadline exceeded","lifecycle":"ProcessingFailed","logging-call-at":"lazy_logger.go:68","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/log.(*lazyLogger).Error\n\t/home/builder/temporal/common/log/lazy_logger.go:68\ngo.temporal.io/server/service/history/queues.(*executableImpl).HandleErr\n\t/home/builder/temporal/service/history/queues/executable.go:231\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:208\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}

temporal | {"level":"error","ts":"2022-07-04T20:25:42.025Z","msg":"Critical error processing task, retrying.","shard-id":3,"address":"172.25.0.2:7234","component":"timer-queue-processor","cluster-name":"active","wf-namespace-id":"32049b68-7872-4094-8e63-d0dd59896a83","wf-id":"temporal-sys-add-search-attributes-workflow","wf-run-id":"4a509be1-ad39-42e2-b126-ae6e9d160c57","queue-task-id":2097166,"queue-task-visibility-timestamp":"2022-07-04T20:22:43.578Z","queue-task-type":"ActivityTimeout","queue-task":{"NamespaceID":"32049b68-7872-4094-8e63-d0dd59896a83","WorkflowID":"temporal-sys-add-search-attributes-workflow","RunID":"4a509be1-ad39-42e2-b126-ae6e9d160c57","VisibilityTimestamp":"2022-07-04T20:22:43.578495508Z","TaskID":2097166,"TimeoutType":2,"EventID":5,"Attempt":1,"Version":0},"wf-history-event-id":5,"error":"context deadline exceeded","operation-result":"OperationCritical","logging-call-at":"lazy_logger.go:68","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/log.(*lazyLogger).Error\n\t/home/builder/temporal/common/log/lazy_logger.go:68\ngo.temporal.io/server/service/history/queues.(*executableImpl).HandleErr.func1\n\t/home/builder/temporal/service/history/queues/executable.go:184\ngo.temporal.io/server/service/history/queues.(*executableImpl).HandleErr\n\t/home/builder/temporal/service/history/queues/executable.go:232\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:208\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}

docker-compose git:(main) ✗ docker-compose -f docker-compose-planetscale.yml up
Creating network "temporal-network" with driver "bridge"
Creating temporal ... done
Creating temporal-web         ... done
Creating temporal-admin-tools ... done
Attaching to temporal, temporal-admin-tools, temporal-web
temporal                | + : mysql
temporal                | + : true
temporal                | + : true
temporal                | Temporal CLI address: 172.26.0.2:7233.
temporal                | + : temporal
temporal                | + : temporal_visibility
temporal                | + : ''
temporal                | + : 9042
temporal                | + : ''
temporal                | + : ''
temporal                | + : ''
temporal                | + : ''
temporal                | + : ''
temporal                | + : ''
temporal                | + : 1
temporal                | + : temporal
temporal                | + : temporal_visibility
temporal                | + : 3306
temporal                | + : f0es1algq55z.us-east-1.psdb.cloud
temporal                | + : 708niiqkqzaz
temporal                | + : pscale_pw_9C2cRL_q9yAnqbLz_7IPoqXs7H7hVFIW94W7ZgtlQf0
temporal                | + : false
temporal                | + : ''
temporal                | + : ''
temporal                | + : ''
temporal                | + : false
temporal                | + : http
temporal                | + : ''
temporal                | + : 9200
temporal                | + : ''
temporal                | + : ''
temporal                | + : v7
temporal                | + : temporal_visibility_v1_dev
temporal                | + : 0
temporal                | + : 172.26.0.2:7233
temporal                | + : false
temporal                | + : default
temporal                | + : 1
temporal                | + : false
temporal                | + [[ true != true ]]
temporal                | + [[ false == true ]]
temporal                | + setup_server
temporal                | + echo 'Temporal CLI address: 172.26.0.2:7233.'
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | Waiting for Temporal server to start...
temporal                | + sleep 1
temporal                | 2022/07/05 15:01:24 Loading config; env=docker,zone=,configDir=config
temporal                | 2022/07/05 15:01:24 Loading config files=[config/docker.yaml]
temporal                | {"level":"info","ts":"2022-07-05T15:01:24.810Z","msg":"Build info.","git-time":"2022-06-17T22:49:04.000Z","git-revision":"79844fd704eb6fd873760503ba57cc9c6bae65d5","git-modified":false,"go-arch":"amd64","go-os":"linux","go-version":"go1.18.2","cgo-enabled":false,"server-version":"1.17.0","logging-call-at":"main.go:136"}
temporal                | {"level":"info","ts":"2022-07-05T15:01:24.876Z","msg":"dynamic config changed for the key: limit.maxidlength oldValue: nil newValue: { constraints: {} value: 255 }","logging-call-at":"basic_client.go:299"}
temporal                | {"level":"info","ts":"2022-07-05T15:01:24.876Z","msg":"dynamic config changed for the key: system.forcesearchattributescacherefreshonread oldValue: nil newValue: { constraints: {} value: true }","logging-call-at":"basic_client.go:299"}
temporal                | {"level":"info","ts":"2022-07-05T15:01:24.876Z","msg":"Updated dynamic config","logging-call-at":"file_based_client.go:184"}
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | Waiting for Temporal server to start...
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal-web            | [2022-07-05T15:01:28.776Z] Auth is disabled in config
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | Waiting for Temporal server to start...
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | Waiting for Temporal server to start...
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | + + grep tctl -q cluster health
temporal                | SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | Waiting for Temporal server to start...
temporal                | + sleep 1
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | Waiting for Temporal server to start...
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal-web            | [2022-07-05T15:01:38.579Z] will use insecure connection with Temporal server...
temporal                | {"level":"info","ts":"2022-07-05T15:01:39.149Z","msg":"Created gRPC listener","service":"history","address":"172.26.0.2:7234","logging-call-at":"rpc.go:154"}
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + + tctl grep cluster -q health
temporal                | SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | {"level":"info","ts":"2022-07-05T15:01:42.841Z","msg":"Created gRPC listener","service":"matching","address":"172.26.0.2:7235","logging-call-at":"rpc.go:154"}
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | Waiting for Temporal server to start...
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal-web            | temporal-web ssl is not enabled
temporal-web            | temporal-web up and listening on port 8088
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | Waiting for Temporal server to start...
temporal                | + sleep 1
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | {"level":"info","ts":"2022-07-05T15:01:48.997Z","msg":"Created gRPC listener","service":"frontend","address":"172.26.0.2:7233","logging-call-at":"rpc.go:154"}
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | Waiting for Temporal server to start...
temporal                | {"level":"info","ts":"2022-07-05T15:01:54.121Z","msg":"PProf not started due to port not set","logging-call-at":"pprof.go:67"}
temporal                | {"level":"info","ts":"2022-07-05T15:01:54.121Z","msg":"Starting server for services","value":{"frontend":{},"history":{},"matching":{},"worker":{}},"logging-call-at":"server_impl.go:99"}
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | {"level":"info","ts":"2022-07-05T15:01:57.371Z","msg":"Membership heartbeat upserted successfully","service":"history","address":"172.26.0.2","port":6934,"hostId":"5dd34184-fc73-11ec-8de6-0242ac1a0002","logging-call-at":"rpMonitor.go:229"}
temporal                | {"level":"info","ts":"2022-07-05T15:01:58.980Z","msg":"bootstrap hosts fetched","service":"history","bootstrap-hostports":"172.26.0.2:6934","logging-call-at":"rpMonitor.go:271"}
temporal                | {"level":"info","ts":"2022-07-05T15:01:59.004Z","msg":"Current reachable members","service":"history","component":"service-resolver","service":"history","addresses":["172.26.0.2:7234"],"logging-call-at":"rpServiceResolver.go:266"}
temporal                | Waiting for Temporal server to start...
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | {"level":"info","ts":"2022-07-05T15:02:00.147Z","msg":"RuntimeMetricsReporter started","service":"history","logging-call-at":"runtime.go:138"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:00.148Z","msg":"history starting","service":"history","logging-call-at":"service.go:96"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:00.148Z","msg":"Replication task fetchers started.","logging-call-at":"task_fetcher.go:141"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:00.148Z","msg":"none","shard-id":1,"address":"172.26.0.2:7234","lifecycle":"Started","component":"shard-context","logging-call-at":"controller_impl.go:263"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:00.148Z","msg":"none","shard-id":2,"address":"172.26.0.2:7234","lifecycle":"Started","component":"shard-context","logging-call-at":"controller_impl.go:263"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:00.148Z","msg":"none","shard-id":3,"address":"172.26.0.2:7234","lifecycle":"Started","component":"shard-context","logging-call-at":"controller_impl.go:263"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:00.148Z","msg":"none","shard-id":4,"address":"172.26.0.2:7234","lifecycle":"Started","component":"shard-context","logging-call-at":"controller_impl.go:263"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:00.149Z","msg":"none","component":"shard-controller","address":"172.26.0.2:7234","lifecycle":"Started","logging-call-at":"controller_impl.go:118"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:00.149Z","msg":"Starting to serve on history listener","service":"history","logging-call-at":"service.go:107"}
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.090Z","msg":"RuntimeMetricsReporter started","service":"matching","logging-call-at":"runtime.go:138"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.252Z","msg":"Range updated for shardID","shard-id":4,"address":"172.26.0.2:7234","shard-range-id":3,"previous-shard-range-id":2,"number":0,"next-number":0,"logging-call-at":"context_impl.go:1122"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.252Z","msg":"Acquired shard","shard-id":4,"address":"172.26.0.2:7234","logging-call-at":"context_impl.go:1755"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.252Z","msg":"none","shard-id":4,"address":"172.26.0.2:7234","lifecycle":"Starting","component":"shard-engine","logging-call-at":"context_impl.go:1386"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"none","shard-id":4,"address":"172.26.0.2:7234","component":"history-engine","lifecycle":"Starting","logging-call-at":"historyEngine.go:252"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"Parallel task processor started","shard-id":4,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"interleaved weighted round robin task scheduler started","shard-id":4,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"Timer queue processor started.","shard-id":4,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","component":"timer-queue-processor","logging-call-at":"timerQueueProcessorBase.go:139"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"Parallel task processor started","shard-id":4,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"interleaved weighted round robin task scheduler started","shard-id":4,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"none","shard-id":4,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","lifecycle":"Starting","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:128"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"none","shard-id":4,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","lifecycle":"Started","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:134"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"Parallel task processor started","shard-id":4,"address":"172.26.0.2:7234","component":"visibility-queue-processor","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"interleaved weighted round robin task scheduler started","shard-id":4,"address":"172.26.0.2:7234","component":"visibility-queue-processor","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"none","shard-id":4,"address":"172.26.0.2:7234","component":"visibility-queue-processor","lifecycle":"Starting","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:128"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"none","shard-id":4,"address":"172.26.0.2:7234","component":"visibility-queue-processor","lifecycle":"Started","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:134"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"none","shard-id":4,"address":"172.26.0.2:7234","component":"history-engine","lifecycle":"Started","logging-call-at":"historyEngine.go:269"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.273Z","msg":"none","shard-id":4,"address":"172.26.0.2:7234","lifecycle":"Started","component":"shard-engine","logging-call-at":"context_impl.go:1389"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.992Z","msg":"Range updated for shardID","shard-id":1,"address":"172.26.0.2:7234","shard-range-id":4,"previous-shard-range-id":3,"number":0,"next-number":0,"logging-call-at":"context_impl.go:1122"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.993Z","msg":"Acquired shard","shard-id":1,"address":"172.26.0.2:7234","logging-call-at":"context_impl.go:1755"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.993Z","msg":"none","shard-id":1,"address":"172.26.0.2:7234","lifecycle":"Starting","component":"shard-engine","logging-call-at":"context_impl.go:1386"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.993Z","msg":"none","shard-id":1,"address":"172.26.0.2:7234","component":"history-engine","lifecycle":"Starting","logging-call-at":"historyEngine.go:252"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.993Z","msg":"Parallel task processor started","shard-id":1,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.993Z","msg":"interleaved weighted round robin task scheduler started","shard-id":1,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.993Z","msg":"Timer queue processor started.","shard-id":1,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","component":"timer-queue-processor","logging-call-at":"timerQueueProcessorBase.go:139"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.993Z","msg":"Parallel task processor started","shard-id":1,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.993Z","msg":"interleaved weighted round robin task scheduler started","shard-id":1,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.993Z","msg":"none","shard-id":1,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","lifecycle":"Starting","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:128"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.994Z","msg":"none","shard-id":1,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","lifecycle":"Started","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:134"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.994Z","msg":"Parallel task processor started","shard-id":1,"address":"172.26.0.2:7234","component":"visibility-queue-processor","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.994Z","msg":"interleaved weighted round robin task scheduler started","shard-id":1,"address":"172.26.0.2:7234","component":"visibility-queue-processor","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.994Z","msg":"none","shard-id":1,"address":"172.26.0.2:7234","component":"visibility-queue-processor","lifecycle":"Starting","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:128"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.994Z","msg":"none","shard-id":1,"address":"172.26.0.2:7234","component":"visibility-queue-processor","lifecycle":"Started","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:134"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.994Z","msg":"none","shard-id":1,"address":"172.26.0.2:7234","component":"history-engine","lifecycle":"Started","logging-call-at":"historyEngine.go:269"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:03.994Z","msg":"none","shard-id":1,"address":"172.26.0.2:7234","lifecycle":"Started","component":"shard-engine","logging-call-at":"context_impl.go:1389"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:05.050Z","msg":"Membership heartbeat upserted successfully","service":"matching","address":"172.26.0.2","port":6935,"hostId":"617921bf-fc73-11ec-8de6-0242ac1a0002","logging-call-at":"rpMonitor.go:229"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.089Z","msg":"Range updated for shardID","shard-id":2,"address":"172.26.0.2:7234","shard-range-id":3,"previous-shard-range-id":2,"number":0,"next-number":0,"logging-call-at":"context_impl.go:1122"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.089Z","msg":"Acquired shard","shard-id":2,"address":"172.26.0.2:7234","logging-call-at":"context_impl.go:1755"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.089Z","msg":"none","shard-id":2,"address":"172.26.0.2:7234","lifecycle":"Starting","component":"shard-engine","logging-call-at":"context_impl.go:1386"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.089Z","msg":"bootstrap hosts fetched","service":"matching","bootstrap-hostports":"172.26.0.2:6934,172.26.0.2:6935","logging-call-at":"rpMonitor.go:271"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"none","shard-id":2,"address":"172.26.0.2:7234","component":"history-engine","lifecycle":"Starting","logging-call-at":"historyEngine.go:252"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"Parallel task processor started","shard-id":2,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"interleaved weighted round robin task scheduler started","shard-id":2,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"Timer queue processor started.","shard-id":2,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","component":"timer-queue-processor","logging-call-at":"timerQueueProcessorBase.go:139"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"Parallel task processor started","shard-id":2,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"interleaved weighted round robin task scheduler started","shard-id":2,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"none","shard-id":2,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","lifecycle":"Starting","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:128"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"none","shard-id":2,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","lifecycle":"Started","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:134"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"Parallel task processor started","shard-id":2,"address":"172.26.0.2:7234","component":"visibility-queue-processor","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"interleaved weighted round robin task scheduler started","shard-id":2,"address":"172.26.0.2:7234","component":"visibility-queue-processor","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"none","shard-id":2,"address":"172.26.0.2:7234","component":"visibility-queue-processor","lifecycle":"Starting","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:128"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"none","shard-id":2,"address":"172.26.0.2:7234","component":"visibility-queue-processor","lifecycle":"Started","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:134"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"none","shard-id":2,"address":"172.26.0.2:7234","component":"history-engine","lifecycle":"Started","logging-call-at":"historyEngine.go:269"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.090Z","msg":"none","shard-id":2,"address":"172.26.0.2:7234","lifecycle":"Started","component":"shard-engine","logging-call-at":"context_impl.go:1389"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.116Z","msg":"Current reachable members","service":"matching","component":"service-resolver","service":"history","addresses":["172.26.0.2:7234"],"logging-call-at":"rpServiceResolver.go:266"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.117Z","msg":"Current reachable members","service":"matching","component":"service-resolver","service":"matching","addresses":["172.26.0.2:7235"],"logging-call-at":"rpServiceResolver.go:266"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.117Z","msg":"matching starting","service":"matching","logging-call-at":"service.go:91"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.117Z","msg":"Starting to serve on matching listener","service":"matching","logging-call-at":"service.go:102"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.118Z","msg":"none","component":"shard-controller","address":"172.26.0.2:7234","shard-update":"RingMembershipChangedEvent","number-processed":1,"number-deleted":0,"number":0,"logging-call-at":"controller_impl.go:310"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.118Z","msg":"Current reachable members","service":"history","component":"service-resolver","service":"matching","addresses":["172.26.0.2:7235"],"logging-call-at":"rpServiceResolver.go:266"}
temporal                | Waiting for Temporal server to start...
temporal                | + echo 'Waiting for Temporal server to start...'
temporal                | + sleep 1
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.724Z","msg":"Range updated for shardID","shard-id":3,"address":"172.26.0.2:7234","shard-range-id":4,"previous-shard-range-id":3,"number":0,"next-number":0,"logging-call-at":"context_impl.go:1122"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.724Z","msg":"Acquired shard","shard-id":3,"address":"172.26.0.2:7234","logging-call-at":"context_impl.go:1755"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.724Z","msg":"none","shard-id":3,"address":"172.26.0.2:7234","lifecycle":"Starting","component":"shard-engine","logging-call-at":"context_impl.go:1386"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.724Z","msg":"none","shard-id":3,"address":"172.26.0.2:7234","component":"history-engine","lifecycle":"Starting","logging-call-at":"historyEngine.go:252"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"Parallel task processor started","shard-id":3,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"interleaved weighted round robin task scheduler started","shard-id":3,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"Timer queue processor started.","shard-id":3,"address":"172.26.0.2:7234","component":"timer-queue-processor","cluster-name":"active","component":"timer-queue-processor","logging-call-at":"timerQueueProcessorBase.go:139"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"Parallel task processor started","shard-id":3,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"interleaved weighted round robin task scheduler started","shard-id":3,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"none","shard-id":3,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","lifecycle":"Starting","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:128"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"none","shard-id":3,"address":"172.26.0.2:7234","component":"transfer-queue-processor","cluster-name":"active","lifecycle":"Started","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:134"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"Parallel task processor started","shard-id":3,"address":"172.26.0.2:7234","component":"visibility-queue-processor","logging-call-at":"parallel_processor.go:98"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"interleaved weighted round robin task scheduler started","shard-id":3,"address":"172.26.0.2:7234","component":"visibility-queue-processor","logging-call-at":"interleaved_weighted_round_robin.go:109"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"none","shard-id":3,"address":"172.26.0.2:7234","component":"visibility-queue-processor","lifecycle":"Starting","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:128"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"none","shard-id":3,"address":"172.26.0.2:7234","component":"visibility-queue-processor","lifecycle":"Started","component":"transfer-queue-processor","logging-call-at":"queueProcessor.go:134"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"none","shard-id":3,"address":"172.26.0.2:7234","component":"history-engine","lifecycle":"Started","logging-call-at":"historyEngine.go:269"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:06.725Z","msg":"none","shard-id":3,"address":"172.26.0.2:7234","lifecycle":"Started","component":"shard-engine","logging-call-at":"context_impl.go:1389"}
temporal                | + tctl cluster health
temporal                | + grep -q SERVING
temporal                | {"level":"error","ts":"2022-07-05T15:02:07.810Z","msg":"Operation failed with internal error.","error":"GetWorkflowExecution: failed to get timer info. Error: Failed to get timer info. Error: context deadline exceeded","metric-scope":5,"logging-call-at":"persistenceMetricClients.go:1424","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence.(*metricEmitter).updateErrorMetric\n\t/home/builder/temporal/common/persistence/persistenceMetricClients.go:1424\ngo.temporal.io/server/common/persistence.(*executionPersistenceClient).GetWorkflowExecution\n\t/home/builder/temporal/common/persistence/persistenceMetricClients.go:241\ngo.temporal.io/server/service/history/shard.(*ContextImpl).GetWorkflowExecution\n\t/home/builder/temporal/service/history/shard/context_impl.go:840\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry.func1\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:462\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:467\ngo.temporal.io/server/service/history/workflow.(*ContextImpl).LoadWorkflowExecution\n\t/home/builder/temporal/service/history/workflow/context.go:274\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).processStartExecution\n\t/home/builder/temporal/service/history/visibilityQueueTaskExecutor.go:115\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).Execute\n\t/home/builder/temporal/service/history/visibilityQueueTaskExecutor.go:90\ngo.temporal.io/server/service/history/queues.(*executableImpl).Execute\n\t/home/builder/temporal/service/history/queues/executable.go:161\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:207\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}
temporal                | {"level":"error","ts":"2022-07-05T15:02:07.810Z","msg":"Persistent fetch operation Failure","shard-id":4,"address":"172.26.0.2:7234","wf-namespace-id":"32049b68-7872-4094-8e63-d0dd59896a83","wf-id":"temporal-sys-tq-scanner","wf-run-id":"d034be6e-fac2-4bb1-bdde-a5e36749c7e2","store-operation":"get-wf-execution","error":"context deadline exceeded","logging-call-at":"transaction_impl.go:489","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:489\ngo.temporal.io/server/service/history/workflow.(*ContextImpl).LoadWorkflowExecution\n\t/home/builder/temporal/service/history/workflow/context.go:274\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).processStartExecution\n\t/home/builder/temporal/service/history/visibilityQueueTaskExecutor.go:115\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).Execute\n\t/home/builder/temporal/service/history/visibilityQueueTaskExecutor.go:90\ngo.temporal.io/server/service/history/queues.(*executableImpl).Execute\n\t/home/builder/temporal/service/history/queues/executable.go:161\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:207\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}
temporal                | {"level":"error","ts":"2022-07-05T15:02:07.811Z","msg":"Fail to process task","shard-id":4,"address":"172.26.0.2:7234","component":"visibility-queue-processor","wf-namespace-id":"32049b68-7872-4094-8e63-d0dd59896a83","wf-id":"temporal-sys-tq-scanner","wf-run-id":"d034be6e-fac2-4bb1-bdde-a5e36749c7e2","queue-task-id":1048580,"queue-task-visibility-timestamp":"2022-07-04T12:28:03.160Z","queue-task-type":"VisibilityStartExecution","queue-task":{"NamespaceID":"32049b68-7872-4094-8e63-d0dd59896a83","WorkflowID":"temporal-sys-tq-scanner","RunID":"d034be6e-fac2-4bb1-bdde-a5e36749c7e2","VisibilityTimestamp":"2022-07-04T12:28:03.160160056Z","TaskID":1048580,"Version":0},"wf-history-event-id":0,"error":"context deadline exceeded","lifecycle":"ProcessingFailed","logging-call-at":"lazy_logger.go:68","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/log.(*lazyLogger).Error\n\t/home/builder/temporal/common/log/lazy_logger.go:68\ngo.temporal.io/server/service/history/queues.(*executableImpl).HandleErr\n\t/home/builder/temporal/service/history/queues/executable.go:231\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:208\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:08.534Z","msg":"RuntimeMetricsReporter started","service":"worker","logging-call-at":"runtime.go:138"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:09.667Z","msg":"Membership heartbeat upserted successfully","service":"worker","address":"172.26.0.2","port":6939,"hostId":"6711f647-fc73-11ec-8de6-0242ac1a0002","logging-call-at":"rpMonitor.go:229"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:10.106Z","msg":"bootstrap hosts fetched","service":"worker","bootstrap-hostports":"172.26.0.2:6934,172.26.0.2:6935,172.26.0.2:6939","logging-call-at":"rpMonitor.go:271"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:10.108Z","msg":"Current reachable members","service":"worker","component":"service-resolver","service":"matching","addresses":["172.26.0.2:7235"],"logging-call-at":"rpServiceResolver.go:266"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:10.108Z","msg":"Current reachable members","service":"worker","component":"service-resolver","service":"history","addresses":["172.26.0.2:7234"],"logging-call-at":"rpServiceResolver.go:266"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:10.108Z","msg":"Current reachable members","service":"worker","component":"service-resolver","service":"worker","addresses":["172.26.0.2:7239"],"logging-call-at":"rpServiceResolver.go:266"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:10.109Z","msg":"worker starting","service":"worker","component":"worker","logging-call-at":"service.go:332"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:10.109Z","msg":"none","component":"shard-controller","address":"172.26.0.2:7234","shard-update":"RingMembershipChangedEvent","number-processed":1,"number-deleted":0,"number":0,"logging-call-at":"controller_impl.go:310"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:10.109Z","msg":"Current reachable members","service":"history","component":"service-resolver","service":"worker","addresses":["172.26.0.2:7239"],"logging-call-at":"rpServiceResolver.go:266"}
temporal                | {"level":"info","ts":"2022-07-05T15:02:10.145Z","msg":"Current reachable members","service":"matching","component":"service-resolver","service":"worker","addresses":["172.26.0.2:7239"],"logging-call-at":"rpServiceResolver.go:266"}
temporal                | {"level":"error","ts":"2022-07-05T15:02:10.812Z","msg":"Operation failed with internal error.","error":"GetWorkflowExecution: failed to get child executionsRow info. Error: Failed to get timer info. Error: context deadline exceeded","metric-scope":5,"logging-call-at":"persistenceMetricClients.go:1424","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence.(*metricEmitter).updateErrorMetric\n\t/home/builder/temporal/common/persistence/persistenceMetricClients.go:1424\ngo.temporal.io/server/common/persistence.(*executionPersistenceClient).GetWorkflowExecution\n\t/home/builder/temporal/common/persistence/persistenceMetricClients.go:241\ngo.temporal.io/server/service/history/shard.(*ContextImpl).GetWorkflowExecution\n\t/home/builder/temporal/service/history/shard/context_impl.go:840\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry.func1\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:462\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:467\ngo.temporal.io/server/service/history/workflow.(*ContextImpl).LoadWorkflowExecution\n\t/home/builder/temporal/service/history/workflow/context.go:274\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).processStartExecution\n\t/home/builder/temporal/service/history/visibilityQueueTaskExecutor.go:115\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).Execute\n\t/home/builder/temporal/service/history/visibilityQueueTaskExecutor.go:90\ngo.temporal.io/server/service/history/queues.(*executableImpl).Execute\n\t/home/builder/temporal/service/history/queues/executable.go:161\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:207\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}
temporal                | {"level":"error","ts":"2022-07-05T15:02:10.812Z","msg":"Persistent fetch operation Failure","shard-id":4,"address":"172.26.0.2:7234","wf-namespace-id":"32049b68-7872-4094-8e63-d0dd59896a83","wf-id":"temporal-sys-tq-scanner","wf-run-id":"d034be6e-fac2-4bb1-bdde-a5e36749c7e2","store-operation":"get-wf-execution","error":"context deadline exceeded","logging-call-at":"transaction_impl.go:489","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:489\ngo.temporal.io/server/service/history/workflow.(*ContextImpl).LoadWorkflowExecution\n\t/home/builder/temporal/service/history/workflow/context.go:274\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).processStartExecution\n\t/home/builder/temporal/service/history/visibilityQueueTaskExecutor.go:115\ngo.temporal.io/server/service/history.(*visibilityQueueTaskExecutor).Execute\n\t/home/builder/temporal/service/history/visibilityQueueTaskExecutor.go:90\ngo.temporal.io/server/service/history/queues.(*executableImpl).Execute\n\t/home/builder/temporal/service/history/queues/executable.go:161\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:207\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}
temporal                | {"level":"error","ts":"2022-07-05T15:02:10.812Z","msg":"Fail to process task","shard-id":4,"address":"172.26.0.2:7234","component":"visibility-queue-processor","wf-namespace-id":"32049b68-7872-4094-8e63-d0dd59896a83","wf-id":"temporal-sys-tq-scanner","wf-run-id":"d034be6e-fac2-4bb1-bdde-a5e36749c7e2","queue-task-id":1048580,"queue-task-visibility-timestamp":"2022-07-04T12:28:03.160Z","queue-task-type":"VisibilityStartExecution","queue-task":{"NamespaceID":"32049b68-7872-4094-8e63-d0dd59896a83","WorkflowID":"temporal-sys-tq-scanner","RunID":"d034be6e-fac2-4bb1-bdde-a5e36749c7e2","VisibilityTimestamp":"2022-07-04T12:28:03.160160056Z","TaskID":1048580,"Version":0},"wf-history-event-id":0,"error":"context deadline exceeded","lifecycle":"ProcessingFailed","logging-call-at":"lazy_logger.go:68","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/log.(*lazyLogger).Error\n\t/home/builder/temporal/common/log/lazy_logger.go:68\ngo.temporal.io/server/service/history/queues.(*executableImpl).HandleErr\n\t/home/builder/temporal/service/history/queues/executable.go:231\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:208\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:217\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).processTask\n\t/home/builder/temporal/common/tasks/parallel_processor.go:195"}
temporal                | {"level":"error","ts":"2022-07-05T15:02:11.534Z","msg":"Operation failed with internal error.","error":"GetWorkflowExecution: failed to get buffered events. Error: getBufferedEvents operation failed. Select failed: context deadline exceeded","metric-scope":5,"logging-call-at":"persistenceMetricClients.go:1424","stacktrace":"go.temporal.io/server/common/log.(*zapLogger).Error\n\t/home/builder/temporal/common/log/zap_logger.go:142\ngo.temporal.io/server/common/persistence.(*metricEmitter).updateErrorMetric\n\t/home/builder/temporal/common/persistence/persistenceMetricClients.go:1424\ngo.temporal.io/server/common/persistence.(*executionPersistenceClient).GetWorkflowExecution\n\t/home/builder/temporal/common/persistence/persistenceMetricClients.go:241\ngo.temporal.io/server/service/history/shard.(*ContextImpl).GetWorkflowExecution\n\t/home/builder/temporal/service/history/shard/context_impl.go:840\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry.func1\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:462\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/service/history/workflow.getWorkflowExecutionWithRetry\n\t/home/builder/temporal/service/history/workflow/transaction_impl.go:467\ngo.temporal.io/server/service/history/workflow.(*ContextImpl).LoadWorkflowExecution\n\t/home/builder/temporal/service/history/workflow/context.go:274\ngo.temporal.io/server/service/history.LoadMutableStateForTask\n\t/home/builder/temporal/service/history/nDCTaskUtil.go:142\ngo.temporal.io/server/service/history.loadMutableStateForTimerTask\n\t/home/builder/temporal/service/history/nDCTaskUtil.go:123\ngo.temporal.io/server/service/history.(*timerQueueActiveTaskExecutor).executeWorkflowTaskTimeoutTask\n\t/home/builder/temporal/service/history/timerQueueActiveTaskExecutor.go:292\ngo.temporal.io/server/service/history.(*timerQueueActiveTaskExecutor).Execute\n\t/home/builder/temporal/service/history/timerQueueActiveTaskExecutor.go:109\ngo.temporal.io/server/service/history/queues.(*executorWrapper).Execute\n\t/home/builder/temporal/service/history/queues/executor_wrapper.go:67\ngo.temporal.io/server/service/history/queues.(*executableImpl).Execute\n\t/home/builder/temporal/service/history/queues/executable.go:161\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask.func1\n\t/home/builder/temporal/common/tasks/parallel_processor.go:207\ngo.temporal.io/server/common/backoff.Retry.func1\n\t/home/builder/temporal/common/backoff/retry.go:104\ngo.temporal.io/server/common/backoff.RetryContext\n\t/home/builder/temporal/common/backoff/retry.go:125\ngo.temporal.io/server/common/backoff.Retry\n\t/home/builder/temporal/common/backoff/retry.go:105\ngo.temporal.io/server/common/tasks.(*ParallelProcessor).executeTask\n\t/home/builder/temporal/common/tasks/parallel_proce

activity_info_maps

CREATE TABLE `activity_info_maps` (
	`shard_id` int NOT NULL,
	`namespace_id` binary(16) NOT NULL,
	`workflow_id` varchar(255) NOT NULL,
	`run_id` binary(16) NOT NULL,
	`schedule_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16),
	PRIMARY KEY (`shard_id`, `namespace_id`, `workflow_id`, `run_id`, `schedule_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

buffered_events

CREATE TABLE `buffered_events` (
	`shard_id` int NOT NULL,
	`namespace_id` binary(16) NOT NULL,
	`workflow_id` varchar(255) NOT NULL,
	`run_id` binary(16) NOT NULL,
	`id` bigint NOT NULL AUTO_INCREMENT,
	`data` mediumblob NOT NULL,
	`data_encoding` varchar(16) NOT NULL,
	PRIMARY KEY (`shard_id`, `namespace_id`, `workflow_id`, `run_id`, `id`),
	UNIQUE KEY `id` (`id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

child_execution_info_maps

CREATE TABLE `child_execution_info_maps` (
	`shard_id` int NOT NULL,
	`namespace_id` binary(16) NOT NULL,
	`workflow_id` varchar(255) NOT NULL,
	`run_id` binary(16) NOT NULL,
	`initiated_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16),
	PRIMARY KEY (`shard_id`, `namespace_id`, `workflow_id`, `run_id`, `initiated_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

cluster_membership

CREATE TABLE `cluster_membership` (
	`membership_partition` int NOT NULL,
	`host_id` binary(16) NOT NULL,
	`rpc_address` varchar(128),
	`rpc_port` smallint NOT NULL,
	`role` tinyint NOT NULL,
	`session_start` timestamp NULL DEFAULT '1970-01-01 00:00:01',
	`last_heartbeat` timestamp NULL DEFAULT '1970-01-01 00:00:01',
	`record_expiry` timestamp NULL DEFAULT '1970-01-01 00:00:01',
	PRIMARY KEY (`membership_partition`, `host_id`),
	KEY `role` (`role`, `host_id`),
	KEY `role_2` (`role`, `last_heartbeat`),
	KEY `rpc_address` (`rpc_address`, `role`),
	KEY `last_heartbeat` (`last_heartbeat`),
	KEY `record_expiry` (`record_expiry`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

cluster_metadata

CREATE TABLE `cluster_metadata` (
	`metadata_partition` int NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL DEFAULT 'Proto3',
	`version` bigint NOT NULL DEFAULT '1',
	PRIMARY KEY (`metadata_partition`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

cluster_metadata_info

CREATE TABLE `cluster_metadata_info` (
	`metadata_partition` int NOT NULL,
	`cluster_name` varchar(255) NOT NULL,
	`data` mediumblob NOT NULL,
	`data_encoding` varchar(16) NOT NULL,
	`version` bigint NOT NULL,
	PRIMARY KEY (`metadata_partition`, `cluster_name`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

current_executions

CREATE TABLE `current_executions` (
	`shard_id` int NOT NULL,
	`namespace_id` binary(16) NOT NULL,
	`workflow_id` varchar(255) NOT NULL,
	`run_id` binary(16) NOT NULL,
	`create_request_id` varchar(255),
	`state` int NOT NULL,
	`status` int NOT NULL,
	`start_version` bigint NOT NULL DEFAULT '0',
	`last_write_version` bigint NOT NULL,
	PRIMARY KEY (`shard_id`, `namespace_id`, `workflow_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

executions

CREATE TABLE `executions` (
	`shard_id` int NOT NULL,
	`namespace_id` binary(16) NOT NULL,
	`workflow_id` varchar(255) NOT NULL,
	`run_id` binary(16) NOT NULL,
	`next_event_id` bigint NOT NULL,
	`last_write_version` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL,
	`state` mediumblob,
	`state_encoding` varchar(16) NOT NULL,
	`db_record_version` bigint NOT NULL DEFAULT '0',
	PRIMARY KEY (`shard_id`, `namespace_id`, `workflow_id`, `run_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

history_node

CREATE TABLE `history_node` (
	`shard_id` int NOT NULL,
	`tree_id` binary(16) NOT NULL,
	`branch_id` binary(16) NOT NULL,
	`node_id` bigint NOT NULL,
	`txn_id` bigint NOT NULL,
	`data` mediumblob NOT NULL,
	`data_encoding` varchar(16) NOT NULL,
	`prev_txn_id` bigint NOT NULL DEFAULT '0',
	PRIMARY KEY (`shard_id`, `tree_id`, `branch_id`, `node_id`, `txn_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

history_tree

CREATE TABLE `history_tree` (
	`shard_id` int NOT NULL,
	`tree_id` binary(16) NOT NULL,
	`branch_id` binary(16) NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL,
	PRIMARY KEY (`shard_id`, `tree_id`, `branch_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

namespace_metadata

CREATE TABLE `namespace_metadata` (
	`partition_id` int NOT NULL,
	`notification_version` bigint NOT NULL,
	PRIMARY KEY (`partition_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

namespaces

CREATE TABLE `namespaces` (
	`partition_id` int NOT NULL,
	`id` binary(16) NOT NULL,
	`name` varchar(255) NOT NULL,
	`notification_version` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL,
	`is_global` tinyint(1) NOT NULL,
	PRIMARY KEY (`partition_id`, `id`),
	UNIQUE KEY `name` (`name`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

queue

CREATE TABLE `queue` (
	`queue_type` int NOT NULL,
	`message_id` bigint NOT NULL,
	`message_payload` mediumblob,
	`message_encoding` varchar(16) NOT NULL DEFAULT 'Json',
	PRIMARY KEY (`queue_type`, `message_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

queue_metadata

CREATE TABLE `queue_metadata` (
	`queue_type` int NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL DEFAULT 'Json',
	`version` bigint NOT NULL DEFAULT '0',
	PRIMARY KEY (`queue_type`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

replication_tasks

CREATE TABLE `replication_tasks` (
	`shard_id` int NOT NULL,
	`task_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL,
	PRIMARY KEY (`shard_id`, `task_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

replication_tasks_dlq

CREATE TABLE `replication_tasks_dlq` (
	`source_cluster_name` varchar(255) NOT NULL,
	`shard_id` int NOT NULL,
	`task_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL,
	PRIMARY KEY (`source_cluster_name`, `shard_id`, `task_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

request_cancel_info_maps

CREATE TABLE `request_cancel_info_maps` (
	`shard_id` int NOT NULL,
	`namespace_id` binary(16) NOT NULL,
	`workflow_id` varchar(255) NOT NULL,
	`run_id` binary(16) NOT NULL,
	`initiated_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16),
	PRIMARY KEY (`shard_id`, `namespace_id`, `workflow_id`, `run_id`, `initiated_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

schema_update_history

CREATE TABLE `schema_update_history` (
	`version_partition` int NOT NULL,
	`year` int NOT NULL,
	`month` int NOT NULL,
	`update_time` datetime(6) NOT NULL,
	`description` varchar(255),
	`manifest_md5` varchar(64),
	`new_version` varchar(64),
	`old_version` varchar(64),
	PRIMARY KEY (`version_partition`, `year`, `month`, `update_time`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

schema_version

CREATE TABLE `schema_version` (
	`version_partition` int NOT NULL,
	`db_name` varchar(255) NOT NULL,
	`creation_time` datetime(6),
	`curr_version` varchar(64),
	`min_compatible_version` varchar(64),
	PRIMARY KEY (`version_partition`, `db_name`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

shards

CREATE TABLE `shards` (
	`shard_id` int NOT NULL,
	`range_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL,
	PRIMARY KEY (`shard_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

signal_info_maps

CREATE TABLE `signal_info_maps` (
	`shard_id` int NOT NULL,
	`namespace_id` binary(16) NOT NULL,
	`workflow_id` varchar(255) NOT NULL,
	`run_id` binary(16) NOT NULL,
	`initiated_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16),
	PRIMARY KEY (`shard_id`, `namespace_id`, `workflow_id`, `run_id`, `initiated_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

signals_requested_sets

CREATE TABLE `signals_requested_sets` (
	`shard_id` int NOT NULL,
	`namespace_id` binary(16) NOT NULL,
	`workflow_id` varchar(255) NOT NULL,
	`run_id` binary(16) NOT NULL,
	`signal_id` varchar(255) NOT NULL,
	PRIMARY KEY (`shard_id`, `namespace_id`, `workflow_id`, `run_id`, `signal_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

task_queues

CREATE TABLE `task_queues` (
	`range_hash` int unsigned NOT NULL,
	`task_queue_id` varbinary(272) NOT NULL,
	`range_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL,
	PRIMARY KEY (`range_hash`, `task_queue_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

tasks

CREATE TABLE `tasks` (
	`range_hash` int unsigned NOT NULL,
	`task_queue_id` varbinary(272) NOT NULL,
	`task_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL,
	PRIMARY KEY (`range_hash`, `task_queue_id`, `task_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

timer_info_maps

CREATE TABLE `timer_info_maps` (
	`shard_id` int NOT NULL,
	`namespace_id` binary(16) NOT NULL,
	`workflow_id` varchar(255) NOT NULL,
	`run_id` binary(16) NOT NULL,
	`timer_id` varchar(255) NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16),
	PRIMARY KEY (`shard_id`, `namespace_id`, `workflow_id`, `run_id`, `timer_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

timer_tasks

CREATE TABLE `timer_tasks` (
	`shard_id` int NOT NULL,
	`visibility_timestamp` datetime(6) NOT NULL,
	`task_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL,
	PRIMARY KEY (`shard_id`, `visibility_timestamp`, `task_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

transfer_tasks

CREATE TABLE `transfer_tasks` (
	`shard_id` int NOT NULL,
	`task_id` bigint NOT NULL,
	`data` mediumblob,
	`data_encoding` varchar(16) NOT NULL,
	PRIMARY KEY (`shard_id`, `task_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

visibility_tasks

CREATE TABLE `visibility_tasks` (
	`shard_id` int NOT NULL,
	`task_id` bigint NOT NULL,
	`data` mediumblob NOT NULL,
	`data_encoding` varchar(16) NOT NULL,
	PRIMARY KEY (`shard_id`, `task_id`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

executions_visibility

CREATE TABLE `executions_visibility` (
	`namespace_id` char(64) NOT NULL,
	`run_id` char(64) NOT NULL,
	`start_time` datetime(6) NOT NULL,
	`execution_time` datetime(6) NOT NULL,
	`workflow_id` varchar(255) NOT NULL,
	`workflow_type_name` varchar(255) NOT NULL,
	`status` int NOT NULL,
	`close_time` datetime(6),
	`history_length` bigint,
	`memo` blob,
	`encoding` varchar(64) NOT NULL,
	`task_queue` varchar(255) NOT NULL DEFAULT '',
	PRIMARY KEY (`namespace_id`, `run_id`),
	KEY `by_type_start_time` (`namespace_id`, `workflow_type_name`, `status`, `start_time` DESC, `run_id`),
	KEY `by_workflow_id_start_time` (`namespace_id`, `workflow_id`, `status`, `start_time` DESC, `run_id`),
	KEY `by_status_by_start_time` (`namespace_id`, `status`, `start_time` DESC, `run_id`),
	KEY `by_type_close_time` (`namespace_id`, `workflow_type_name`, `status`, `close_time` DESC, `run_id`),
	KEY `by_workflow_id_close_time` (`namespace_id`, `workflow_id`, `status`, `close_time` DESC, `run_id`),
	KEY `by_status_by_close_time` (`namespace_id`, `status`, `close_time` DESC, `run_id`),
	KEY `by_close_time_by_status` (`namespace_id`, `close_time` DESC, `run_id`, `status`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

schema_update_history

CREATE TABLE `schema_update_history` (
	`version_partition` int NOT NULL,
	`year` int NOT NULL,
	`month` int NOT NULL,
	`update_time` datetime(6) NOT NULL,
	`description` varchar(255),
	`manifest_md5` varchar(64),
	`new_version` varchar(64),
	`old_version` varchar(64),
	PRIMARY KEY (`version_partition`, `year`, `month`, `update_time`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;

schema_version

CREATE TABLE `schema_version` (
	`version_partition` int NOT NULL,
	`db_name` varchar(255) NOT NULL,
	`creation_time` datetime(6),
	`curr_version` varchar(64),
	`min_compatible_version` varchar(64),
	PRIMARY KEY (`version_partition`, `db_name`)
) ENGINE InnoDB,
  CHARSET utf8mb4,
  COLLATE utf8mb4_0900_ai_ci;
@jonico

jonico commented Jul 5, 2022

@Mayowa-Ojo: I would need some more information to debug successfully:

  1. Can you paste the schema (as you see it in PlanetScale's web UI) for both the temporal and the temporal_internal database?
  2. Can you attach the full docker-compose command you run, plus its console output from the very beginning? I have seen problems in the later startup phase that were caused by the initial phase.
  3. Can you paste the content of the following environment variables:
  • ${TEMPORAL_VISIBILITY_USER}
  • ${TEMPORAL_VISIBILITY_PSCALE_HOSTSTRING}
  • ${TEMPORAL_USER}
  • ${TEMPORAL_PSCALE_HOSTSTRING}
    Those values are not confidential (only the passwords are), but I'd like to rule out that they refer to the same database, especially as you have seen an unknown database 'temporal' error in Insights. A safe way to print them is sketched below.
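
Rough sketch (my assumptions: the variables live in a .env file next to docker-compose.yml, and only the non-secret keys are matched, so the *_PWD values stay hidden):

# Non-secret connection variables as defined in .env
grep -E 'TEMPORAL_(VISIBILITY_)?(USER|PSCALE_HOSTSTRING)=' .env

# What docker-compose actually substitutes into the temporal container
docker-compose config | grep -E '(VISIBILITY_)?MYSQL_(USER|SEEDS)'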

@jonico

jonico commented Jul 5, 2022

... and one follow-up question: do you see those errors ☝️ very frequently, or "just" every 20 minutes? I am asking because I noticed that temporal.io does not always seem to close its transactions properly, and the default transaction timeout is 20 minutes.

@Mayowa-Ojo
Author

Mayowa-Ojo commented Jul 5, 2022

@jonico Thanks for your time with this.

I've added two files for the temporal schema and the temporal_visibility schema. I'm assuming that by temporal_internal you mean temporal_visibility.

I've also added the logs from running docker-compose up, up to the first few error lines.

These are the environment variable values:

- VISIBILITY_MYSQL_USER=thnccus39ow2
- VISIBILITY_MYSQL_PWD=[redacted]
- VISIBILITY_MYSQL_SEEDS=hly4e25zt9ni.us-east-3.psdb.cloud
- MYSQL_USER=708niiqkqzaz
- MYSQL_PWD=[redacted]
- MYSQL_SEEDS=f0es1algq55z.us-east-1.psdb.cloud

Yes, the errors are logged very frequently, every few seconds in fact. I can open the Temporal web UI after running docker-compose, but I can't run the TypeScript hello world example successfully; I get the context deadline exceeded error.
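
In case it's useful, a check I can run from the admin-tools container (rough sketch; I'm assuming tctl picks up TEMPORAL_CLI_ADDRESS=temporal:7233 from the compose file):

# Ask the frontend whether it is serving at all
docker exec temporal-admin-tools tctl cluster health

# Namespace listing also round-trips through the persistence store
docker exec temporal-admin-tools tctl namespace list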

@jonico

jonico commented Jul 5, 2022

@Mayowa-Ojo: I did not notice any significant differences in your logs and configuration up to the point where the error messages start to appear. Would it be possible to invite me as an administrator to your databases? My e-mail is jonico@planetscale.com

The only other thing I would try is to drop all tables in both databases and re-initialize them.
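
A rough sketch of what I mean (assuming the schema was originally created with temporal-sql-tool from the admin-tools image; exact flags and schema paths can differ between Temporal versions, and the database names are whatever you called them in PlanetScale):

# 1) Drop all tables in both PlanetScale databases, e.g. via
#    "pscale shell <database> <branch>" or the web console.

# 2) Re-create the base schema and apply the versioned migrations
#    (run inside the temporalio/admin-tools container; --tls because
#    PlanetScale only accepts TLS connections)
temporal-sql-tool --plugin mysql --ep "$MYSQL_SEEDS" -u "$MYSQL_USER" --pw "$MYSQL_PWD" \
  --db temporal --tls setup-schema -v 0.0
temporal-sql-tool --plugin mysql --ep "$MYSQL_SEEDS" -u "$MYSQL_USER" --pw "$MYSQL_PWD" \
  --db temporal --tls update-schema -d ./schema/mysql/v57/temporal/versioned

# 3) Repeat with the visibility credentials and the
#    ./schema/mysql/v57/visibility/versioned directory for temporal_visibility.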

@Mayowa-Ojo
Author

@jonico Yes definitely, I just invited you to the two organizations hosting the databases.

@jonico

jonico commented Jul 6, 2022

@Mayowa-Ojo: I tried to reproduce with your two databases and succeeded in running the TypeScript example. I was using the docker-compose file from this very gist 😕

Now, there are at least three things that differ between my setup and yours:

  1. The connection strings / env variables I used (created as additional credentials for your DBs). May I ask you to try using mine? They are password-protected and in a format such that you can only retrieve them once, by clicking on this link. The password to unprotect them is the name of the PlanetScale org where you host the temporal DB.

  2. The specific version of https://github.com/temporalio/temporal - I am running successfully with https://github.com/planetscale/temporal/tree/main, which is currently 28 commits behind the upstream master branch. If changing the env variables does not change anything, I would suggest trying planetscale/temporal@a431b8a (see the checkout sketch after this list) - I will also retest with the latest HEAD.

  3. The actual machine / architecture and location you are running on - I tested successfully on my local Intel Mac and on amd64 Linux using GitHub Codespaces (same Go version as in your logs). If 1) and 2) are not the culprits, it may be worth checking whether you can get it to run in a GitHub Codespace or Gitpod.
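
For 2), the checkout would look roughly like this (sketch only; adjust if you are running the compose file from this gist with the published temporalio/* images rather than building the server from source):

# Pin the server sources to the commit mentioned above
git clone https://github.com/planetscale/temporal.git
cd temporal
git checkout a431b8a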

Looking forward to your feedback 😊

@jonico

jonico commented Jul 6, 2022

Update: I updated both https://github.com/planetscale/docker-compose and https://github.com/planetscale/temporal to the latest upstream and your databases still work for me - so for step 2, just double-check that both repos are on the latest version.

You might wonder why step 1) is suggested at all, but you'll notice that my connection values are in a slightly different format than yours (they use our new edge network), and I'd just like to rule out different behavior in that area first.

@Mayowa-Ojo
Author

@jonico Wow, it still beats me how you're able to run this with the exact same configs and I can't 😄

So here's what I've tried:

  • First, I switched to the env variables you shared in the secure link. I ran docker-compose again and still got the same errors.
  • Next, I pulled the latest version of the temporal repo and ran docker-compose again, this time with the new env configs, but got the same errors.
  • After that, I was left with the third point you mentioned. So I spun up a Gitpod instance, copied the compose file over, ran the docker-compose command with the new env configs, and to my surprise it worked fine!
  • This helped me narrow down the possibilities: it's either the Docker and docker-compose versions I have locally (v19.x.x and 1.25.x respectively) or a problem with my hardware (I'm running Ubuntu 20.04 - 16GB RAM - Core i5 CPU - x86_64). So I upgraded my Docker and docker-compose versions to the latest, roughly as sketched below, and tried again. Still got the same errors.
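
Roughly what that last step looked like (nothing fancy):

# Confirm the upgraded versions
docker --version
docker-compose --version   # or: docker compose version, for the v2 plugin

# Clean restart so no stale containers or volumes interfere
docker-compose down -v
docker-compose up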

I think it's safe to say that the most likely culprit here is my computer (the hardware, the underlying OS, or both). There might be other factors I'm not considering, but I can't think of any other likely cause.

For now, I'll just stick with the Gitpod option. Thanks a lot for your help with this!

@jonico

jonico commented Jul 6, 2022

Wow, computers ... glad you narrowed it down. Another potential culprit could be your actual network interface, but local hardware differences are definitely a thing - there's a reason all of GitHub engineering switched to Codespaces and avoided M1 incompatibilities 😅

@Mayowa-Ojo
Author

Haha! It's really mind-blowing. Glad there's a working solution at least.
