@morozov
Created January 23, 2021 01:37
Support for Multiple Databases per SQL Server Connector

Background

Unlike the Debezium connector for MySQL, which consumes changes from all databases via a single binlog, the SQL Server connector interacts with a CDC schema that is deployed individually for each database.

At SugarCRM, we have a number of SQL Server instances, each hosting more than a hundred databases whose changes we'd like to capture using the Debezium connector for SQL Server. Deploying a hundred connectors per instance would be suboptimal from both a resourcing and an operations standpoint.

Research

Unlike the MySQL connector, which receives events from the server as they are committed to the binlog, the SQL Server connector periodically calls stored procedures on the server and, if there are new updates, queries them on a per-table basis.
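As a rough illustration of this polling model: SQL Server exposes the current maximum LSN (a 10-byte, monotonically increasing binary value) via sys.fn_cdc_get_max_lsn(), and the connector only queries the change tables when that value has advanced past the last position it processed. The comparison logic might look like the following sketch (the method names are hypothetical, not Debezium's actual API):

```java
public class LsnPolling {
    // SQL Server LSNs are 10-byte binary values that increase monotonically.
    // Compare two LSNs as unsigned byte sequences, most significant byte first.
    static int compareLsn(byte[] a, byte[] b) {
        for (int i = 0; i < a.length; i++) {
            int cmp = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
            if (cmp != 0) {
                return cmp;
            }
        }
        return 0;
    }

    // The poll cycle fetches the server's max LSN (sys.fn_cdc_get_max_lsn())
    // and queries the per-table change functions only when it has moved forward.
    static boolean hasNewChanges(byte[] lastProcessedLsn, byte[] maxLsn) {
        return compareLsn(maxLsn, lastProcessedLsn) > 0;
    }
}
```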

Proposal

The above "poll and iterate over tables" logic could be extended by adding a "databases" dimension on top of the existing loop.

  1. Instead of using a fixed database name from the configuration, the connector lists all CDC-enabled databases on the instance. The list can be filtered through the include and exclude lists, similar to the MySQL connector.
  2. Instead of a single task, the connector can have multiple tasks, each of which will capture the changes from a subset of the databases.
  3. In addition to (or instead of?) the SERVER_PARTITION_KEY, the keys of the committed offset messages will contain the database name, so Kafka Connect will commit one offset per database, not one offset per connector.
  4. During iteration over the databases, the connector will either change the active database on the JDBC connection using a USE <dbname> SQL statement or use a fully qualified name for all stored procedures and table names to avoid a dependency on the global connection state.
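Steps 1 and 2 above could be sketched as follows. The option names and helper methods are hypothetical (the include/exclude semantics mirror the MySQL connector's database.include.list/database.exclude.list); the assignment strategy shown is a simple round-robin split across tasks:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class DatabaseAssignment {
    // Filter the CDC-enabled databases discovered on the instance through
    // comma-separated include/exclude regex lists. A null list means "no filter".
    static List<String> filter(List<String> databases, String includeList, String excludeList) {
        List<String> result = new ArrayList<>();
        for (String db : databases) {
            if (includeList != null && !matchesAny(db, includeList)) {
                continue;
            }
            if (excludeList != null && matchesAny(db, excludeList)) {
                continue;
            }
            result.add(db);
        }
        return result;
    }

    static boolean matchesAny(String name, String patterns) {
        for (String p : patterns.split(",")) {
            if (Pattern.compile(p.trim(), Pattern.CASE_INSENSITIVE).matcher(name).matches()) {
                return true;
            }
        }
        return false;
    }

    // Split the filtered databases across maxTasks tasks round-robin,
    // so each task captures changes from its own subset.
    static List<List<String>> assign(List<String> databases, int maxTasks) {
        List<List<String>> tasks = new ArrayList<>();
        for (int i = 0; i < maxTasks; i++) {
            tasks.add(new ArrayList<>());
        }
        for (int i = 0; i < databases.size(); i++) {
            tasks.get(i % maxTasks).add(databases.get(i));
        }
        return tasks;
    }
}
```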

Technical details and challenges (identified so far)

  1. Both the data and the metadata connections will have to execute statements against multiple databases.
  2. Probably, the snapshots of all databases should be done prior to streaming the changes. Given that the snapshot and the CDC phases are coordinated by the connector-agnostic ChangeEventSourceCoordinator, it's not clear how to implement this.
  3. The ChangeEventSourceCoordinator class, the OffsetContext, and its loader are designed for a single offset per connector/task, whereas in this case we need multiple offsets per task (one per database). This seems to be supported only by Kafka Connect's more generic OffsetStorageReader.
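To make challenge 3 concrete, a minimal sketch of what per-database source partitions could look like (the "server"/"database" key names are an assumption modeled on the existing SERVER_PARTITION_KEY). On restart, a collection of such maps would be passed to Kafka Connect's OffsetStorageReader.offsets(...) to recover one offset per database:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OffsetPartitions {
    // Instead of a single partition keyed only by the logical server name,
    // each database gets its own source partition, so Kafka Connect commits
    // and restores one offset per database rather than one per connector.
    static Map<String, String> partitionFor(String serverName, String databaseName) {
        Map<String, String> partition = new LinkedHashMap<>();
        partition.put("server", serverName);
        partition.put("database", databaseName);
        return partition;
    }
}
```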