The CouchDB replicator database
by @fdmanana, February 17, 2011

1. Introduction to the replicator database

The replicator database is a database where you PUT/POST documents to trigger replications and DELETE documents to cancel ongoing replications. These documents have exactly the same content as the JSON objects we used to POST to /_replicate/ (fields "source", "target", "create_target", "continuous", "doc_ids", "filter" and "query_params").
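
For example, a filtered replication document could look like the following (a hypothetical sketch; the design document name, filter name and query parameters are made up for illustration):

{
    "_id": "filtered_rep",
    "source":  "http://myserver.com:5984/foo",
    "target":  "bar",
    "filter":  "mydesign/myfilter",
    "query_params":  {"include_private": false}
}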

Replication documents can have a user-defined "_id". Design documents (and _local documents) added to the replicator database are ignored.

The default name of this database is _replicator. The name can be changed in the .ini configuration, section [replicator], parameter db.

2. Basics

Let's say you PUT the following document into _replicator:

{
    "_id": "my_rep",
    "source":  "http://myserver.com:5984/foo",
    "target":  "bar",
    "create_target":  true
}
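
Such a document can be created with a simple HTTP request (a minimal sketch, assuming CouchDB is listening on localhost:5984 and no authentication is required):

$ curl -X PUT http://localhost:5984/_replicator/my_rep \
    -H "Content-Type: application/json" \
    -d '{"source": "http://myserver.com:5984/foo", "target": "bar", "create_target": true}'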

In the Couch log you'll see two entries like these:

[Thu, 17 Feb 2011 19:43:59 GMT] [info] [<0.291.0>] Document `my_rep` triggered replication `c0ebe9256695ff083347cbf95f93e280+create_target`
[Thu, 17 Feb 2011 19:44:37 GMT] [info] [<0.124.0>] Replication `c0ebe9256695ff083347cbf95f93e280+create_target` finished (triggered by document `my_rep`)

As soon as the replication is triggered, the document will be updated by CouchDB with 3 new fields:

{
    "_id": "my_rep",
    "source":  "http://myserver.com:5984/foo",
    "target":  "bar",
    "create_target":  true,
    "_replication_id":  "c0ebe9256695ff083347cbf95f93e280",
    "_replication_state":  "triggered",
    "_replication_state_time":  "2011-06-07T16:54:35+01:00"
}
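
At any point you can check the current state of the replication simply by fetching the document back (again assuming a local server on port 5984):

$ curl http://localhost:5984/_replicator/my_rep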

Note: special fields set by the replicator start with the prefix "_replication_".

  • _replication_id: the ID internally assigned to the replication. This is the ID exposed by the output from /_active_tasks/;
  • _replication_state: the current state of the replication;
  • _replication_state_time: an RFC3339 compliant timestamp that tells us when the current replication state (defined in _replication_state) was set.

When the replication finishes, it will update the _replication_state field (and _replication_state_time) with the value "completed", so the document will look like:

{
    "_id": "my_rep",
    "source":  "http://myserver.com:5984/foo",
    "target":  "bar",
    "create_target":  true,
    "_replication_id":  "c0ebe9256695ff083347cbf95f93e280",
    "_replication_state":  "completed",
    "_replication_state_time":  "2011-06-07T16:56:21+01:00"
}

When an error happens during replication, the _replication_state field is set to "error" (and _replication_state_time gets updated, of course).

When you PUT/POST a document to the _replicator database, CouchDB will attempt to start the replication up to 10 times (configurable under [replicator], parameter max_replication_retry_count). If it fails on the first attempt, it waits 5 seconds before making a second attempt. If the second attempt fails, it waits 10 seconds before making a third attempt. If the third attempt fails, it waits 20 seconds before making a fourth attempt (each new attempt doubles the previous wait period). When an attempt fails, the Couch log will show you something like:

[error] [<0.149.0>] Error starting replication `67c1bb92010e7abe35d7d629635f18b6+create_target` (document `my_rep_2`): {db_not_found,<<"could not open http://myserver:5986/foo/">>

Note: the _replication_state field is only set to "error" when all the attempts were unsuccessful.
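
The maximum number of attempts can be changed at runtime through the same configuration API used later in this document for the replicator database name (a sketch; the value 6 is arbitrary, and the response echoes the previous value, which defaults to 10):

$ curl -X PUT http://localhost:5984/_config/replicator/max_replication_retry_count -d '"6"'
"10"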

There are only three possible values for the _replication_state field: "triggered", "completed" and "error". Continuous replications never have their state set to "completed".

3. Documents describing the same replication

Let's suppose two documents are added to the _replicator database in the following order:

{
    "_id": "doc_A",
    "source":  "http://myserver.com:5984/foo",
    "target":  "bar"
}

and

{
    "_id": "doc_B",
    "source":  "http://myserver.com:5984/foo",
    "target":  "bar"
}

Both describe exactly the same replication (only their _ids differ). In this case, document "doc_A" triggers the replication and is updated by CouchDB with the fields _replication_state, _replication_state_time and _replication_id, just as described before. Document "doc_B", however, is only updated with one field, _replication_id, so it will look like this:

{
    "_id": "doc_B",
    "source":  "http://myserver.com:5984/foo",
    "target":  "bar",
    "_replication_id":  "c0ebe9256695ff083347cbf95f93e280"
}

While document "doc_A" will look like this:

{
    "_id": "doc_A",
    "source":  "http://myserver.com:5984/foo",
    "target":  "bar",
    "_replication_id":  "c0ebe9256695ff083347cbf95f93e280",
    "_replication_state":  "triggered",
    "_replication_state_time":  "2011-06-07T16:54:35+01:00"
}

Note that both documents get exactly the same value for the _replication_id field. This way you can identify which documents refer to the same replication; you can, for example, define a view which maps replication IDs to document IDs, as sketched below.
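
A minimal sketch of such a view (the design document and view names here are made up) could be:

{
    "_id": "_design/repl_info",
    "views": {
        "by_replication_id": {
            "map": "function(doc) { if (doc._replication_id) emit(doc._replication_id, doc._id); }"
        }
    }
}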

4. Canceling replications

To cancel a replication simply DELETE the document which triggered the replication. The Couch log will show you an entry like the following:

[Thu, 17 Feb 2011 20:16:29 GMT] [info] [<0.125.0>] Stopped replication `c0ebe9256695ff083347cbf95f93e280+continuous+create_target` because replication document `doc_A` was deleted

Note: you need to DELETE the document that triggered the replication. DELETEing another document that describes the same replication but did not trigger it will not cancel the replication.
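
Since _replicator is a regular database, the DELETE needs the document's current revision (a sketch; replace <current_rev> with the actual "_rev" value):

$ curl http://localhost:5984/_replicator/doc_A            # note the current "_rev" value
$ curl -X DELETE "http://localhost:5984/_replicator/doc_A?rev=<current_rev>"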

5. Server restart

When CouchDB is restarted, it checks its _replicator database and restarts any replication described by a document that either has its _replication_state field set to "triggered" or does not have the _replication_state field set yet.

Note: Continuous replications always have a _replication_state field with the value "triggered", therefore they're always restarted when CouchDB is restarted.

6. Changing the replicator database

Imagine your replicator database (default name is _replicator) has the following two documents that represent pull replications from servers A and B:

{
    "_id": "rep_from_A",
    "source":  "http://aserver.com:5984/foo",
    "target":  "foo_a",
    "continuous":  true,
    "_replication_id":  "c0ebe9256695ff083347cbf95f93e280",
    "_replication_state":  "triggered",
    "_replication_state_time":  "2011-06-07T16:54:35+01:00"
}
{
    "_id": "rep_from_B",
    "source":  "http://bserver.com:5984/foo",
    "target":  "foo_b",
    "continuous":  true,
    "_replication_id":  "231bb3cf9d48314eaa8d48a9170570d1",
    "_replication_state":  "triggered",
    "_replication_state_time":  "2011-06-07T16:54:35+01:00"
}

Now, without stopping or restarting CouchDB, you change the name of the replicator database to another_replicator_db:

$ curl -X PUT http://localhost:5984/_config/replicator/db -d '"another_replicator_db"'
"_replicator"

As soon as this is done, both pull replications defined before are stopped. This is explicitly mentioned in CouchDB's log:

[Fri, 11 Mar 2011 07:44:20 GMT] [info] [<0.104.0>] Stopping all ongoing replications because the replicator database was deleted or changed
[Fri, 11 Mar 2011 07:44:20 GMT] [info] [<0.127.0>] 127.0.0.1 - - PUT /_config/replicator/db 200

Imagine now you add a replication document to the new replicator database named another_replicator_db:

{
    "_id": "rep_from_X",
    "source":  "http://xserver.com:5984/foo",
    "target":  "foo_x",
    "continuous":  true
}

From now on you have a single replication going on in your system: a pull replication pulling from server X. Now you change the replicator database back to the original one, _replicator:

$ curl -X PUT http://localhost:5984/_config/replicator/db -d '"_replicator"'
"another_replicator_db"

Immediately after this operation, the replication pulling from server X will be stopped and the replications defined in the _replicator database (pulling from servers A and B) will be resumed.

Changing the replicator database to another_replicator_db again will stop the pull replications pulling from servers A and B, and resume the pull replication pulling from server X.

7. Replicating the replicator database

Imagine you have in server C a replicator database with the following two pull replication documents in it:

{
    "_id": "rep_from_A",
    "source":  "http://aserver.com:5984/foo",
    "target":  "foo_a",
    "continuous":  true,
    "_replication_id":  "c0ebe9256695ff083347cbf95f93e280",
    "_replication_state":  "triggered",
    "_replication_state_time":  "2011-06-07T16:54:35+01:00"
}
{
    "_id": "rep_from_B",
    "source":  "http://bserver.com:5984/foo",
    "target":  "foo_b",
    "continuous":  true,
    "_replication_id":  "231bb3cf9d48314eaa8d48a9170570d1",
    "_replication_state":  "triggered",
    "_replication_state_time":  "2011-06-07T16:54:47+01:00"
}

Now you would like to have the same pull replications going on in server D, that is, you would like to have server D pull replicating from servers A and B. You have two options:

  • Explicitly add two documents to server D's replicator database
  • Replicate server C's replicator database into server D's replicator database (see the sketch after this list)

Both alternatives accomplish exactly the same goal.
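
The second alternative could be done, for example, with a one time replication triggered from server D (a sketch; the host name cserver.com is made up, and it assumes the command is run against server D's local CouchDB):

$ curl -X POST http://localhost:5984/_replicate \
    -H "Content-Type: application/json" \
    -d '{"source": "http://cserver.com:5984/_replicator", "target": "_replicator"}'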

8. The user_ctx property and delegations

Replication documents can have a custom "user_ctx" property. This property defines the user context under which a replication runs. For the old way of triggering replications (POSTing to /_replicate/), this property was not needed (in fact it didn't exist), because at the moment the replication is triggered CouchDB has the information about the authenticated user. With the replicator database, since it's a regular database, the information about the authenticated user is only present at the moment the replication document is written to the database. The replicator database implementation behaves like a _changes feed consumer (with ?include_docs=true) that reacts to what was written to the replicator database; in fact this feature could be implemented with an external script/program.

This implementation detail implies that for non-admin users, a user_ctx property, containing the user's name and a subset of his/her roles, must be defined in the replication document. This is enforced by the document update validation function present in the default design document of the replicator database. The validation function also ensures that a non-admin user cannot set a name in the user_ctx property that doesn't match his/her own user name (the same principle applies to the roles).

For admins, the user_ctx property is optional; if it's missing, it defaults to a user context with name null and an empty list of roles, which means design documents will not be written to local targets. If writing design documents to local targets is desired, then a user context with the role "_admin" must be set explicitly.
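
For example, an admin could explicitly set the "_admin" role so that design documents get written to a local target (a sketch; the document _id is arbitrary):

{
    "_id": "rep_with_ddocs",
    "source":  "http://myserver.com:5984/foo",
    "target":  "bar",
    "user_ctx": {
        "roles": ["_admin"]
    }
}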

Also, for admins the user_ctx property can be used to trigger a replication on behalf of another user. This is the user context that will be passed to local target database document validation functions.

Note: The "user_ctx" property only has effect for local endpoints.

Example delegated replication document:

{
    "_id": "my_rep",
    "source":  "http://bserver.com:5984/foo",
    "target":  "bar",
    "continuous":  true,
    "user_ctx": {
        "name": "joe",
        "roles": ["erlanger", "researcher"]
    }
}

As stated before, for admins the user_ctx property is optional, while for regular (non-admin) users it's mandatory. When the roles property of user_ctx is missing, it defaults to the empty list [].


@kurtmilam

Thanks for the work and docs on _replicator. I understand that the docs sent to _replicator are almost exactly like the ones you'd send to _replicate (including query_params and filter), but it might be nice to include an example doc or two that show setting filter and query_params explicitly.

@NandishAndDev

Hi,
Thanks for the great feature.
We are encountering a strange issue with this. The records in the _replicator database are getting updated continuously. One of the records looks like this:

{
    "_id": "flight-cd0694-20130720-nbe-ams2cloud",
    "_rev": "2082-41aa7aeed4404be340e3c76f8c402014",
    "source": "flight-cd0694-20130720-nbe-ams",
    "target": "https://xxxxx:yyyyy@CD.sync.api.abc.com/mi-master",
    "continuous": true,
    "filter": "Purchase/subscriptionsAndPurchases",
    "_replication_state": "triggered",
    "_replication_state_time": "2013-07-23T13:13:41+02:00",
    "_replication_id": "0c37f5634a83b99575a182b690ff0d05"
}

The size of the db stands at 300 MB when it has only 9 records. Please help.

@jchris

jchris commented Jul 26, 2013

I think it would be better to move the automatically updated fields out of the replication document (so that only the user modifies it). Instead the same information would be made available via the _active stats API.

Perhaps if the replicator encounters a failure so bad that it stops retrying, it should touch the document with an error report.

@raghu-simpragma

Thanks for this detailed write-up Filipe!

However, we have a tricky situation at hand in one of our cool android-apps which relies on CouchDB for peer-to-peer 'sync'.

We set up a replication record in the _replicator DB. But then, it takes a while before CouchDB comes round to actually 'triggering' our replication [which is between the CouchDB's of 2 peers!].

The issue is that we see there are other replication-records as well...(that try to replicate from the Internet to the local Couch...& vice-versa). But for our peer-to-peer replication, we rely on just WiFi.

Based on the logs, we therefore see that when we are in a WiFi-only (no Internet!) network, the Internet-based replications keep getting timeouts, and hence our (more critical!) Local-to-Local p2p-Couch replication-records take a long while to get their replication_state changed to triggered. Even otherwise, we want our peer-to-peer replications to get 'triggered' first over the others that the app may want to trigger.

Is there any way to prioritise our replications so that CouchDB picks them up first? (Name of replication-id, etc...?)

Also, we tried using Ektorp to set up & start our continuous Local-to-Local replications using a StdCouchDbInstance, and found that it internally POSTs to /_replicate, which implies we can't track them via the standard _replicator system database.

Do replications triggered by POST'ing to the /_replicate URL take priority over those that put a record in _replicator DB?

Given the constraint that we can't go change the underlying CouchDB version now, how could we go about tweaking this..?

Would be great if you could please provide some pointers here...

BTW, we are using CouchBase Mobile version 1.20a  (maven version: 2.0.0-beta)

Thanks & regards,
~raghu

@kombadzomba

Hi,

Thanks for this great feature. I have CouchDB 1.6.1 on CentOS 6 with about 200 documents in the _replicator db. The target is local CouchDB A and the sources are local B1, B2, ... B200. All replications are filtered and continuous. I am facing a weird and very troubling bug now. I have middleware that runs a script that creates a new B database, a few documents in it, and a filter too. The script also creates a new _replicator document for a B201 -> A filtered continuous replication. Here things go wild. CouchDB replies with ok:true for the new _replicator document but the document is missing! The CouchDB log is empty and the database does not crash or anything. It just doesn't create the document. Am I doing something wrong? CPU load is constantly 7% for the couchdb process.

I am guessing the issue may be related to too many replications for target A, since everything was working fine before the number of B databases increased. When I later run only the part of the script that creates the _replicator document, it is created with no problem, but when running the whole script it isn't, so I'm guessing it may be related to some race condition.

Thanks and regards,
Milos

@Franklin62120

Hi,
I have a question about _replicator. Is it possible to use a view as a filter for the replication, as we can do in PouchDB?

Thanks in advance.
Maxime
