A summary of the initial data that will be recorded in the Sync telemetry ping.
The collected data will primarily be used to answer the following questions. The images are illustrative visualizations only: they are not composed of actual data and cover only a very short time range.
How healthy is Sync?
This reports the total number of Syncs done per day, split by success and failure. This is the high-level overview of the general health of the Sync system.
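As a rough sketch, the daily success/failure split could be computed from the ping stream like this (the simplified ping shape, with a `when` date and an optional `failureReason`, is an assumption for illustration, not the actual ping schema):

```python
from collections import Counter
from datetime import date

# Hypothetical, heavily simplified Sync pings: each records the day the
# sync ran and a failure reason (None means the sync succeeded).
pings = [
    {"when": date(2016, 8, 1), "failureReason": None},
    {"when": date(2016, 8, 1), "failureReason": "network"},
    {"when": date(2016, 8, 2), "failureReason": None},
]

# Count syncs per day, split by success vs. failure.
daily = Counter(
    (p["when"], "failure" if p["failureReason"] else "success")
    for p in pings
)
```

Charting `daily` per day then gives the high-level health overview described above.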
Have improvements we made actually improved the overall health?
This reports the rate of Sync errors by release version. This will help tell us whether improvements made in specific versions have had the impact we hoped for.
For example, in the above chart we can see that the error rates for "forms" and "bookmarks" improved in 48, but "tabs" got worse in 49.
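The per-version error rate could be derived along these lines (the `version` and `success` fields are illustrative assumptions about the ping contents):

```python
from collections import defaultdict

# Hypothetical pings tagged with the Firefox release version that sent them.
pings = [
    {"version": "48", "success": True},
    {"version": "48", "success": False},
    {"version": "49", "success": True},
    {"version": "49", "success": True},
]

totals = defaultdict(int)
failures = defaultdict(int)
for p in pings:
    totals[p["version"]] += 1
    if not p["success"]:
        failures[p["version"]] += 1

# Fraction of syncs that failed, per release version.
error_rate = {v: failures[v] / totals[v] for v in totals}
```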
How healthy are the individual Sync engines?
This reports the number of times Sync failed for individual engines, so we can determine whether a particular data type (such as bookmarks, passwords, etc.) is recording a higher than expected number of errors.
For example, in the above chart we can see fairly stable error rates per engine, although the error rate for bookmarks has been declining.
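A per-engine failure count could be tallied roughly as follows (the nested `engines` list with `name` and `failureReason` fields is an assumed shape for illustration):

```python
from collections import Counter

# Hypothetical pings: each sync carries an outcome for every engine it ran.
pings = [
    {"engines": [{"name": "bookmarks", "failureReason": "timeout"},
                 {"name": "tabs", "failureReason": None}]},
    {"engines": [{"name": "bookmarks", "failureReason": None},
                 {"name": "passwords", "failureReason": "auth"}]},
]

# Number of failed syncs per engine across all pings.
engine_failures = Counter(
    e["name"]
    for p in pings
    for e in p["engines"]
    if e["failureReason"]
)
```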
This reports the number of records successfully applied and the number which failed. In this scenario Sync itself didn't fail, but certain synced data (e.g., a bookmark) did not get applied.
The following chart is for a single engine, but this should be duplicated for each engine (or, better, all engines overlaid in a single chart).
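The applied-versus-failed split per engine might be computed like this (the `incoming` counters with `applied` and `failed` fields are an assumed ping shape, not the real schema):

```python
# Hypothetical per-engine incoming-record counters from a single Sync ping.
engines = [
    {"name": "bookmarks", "incoming": {"applied": 10, "failed": 2}},
    {"name": "passwords", "incoming": {"applied": 5, "failed": 0}},
]

# Fraction of incoming records that could not be applied, per engine.
failed_fraction = {}
for e in engines:
    inc = e["incoming"]
    total = inc["applied"] + inc["failed"]
    failed_fraction[e["name"]] = inc["failed"] / total if total else 0.0
```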
Are Sync errors evenly distributed across users, or do a few users see the bulk of the errors?
This will give us insights into whether we should focus on the reasons why just a few users have extreme error rates, or whether the errors are evenly distributed.
For example, in the above chart we can see that while the average error rate is high, the vast majority of users have a low error rate.
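One way to sketch this comparison is to compute each user's error rate and compare the mean against the median; a mean well above the median suggests a small number of users account for most errors. The sync-outcome tuples below are invented for illustration:

```python
from statistics import mean, median

# Hypothetical (user_id, succeeded) sync outcomes; "u3" is an outlier.
syncs = [
    ("u1", True), ("u1", True), ("u1", False),
    ("u2", True), ("u2", True),
    ("u3", False), ("u3", False), ("u3", False),
]

# Accumulate (total syncs, failed syncs) per user.
per_user = {}
for uid, ok in syncs:
    total, fails = per_user.get(uid, (0, 0))
    per_user[uid] = (total + 1, fails + (0 if ok else 1))

# Per-user error rates; skew between mean and median shows how
# unevenly errors are distributed across users.
rates = [fails / total for total, fails in per_user.values()]
```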