Request Units, Storage Utilization, Splits....oh my!

The Unofficial Throughput and Capacity Guestimate Guide for Azure Cosmos DB

Introduction

I have had a lot of great questions about how to estimate the throughput and storage capacity for Azure Cosmos DB. To get yourself up and running, the key best practices references are:

General Guidelines

When reviewing the scenarios below, please note the following context:

  • A single partition has a minimum of 400 RUs and a maximum of 10,000 RUs, with a maximum size of 10 GB.
  • A partitioned collection has a minimum of 2,500 RUs and a maximum of 10,000 RUs × the number of partitions in your partitioned collection (e.g. 10 partitions has a maximum of 100,000 RUs; 25 partitions has a maximum of 250,000 RUs; etc.).
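
To make those limits concrete, here is a minimal back-of-envelope sketch (Python) that only restates the figures quoted above; newer Cosmos DB offers may have different limits:

```python
# Back-of-envelope helper based on the limits quoted above (2019-era figures):
# 400 RU minimum / 10,000 RU maximum / 10 GB per single partition,
# and a 2,500 RU floor for a partitioned collection.
def collection_limits(partition_count):
    """Return (min_rus, max_rus, max_storage_gb) for a collection."""
    if partition_count <= 1:
        return 400, 10_000, 10
    return 2_500, 10_000 * partition_count, 10 * partition_count

print(collection_limits(25))   # (2500, 250000, 250)
```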

To get maximum throughput, the goals are to:

  • Spread your data uniformly across all of your available partitions so that when you run a distributed query, there are no long-running tasks on a skewed partition that ultimately slow down your query performance (see the distribution sketch after this list). A good reference is How to partition and scale in Azure Cosmos DB.
  • Remember that each partition has a maximum of 10,000 RUs, based on the fact that Cosmos DB is durable with four replicas for each partition (each replica has a maximum of 2,500 RUs).
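
If you want a quick feel for how evenly a candidate partition key would spread your documents, a rough sketch like the one below can help; the modulo hash is only a stand-in for Cosmos DB's internal hash partitioning, not the actual algorithm:

```python
# Quick sanity check of how evenly a candidate partition key would spread
# documents across N partitions (approximation only).
from collections import Counter

def distribution(partition_key_values, partition_count=10):
    return Counter(hash(v) % partition_count for v in partition_key_values)

# A high-cardinality key (e.g. a device id) spreads evenly; a low-cardinality
# key with one dominant value piles most of the work onto a few partitions.
print(distribution([f"device-{i}" for i in range(10_000)]))
print(distribution(["US"] * 9_000 + ["CA"] * 1_000))
```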

Here are some general questions to think about when using these resources, along with some basic call-outs (a rough sizing sketch follows the list):

  • Are you storage heavy, throughput heavy, or both?
    • Storage Heavy: That is, you have a lot of documents to store but you only need to query a small portion of the data at any one point in time (e.g. I have a year's worth of data stored, but I only need to look at the last week or month of data). This would mean that you need more partitions to store the data (recall, a maximum of 10 GB per partition) and you would set the throughput to a lower number of RUs (e.g. for a 25-partition collection, set it to 2,500 RUs).
    • Throughput Heavy: That is, it's not the size or number of documents but the need for very fast throughput. If you need 250,000 RUs to support the throughput of your application, then you would create a 25-partition collection to support that throughput rate. While you have the capacity of 250 GB, you only pay for the storage you use (e.g. $0.25/GB per month).
    • Both Storage and Throughput Heavy: In this case, it'll be a combination of the two scenarios above. You pay for the storage you need as well as the throughput that you have reserved.
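
As a rough illustration of these three cases, here is a sizing sketch (my own back-of-envelope math, not an official formula) that turns storage and throughput requirements into a partition count and RU setting, using the 10 GB / 10,000 RU per-partition figures above:

```python
import math

# Rough sizing sketch based on the per-partition figures above:
# 10 GB and 10,000 RUs per partition, 2,500 RU floor for a partitioned collection.
def estimate(storage_gb, required_rus):
    partitions_for_storage = math.ceil(storage_gb / 10)
    partitions_for_throughput = math.ceil(required_rus / 10_000)
    partitions = max(partitions_for_storage, partitions_for_throughput, 1)
    # Reserve only the RUs you actually need; you pay for what you reserve.
    ru_setting = max(required_rus, 2_500)
    return partitions, ru_setting

print(estimate(storage_gb=250, required_rus=2_500))    # storage heavy: (25, 2500)
print(estimate(storage_gb=50,  required_rus=250_000))  # throughput heavy: (25, 250000)
```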

Note: while you only pay for the storage you have used, for throughput you pay for what you have reserved. For example, if you had reserved 250,000 RUs but only used 10,000 RUs, you still pay for the 250,000 RUs, because Cosmos DB has guaranteed that your collection can handle 250,000 RUs. Also note that you pay for the maximum RU configuration for the hour.
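
To illustrate the "maximum RU configuration for the hour" point, here is a small sketch; the rate is a placeholder parameter rather than a real price, so plug in the current figure from the Azure pricing page:

```python
# You are billed on the highest RU setting in effect at any point during the
# hour, regardless of how many RUs you actually consumed.
def hourly_throughput_charge(ru_settings_during_hour, rate_per_100_rus_per_hour):
    billed_rus = max(ru_settings_during_hour)
    return (billed_rus / 100) * rate_per_100_rus_per_hour

# Scaling from 250,000 RUs down to 10,000 RUs mid-hour still bills the hour
# at 250,000 RUs (the rate value here is purely a placeholder).
print(hourly_throughput_charge([250_000, 10_000], rate_per_100_rus_per_hour=0.008))
```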

Some Additional Thoughts

  • Cosmos DB is distributed storage, meaning that we spread the throughput and storage capacity across many physical replicas. This allows you to use more resources to support your high throughput and/or storage capacity needs. But this also means that if you are requesting a lot of data from it (as opposed to point reads), the best way to do so is with a distributed compute layer (e.g. the Spark Connector for Azure Cosmos DB) to maximize your throughput.
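
For example, a minimal PySpark read using the azure-cosmosdb-spark connector might look like the sketch below. It assumes the connector jar is available on your cluster; the connection values are placeholders, and the option names should be checked against the connector version you are using:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cosmosdb-read").getOrCreate()

# Placeholder connection details -- substitute your own account values.
read_config = {
    "Endpoint": "https://<your-account>.documents.azure.com:443/",
    "Masterkey": "<your-key>",
    "Database": "<your-database>",
    "Collection": "<your-collection>",
    "query_custom": "SELECT c.id FROM c",  # push the filter down to Cosmos DB
}

# Each Spark executor reads a subset of partitions in parallel, which is how
# the distributed compute layer maximizes throughput against Cosmos DB.
df = (spark.read
      .format("com.microsoft.azure.cosmosdb.spark")
      .options(**read_config)
      .load())
df.show()
```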

alexhhh commented Apr 15, 2019

Is there an RU limit? (e.g. for a 350-partition collection, can I set the throughput to 3,500,000 RUs?)
