The problem is that you will likely want to query by Timestamp, which means indexing Timestamp in your schema. However, when sending a high-frequency stream of mutations that add new values, the index keeps rebuilding, and queries against the timeseries data either time out or fail with an error along the lines of "try again later". Recording a few values per minute should work fine, but for anything more aggressive, run more tests before building a whole product on top of it. This approach puts the timestamps on nodes. An alternative approach is to put the timestamps on a facet, see below.
Schema
Name: string @index(fulltext, term, trigram) .
Value: uid @reverse .
Timeseries: [uid] .
Timestamp: datetime .
Number: float .
type NumberValue {
  Timestamp
  Number
}
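For reference, if you do index the timestamp despite the caveat above (for example, to filter by time at the root of a query), the predicate would be declared with one of Dgraph's datetime tokenizers. The hour tokenizer is shown here only as an example; year, month and day are the other options:

Timestamp: datetime @index(hour) .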
Mutation
upsert {
  query {
    var(func: eq(Name, "myparameter")) {
      param as uid
    }
  }
  mutation {
    set {
      uid(param) <Value> <_:Value> .
      uid(param) <Timeseries> <_:Value> .
      <_:Value> <Timestamp> "2020-06-01T18:00:00-06:00"^^<xs:dateTime> .
      <_:Value> <Number> "22.3"^^<xs:double> .
      <_:Value> <dgraph.type> "NumberValue" .
    }
  }
}
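One way to submit this upsert, assuming a local Dgraph Alpha on the default HTTP port 8080 and the block above saved as upsert.rdf (a sketch, not verified here):

curl -s -H 'Content-Type: application/rdf' \
  'localhost:8080/mutate?commitNow=true' --data-binary @upsert.rdf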
Query latest value
{
  r1(func: eq(Name, "myparameter")) {
    Name
    Value {
      expand(_all_)
      dgraph.type
    }
  }
}
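Because Value is declared as a single uid (not a [uid] list), each upsert should replace that edge, so it always points at the newest value node. If only the Timeseries list were kept, the latest value should also be reachable by sorting the list and taking one entry, the same pattern as the larger query below (untested sketch):

{
  r1(func: eq(Name, "myparameter")) {
    Name
    Timeseries (orderdesc: Timestamp, first: 1) {
      expand(_all_)
    }
  }
}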
Query the most recent 100 timeseries values
{
  r1(func: eq(Name, "myparameter")) {
    Timeseries (orderdesc: Timestamp, first: 100) {
      expand(_all_)
    }
  }
}
I have not tried this last query yet.
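If you need a time window rather than the newest N values, a filter at the child level should also work without indexing Timestamp, since Dgraph only requires an index for functions at the root of a query. Another untested sketch, with example bounds:

{
  r1(func: eq(Name, "myparameter")) {
    Timeseries (orderdesc: Timestamp) @filter(ge(Timestamp, "2020-06-01T00:00:00-06:00") AND lt(Timestamp, "2020-06-02T00:00:00-06:00")) {
      expand(_all_)
    }
  }
}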