type Order @model
  @key(name: "byCustomerByStatusByDate", fields: ["customerID", "status", "date"])
  @key(name: "byCustomerByDate", fields: ["customerID", "date"])
  @key(name: "byRepresentativebyDate", fields: ["accountRepresentativeID", "date"])
  @key(name: "byProduct", fields: ["productID", "id"])
{
  id: ID!
  customerID: ID!
  accountRepresentativeID: ID!
  productID: ID!
  status: String!
  amount: Int!
  date: String!
}

type Customer @model
  @key(name: "byRepresentative", fields: ["accountRepresentativeID", "id"]) {
  id: ID!
  name: String!
  phoneNumber: String
  accountRepresentativeID: ID!
  ordersByDate: [Order] @connection(keyName: "byCustomerByDate", fields: ["id"])
  ordersByStatusDate: [Order] @connection(keyName: "byCustomerByStatusByDate", fields: ["id"])
}

type Employee @model
  @key(name: "newHire", fields: ["newHire", "id"], queryField: "employeesNewHire")
  @key(name: "newHireByStartDate", fields: ["newHire", "startDate"], queryField: "employeesNewHireByStartDate")
  @key(name: "byName", fields: ["name", "id"], queryField: "employeeByName")
  @key(name: "byTitle", fields: ["jobTitle", "id"], queryField: "employeesByJobTitle")
  @key(name: "byWarehouse", fields: ["warehouseID", "id"]) {
  id: ID!
  name: String!
  startDate: String!
  phoneNumber: String!
  warehouseID: ID!
  jobTitle: String!
  newHire: String! # We have to use String type, because Boolean types cannot be sort keys
}

type Warehouse @model {
  id: ID!
  employees: [Employee] @connection(keyName: "byWarehouse", fields: ["id"])
}

type AccountRepresentative @model
  @key(name: "bySalesPeriodByOrderTotal", fields: ["salesPeriod", "orderTotal"], queryField: "repsByPeriodAndTotal") {
  id: ID!
  customers: [Customer] @connection(keyName: "byRepresentative", fields: ["id"])
  orders: [Order] @connection(keyName: "byRepresentativebyDate", fields: ["id"])
  orderTotal: Int
  salesPeriod: String
}

type Inventory @model
  @key(name: "byWarehouseID", fields: ["warehouseID"], queryField: "itemsByWarehouseID")
  @key(fields: ["productID", "warehouseID"]) {
  productID: ID!
  warehouseID: ID!
  inventoryAmount: Int!
}

type Product @model {
  id: ID!
  name: String!
  orders: [Order] @connection(keyName: "byProduct", fields: ["id"])
  inventories: [Inventory] @connection(fields: ["id"])
}
Great example for relational data with DynamoDB and Amplify!
One question: since all types are annotated with the @model directive, will Amplify, in this case, still generate 7 DynamoDB tables behind the scenes?
Or will it use a single table, as advised in the adjacency list pattern?
Right now, it generates 7 DynamoDB tables behind the scenes with GSIs. According to the team:
In order to keep connection queries fast and efficient, the GraphQL transform manages global secondary indexes (GSIs) on the generated tables on your behalf. In the future we are investigating using adjacency lists alongside GSIs for different use cases that are connection heavy.
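In practice, one of those GSI-backed connection fields can be queried along these lines (the customer id is a placeholder, and the generated argument names may differ slightly between transformer versions):

query CustomerOrders {
  getCustomer(id: "customer-id-here") {
    name
    ordersByDate(sortDirection: DESC, limit: 10) {
      items {
        id
        date
        amount
        status
      }
    }
  }
}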
newHire: String! # We have to use String type, because Boolean types cannot be sort keys
I would change newHire to a status field and use an enum to mark the employee as a new hire, or use an enum to define true and false. That way you're not opening the field up to junk coming in from the front end. Enums are stored as strings anyway...
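A minimal sketch of that suggestion, assuming the enum and value names below (whether the transformer accepts an enum field inside a @key depends on the transformer version, so treat this as illustrative only):

enum HireStatus {
  NEW_HIRE
  REGULAR
}

type Employee @model
  @key(name: "newHire", fields: ["newHire", "id"], queryField: "employeesNewHire") {
  id: ID!
  name: String!
  newHire: HireStatus! # constrained to the enum values instead of a free-form String
}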
Quick question here: if an order had to contain multiple products, how would you handle mutations for multiple products in an order? Are batch update mutations possible?
I'm currently facing the same task. I want to insert multiple entries into a given table. If you want batch updates, you can do it with a custom resolver or with a Lambda function.
Via Custom Resolver
https://medium.com/@jan.hesters/creating-graphql-batch-operations-for-aws-amplify-with-appsync-and-cognito-ecee6938e8ee
I would prefer this approach, but honestly I can't get it to work. My issue is still open here: https://stackoverflow.com/questions/61045181/aws-amplify-custom-resolver-unsupported-operation-batchputitem
Maybe you'll have more luck.
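For reference, the schema side of that custom-resolver approach might look roughly like the sketch below; the input and mutation names are made up for illustration, and the BatchPutItem request/response mapping templates still have to be written and attached by hand:

input CreateOrderBatchInput {
  customerID: ID!
  accountRepresentativeID: ID!
  productID: ID!
  status: String!
  amount: Int!
  date: String!
}

type Mutation {
  batchCreateOrders(orders: [CreateOrderBatchInput!]!): [Order]
}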
Via Lambda function
I wrote a blog post a while ago on how you can insert multiple entries into a DynamoDB table:
https://regenrek.com/posts/using-aws-lambda-insert-multiple-json-dynamodb/
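If you go the Lambda route with Amplify, one way to wire it up in the schema is the @function directive, along these lines (the mutation name and function name are placeholders; the Lambda itself would call DynamoDB's BatchWriteItem):

type Mutation {
  batchCreateOrders(orders: AWSJSON!): AWSJSON @function(name: "batchCreateOrders-${env}")
}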
Great example for relational data with DynamoDB and Amplify!
One question: since all types are annotated with the @model directive, will Amplify, in this case, still generate 7 DynamoDB tables behind the scenes?
Or will it use a single table, as advised in the adjacency list pattern?
That's my question too!
I would also like to understand how to "manually" use the directives, in particular @connection for a one-to-one relationship. I had already written the resolvers (VTL scripts) needed for my "single" queries, but I need to retrieve additional info for a certain GraphQL type object from a record in the same DynamoDB table, all without using the magic "create resources" option...
How do I construct the appropriate query resolver?
Somewhere along the lines of:
type Project @model {
  id: ID!
  name: String
  teamID: ID!
  team: Team @connection(fields: ["teamID"])
}

type Team @model {
  id: ID!
  name: String!
}
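With that schema, the generated one-to-one connection can then be queried roughly like this (the id value is a placeholder):

query GetProjectWithTeam {
  getProject(id: "project-id-here") {
    name
    team {
      id
      name
    }
  }
}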