The Graph Subtitles (English)
1
00:00:00,960 --> 00:00:05,360
Hey hackers, I'm Ford, an engineer at Edge & Node
2
00:00:05,360 --> 00:00:11,800
and I'm going to be giving a workshop today on creating,
deploying and using subgraphs on Graph Protocol.
3
00:00:12,200 --> 00:00:15,000
This workshop is going to be directed
towards new users of The Graph,
4
00:00:15,000 --> 00:00:20,920
so if you're a seasoned subgraph veteran,
this will most likely be review for you.
5
00:00:21,760 --> 00:00:27,680
Before we get into the workshop, just a reminder that
we will be giving two prizes in this hackathon.
6
00:00:28,280 --> 00:00:37,480
One for the best use of an existing subgraph from our
sponsors, and that will be awarded a 1500 DAI prize
7
00:00:37,480 --> 00:00:44,600
and another prize of 500 DAI will be awarded to
the best new subgraph deployed to The Graph Explorer.
8
00:00:45,520 --> 00:00:49,240
You can find a list of the
sponsored subgraphs on the website.
9
00:00:51,000 --> 00:00:51,880
Okay, let's get into it.
10
00:00:52,160 --> 00:00:54,440
So, what is a subgraph?
11
00:00:55,280 --> 00:01:05,840
A subgraph is a definition of the data that you
would like extracted from an Ethereum chain
12
00:01:06,920 --> 00:01:12,720
and a corresponding definition
of mappings that transform that data
13
00:01:13,600 --> 00:01:20,320
into a structured data set that you
can then query via an open GraphQL API.
14
00:01:22,240 --> 00:01:28,720
Now that's a bit of a mouthful,
so let's learn by doing.
15
00:01:28,840 --> 00:01:36,080
So in order to do that, I'm going to be deploying
a simple NFT tracking subgraph today.
16
00:01:39,040 --> 00:01:47,160
So we are going to find a contract
that follows the ERC-721 token standard
17
00:01:48,400 --> 00:01:52,560
and then we are going to create
a subgraph to track that token.
18
00:01:52,720 --> 00:01:58,960
To keep it simple today, I think we'll just track
the transfers of that token and the holders of it.
19
00:02:01,200 --> 00:02:06,320
So before I get started creating and
deploying a subgraph, I need
20
00:02:06,320 --> 00:02:10,240
to make sure that I have the necessary
prerequisites installed on my computer.
21
00:02:12,480 --> 00:02:18,640
I need to have Node installed and a
Node Package Manager, such as NPM or Yarn
22
00:02:20,080 --> 00:02:29,120
and then I also need to install "graph-cli", which
is our CLI for creating and deploying subgraphs.
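(For reference, the global install with npm looks like this; the same command appears in the Subgraph Studio walkthrough later in this file:)

    npm install -g @graphprotocol/graph-cli
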
23
00:02:31,840 --> 00:02:35,840
So I'm going to switch to sharing my screen
and we can work through this together.
24
00:02:43,040 --> 00:02:52,320
I'm going to go to my workspace and make sure that
I have "graph-cli" installed globally on my computer
25
00:02:56,640 --> 00:03:01,920
and while that is installing I'm going to
start doing a little bit of research.
26
00:03:02,960 --> 00:03:09,353
So we're going to need to figure out what the contract
address is for the token that we're going to track.
27
00:03:09,353 --> 00:03:20,640
So today, I'm going to track the Zora token,
which extends the ERC-721 standard.
28
00:03:21,320 --> 00:03:27,280
So let's go find out where that contract
is deployed to, what the contract address is.
29
00:03:28,600 --> 00:03:35,840
I'm going to go to the Zora documentation
website at zora.engineering
30
00:03:37,920 --> 00:03:42,040
and let's see if we can
find the contract address.
31
00:03:42,438 --> 00:03:50,040
So here we have the mainnet media
contract address and let's go check it out.
32
00:03:50,920 --> 00:03:55,120
Okay, this looks like the
contract for the Zora token.
33
00:03:56,680 --> 00:03:59,840
All right, we're gonna grab that,
put it in our clipboard here
34
00:04:03,280 --> 00:04:10,720
and now what I'm going to do is I'm going to use
the "graph-cli" to initialize my subgraph project.
35
00:04:12,080 --> 00:04:16,200
So I'm going to make sure I'm in the
workspace that I want to be in,
36
00:04:16,480 --> 00:04:23,600
so I have a subgraphs folder here and then
let's check out the "graph init" command.
37
00:04:26,960 --> 00:04:34,800
So you can see that with "graph init" I can
either initialize a subgraph from the example
38
00:04:34,800 --> 00:04:37,760
or I can do so from a contract.
39
00:04:38,280 --> 00:04:44,000
Today I know what contract I want to track,
so let's go ahead and initialize it from a contract.
40
00:04:44,520 --> 00:04:48,960
So to do that I'm going to
do "graph init --from-contract"
41
00:04:52,680 --> 00:04:55,760
and I already have it stored in here.
42
00:04:56,040 --> 00:05:04,880
So that's my Zora contract, we're going to initialize it
on the mainnet network and we're going to tell it to index events.
43
00:05:05,160 --> 00:05:12,800
So that will automatically initialize the subgraph
that is tracking all events emitted by the contract.
44
00:05:14,040 --> 00:05:21,880
I'm going to name that contract just token here
and name my subgraph fordn/zora-token.
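(Assembled, the init command looks roughly like this; the address placeholder stands in for the Zora media contract address copied above, and the exact flag spelling is a sketch of what's typed on screen:)

    graph init --from-contract <ZORA_MEDIA_CONTRACT_ADDRESS> \
      --network mainnet --index-events \
      fordn/zora-token
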
45
00:05:24,080 --> 00:05:24,920
Alright, let's go.
46
00:05:25,160 --> 00:05:29,360
So, it's going to prompt me to
confirm all of my entries here.
47
00:05:30,000 --> 00:05:32,840
The subgraph name, the
directory I'm storing it in
48
00:05:52,680 --> 00:05:53,480
and the network.
49
00:05:54,480 --> 00:05:56,480
Okay and then the contract address, confirming,
50
00:05:58,800 --> 00:06:03,040
and you can see here that it has
fetched the ABI from Etherscan.
51
00:06:07,120 --> 00:06:15,800
Okay, so it's initializing the subgraph
and installing all necessary dependencies
52
00:06:18,960 --> 00:06:23,600
and while it's doing that I'm just
going to go over to my Etherscan page
53
00:06:23,600 --> 00:06:32,720
and confirm here that the source
code is there, as well as the ABI.
54
00:06:32,840 --> 00:06:36,320
So the initialization should have no
problem picking up the ABI that we need.
55
00:06:38,800 --> 00:06:45,440
Okay, so it's finished initializing and
so let's go ahead and open up that project
56
00:06:46,240 --> 00:06:51,520
in our IDE and so we can take a
look at what it's initialized for us.
57
00:07:02,360 --> 00:07:08,400
Okay, so here we are. We have a full
project created for us. We have the ABIs,
58
00:07:10,320 --> 00:07:16,880
we have mappings and we have
schema and subgraph manifest.
59
00:07:22,400 --> 00:07:29,760
And you can see that the schema here is
already populated with five different entities,
60
00:07:29,760 --> 00:07:32,640
corresponding to the events emitted from that contract.
61
00:07:34,400 --> 00:07:41,560
It did this because we told it to
index events on initialization.
62
00:07:44,880 --> 00:07:49,760
Okay, so it's currently tracking
all events and it's simply
63
00:07:52,640 --> 00:07:56,960
converting the parameters of that event
into the entity and saving it directly.
64
00:07:58,080 --> 00:08:01,360
This is a really nice start and gives
a good example of how mappings work,
65
00:08:02,880 --> 00:08:07,120
but it's not necessarily going
to be a very useful subgraph.
66
00:08:08,440 --> 00:08:18,160
So, today let's update this so that we can track
users that hold the token and transfers of that token.
67
00:08:20,760 --> 00:08:24,760
To do so, let's start by updating the schema.
68
00:08:26,040 --> 00:08:30,120
So, instead of just having these entities
for each event, let's create some new entities.
69
00:08:31,200 --> 00:08:33,040
So, we're going to make a "Token" entity
70
00:08:37,440 --> 00:08:43,920
and that's going to have an "id", which is a
required field on all entities, and a "tokenID",
71
00:08:46,800 --> 00:09:01,760
a "contentURI", which will have the location of
that NFT content and it's going to have a "creator" address
72
00:09:01,933 --> 00:09:05,840
and an "owner" of whoever currently owns that NFT.
73
00:09:08,320 --> 00:09:18,000
And then let's create another entity here, called "User"
and that is going to track the creators and holders of that token.
74
00:09:18,000 --> 00:09:30,080
So, again we need the "id" and then we are going
to store the "tokens" that user holds and tokens
75
00:09:30,080 --> 00:09:37,280
it has "created" and so this is a good example
of how we store relationships in the schema.
76
00:09:41,040 --> 00:09:45,600
This is a virtual relationship, so
the data is not stored in the database,
77
00:09:46,320 --> 00:09:51,280
it is joined at query time between
these entities and here I'm going
78
00:09:53,680 --> 00:10:03,600
to define this "derivedFrom" directive and specify the
field "owner", so that array will be populated based
79
00:10:03,600 --> 00:10:06,120
on the "owner" field in the "Token" entity.
80
00:10:07,920 --> 00:10:11,240
In order for this to work,
I'm going to have to update that
81
00:10:11,240 --> 00:10:15,440
"owner" field to be "User", so
this is a two-way relationship here.
82
00:10:17,600 --> 00:10:20,880
I'm going to do something very
similar for the "created" field.
83
00:10:28,480 --> 00:10:32,400
Okay, so let's get rid of all
these auto-generated entities
84
00:10:32,400 --> 00:10:35,680
and we now just have our simplified schema here
85
00:10:37,840 --> 00:10:45,360
and so the next step here is
going to be to run "codegen" and
86
00:10:47,120 --> 00:10:51,840
that will generate our TypeScript
classes from this schema.
87
00:10:52,800 --> 00:10:59,840
Those classes will provide functionality for
creating, saving, loading and updating the entities.
88
00:11:02,080 --> 00:11:05,360
So to run "codegen", I'm going
to run "graph codegen" here
89
00:11:08,400 --> 00:11:10,480
and we'll see here that the
90
00:11:11,600 --> 00:11:18,080
generated entities have changed, so we now
have a "Token" class and we have a "User" class.
91
00:11:22,240 --> 00:11:26,080
So now that we have those classes,
let's go ahead and create some mappings
92
00:11:27,120 --> 00:11:28,880
to save data into those entities.
93
00:11:32,320 --> 00:11:36,560
Let's go ahead and get rid of the existing mappings here.
94
00:11:38,880 --> 00:11:44,560
The only ones that we are going to keep are the
"handleTokenURIUpdated" and the "handleTransfer"
95
00:11:45,840 --> 00:11:50,720
and that will be sufficient for tracking
the transfers and the holders of the token.
96
00:11:54,080 --> 00:12:03,440
Okay, so we're going to update this import to
have our new entity classes, "Token" and "User"
97
00:12:06,320 --> 00:12:12,800
and then let's go ahead and update this
"handleTransfer" function to track what we need.
98
00:12:13,440 --> 00:12:17,360
So, before we get into this, I'm
just going to lay out our goals here.
99
00:12:20,400 --> 00:12:24,880
So what we want to do is we
want to check if the transfer is
100
00:12:27,120 --> 00:12:29,760
mint or a transfer to a new owner
101
00:12:32,400 --> 00:12:42,480
and then, if it is a mint, we're
going to create a new entity, otherwise
102
00:12:44,320 --> 00:12:49,720
simply update the entity's "owner" field.
103
00:12:52,800 --> 00:13:05,720
In addition, we want to check if it is owned by an
existing user and if not, then create a new user.
104
00:13:09,600 --> 00:13:15,920
Another thing we're going to
do here is call the contract
105
00:13:18,240 --> 00:13:24,880
to get the "contentURI" because that
will not be in the transfer event.
106
00:13:27,040 --> 00:13:31,080
Okay, so let's get started.
107
00:13:35,120 --> 00:13:40,960
So, what we're going to do is we're
going to load the token and if it exists
108
00:13:40,960 --> 00:13:44,640
it'll load that token, otherwise we'll get a
null value and we'll be able to check that.
109
00:13:44,800 --> 00:13:48,480
So let's use the "tokenId"
110
00:13:50,720 --> 00:13:55,160
in string format to attempt to load that token
111
00:14:06,480 --> 00:14:12,400
and then let's check if that token exists already
112
00:14:15,280 --> 00:14:17,680
and if it doesn't, we're going to create a new one.
113
00:14:35,040 --> 00:14:41,080
And we're going to initialize it
with the "creator" and "tokenId" fields.
114
00:14:48,640 --> 00:14:54,960
So if it's a new token, the "to" field
will contain the "creator" address.
115
00:15:08,960 --> 00:15:14,960
Cool, and now, regardless of
whether it's a new token
116
00:15:14,960 --> 00:15:19,080
or just a transfer, we're going to
want to update the "owner" field
117
00:15:30,520 --> 00:15:33,880
and then save the token to the database.
118
00:15:34,400 --> 00:15:38,960
Okay, so now here we've checked if the token exists
119
00:15:38,960 --> 00:15:45,400
and if not, we've created a new one and then we
update the "owner" field in either case.
120
00:15:45,760 --> 00:15:52,480
Cool, so that was our first step, now the next is to check
if it is owned by an existing user and if not,
121
00:15:52,480 --> 00:15:54,360
then we're going to create a new user.
122
00:15:54,480 --> 00:16:01,120
So this is a similar process,
let's attempt to load the user
123
00:16:16,800 --> 00:16:18,920
and then if user equals "null"
124
00:16:20,880 --> 00:16:21,840
we're gonna make a new one
125
00:16:36,400 --> 00:16:37,760
and we're gonna save that.
126
00:16:44,400 --> 00:16:55,280
Okay, so we are almost done with our plan here
but the last thing we want to do is save the "contentURI".
127
00:16:56,280 --> 00:17:02,920
The "contentURI" is not omitted from that event,
so we're going to have to call the contract to get it.
128
00:17:03,720 --> 00:17:07,600
To do that, we will bind to the contract.
129
00:17:15,200 --> 00:17:17,840
Let's make sure we have that here.
130
00:17:30,480 --> 00:17:34,720
Okay, binding to the contract
131
00:17:36,800 --> 00:17:39,400
with the address of that event.
132
00:17:39,520 --> 00:17:40,960
So this will bind to that contract
133
00:17:40,960 --> 00:17:45,640
at that block number and then we can
make contract calls as of that block.
134
00:17:53,440 --> 00:17:58,520
We're going to call the "tokenURI"
function to get the URI of that token.
135
00:18:08,560 --> 00:18:10,720
Okay, that's looking good here.
136
00:18:29,440 --> 00:18:30,120
Great.
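(A minimal sketch of the finished handler, assuming the "Transfer" event class and "TokenContract" binding generated by codegen; the generated class name and event parameter names depend on the ABI:)

    export function handleTransfer(event: Transfer): void {
      let token = Token.load(event.params.tokenId.toString())
      if (token == null) {
        // A mint: create the token with its creator and tokenId
        token = new Token(event.params.tokenId.toString())
        token.creator = event.params.to.toHexString()
        token.tokenID = event.params.tokenId
        // The contentURI is not in the event, so bind to the
        // contract at this block and call tokenURI
        let contract = TokenContract.bind(event.address)
        token.contentURI = contract.tokenURI(event.params.tokenId)
      }
      // Mint or transfer, the "to" address is the new owner
      token.owner = event.params.to.toHexString()
      token.save()

      // Create the owning user if we haven't seen them before
      let user = User.load(event.params.to.toHexString())
      if (user == null) {
        user = new User(event.params.to.toHexString())
        user.save()
      }
    }
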
137
00:18:36,000 --> 00:18:40,080
Okay, once we are done with that,
I'm satisfied that this
138
00:18:40,080 --> 00:18:43,360
to-do list has been completed, so I'm going
to get that out of there to clean it up
139
00:18:45,040 --> 00:18:49,120
and now we're going to move on to the
"handleTokenURIUpdated" function.
140
00:18:52,240 --> 00:18:55,840
This one's going to be pretty
simple, we're just going to
141
00:18:56,560 --> 00:19:00,720
load the token and update its "contentURI".
142
00:19:25,440 --> 00:19:28,640
So you can see that "URI"
field was emitted on the event
143
00:19:28,640 --> 00:19:36,040
and we are going to add that to the token we've
loaded and then save it back to the database.
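(A sketch of that handler, assuming the event carries the token id and the new URI; the parameter names come from the contract ABI:)

    export function handleTokenURIUpdated(event: TokenURIUpdated): void {
      let token = Token.load(event.params._tokenId.toString())
      if (token == null) return
      // The new URI is emitted on the event itself
      token.contentURI = event.params._uri
      token.save()
    }
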
144
00:19:49,040 --> 00:19:52,640
Okay, so now our mappings are
doing what we want them to do.
145
00:19:53,360 --> 00:19:56,560
We have a schema with
a "Token" and "User" entities.
146
00:19:58,160 --> 00:20:00,400
Let's go look at the manifest here.
147
00:20:00,600 --> 00:20:03,880
So let's update the entities
referenced in the manifest.
148
00:20:07,200 --> 00:20:12,800
Let's update the "eventHandlers" so it's
not calling these non-existent handlers.
149
00:20:13,840 --> 00:20:22,560
So we only have "TokenURIUpdated" and "Transfer" and
then in order to make it as efficient as we can,
150
00:20:22,760 --> 00:20:25,120
let's add a "startBlock" here.
151
00:20:26,320 --> 00:20:29,520
What this will do is it'll only
start syncing that subgraph
152
00:20:29,520 --> 00:20:33,840
as of this block and it'll avoid the work of
looking through the blockchain before that block.
153
00:20:37,960 --> 00:20:49,760
We can go back over here to the contract and
look for the contract creation transaction
154
00:20:50,880 --> 00:20:55,200
and we can see that that was at block 11565020
155
00:21:00,560 --> 00:21:05,040
and so we will use that as the "startBlock" and
that will save us some time syncing the subgraph.
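(The relevant parts of subgraph.yaml then look roughly like this; abridged, with an illustrative address placeholder and event signatures as they would appear for an ERC-721-style contract:)

    dataSources:
      - kind: ethereum/contract
        name: Token
        network: mainnet
        source:
          address: "<ZORA_MEDIA_CONTRACT_ADDRESS>"
          abi: Token
          startBlock: 11565020
        mapping:
          entities:
            - Token
            - User
          eventHandlers:
            - event: TokenURIUpdated(indexed uint256,address,string)
              handler: handleTokenURIUpdated
            - event: Transfer(indexed address,indexed address,indexed uint256)
              handler: handleTransfer
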
156
00:21:08,800 --> 00:21:12,040
All right, now that we have our mappings
157
00:21:12,040 --> 00:21:19,680
and our schema and we've updated our manifest
to our liking, let's go ahead and build our subgraph.
158
00:21:26,920 --> 00:21:28,160
Great, that built successfully.
159
00:21:30,000 --> 00:21:35,280
Okay, so we're going to navigate
to the Graph Explorer now
160
00:21:37,040 --> 00:21:43,480
and we're going to create a subgraph on the explorer
and then we'll deploy our newly created subgraph there.
161
00:21:49,920 --> 00:21:52,400
All right, I'm already logged in,
I'm going to go to my dashboard
162
00:21:54,000 --> 00:21:57,360
and I'm going to add a subgraph here.
163
00:21:57,480 --> 00:22:00,000
I'm going to call it Zora,
164
00:22:07,920 --> 00:22:12,680
provide a quick description here
165
00:22:16,480 --> 00:22:18,400
and create that subgraph.
166
00:22:38,160 --> 00:22:45,840
Okay so let's see, here we have that subgraph
is created and now what we need to do is deploy
167
00:22:45,840 --> 00:22:50,160
our local subgraph to that newly
created subgraph on the explorer.
168
00:22:51,200 --> 00:22:57,360
So I'm going to copy my access token here
and I'm going to go back over to my terminal
169
00:22:57,920 --> 00:23:01,000
in my subgraph project and I'm
going to go ahead and deploy it.
170
00:23:01,160 --> 00:23:03,680
So I'm going to use the saved
171
00:23:04,400 --> 00:23:12,720
script for deploying and supply my access
token and we'll go ahead and deploy.
172
00:23:23,680 --> 00:23:31,840
So it has built our subgraph, it is uploading
it to IPFS and it sent the "deploy" command
173
00:23:32,960 --> 00:23:35,200
to The Graph Explorer hosted service.
174
00:23:35,880 --> 00:23:37,480
Okay, looks like that's deployed.
175
00:23:39,760 --> 00:23:41,120
All right, we're syncing.
176
00:23:43,200 --> 00:23:44,800
We already have seven entities.
177
00:23:47,200 --> 00:23:54,640
We can try some queries here and we can see that
we're getting tokens coming in with "contentURI"s
178
00:23:56,240 --> 00:23:57,360
and "creator" "id"s.
179
00:23:59,800 --> 00:24:01,480
Great, that didn't take us too long.
180
00:24:02,600 --> 00:24:04,560
Let's make sure these are working as we'd expect,
181
00:24:04,560 --> 00:24:08,960
so I'm going to just go ahead and take that
"contentURI" and open it in the browser here.
182
00:24:12,800 --> 00:24:16,360
That wasn't so interesting,
so let's try this other one here.
183
00:24:35,080 --> 00:24:37,720
Okay, so it's got an image.
184
00:24:37,880 --> 00:24:39,840
It's not the most interesting
image but I think we can
185
00:24:41,920 --> 00:24:43,960
get a more interesting example here going.
186
00:24:45,440 --> 00:24:47,280
Okay, so we've gone through the process of
187
00:24:47,280 --> 00:24:54,200
initializing a subgraph, updating it to get the data
that we are interested in, and deploying it.
188
00:24:54,200 --> 00:25:00,200
We can see that we can try out queries
in the playground, so this is a great first step.
189
00:25:00,320 --> 00:25:03,760
Let's now go over to some existing subgraphs
190
00:25:04,640 --> 00:25:09,520
and let's go through some examples of how to query
those subgraphs for data that we're interested in.
191
00:25:11,760 --> 00:25:15,200
And so we've been looking at
the Zora subgraph so far, so let's
192
00:25:15,200 --> 00:25:18,640
switch over and look at the Foundation subgraph.
193
00:25:20,000 --> 00:25:22,680
So I'm going to go ahead and navigate to that.
194
00:25:37,080 --> 00:25:39,000
All right, here's that Foundation subgraph.
195
00:25:39,640 --> 00:25:41,840
You can see here again we
have the query playground
196
00:25:42,960 --> 00:25:48,240
and on the right side we have the schema and you
can see that they've added some nice comments in
197
00:25:48,240 --> 00:25:53,920
the schema to explain how each field relates
to the other and what data is stored there.
198
00:25:55,440 --> 00:25:58,000
This is a great place to get
started looking at subgraphs.
199
00:25:58,840 --> 00:26:06,880
By looking at the schema you can get a good understanding
of the data structure and then dive deeper from there.
200
00:26:07,920 --> 00:26:11,880
So, let's see if we can answer some general
questions about this Foundation data.
201
00:26:13,960 --> 00:26:22,760
Let's see maybe who are the top creators in
the Foundation Network based on net sales.
202
00:26:24,080 --> 00:26:26,760
So to do this I'm going to query the
203
00:26:26,760 --> 00:26:36,440
"Creator" entity here and I'm going to add some filters
to it so we can sort it by the sales of each "creator".
204
00:26:36,440 --> 00:26:40,480
So let's see, we have this
example query and we can modify it a bit
205
00:26:44,080 --> 00:26:45,360
for each "creator".
206
00:26:46,840 --> 00:26:53,360
Let's go ahead and get the "id",
"netSalesInETH", "netRevenueInETH",
207
00:26:57,880 --> 00:26:59,360
"netPendingSalesInETH",
208
00:27:00,840 --> 00:27:07,680
"netPendingRevenueInETH" and then let's also look
at what "nfts" are associated with that "creator".
209
00:27:10,160 --> 00:27:15,960
We get the "tokenId", "tokenIPFSPath"
so we can view that NFT,
210
00:27:16,080 --> 00:27:22,160
the "name", "image" and then for that specific
NFT we can also get revenue and sales values.
211
00:27:24,000 --> 00:27:28,800
Okay, that could be interesting but
let's take it a little further and let's
212
00:27:30,800 --> 00:27:36,320
add an "orderBy" here, in order
by the "netRevenueInETH".
213
00:27:38,880 --> 00:27:45,600
Make sure that that is ordering in descending
direction and then let's get the "first" 10.
214
00:27:46,200 --> 00:27:50,360
So that will give us our top 10 creators
and let's see what we get here.
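(The resulting query looks roughly like this; field names as read out in the video, so the live Foundation schema may spell them slightly differently:)

    {
      creators(first: 10, orderBy: netRevenueInETH, orderDirection: desc) {
        id
        netSalesInETH
        netRevenueInETH
        netPendingSalesInETH
        netPendingRevenueInETH
        nfts {
          tokenId
          tokenIPFSPath
          name
          image
        }
      }
    }
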
215
00:27:51,480 --> 00:27:57,920
Cool, so we have an array of creators returned
with the data populated that we want.
216
00:27:58,040 --> 00:28:05,400
Okay, we can see the "image" and "name" don't seem to
be used here but we are getting the "tokenIPFSPath".
217
00:28:06,840 --> 00:28:15,080
Very cool, so we can go a step further
and actually view those tokens.
218
00:28:15,080 --> 00:28:20,600
So, let's take that IPFS path and
see if we can take a look at it.
219
00:28:20,600 --> 00:28:30,480
Let's see ipfs.io and we're gonna paste that path
in there and that gives us a name.
220
00:28:30,480 --> 00:28:38,320
So that's called Finite with a simple description
and an IPFS link to the "image".
221
00:28:39,000 --> 00:28:40,960
Cool, so let's go check out that "image".
222
00:28:45,200 --> 00:28:49,000
Hey, there you go. Okay, it's a cool NFT.
223
00:28:51,200 --> 00:28:56,000
So you can see how pretty quickly I can
query NFTs here, I can query creators
224
00:28:56,800 --> 00:29:02,000
and I can actually pull up the actual
images or videos associated with that NFT.
225
00:29:03,120 --> 00:29:11,280
So using these queries and the shared HTTP
query API, you can see how you could
226
00:29:11,280 --> 00:29:18,320
easily integrate this into a front-end app that
is able to sort by creators and display their art.
227
00:29:21,520 --> 00:29:28,560
Okay, so we've quickly gone through the creation
of NFT subgraphs and we've gone through
228
00:29:28,560 --> 00:29:35,120
how to query subgraphs in order to get useful
data and then the next step might be to integrate
229
00:29:35,120 --> 00:29:39,840
that into a front-end application which we
will not have time for today, unfortunately.
230
00:29:42,080 --> 00:29:49,320
I hope this workshop has been helpful to you
in giving you a quick overview of how subgraphs work
231
00:29:49,680 --> 00:29:55,400
and has hopefully provided some inspiration that
you can take forward into the hackathon.
232
00:29:56,320 --> 00:29:59,880
All right, happy hacking everyone. See you later.
1
00:00:01,680 --> 00:00:07,600
In this video you'll learn how to build and publish a
subgraph to The Graph Explorer with Subgraph Studio.
2
00:00:09,840 --> 00:00:15,360
The subgraph that we'll be building today will
be indexing data from the Zora Smart Contract.
3
00:00:15,360 --> 00:00:17,520
Zora is a popular NFT marketplace.
4
00:00:20,000 --> 00:00:24,320
To be successful in this guide you should
have a MetaMask wallet installed as an
5
00:00:24,320 --> 00:00:29,334
extension to your web browser, as well as
Node.js installed locally on your machine.
6
00:00:31,520 --> 00:00:35,440
To get started,
navigate to thegraph.com/studio
7
00:00:38,240 --> 00:00:40,240
Here you'll see a button
to connect your wallet.
8
00:00:42,000 --> 00:00:46,167
Choose MetaMask and then choose the address
that you would like to authenticate with.
9
00:00:57,920 --> 00:01:01,520
To create a subgraph,
click Create a Subgraph.
10
00:01:01,520 --> 00:01:04,500
Here, give your subgraph a
name and then click Continue.
11
00:01:12,560 --> 00:01:15,600
This view allows you to see
information about your subgraph,
12
00:01:15,600 --> 00:01:19,840
including the name, the status,
the slug and the deployment key.
13
00:01:20,560 --> 00:01:24,320
You can also edit metadata about the
subgraph, including the description,
14
00:01:24,320 --> 00:01:28,480
the source code URL, the website
URL and up to three categories.
15
00:01:29,920 --> 00:01:32,720
Now that the subgraph has been
initialized in the studio,
16
00:01:32,720 --> 00:01:35,360
open your terminal so we can
start writing some code.
17
00:01:35,360 --> 00:01:38,640
The first thing you'll need to do
is install The Graph CLI globally
18
00:01:38,640 --> 00:01:42,960
using "npm install -g @graphprotocol/graph-cli".
19
00:01:46,560 --> 00:01:50,880
If The Graph CLI is installed successfully
you should now have the "graph" binary.
20
00:01:53,600 --> 00:01:56,640
To create a new subgraph you can
use the "graph init" command.
21
00:01:57,520 --> 00:02:00,400
Here we'll pass in additional arguments:
"--contract-name" to name the contract,
22
00:02:01,680 --> 00:02:03,700
"--index-events" to index events,
23
00:02:04,900 --> 00:02:07,367
"--product subgraph-studio" to set the product
24
00:02:11,600 --> 00:02:15,440
and then "--from-contract",
pasting in the contract address.
25
00:02:16,080 --> 00:02:19,520
You can copy the contract address from
the description below or visit
26
00:02:23,600 --> 00:02:25,600
and copy the media contract address.
27
00:02:27,200 --> 00:02:32,400
For the subgraph name, paste in the name of the
subgraph that you created in Subgraph Studio.
28
00:02:32,400 --> 00:02:35,840
You should now be able to choose
the defaults for the rest of the options.
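(Put together, the init command from this walkthrough looks roughly like this; the contract address placeholder and subgraph name are illustrative:)

    graph init --contract-name Token \
      --index-events \
      --product subgraph-studio \
      --from-contract <ZORA_MEDIA_CONTRACT_ADDRESS> \
      zoratoken
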
29
00:02:43,200 --> 00:02:45,840
Next, change into the new
directory and you should see
30
00:02:45,840 --> 00:02:49,167
all of the files that have been
created for the subgraph boilerplate.
31
00:02:50,400 --> 00:02:53,280
Next, let's go ahead and open
the project in our text editor.
32
00:02:55,920 --> 00:03:01,120
The first file that we will update is the
GraphQL schema located at schema.graphql.
33
00:03:02,320 --> 00:03:07,040
Here we can delete the existing boilerplate
code and create a new entity for a "Token" type.
34
00:03:08,560 --> 00:03:12,467
The token will have fields for
"id", "tokenID", "contentURI",
35
00:03:12,480 --> 00:03:17,840
"metadataURI", "createdAtTimestamp",
"creator" and "owner".
36
00:03:18,640 --> 00:03:22,160
The "creator" and "owner" will be of
a "user" type that we'll create next.
37
00:03:23,680 --> 00:03:28,400
The user type will have fields for "id",
"tokens" owned and tokens "created".
38
00:03:31,040 --> 00:03:35,920
One-to-many relationships can be created
using the "derivedFrom" directive, passing in
39
00:03:35,920 --> 00:03:40,080
the field of the parent type from
which the relationship was derived.
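(A sketch of the schema as described, with field types implied by the walkthrough:)

    type Token @entity {
      id: ID!
      tokenID: BigInt!
      contentURI: String!
      metadataURI: String!
      createdAtTimestamp: BigInt!
      creator: User!
      owner: User!
    }

    type User @entity {
      id: ID!
      tokens: [Token!]! @derivedFrom(field: "owner")
      created: [Token!]! @derivedFrom(field: "creator")
    }
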
40
00:03:41,600 --> 00:03:44,434
Save this file and open subgraph.yaml
41
00:03:46,000 --> 00:03:50,000
subgraph.yaml describes the main
configuration for your subgraph.
42
00:03:51,440 --> 00:03:55,920
In addition to some of the boilerplate code
that we will see in this subgraph.yaml file,
43
00:03:55,920 --> 00:03:59,840
the CLI was also able to pull
down the ABIs for this Smart Contract.
44
00:04:15,120 --> 00:04:17,520
By passing in the
"--index-events" flag,
45
00:04:17,520 --> 00:04:20,240
boilerplate "eventHandlers"
have also been created for us.
46
00:04:24,800 --> 00:04:27,567
The first update that we'll make
is setting the "startBlock".
47
00:04:28,367 --> 00:04:31,600
The "startBlock" is an optional
setting that allows you to define
48
00:04:31,600 --> 00:04:34,880
from which block in the chain the
data source will start indexing.
49
00:04:35,920 --> 00:04:41,840
If no "startBlock" is defined the subgraph will
index events starting from the genesis block.
50
00:04:42,960 --> 00:04:46,000
Next, update the entities
to be "Token" and "User" to
51
00:04:46,000 --> 00:04:48,640
match the entities defined
in our GraphQL schema.
52
00:04:54,320 --> 00:04:57,280
Finally, the only two event
handlers that we'll be using
53
00:04:57,280 --> 00:05:01,360
are "tokenURIUpdated" and “Transfer”,
so we can delete all of the others.
54
00:05:06,800 --> 00:05:11,360
In order to make working with Smart Contracts,
events and entities easy and type-safe,
55
00:05:11,360 --> 00:05:16,160
The Graph CLI can generate AssemblyScript types
from the subgraph’s GraphQL schema
56
00:05:16,160 --> 00:05:19,040
and the contract ABIs
included in the data sources.
57
00:05:20,720 --> 00:05:24,240
To do so, let's jump back to our
terminal and run the "codegen" command.
58
00:05:28,480 --> 00:05:32,640
When the "codegen" is complete
you should see a file named schema.ts
59
00:05:32,640 --> 00:05:36,160
and a folder named Token located
in the generated folder.
60
00:05:36,160 --> 00:05:38,800
The last thing we need to
do is write our mappings.
61
00:05:38,800 --> 00:05:42,080
To do so, open src/mapping.ts
62
00:05:44,320 --> 00:05:48,640
Next, we'll import references to the
"TokenURIUpdated" and "Transfer" events,
63
00:05:48,640 --> 00:05:53,760
as well as a reference to the "TokenContract" from
the code that was generated for us by the CLI.
64
00:05:54,640 --> 00:05:57,600
These imports will give us type
safety, as well as functions
65
00:05:57,600 --> 00:06:00,960
that will allow us to interact
directly with the Smart Contract.
66
00:06:03,040 --> 00:06:05,840
Next, we'll import the "Token"
and the "User" from the schema.
67
00:06:06,480 --> 00:06:09,360
These imports will allow us to
interact with the Graph Node.
68
00:06:10,720 --> 00:06:15,100
The interactions that we'll be using are
facilitated by The Graph TypeScript Library.
69
00:06:21,920 --> 00:06:24,640
The Graph TypeScript Library
gives us the following.
70
00:06:24,640 --> 00:06:26,960
An API for working with Smart Contracts,
71
00:06:26,960 --> 00:06:30,800
events, blocks, transactions
and Smart Contract values.
72
00:06:30,800 --> 00:06:35,120
A store API to load and save entities
from and to The Graph Node store.
73
00:06:35,680 --> 00:06:41,200
A log API to log debug messages to
The Graph Node output and The Graph Explorer.
74
00:06:41,200 --> 00:06:45,120
An IPFS API to load files from IPFS.
75
00:06:45,120 --> 00:06:48,240
A JSON API to parse JSON data.
76
00:06:48,240 --> 00:06:53,040
A crypto API to use cryptographic
functions, and low-level primitives to
77
00:06:53,040 --> 00:06:59,200
translate between different type systems such
as Ethereum, JSON, GraphQL and AssemblyScript.
78
00:07:01,440 --> 00:07:03,600
Next, we'll create our "handleTransfer" function.
79
00:07:04,640 --> 00:07:06,480
Let's walk through what this function is doing.
80
00:07:08,480 --> 00:07:14,367
We first try to load the "token" from The Graph Node by
calling "Token.load", passing in the "tokenId".
81
00:07:16,800 --> 00:07:20,800
If the "token" does not exist we create a
new token, setting the "creator",
82
00:07:20,800 --> 00:07:25,280
"tokenID" and "createdAtTimestamp" from
values passed in from the event.
83
00:07:27,600 --> 00:07:34,134
Next, we call out to the "tokenContract" itself to
get and set the "contentURI" in the "metadataURI".
84
00:07:36,720 --> 00:07:39,840
Here we set the "token.owner" and then
save the "token" to the store.
85
00:07:41,760 --> 00:07:43,920
Next, we check to see if the "user" exists.
86
00:07:43,920 --> 00:07:46,160
If they do not, we go ahead
and create a new "User".
87
00:07:52,960 --> 00:07:59,800
The "handleTokenURIUpdated" function updates the "contentURI"
of the "token" and then saves it back to the store.
88
00:08:04,080 --> 00:08:07,440
Now we're finished writing code and we
can deploy and publish the subgraph.
89
00:08:08,800 --> 00:08:11,840
To publish the subgraph
we're going to need the Deploy Key.
90
00:08:12,480 --> 00:08:17,280
To do so, we can go back to our subgraph in the
studio and copy the Deploy Key to our clipboard.
91
00:08:21,440 --> 00:08:28,300
To configure the Deploy Key we can run "graph auth --studio"
and then paste in the Deploy Key when prompted.
92
00:08:32,000 --> 00:08:39,000
To deploy, we can run "graph deploy --studio" passing in the
name of our subgraph and a setting of Version Label.
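(In command form, assuming a subgraph named zoratoken in the studio; the name and version label are illustrative:)

    graph auth --studio               # paste the Deploy Key when prompted
    graph deploy --studio zoratoken   # then enter a version label, e.g. v0.1.0
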
93
00:08:50,240 --> 00:08:55,667
Once your subgraph has been deployed successfully, it should show
up in your Subgraph Studio dashboard and begin syncing.
94
00:09:03,040 --> 00:09:08,467
In this view you should see the GraphQL Playground,
as well as Logs and Details for the subgraph.
95
00:09:15,440 --> 00:09:20,467
To test everything out, we can run the Example
Query given to us in the GraphQL Playground.
96
00:09:29,120 --> 00:09:32,767
We can also define our own queries by
updating arguments and fields.
97
00:09:41,200 --> 00:09:44,400
Our subgraph is now ready to publish
to the decentralized network.
98
00:09:45,200 --> 00:09:48,080
To test how this works, we can
publish to the Rinkeby Network.
99
00:09:49,440 --> 00:09:55,100
To do so, I'll first update my network to be the Rinkeby
Test Network instead of the Ethereum Mainnet.
100
00:10:02,160 --> 00:10:05,840
For Rinkeby Test tokens,
visit faucet.rinkeby.io
101
00:10:09,600 --> 00:10:12,560
To publish your subgraph
to Rinkeby, click Publish,
102
00:10:13,120 --> 00:10:16,667
choose Rinkeby as the network
and then click Publish again.
103
00:10:31,440 --> 00:10:34,640
Once the subgraph is successfully
deployed, you should see links to
104
00:10:34,640 --> 00:10:38,000
view it both in the Subgraph
Explorer, as well as Etherscan.
105
00:10:44,080 --> 00:10:46,400
Next, click View Subgraph in Explorer.
106
00:10:50,240 --> 00:10:53,120
Here we can now test out
signaling on our subgraph.
107
00:10:54,400 --> 00:10:57,840
To test signaling on a subgraph
you will need some test GRT.
108
00:10:58,640 --> 00:11:03,680
You can request test GRT in The Graph Protocol
Discord in the #testnet-faucet channel.
109
00:11:07,440 --> 00:11:10,640
Once you've received your
testnet GRT, click Signal.
110
00:11:11,360 --> 00:11:15,440
Here, enter the amount of GRT that you
would like to signal and then click Approve.
111
00:11:25,440 --> 00:11:29,634
Once you've confirmed the amount,
click on Signal to finalize signaling.
112
00:11:44,640 --> 00:11:49,300
Once signaling is successful, you should
see yourself show up as a Curator.
1
00:00:00,160 --> 00:00:04,800
Welcome everyone, we're here for the
workshop on cost models and performance
2
00:00:04,800 --> 00:00:08,160
with Zac and I let him take the stage.
3
00:00:09,760 --> 00:00:11,640
I’ll just start sharing my screen.
4
00:00:21,440 --> 00:00:23,520
Okay, hello everyone and welcome.
5
00:00:23,520 --> 00:00:29,520
My name is Zac Burns. I go by That3Percent
on GitHub and on Discord.
6
00:00:30,640 --> 00:00:36,160
The name That3Percent is a reference to
an obscure Donald Knuth quote about
7
00:00:36,160 --> 00:00:38,240
performance for computer science geeks.
8
00:00:38,240 --> 00:00:40,240
It's not to be confused
with a reference to
9
00:00:40,240 --> 00:00:43,840
the political group having a
similar sounding name.
10
00:00:44,400 --> 00:00:46,640
So today we're going to be
talking about cost models.
11
00:00:48,320 --> 00:00:50,800
First question is, well
what is a cost model?
12
00:00:50,800 --> 00:00:51,520
What is it for?
13
00:00:51,520 --> 00:00:53,680
Why do we even have this concept?
14
00:00:54,560 --> 00:01:01,120
The problem is that an Indexer needs some
way to express to a Consumer, like the gateway,
15
00:01:01,680 --> 00:01:09,040
what the fee is that they're going to charge
for a given query. Not every query is equal.
16
00:01:09,040 --> 00:01:16,880
In fact, some subgraphs have greater costs
to index than others, some subgraphs can better
17
00:01:16,880 --> 00:01:24,080
amortize those costs over higher query volumes,
some subgraphs may be very competitive
18
00:01:24,080 --> 00:01:26,960
and have competitive
pricing because maybe
19
00:01:26,960 --> 00:01:29,520
they have a lot of signal and
there's a lot of Indexers
20
00:01:29,520 --> 00:01:33,280
competing for queries on that
subgraph, some might not.
21
00:01:33,280 --> 00:01:39,520
And even within a single subgraph, one query
may take a dozen milliseconds to execute and
22
00:01:39,520 --> 00:01:42,480
another may take several
seconds to execute.
23
00:01:44,400 --> 00:01:50,640
Even beyond the differences of subgraphs
and queries, not every Indexer is equal either.
24
00:01:51,360 --> 00:01:57,520
An Indexer which invests in a low latency
database or invests in economic security,
25
00:01:58,080 --> 00:02:01,680
maintains an excellent reputation or
attracts delegation through
26
00:02:01,680 --> 00:02:05,440
their community involvement,
they can and should
27
00:02:05,440 --> 00:02:08,480
charge a premium for the better
service that they offer.
28
00:02:09,920 --> 00:02:16,000
Broadly speaking, there's a lot of factors that
can go into the fee for a given query.
29
00:02:16,560 --> 00:02:21,200
So a Consumer, like the gateway, has a query
and they have a choice to make between
30
00:02:21,200 --> 00:02:25,680
the many Indexers and it would like
31
00:02:25,680 --> 00:02:29,040
to pay a fee for that query,
that is going to be fair
32
00:02:29,040 --> 00:02:32,880
for the level of service that
the gateway requires or
33
00:02:32,880 --> 00:02:35,720
the Consumer requires given
the current market conditions.
34
00:02:36,320 --> 00:02:41,600
So the question is, how might the gateway
approach this problem of finding out what Indexers
35
00:02:41,600 --> 00:02:45,680
are offering to serve that
query for a competitive fee?
36
00:02:46,400 --> 00:02:48,960
So one way to do that might be to expose
37
00:02:48,960 --> 00:02:52,720
an endpoint on the Indexer,
say get price for query,
38
00:02:53,360 --> 00:02:58,880
that API could take a query and then return
the fee that is being proposed by the Indexer.
39
00:02:59,840 --> 00:03:04,720
A gateway could send their query to each Indexer
that's indexing a subgraph and get the fee back
40
00:03:04,720 --> 00:03:06,000
for all of them and then compare.
41
00:03:06,960 --> 00:03:08,400
This has some problems.
42
00:03:08,400 --> 00:03:13,000
The first problem is going to be that it
introduces latency into the query.
43
00:03:13,280 --> 00:03:18,160
So an end user who's waiting for data to load
in a web page, would usually have to wait
44
00:03:18,160 --> 00:03:21,680
an additional amount of time for
those fee proposals to come back.
45
00:03:22,480 --> 00:03:29,520
In practice, this wait time is always going to be
the maximum amount that we're willing to bear,
46
00:03:29,520 --> 00:03:33,840
because some Indexers will probably have
latencies that exceed the maximum wait time
47
00:03:33,840 --> 00:03:36,080
and where we'd be waiting
for some set of
48
00:03:36,080 --> 00:03:39,280
responses to come back and
that would add a lot of time
49
00:03:39,280 --> 00:03:42,000
that the user spends waiting
for data, so that's not
50
00:03:42,000 --> 00:03:45,440
great and then the other problem
with that is throughput.
51
00:03:46,240 --> 00:03:53,600
While any competitive Indexer may expect
to receive many queries, for any given query
52
00:03:53,600 --> 00:03:55,680
only one Indexer can be selected.
53
00:03:55,680 --> 00:04:00,960
So that means that the vast majority of Indexers
are not selected for that one given query and
54
00:04:00,960 --> 00:04:05,840
sending unused fee proposals around is not going
to be an efficient use of the bandwidth
55
00:04:06,960 --> 00:04:10,880
that is limited between the Indexers and
56
00:04:10,880 --> 00:04:13,440
the gateways, so this is
where cost models come in.
57
00:04:13,920 --> 00:04:16,560
You can think of the
cost model as basically
58
00:04:16,560 --> 00:04:20,720
a price sheet that lists all
the queries that an Indexer
59
00:04:20,720 --> 00:04:23,280
is willing to serve
and for what fee.
60
00:04:23,280 --> 00:04:26,960
And so the assumption is
that the gateway will be
61
00:04:26,960 --> 00:04:30,880
able to send many queries
to any Indexer on average
62
00:04:30,880 --> 00:04:37,760
and it pays a higher upfront cost in performance
to request that cost model and then when
63
00:04:37,760 --> 00:04:40,880
the query shows up from
the end user to pay
64
00:04:40,880 --> 00:04:44,240
a much lower performance cost
just to look up the fee,
65
00:04:44,800 --> 00:04:48,720
given the cost model that's
in hand and periodically
66
00:04:48,720 --> 00:04:50,800
update the cost model out of band.
67
00:04:51,680 --> 00:04:55,440
So that's sort of the
design motivation behind
68
00:04:55,440 --> 00:04:58,400
cost models and what we were
thinking of when we designed it.
69
00:05:00,880 --> 00:05:03,840
So here's what a basic
cost model looks like.
70
00:05:05,360 --> 00:05:10,400
To express the cost model succinctly, we
developed a language which is called Agora
71
00:05:11,040 --> 00:05:15,200
and you can think of Agora
as essentially a price sheet.
72
00:05:15,200 --> 00:05:19,440
It lists all the possible GraphQL
queries that you might serve on the left
73
00:05:20,080 --> 00:05:23,840
and all the fees for those
queries go on the right.
74
00:05:24,720 --> 00:05:30,880
We can break down how that looks for this specific
simple cost model that's in front of us right now.
75
00:05:31,680 --> 00:05:34,880
Paying attention to the box
on the left that's labeled
76
00:05:34,880 --> 00:05:40,400
Agora, we can see a simple GraphQL
query, which is for swaps.
77
00:05:40,400 --> 00:05:46,320
I'm just using Uniswap here because that's
a very well-known name in the DeFi community.
78
00:05:46,960 --> 00:05:53,040
So it says that we're going to ask for the GraphQL
query at "swaps", which is followed by a fat arrow
79
00:05:53,040 --> 00:05:59,600
and then the number 0.01 and finally a semicolon
which terminates the statement.
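(As a sketch in Agora syntax, the statement on the slide reads:)

    query { swaps } => 0.01;
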
80
00:06:00,480 --> 00:06:05,040
Continuing the price sheet analogy,
the left side of the fat arrow is the service
81
00:06:05,040 --> 00:06:10,240
that you're offering, in this case the GraphQL
that you can serve and the right side
82
00:06:10,240 --> 00:06:13,520
of the fat arrow is your query fee in GRT.
83
00:06:14,480 --> 00:06:20,400
So in the Agora terminology, right, leaving
the terminology of price sheets for a moment,
84
00:06:21,520 --> 00:06:23,920
this query fee listing
is called a statement.
85
00:06:24,640 --> 00:06:26,480
The left-hand side of
the statement is called
86
00:06:26,480 --> 00:06:32,640
the match and the match tells you the set of queries
to which the statement applies and the right-hand side
87
00:06:32,640 --> 00:06:35,360
of the statement is called
the cost expression.
88
00:06:35,920 --> 00:06:40,240
The cost expression evaluates to
the price of the query in GRT.
89
00:06:41,360 --> 00:06:44,000
In the examples box
we're going to have some
90
00:06:44,880 --> 00:06:47,840
queries and the prices that they would evaluate to.
91
00:06:48,400 --> 00:06:56,400
You can see from the examples that the Agora query,
in the box labeled Agora that's in the statement,
92
00:06:56,400 --> 00:07:00,720
is the most general form
of the query that matches.
93
00:07:01,280 --> 00:07:10,000
So in order to match, every part of the query
specified in Agora must have a corresponding
94
00:07:10,000 --> 00:07:11,760
item in the real query.
95
00:07:12,320 --> 00:07:18,560
So, walking through the first example
you can see that it asks for "swaps"
96
00:07:19,600 --> 00:07:22,480
and the "id" field of those "swaps"
97
00:07:22,480 --> 00:07:28,000
that it gets back and that will match our Agora
cost model because the Agora cost model
98
00:07:28,000 --> 00:07:32,120
is only asking if they have
swaps and that first query does,
99
00:07:32,760 --> 00:07:38,560
so it matches and therefore the price
of that query is going to be 0.01 of a GRT.
100
00:07:40,720 --> 00:07:46,560
The second query in this example also matches,
even though it's specifying an argument
101
00:07:47,200 --> 00:07:52,880
for the first 10 swaps and it's also asking
for the "transaction" instead of the "id",
102
00:07:53,600 --> 00:07:58,640
that also matches because our Agora cost model
only specifies that it must include swaps to match
103
00:07:59,520 --> 00:08:03,600
and so that also ends up
being priced at 0.01 of a GRT.
104
00:08:04,160 --> 00:08:08,240
And then the last
example is asking for "mints"
105
00:08:09,040 --> 00:08:14,320
by "id" and that query gets no
price, so it's not going
106
00:08:14,320 --> 00:08:18,960
to be served by the Indexer,
it just does not match our cost model.
107
00:08:22,480 --> 00:08:28,640
All right, in Agora a document is
going to be a list of statements.
108
00:08:28,640 --> 00:08:34,160
To get past that problem of some of our
queries not matching, we can provide
109
00:08:34,160 --> 00:08:36,320
more statements to
our Agora cost model.
110
00:08:37,120 --> 00:08:40,320
The statement priority is
going to be from top to bottom.
111
00:08:40,880 --> 00:08:43,760
So given any query, we're going
112
00:08:43,760 --> 00:08:50,160
to try executing each statement and if it
does not match, then we're going to continue
113
00:08:50,720 --> 00:08:53,680
to the next statement and
if it does match, then we're
114
00:08:53,680 --> 00:08:56,320
going to evaluate
the right hand side
115
00:08:56,320 --> 00:08:59,920
of the expression to find
out what that price is.
116
00:08:59,920 --> 00:09:05,200
So that means that your cost model should
order statements from the most specific
117
00:09:05,840 --> 00:09:09,920
to the least specific and
I have some examples here
118
00:09:09,920 --> 00:09:11,920
that's going to show how that works.
119
00:09:15,440 --> 00:09:20,240
All right, so just walking through
this to be fairly pedantic about it.
120
00:09:21,600 --> 00:09:25,200
Our cost model is specifying
three different statements.
121
00:09:26,640 --> 00:09:33,920
The first statement would match any query asking
for at least "swaps" and the field "transaction".
122
00:09:35,360 --> 00:09:43,120
The second statement is asking for any query
that asks for at least "swaps", with the specific
123
00:09:43,120 --> 00:09:50,240
argument asking for the first 10 swaps and that
has a different price and then the last query
124
00:09:50,240 --> 00:09:54,320
is the least specific one
and that's just asking for
125
00:09:54,320 --> 00:09:59,200
anything that has at least "swaps"
and that has a price as well.
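(A sketch of that three-statement cost model; the first and last prices come from the examples that follow, while 0.015 for the middle statement is an illustrative stand-in, since its exact value isn't stated:)

    query { swaps { transaction } } => 0.02;
    query { swaps(first: 10) } => 0.015;
    query { swaps } => 0.01;
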
126
00:10:01,360 --> 00:10:07,120
So in the examples, the first
query is asking for "swaps" by "id".
127
00:10:07,840 --> 00:10:12,240
We can check each statement in order
to see if it's going to apply.
128
00:10:12,240 --> 00:10:13,280
So the first statement
129
00:10:14,080 --> 00:10:17,920
in Agora is not going to match this
query because that statement's match
130
00:10:17,920 --> 00:10:24,640
is specifying the "transaction" selector
but our query is asking for the "id" selector
131
00:10:25,200 --> 00:10:28,000
and it does not have
the "transaction" selector,
132
00:10:28,560 --> 00:10:33,120
so it will go on to try and
match the second statement.
133
00:10:33,520 --> 00:10:40,720
The second statement also does not apply
because that statement is saying to match an argument
134
00:10:40,720 --> 00:10:49,680
for the first 10 swaps and the query, "swaps" by "id",
doesn't have that argument but the last statement
135
00:10:50,480 --> 00:10:57,840
does apply, because that statement's match
is only specifying that it needs to ask for "swaps"
136
00:10:57,840 --> 00:11:02,000
and the query does ask for "swaps",
so this query is going to be apprised given
137
00:11:02,000 --> 00:11:06,400
that final statement for
the price of 0.01 of a GRT.
138
00:11:06,960 --> 00:11:15,560
We can execute that same process on the second
example query, which I'll go over now.
139
00:11:15,920 --> 00:11:22,000
It says we're going to ask for "swaps" with
the argument "first: 10" and then ask for
140
00:11:23,680 --> 00:11:24,880
the "transaction" field.
141
00:11:25,600 --> 00:11:32,240
Looking at our first statement, we see that we are
indeed asking for "swaps", which the statement
142
00:11:32,240 --> 00:11:37,600
says that it must be asking for "swaps" to match
and then we must also be asking for a "transaction"
143
00:11:37,600 --> 00:11:45,440
to match and our query, the second query in the
examples does specify, yes I want the "transaction",
144
00:11:45,440 --> 00:11:52,000
so that's going to be priced using the first statement
for the price of 0.02 of a GRT because it completely
145
00:11:52,000 --> 00:11:56,240
matches what's being
asked for by Agora.
146
00:12:01,600 --> 00:12:10,560
Now GraphQL is very flexible and there's practically
an infinite number of queries that you might serve,
147
00:12:11,360 --> 00:12:17,600
so it would not be a very good idea to try and list
every permutation of a query in Agora.
148
00:12:17,600 --> 00:12:26,000
So we have a way, instead of listing individual
queries, to extract the relevant features of a query
149
00:12:26,000 --> 00:12:28,480
and then derive a price from that.
150
00:12:29,360 --> 00:12:36,320
So the first statement in this slide is an example
of that, I'm just going to walk through the syntax here.
151
00:12:36,320 --> 00:12:42,080
Instead of saying before, like on the previous
slide, that we want "swaps" with the first 10,
152
00:12:42,640 --> 00:12:48,800
it instead captures that argument using
the GraphQL variable syntax and says the first colon
153
00:12:48,800 --> 00:12:55,920
and dollar sign first ("first: $first"), so that will take any value
for "first" and then capture that value,
154
00:12:55,920 --> 00:13:01,600
so that we can then use that value later
in the right hand side in the cost expression.
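(In Agora syntax, the pair of statements being described looks like this; the base cost, per-item cost and fallback price are the ones given in the examples:)

    query { swaps(first: $first) } => 0.01 + 0.0002 * $first;
    query { swaps } => 0.03;
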
155
00:13:02,720 --> 00:13:10,160
So, giving an example here,
it says, if we want the "swaps",
156
00:13:10,160 --> 00:13:14,960
first 100 "swaps" get their "id",
the way that that's going
157
00:13:14,960 --> 00:13:21,360
to be costed is to run that first expression. It’s
going to match on "swaps", it's also going to match
158
00:13:21,360 --> 00:13:26,880
on this argument that says, have the first
argument and it's going to then capture
159
00:13:27,760 --> 00:13:30,960
that 0.01 into the variable "first",
160
00:13:31,760 --> 00:13:38,720
execute then the right hand side of the cost expression
which says I want a base cost of 0.01 of a GRT
161
00:13:39,520 --> 00:13:49,440
and then I want to also charge for the number of swaps
that you're returning with a cost of 0.0002 GRT,
162
00:13:50,560 --> 00:13:56,000
multiplied by that "first" argument, which is going
163
00:13:56,000 --> 00:13:59,840
to be how big the list is of
the swaps being returned.
164
00:14:00,880 --> 00:14:06,400
So in this way you can
scale the value of a query
165
00:14:06,400 --> 00:14:13,840
or the difficulty of the query by extracting the relevant
features from that query and then using it in your cost expression.
166
00:14:14,400 --> 00:14:17,280
So in that first example we
can see that if they specified
167
00:14:18,000 --> 00:14:25,920
first 100 and then capture the "id", that's
going to resolve to a price of 0.03 of a GRT because
168
00:14:25,920 --> 00:14:35,120
that base price 0.01 plus 0.0002
times 100 is that price of 0.03 GRT.
169
00:14:37,520 --> 00:14:44,880
This next example here says "swaps" by "id",
it's going to run the first statement
170
00:14:44,880 --> 00:14:50,320
and find that it does not match because it does
not specify that "first" argument and go on to
171
00:14:50,320 --> 00:14:58,160
the second statement which says that "swaps" without
a "first" argument are going to be priced at 0.03.
172
00:14:59,280 --> 00:15:07,040
This is not coincidental because the default value
for "first" in our GraphQL queries is 100,
173
00:15:07,040 --> 00:15:13,120
so it ends up resolving to the same price
as the previous example which was asking for
174
00:15:13,120 --> 00:15:18,000
the first 100 elements, if we left that
out we get "swaps" by "id", the price 0.03.
175
00:15:19,680 --> 00:15:23,520
And then the third example
is asking for more results,
176
00:15:23,520 --> 00:15:34,560
it says "swaps" "first: 1000" and if you run that you'll get
the price 0.21 GRT which is going to be a greater price
177
00:15:35,440 --> 00:15:37,280
because they're asking for more data.
178
00:15:38,640 --> 00:15:41,600
That is captures, and that's
one of the language features.
179
00:15:42,960 --> 00:15:46,240
One thing is that if
you're like me you probably
180
00:15:46,240 --> 00:15:52,080
don't think in terms of GRT, you
probably think in your local stablecoin.
181
00:15:52,640 --> 00:15:57,360
So one of the utilities
that we provided for you is to
182
00:15:57,360 --> 00:16:04,000
be able to inject the value
of DAI into your cost model,
183
00:16:04,000 --> 00:16:07,160
which is an approximation
of the value of a US dollar.
184
00:16:07,440 --> 00:16:15,200
So if we wanted to have a query that is priced
consistently at half of a penny, then what we could
185
00:16:15,200 --> 00:16:25,120
do is we could take the value 0.005 and multiply
that by $DAI which is a global variable that I'm going
186
00:16:25,120 --> 00:16:27,440
to go over more about what
that means in a minute,
187
00:16:28,480 --> 00:16:35,840
but it's going to correspond to the conversion
rate that would convert from DAI to GRT.
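(As a sketch, a flat half-a-penny price using the $DAI global could be written:)

    default => 0.005 * $DAI;
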
188
00:16:36,720 --> 00:16:45,120
In this case this DAI global variable is a bit magic
because it's automatically injected by the Indexer agent
189
00:16:45,120 --> 00:16:49,760
by default. Your Indexer
agent will look up a price
190
00:16:49,760 --> 00:16:56,400
that it thinks is a good swap rate for
DAI to GRT and then it will inject this.
191
00:16:56,960 --> 00:17:03,600
So, in that way you can respond to
changing market conditions very quickly
192
00:17:03,600 --> 00:17:08,880
in your cost model without having to go in
and then manually tweak a cost model.
193
00:17:09,840 --> 00:17:12,240
Generally that's what global variables are for.
194
00:17:14,640 --> 00:17:17,840
Let's see here.
195
00:17:18,720 --> 00:17:23,120
I'm gonna go ahead, well
I guess there's a quick example
196
00:17:23,120 --> 00:17:28,240
there which went by really quick,
I'm gonna just go over that.
197
00:17:28,240 --> 00:17:38,560
So in this case, that example query of asking
for "swaps" by "id", it would evaluate to 0.5 cents,
198
00:17:38,560 --> 00:17:40,800
half a penny as we anticipated.
199
00:17:42,720 --> 00:17:52,400
All right, so globals, generally, are there for you to be
able to respond to market conditions very quickly.
200
00:17:52,400 --> 00:17:59,360
So you can update any information that changes
over time that you'd like to respond to automatically
201
00:17:59,360 --> 00:18:06,080
but unlike the $DAI variable, you're going to have
to write your own script to collect that information
202
00:18:06,080 --> 00:18:11,200
and then update the globals JSON yourself
and provide that to the Indexer CLI.
203
00:18:13,440 --> 00:18:17,760
So some examples of that, just to give you
some inspiration of what you might use it for.
204
00:18:17,760 --> 00:18:21,440
It's going to be totally up to you but these are some ideas.
205
00:18:21,440 --> 00:18:26,160
The first statement says that if today is discount Friday,
206
00:18:27,040 --> 00:18:30,320
then you're going to apply a specific price
207
00:18:30,320 --> 00:18:36,800
for all queries because it's the first statement
and it's matching all GraphQL. So going over that
208
00:18:36,800 --> 00:18:40,000
it says the first word in
that statement is "default",
209
00:18:40,640 --> 00:18:48,720
that means match all GraphQL and then the next word
is "when" and that means to also apply this condition,
210
00:18:49,360 --> 00:18:55,200
in addition to matching the GraphQL, we also have
to make sure that whatever this condition is, it
211
00:18:55,200 --> 00:19:00,000
evaluates to "true" and then you
see "$DISCOUNT_FRI".
212
00:19:00,800 --> 00:19:05,440
What happens there is then we would
look up the value of "$DISCOUNT_FRI"
213
00:19:05,440 --> 00:19:12,560
in your globals JSON that you provide to the Indexer agent
and we can see there that in the globals on the right,
214
00:19:13,200 --> 00:19:23,520
"$DISCOUNT_FRI" is "true", so this statement would match
all queries, and every query would cost 0.0001 GRT.
215
00:19:24,640 --> 00:19:30,240
I could think of maybe using something like that for
if there was a conference coming up and you expected
216
00:19:30,240 --> 00:19:35,760
there to be a lot of load and maybe you
want to apply a discount for a limited time.
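A minimal sketch of that discount statement and the matching globals JSON, using the names from the slide:

    default when $DISCOUNT_FRI => 0.0001;

    { "DISCOUNT_FRI": true }

With "DISCOUNT_FRI" set to true in the globals, every query matches and is priced at 0.0001 GRT; flip it to false and the statement stops matching.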
217
00:19:38,320 --> 00:19:45,920
The next example would be an example of using
database statistics in order to inform the price.
218
00:19:45,920 --> 00:19:53,840
So you can see that it says, give me "entities" "where"
the "id" is greater ("id_gt") than a specific "id" and maybe
219
00:19:53,840 --> 00:19:58,720
the cost of that query would actually scale with
the number of IDs that are in the database.
220
00:20:00,560 --> 00:20:09,120
So we have an expression after that that says,
use "$ENTITIES" times 0.00001 GRT, we would then
221
00:20:09,120 --> 00:20:18,000
look up what "ENTITIES" is in your globals
table and we see that it has the value of "250",
222
00:20:19,520 --> 00:20:27,680
so then the result for a query that matched that
statement would be 250 times 0.00001, or 0.0025 GRT,
223
00:20:28,640 --> 00:20:29,200
and that way
224
00:20:32,640 --> 00:20:40,720
you'd be able to pipe information
in real time from the database as new blocks
225
00:20:40,720 --> 00:20:48,240
are being ingested by your Indexer and quickly
update your prices without manual intervention,
226
00:20:48,240 --> 00:20:51,040
so maybe that's something that you want
to take into account, it's up to you.
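Sketching that statement and its globals, again with illustrative shapes and names:

    query { entities(where: { id_gt: $id }) { id } } => $ENTITIES * 0.00001;

    { "ENTITIES": 250 }

With "ENTITIES" at 250, a matching query costs 250 * 0.00001 = 0.0025 GRT, and your own script can keep bumping that count as the database grows.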
227
00:20:52,320 --> 00:20:55,520
A third example here would be using
228
00:20:56,800 --> 00:21:00,240
the "$LOAD_FACTOR" on your
server to update the prices.
229
00:21:00,640 --> 00:21:07,600
Perhaps you see that load is changing over the time
of day and you might want to restrict the amount
230
00:21:07,600 --> 00:21:10,640
of load on your server or
maybe if you have low load,
231
00:21:11,280 --> 00:21:16,640
to try to attract more queries.
So we can then see that
232
00:21:18,160 --> 00:21:26,800
this third statement is just using "$LOAD_FACTOR" directly as its price,
and we could look that up in the globals to see the value of "LOAD_FACTOR".
233
00:21:27,360 --> 00:21:29,840
I'm sorry about that,
I tried to move the...
234
00:21:32,240 --> 00:21:37,600
I can't actually see what "LOAD_FACTOR"
says because Zoom's camera thing is in the way,
235
00:21:37,600 --> 00:21:39,280
so whatever that says it would look it up.
236
00:21:40,160 --> 00:21:43,120
I tried to move it out of the way and it went to the
next slide so I'm just going to keep talking.
237
00:21:44,640 --> 00:21:51,440
All right, so it would look up whatever "LOAD_FACTOR" says
and use that as a price and then you could dynamically
238
00:21:51,440 --> 00:21:58,720
detect what kind of load that you're under and update
your price accordingly, maybe to attract more queries
239
00:21:58,720 --> 00:22:02,720
or to try to reduce the load
on your server accordingly.
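A sketch of that third statement; the globals value here is made up, since the real one was hidden on the slide:

    default => $LOAD_FACTOR;

    { "LOAD_FACTOR": 0.0005 }

Your own monitoring script would rewrite "LOAD_FACTOR" upward under heavy load and back down when the server is idle.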
240
00:22:04,000 --> 00:22:10,480
There are many different uses for globals, these are just
some examples to get you thinking about how you might use them.
241
00:22:16,000 --> 00:22:21,040
All right, so once you've developed a cost model,
the next thing that you're going to want to do is
242
00:22:21,040 --> 00:22:26,320
to test it before rolling that out and
so we have some tools to help you do that.
243
00:22:28,560 --> 00:22:35,920
After this slide, and in fact on the last slide, I have some
links to documentation for all of these tools,
244
00:22:35,920 --> 00:22:36,800
so that you don't need to
245
00:22:37,520 --> 00:22:46,960
furiously try to write down these command line things right here, instead
I would encourage you to look up the many options that these tools offer,
246
00:22:48,880 --> 00:22:51,760
according to their documentation
from the last slide.
247
00:22:53,280 --> 00:23:02,400
So in order to evaluate your cost model what you're
going to do is, first, to save some set of queries that
248
00:23:02,400 --> 00:23:11,920
you want to evaluate and the way that you can do that is by setting
the GRAPH_LOG_QUERY_TIMING environment variable
249
00:23:12,480 --> 00:23:20,960
to the value gql and that's going
to start saving some queries to your logs.
250
00:23:22,240 --> 00:23:29,120
The next thing that you want to do is to run
a tool called qlog to pre-process those queries.
251
00:23:29,120 --> 00:23:36,080
It can do things like extract the queries from the log,
it can filter by specific subgraphs, it can take
252
00:23:36,080 --> 00:23:42,240
a specific number of samples of queries,
so that you can iterate more quickly.
253
00:23:42,880 --> 00:23:46,160
So you'll want to run that
using the next statement here
254
00:23:46,160 --> 00:23:54,920
which greps logs for query timing and then pipes that to
qlog and that will output this "queries.jsonl" file.
255
00:23:55,280 --> 00:24:00,080
You'll then want to store that somewhere
so that you can iterate on your cost model
256
00:24:00,800 --> 00:24:06,880
and try running it several times over on the same
set of queries and then the last thing that you want
257
00:24:06,880 --> 00:24:14,240
to do is use the Agora evaluation tool to run over
all of those queries with your given cost model
258
00:24:14,240 --> 00:24:18,080
and your globals, and it will
then output how many query fees
259
00:24:18,080 --> 00:24:22,400
that you would have accumulated in
the aggregate, based on your logs.
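Pulled together, that pipeline looks roughly like the following; the grep pattern and the qlog and Agora invocations are sketches, so check each tool's documentation for the real subcommands and flags:

    # tell graph-node to log query timing
    export GRAPH_LOG_QUERY_TIMING=gql

    # extract the query-timing lines and pre-process them with qlog
    grep "query_timing" graph-node.log | qlog > queries.jsonl

    # evaluate a cost model plus globals over the saved queries
    agora --cost-model model.agora --globals globals.json queries.jsonl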
260
00:24:23,360 --> 00:24:25,440
So that will give you a way of
261
00:24:26,560 --> 00:24:30,640
figuring out whether or not you're
meeting your revenue targets,
262
00:24:30,640 --> 00:24:34,720
say, for a given set of queries
over a given amount of time.
263
00:24:35,360 --> 00:24:40,640
It's not going to be totally
accurate because, of course,
264
00:24:41,680 --> 00:24:47,760
the actual price that you specify for your queries
is going to affect the queries that you receive,
265
00:24:47,760 --> 00:24:51,360
so it's not a prediction in any
sense but it's better than nothing.
266
00:24:52,720 --> 00:24:55,840
So go ahead and test your
cost model using this.
267
00:24:58,080 --> 00:25:01,040
Once you're satisfied with
the results and you'd like
268
00:25:01,040 --> 00:25:05,680
to deploy your cost model, you can
use the Indexer's CLI for that.
269
00:25:07,360 --> 00:25:14,240
So there's one command in the Indexer CLI
for setting the variables, which you are expected
270
00:25:14,240 --> 00:25:20,400
to do fairly frequently, according
to changing market conditions, a changing database,
271
00:25:20,400 --> 00:25:26,000
changing whatever and
there's also another command
272
00:25:26,000 --> 00:25:31,920
for setting the model itself, which you are expected
to do a bit less frequently because that would usually
273
00:25:31,920 --> 00:25:38,280
require human intervention and a changing
strategy of how you want to do your prices.
274
00:25:38,640 --> 00:25:44,080
Again, links to documentation are going to be
on the last slide, so you don't need to memorize
275
00:25:44,080 --> 00:25:48,320
this but there would be these two commands
in the Indexer CLI that you would use
276
00:25:48,880 --> 00:25:52,080
to deploy your cost model.
277
00:25:53,600 --> 00:26:02,160
So the set variables command takes a JSON file,
which contains all of the global variables
278
00:26:02,160 --> 00:26:09,040
that you want to use in your cost model and
then the set cost model command takes an Agora file,
279
00:26:10,080 --> 00:26:15,200
which contains the list of statements
that I showed examples of earlier.
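For reference, a hedged sketch of those two Indexer CLI commands; the deployment ID and variable values are placeholders, and the exact argument order may differ from the docs:

    graph indexer cost set variables QmDeployment... '{ "DAI": "0.5" }'
    graph indexer cost set model QmDeployment... model.agora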
280
00:26:18,800 --> 00:26:25,600
So we've looked at Agora and we've seen
how to communicate your prices, that still leaves
281
00:26:25,600 --> 00:26:32,560
a pretty big question that's on everyone's mind and
that is, what should you set your fees to?
282
00:26:33,920 --> 00:26:37,200
I'm not gonna, during this
talk, anchor query fees
283
00:26:37,200 --> 00:26:41,200
to any particular value because I think
that would do the community a disservice.
284
00:26:42,400 --> 00:26:45,600
The network is going to
be comprised of many
285
00:26:45,600 --> 00:26:49,600
independent agents that are all
acting in their own best interest.
286
00:26:49,600 --> 00:26:54,720
That includes DApp Developers trying to minimize
the price that they are paying for queries,
287
00:26:54,720 --> 00:27:02,560
that includes Indexers trying to maximize the price
that they're accruing from queries and I have no inside
288
00:27:02,560 --> 00:27:06,160
knowledge of what any DApp
Developer is willing to pay,
289
00:27:06,160 --> 00:27:13,520
I have no inside knowledge of what an Indexer
would charge for a query, but I can tell you how
290
00:27:13,520 --> 00:27:17,040
I might frame the problem
if I was running an Indexer
291
00:27:17,040 --> 00:27:20,960
and I can hope that that would be
helpful to you as a starting point.
292
00:27:22,000 --> 00:27:25,600
So if you remember, at the
beginning of this talk
293
00:27:25,600 --> 00:27:31,200
I said that many things can go into the
price of a query, that there are many factors.
294
00:27:31,440 --> 00:27:33,840
One would be what
differentiates you as an Indexer,
295
00:27:34,480 --> 00:27:38,800
did you splurge for that low latency database
and do you need to recover that cost?
296
00:27:39,440 --> 00:27:42,960
Does the subgraph that you're
serving have a high query volume?
297
00:27:42,960 --> 00:27:45,200
Does it cost a lot to index it?
298
00:27:45,200 --> 00:27:50,560
Do you need to make many Ethereum calls
to get that data from an Ethereum provider
299
00:27:50,560 --> 00:27:51,920
and what is that going to cost you?
300
00:27:52,880 --> 00:27:54,880
So I'm going to make
the case though, that when
301
00:27:54,880 --> 00:28:01,240
setting your optimal price, none of
these factors are your primary concern.
302
00:28:01,920 --> 00:28:06,880
When you are setting the price there's actually
only one question that really matters and the question is,
303
00:28:07,440 --> 00:28:10,800
what price maximizes my revenue?
304
00:28:12,240 --> 00:28:19,840
The Indexer's revenue is going to be the price per query
that they set, multiplied by the size of the market
305
00:28:19,840 --> 00:28:26,400
that they can capture at that price and the size
of the market that you can capture already takes
306
00:28:26,400 --> 00:28:29,840
into account all of those other questions
that we were thinking about when
307
00:28:32,080 --> 00:28:33,640
talking about query fees earlier.
308
00:28:33,840 --> 00:28:38,960
The size of the market that you can capture
takes into account what differentiates you as
309
00:28:38,960 --> 00:28:42,640
an Indexer compared to the pool
of available Indexers to select.
310
00:28:43,360 --> 00:28:49,200
The size of the market that you can capture takes
into account whether you splurge for that low latency
311
00:28:49,200 --> 00:28:59,200
database and then the revenue function is affected
by the size of that addressable market but then
312
00:28:59,200 --> 00:29:06,400
all that's left after taking all of those things into account
is the price times the market captured
313
00:29:06,400 --> 00:29:14,320
and getting as much revenue
as possible out of the market that you can capture,
314
00:29:14,320 --> 00:29:21,240
taking into account that the price that you set also
affects the size of the market that you can capture.
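In symbols, with Q(p) standing for the market you capture at price p, the whole framing collapses to:

    R(p) = p \cdot Q(p), \qquad p^{*} = \arg\max_{p} \, p \cdot Q(p)

Every differentiating factor, latency, economic security and so on, only enters through Q(p).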
315
00:29:21,520 --> 00:29:24,240
So once you figure out your
revenue maximizing price,
316
00:29:24,880 --> 00:29:31,680
then you can decide which query markets are
the most attractive based on your expected profits.
317
00:29:33,360 --> 00:29:35,520
So I think that can kind of simplify
318
00:29:37,200 --> 00:29:41,600
what you're considering when writing
your cost model but then it makes
319
00:29:41,600 --> 00:29:46,880
it a bit of a more interesting question,
when you're deciding which subgraphs to index,
320
00:29:46,880 --> 00:29:50,560
based on how many other Indexers
are indexing this subgraph,
321
00:29:50,560 --> 00:29:54,320
based on what is the query volume
and all these other questions.
322
00:29:57,120 --> 00:30:00,400
Once your broad strategy is established...
323
00:30:02,560 --> 00:30:09,440
Actually, sorry, that's for
another slide. All right.
324
00:30:12,400 --> 00:30:16,720
Okay, I want to hammer this point
home about maximizing revenue.
325
00:30:18,160 --> 00:30:20,320
Let's see, can we get this to run?
326
00:30:20,320 --> 00:30:24,960
No. How do we get this...
327
00:30:26,160 --> 00:30:26,880
There we go. Okay.
328
00:30:30,000 --> 00:30:35,840
Technical difficulties, I apologize.
329
00:30:38,240 --> 00:30:47,280
Okay, so figuring out that optimal price that maximizes
your revenue itself isn't going to be very easy,
330
00:30:48,320 --> 00:30:51,840
because the optimal price
depends on your quality
331
00:30:51,840 --> 00:30:59,680
of service that's relative to the quality of the service
offered by all of the other Indexers, as well as their prices,
332
00:31:00,800 --> 00:31:04,240
the full set of Consumers,
their selection preferences,
333
00:31:04,880 --> 00:31:09,360
their budgets, the amount of
volume at the current time of day,
334
00:31:10,080 --> 00:31:12,640
how much load your server can bear,
335
00:31:12,640 --> 00:31:17,200
the gateway selection algorithm and probably
the phase of the moon, for all that I know.
336
00:31:17,440 --> 00:31:19,200
It can take into account
a lot of different things.
337
00:31:20,560 --> 00:31:24,320
For our gateway, an Indexer
which offers a better service
338
00:31:24,960 --> 00:31:28,160
does have a higher
revenue maximizing price.
339
00:31:29,040 --> 00:31:35,440
So to drive this point home, I want to show this graphic,
a graph that shows the expected
340
00:31:35,440 --> 00:31:40,080
revenue at different prices
relative to a Consumer budget,
341
00:31:42,880 --> 00:31:46,640
given our Indexer selection
algorithm at one point in time.
342
00:31:48,160 --> 00:31:52,720
So what you're looking
at is a graph that shows
343
00:31:53,680 --> 00:32:00,400
on the x-axis a price that
the Indexer could choose,
344
00:32:01,840 --> 00:32:10,480
as a proportion of the maximum price that the Consumer
is willing to spend for a query and then on the y-axis,
345
00:32:12,240 --> 00:32:18,160
it is the amount of revenue that the Indexer
would actually receive for that query.
346
00:32:18,560 --> 00:32:26,480
And over time I'm increasing the amount of utility
that the Indexer brings to a network by, say,
347
00:32:26,480 --> 00:32:28,080
making them a better Indexer, by
348
00:32:29,520 --> 00:32:34,080
having better economic security
or better performance or what have you.
349
00:32:35,200 --> 00:32:39,280
So what I want you to
notice is that by increasing
350
00:32:39,280 --> 00:32:46,960
the utility that you offer, that is going to have a pretty
large effect on the amount of revenue that you can bring in.
351
00:32:48,000 --> 00:32:53,280
So as time increases that graph is getting higher,
which means that we're bringing in more revenue,
352
00:32:53,920 --> 00:32:59,760
but also, as your utility is
increasing, the price that
353
00:33:00,480 --> 00:33:04,320
would maximize your revenue
is also increasing and
354
00:33:04,320 --> 00:33:10,800
so that's represented by the vertical line that's
intersecting the curve at its highest point.
355
00:33:11,440 --> 00:33:18,880
That is your revenue maximizing price because it
captures the most revenue when multiplying
356
00:33:19,440 --> 00:33:28,320
the price for that query by the size of the addressable
market that would be captured at that price.
357
00:33:29,120 --> 00:33:32,640
As you increase your price
up to and beyond the maximum
358
00:33:32,640 --> 00:33:42,400
that a Consumer is willing to pay, your revenue can drop to zero
but as you decrease your price it also goes to zero
359
00:33:43,200 --> 00:33:50,480
because even though you're capturing more queries,
you're not actually making as much per query.
360
00:33:50,720 --> 00:33:55,920
So what you want to do is
figure out how to discover
361
00:33:57,040 --> 00:34:00,400
what that revenue maximizing
price is going to be.
362
00:34:02,800 --> 00:34:09,200
An idea that you might use to figure that out,
is actually just to go back and use global variables.
363
00:34:09,360 --> 00:34:15,040
I could imagine having a global variable that's
a multiplier for all of your prices,
364
00:34:15,040 --> 00:34:23,440
so that you can adjust it and see how it affects
the market that you capture, as well as how that
365
00:34:23,440 --> 00:34:27,200
will impact your revenue and just kind
of try and discover this information.
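A sketch of that idea, where $PRICE_MULTIPLIER is a name I'm inventing for illustration:

    default => 0.01 * $PRICE_MULTIPLIER;

    { "PRICE_MULTIPLIER": 1.0 }

You'd nudge "PRICE_MULTIPLIER" up and down via the set variables command and watch how your captured volume and revenue respond.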
366
00:34:28,400 --> 00:34:31,040
That's an idea, there are
many ways that you might
367
00:34:31,040 --> 00:34:35,600
go about this but that's probably the way
that I would do it if I were an Indexer.
368
00:34:38,960 --> 00:34:43,520
Now going back to questions
about pricing strategy.
369
00:34:43,680 --> 00:34:51,200
Once the sort of broad revenue maximizing strategy
is established, another thing that I might try to do with
370
00:34:51,200 --> 00:34:58,880
cost models is to use Agora to disincentivize very long
running queries that need to be optimized.
371
00:34:59,520 --> 00:35:05,520
So what we've observed in the hosted service
is that many queries go very fast but there are
372
00:35:05,520 --> 00:35:12,880
some queries that might take 5, 10, 30 seconds
to evaluate because of the complexity of the query
373
00:35:12,880 --> 00:35:15,280
or the amount of data
that they're requesting.
374
00:35:16,080 --> 00:35:18,800
So you would want to raise
prices on those queries,
375
00:35:19,360 --> 00:35:23,200
which would incentivize
DApp Developers more
376
00:35:23,200 --> 00:35:26,800
than they are already incentivized
to optimize their queries,
377
00:35:27,360 --> 00:35:29,840
so that those queries can
be executed very quickly
378
00:35:30,800 --> 00:35:38,240
and not bring excessive load to your server.
But also you can raise prices on those queries
379
00:35:38,240 --> 00:35:39,200
so that those
380
00:35:40,240 --> 00:35:46,560
bad queries can be routed to competing
Indexers and become their problem and a problem
381
00:35:46,560 --> 00:35:49,200
for their infrastructure instead of yours.
382
00:35:50,320 --> 00:35:52,640
So that might be another
way that I would use Agora.
383
00:35:55,920 --> 00:36:01,360
Agora isn't necessarily
finished and in fact, one
384
00:36:01,360 --> 00:36:10,720
of the things that we wanted to do during the testnet
was to be able to see how Indexers priced different
385
00:36:10,720 --> 00:36:16,720
queries and figure out what strategies emerged
and then to update Agora over time to
386
00:36:17,600 --> 00:36:20,600
be able to express those
strategies more efficiently.
387
00:36:20,880 --> 00:36:22,880
But it was not something
that we really had time
388
00:36:22,880 --> 00:36:29,920
to do during the testnet but because this is extra-protocol,
we still have room to iterate on this design.
389
00:36:31,440 --> 00:36:39,760
So, here are some ways that we might iterate on it.
Because you can list every possible query
390
00:36:39,760 --> 00:36:48,560
and price each one individually using Agora,
it is possible to express any possible cost model
391
00:36:48,560 --> 00:36:54,560
logic that you could imagine but the downside
of that would be that the cost model would get very long.
392
00:36:55,520 --> 00:37:02,880
So what we want to do is to, within the design
space of Agora, give you that flexibility right now
393
00:37:02,880 --> 00:37:09,360
to be able to experiment and come up with
any cost model that you can imagine
394
00:37:09,360 --> 00:37:15,600
and as patterns emerge, as we see that Indexers
are doing the same things over and over again,
395
00:37:16,160 --> 00:37:22,560
offer new language features in Agora that would allow
you to semantically compress the resulting cost model
396
00:37:22,560 --> 00:37:25,520
and express your strategy
more efficiently.
397
00:37:26,080 --> 00:37:31,680
You can see that the captures that I talked about
before are one feature along those lines,
398
00:37:32,240 --> 00:37:34,960
which can take many different
possible queries,
399
00:37:35,520 --> 00:37:39,440
reduce those to a single pattern
and then make cost models shorter.
400
00:37:40,560 --> 00:37:46,240
Similar ideas can be introduced over time, once
the community is more comfortable with the best
401
00:37:46,240 --> 00:37:49,600
way to express their
strategies using Agora.
402
00:37:49,760 --> 00:37:57,120
So please open issues on the Agora repo,
give us feedback and we'll try to make this
403
00:37:57,120 --> 00:37:59,440
a great tool for you
to use over time.
404
00:38:03,360 --> 00:38:09,600
Lastly, if anyone has any questions,
it looks like we have about 15 minutes for questions
405
00:38:10,240 --> 00:38:17,840
and here are the promised links to various pages
of documentation, so that you can look at the options
406
00:38:17,840 --> 00:38:21,920
of those different command line
arguments that I went over earlier and
407
00:38:23,200 --> 00:38:25,480
figure out the full feature
set that's available to you.
408
00:38:25,760 --> 00:38:31,440
Also in the Agora repo, make sure that you look
at the language reference which is going to show
409
00:38:31,440 --> 00:38:34,560
every possible way that
you can use the language.
410
00:38:34,560 --> 00:38:42,560
It's a bit technical, but once you have the idea
of how that power is broadly used, which I hope
411
00:38:42,560 --> 00:38:45,120
that I've communicated in this talk,
412
00:38:45,120 --> 00:38:50,000
then that will give context to the way that
you might be able to use those language features
413
00:38:50,640 --> 00:38:54,240
and of course, feel free to
ask questions on Discord as well.
414
00:38:54,720 --> 00:38:58,640
If you then look at this documentation
or are confused for any reason,
415
00:38:59,280 --> 00:39:05,520
ping me on Discord and I'll get back to you
but right now we have, yeah again, 15 minutes
416
00:39:05,520 --> 00:39:07,600
for questions if anyone
wants to take the floor.
417
00:39:12,480 --> 00:39:14,480
Hey Zac, awesome presentation.
418
00:39:14,480 --> 00:39:18,000
My head is very slightly
sore, so thanks for that.
419
00:39:18,880 --> 00:39:26,560
Question about default arguments
and how they are matched against Agora rules.
420
00:39:26,560 --> 00:39:31,520
So for example, if you do
not specify a first argument
421
00:39:31,520 --> 00:39:35,360
in the GraphQL query, it obviously
still defaults to a value.
422
00:39:36,080 --> 00:39:39,840
So how are defaults mapped
to rules in Agora?
423
00:39:41,120 --> 00:39:46,400
That's a great question and Agora does not have
knowledge of the default arguments.
424
00:39:46,400 --> 00:39:52,080
I think that would be a great thing to add,
so let's open an issue on Github for that.
425
00:39:53,760 --> 00:39:55,840
Cool, thanks.
426
00:39:59,360 --> 00:39:59,920
Thanks, Zac.
427
00:40:01,440 --> 00:40:06,320
I was only here for the second half unfortunately,
I was in another meeting, so forgive me
428
00:40:06,320 --> 00:40:12,720
if this is covered in your first half but I'm just sort
of thinking about the input side of Agora.
429
00:40:13,760 --> 00:40:18,560
Back in the early days of the mission control,
when we were all sort of experimenting with
430
00:40:18,560 --> 00:40:24,640
the price thresholds,
I often sort of compared it
431
00:40:24,640 --> 00:40:29,920
to stumbling around in a pitch
dark room looking for a light switch.
432
00:40:30,800 --> 00:40:37,840
So we don't have the visibility that
hosted has, to see the queries there.
433
00:40:38,720 --> 00:40:47,680
And on the mainnet, just thinking as an insular Indexer,
you only have knowledge of the queries that you are getting,
434
00:40:49,200 --> 00:40:55,360
so in this new world that we're moving into,
I can see, in terms of the data,
435
00:40:56,400 --> 00:40:58,800
a need to do things
like sharing data,
436
00:41:00,160 --> 00:41:03,760
because I'm assuming that we're
still in that same situation,
437
00:41:03,760 --> 00:41:09,680
where we're not being told how much people,
sorry, how much Consumers are willing to pay.
438
00:41:09,680 --> 00:41:11,600
We have to work out
how much Consumers
439
00:41:11,600 --> 00:41:18,880
are willing to pay, so can you talk a little bit about
your thoughts around that side of the market, I guess, the query market?
440
00:41:18,880 --> 00:41:25,680
Right, so there's a broader question there,
about just how are signals about the broad
441
00:41:25,680 --> 00:41:33,840
market going to be shared in a decentralized way,
without allowing for fraudulent behavior?
442
00:41:34,880 --> 00:41:40,160
How do we ensure that an Indexer could
share information about the market?
443
00:41:40,720 --> 00:41:45,040
How can Consumers share their information
about their preferences and so on and so forth?
444
00:41:46,560 --> 00:41:48,800
That is a question that we've
been thinking about a lot.
445
00:41:49,840 --> 00:41:53,600
I don't think that we have a
good answer for that generally.
446
00:41:54,800 --> 00:42:00,240
So right now, each participant in the network
is going to be limited in their decision making,
447
00:42:01,600 --> 00:42:09,440
based on what information that they can see
until we figure out how to broadly expose
448
00:42:09,440 --> 00:42:12,320
market signal to all
of the participants.
449
00:42:13,600 --> 00:42:23,040
So right now, I would just encourage you to take
active steps toward gathering information.
450
00:42:24,000 --> 00:42:30,320
So one thing that you can do is set your prices
to zero and then see what the size
451
00:42:30,320 --> 00:42:33,760
of your addressable
market is, at its maximum.
452
00:42:34,640 --> 00:42:38,560
This is not something that you would leave
running for very long but you can probably afford
453
00:42:38,560 --> 00:42:46,560
to do it for an hour and then you'd have a decent set
of queries that at least meet the minimum threshold
454
00:42:46,560 --> 00:42:51,840
for what your Indexer offers otherwise,
in terms of say, economic security and performance
455
00:42:51,840 --> 00:42:57,840
and these other factors and then figure out what
your revenue maximizing price is from there.
456
00:42:58,640 --> 00:43:05,040
So until there is a global market signal,
you're going to have to gather that information yourself.
457
00:43:23,920 --> 00:43:32,080
There's a question from Vladimir that he sent me, asking,
is it possible to get a list of available global variables?
458
00:43:32,800 --> 00:43:42,400
And how can he add custom global variables?
There's only one sort of magic global variable
459
00:43:42,400 --> 00:43:49,040
that's automatically injected by the Indexer agent
and that is the conversion rate from DAI to GRT.
460
00:43:49,680 --> 00:43:51,360
All of the other global variables,
461
00:43:52,320 --> 00:43:55,040
you come up with their names,
you put them in a JSON file,
462
00:43:55,600 --> 00:43:59,120
you come up with their values, you
put those values in the JSON file
463
00:43:59,120 --> 00:44:02,480
and they're just supplied
via the Indexer CLI,
464
00:44:02,480 --> 00:44:07,840
so you have total freedom as to what
you want to use for global variables.
465
00:44:14,240 --> 00:44:21,040
Somewhat confusingly, globals seem to be subgraph
specific, as in when I define a cost model
466
00:44:21,040 --> 00:44:25,760
I also attach the globals that apply
to that model specific to the subgraph.
467
00:44:28,080 --> 00:44:36,800
If we were to implement kind of more holistic
global space strategies, for example,
468
00:44:36,800 --> 00:44:42,240
something like load targeting,
is there a true global,
469
00:44:42,240 --> 00:44:48,400
some way of defining like a system load level that applied
to all cost models or would we need to loop through
470
00:44:49,040 --> 00:44:53,840
each of the subgraph deployments
and set the globals individually?
471
00:44:55,200 --> 00:44:58,480
Right now you would have to loop through
and set the globals individually.
472
00:44:59,040 --> 00:45:01,600
With some globals like,
say, the entity count,
473
00:45:03,120 --> 00:45:06,640
that could be information that
is different per subgraph
474
00:45:07,440 --> 00:45:08,800
but may have the same name.
475
00:45:09,680 --> 00:45:16,560
So we don't expose a way of having global variables,
like a hierarchy of global variables.
476
00:45:17,440 --> 00:45:21,520
I think that could be an interesting feature
and that's something that we could add to
477
00:45:21,520 --> 00:45:25,440
the Indexer CLI to merge
global variable documents.
478
00:45:27,360 --> 00:45:30,320
Yeah, I could see that being useful
but we don't have it right now.
479
00:45:32,400 --> 00:45:33,120
Cool, thanks.
480
00:45:44,320 --> 00:45:45,800
Any other question?
481
00:45:46,320 --> 00:45:49,800
Anyone that wants to
jump in to the stage?
482
00:46:10,240 --> 00:46:16,720
Okay, because if not, we can take the time
back to, wanna cover something else Zac?
483
00:46:18,400 --> 00:46:21,280
I don't have any more content
but thank you all for coming.
484
00:46:22,320 --> 00:46:27,840
Okay, thanks everyone and I will be
sharing the recording afterwards.
485
00:46:29,120 --> 00:46:32,080
Right. Bye. Thanks.
1
00:00:00,080 --> 00:00:02,800
All right. Hey everyone,
thanks for being here,
2
00:00:03,600 --> 00:00:07,520
at the migration workshop
for disputes and arbitration.
3
00:00:08,560 --> 00:00:13,600
Today, the agenda is basically to
try to get everyone, mostly Indexers,
4
00:00:13,600 --> 00:00:16,560
up to speed on how disputes and
arbitration are going to be working,
5
00:00:18,080 --> 00:00:21,920
in the decentralized network. Not just from a
protocol economic standpoint, but also from
6
00:00:21,920 --> 00:00:29,120
like the sort of hands-on tooling that
you'll be using to interact with the disputes
7
00:00:29,120 --> 00:00:34,800
mechanism and also, how
arbitration will play out.
8
00:00:35,840 --> 00:00:38,640
So some of this will be a
little bit of review for those
9
00:00:38,640 --> 00:00:40,960
that were in the protocol
town hall yesterday,
10
00:00:40,960 --> 00:00:43,680
so bear with me here but I just
want to do a quick overview
11
00:00:43,680 --> 00:00:47,840
of the arbitration charter
that we showed yesterday,
12
00:00:47,840 --> 00:00:52,000
because that's kind of the context
for how a lot of this other stuff
13
00:00:52,000 --> 00:00:55,760
is going to play out that you'll be
diving into later with the tooling.
14
00:00:57,280 --> 00:00:58,880
So let me share my screen real quick.
15
00:01:02,480 --> 00:01:04,000
Hopefully all of you guys can see this.
16
00:01:04,720 --> 00:01:08,960
So, one thing I want to emphasize
with the arbitration charter,
17
00:01:08,960 --> 00:01:13,520
this is the GIP that, like I said,
that we went through yesterday,
18
00:01:13,520 --> 00:01:17,360
is that this isn't giving the
Arbitrator any new capabilities.
19
00:01:18,240 --> 00:01:22,640
The capabilities of the Arbitrator
are sort of defined and hard-coded
20
00:01:22,640 --> 00:01:26,160
into the Smart Contracts
that are in Solidity,
21
00:01:26,160 --> 00:01:28,160
that have already been
deployed to the network.
22
00:01:28,160 --> 00:01:30,960
This is about establishing
norms to actually constrain
23
00:01:32,480 --> 00:01:34,480
the abilities of the
Arbitrator and also just,
24
00:01:35,280 --> 00:01:38,560
add clarity to how they will
behave in certain situations,
25
00:01:38,560 --> 00:01:40,320
that maybe are a little bit difficult to
26
00:01:42,400 --> 00:01:45,600
formalize in code, or at least
haven't been formalized yet.
27
00:01:46,560 --> 00:01:48,960
So since a lot of this is going
to be review for some of you,
28
00:01:48,960 --> 00:01:50,560
I'm going to try and move
through this pretty quickly.
29
00:01:51,600 --> 00:01:56,320
Most of you should know what the
Arbitrator does at this point.
30
00:01:57,280 --> 00:02:02,240
There are effectively two types of disputes,
I guess you could say three, but
31
00:02:02,240 --> 00:02:06,880
query and indexing disputes is what
the Arbitrator decides the outcome of.
32
00:02:07,600 --> 00:02:11,120
And query disputes have two
subtypes, one is the single
33
00:02:11,120 --> 00:02:12,720
query attestation where it's just
34
00:02:13,920 --> 00:02:17,840
the Fisherman disputing an Indexer
directly and then another is where
35
00:02:17,840 --> 00:02:22,560
a Fisherman uses two Indexers to kind
of effectively dispute each other.
36
00:02:24,320 --> 00:02:26,960
A big part of what, you know, felt important to
37
00:02:27,520 --> 00:02:32,320
us in writing this proposal was that
Indexers felt confident to participate
38
00:02:32,880 --> 00:02:38,080
in this kind of early stage of the
network, without the fear of being punished
39
00:02:38,080 --> 00:02:41,600
for things that are outside
their reasonable control or
40
00:02:41,600 --> 00:02:46,080
outside of what they could be reasonably expected
to do and from, like a diligence perspective.
41
00:02:46,080 --> 00:02:50,640
And so a big part of that is like we don't expect
all Indexers to be Graph Node core developers.
42
00:02:51,440 --> 00:02:54,640
We think that would be bad for
decentralization if that was a requirement.
43
00:02:54,640 --> 00:03:02,240
So the Arbitrator has the discretion, using
this draw outcome mechanism, to decide
44
00:03:02,240 --> 00:03:04,720
disputes as draws, if there's a reasonable
45
00:03:07,520 --> 00:03:13,600
evidence that it was due to a determinism
bug in Graph Node or some other software
46
00:03:13,600 --> 00:03:17,040
that the kind of core developers
have shipped to Indexers.
47
00:03:17,040 --> 00:03:21,680
We've seen these in the past, the
protocol is still under rapid development,
48
00:03:21,680 --> 00:03:25,520
so it's not unreasonable to think that
these would come up again in the future.
49
00:03:26,560 --> 00:03:29,760
Another one here that we mentioned
yesterday was double jeopardy.
50
00:03:30,320 --> 00:03:34,720
Indexers shouldn't be slashable
for the same offense twice.
51
00:03:36,400 --> 00:03:40,400
Unfortunately today the attestation data
structure doesn't have any replay protection,
52
00:03:40,400 --> 00:03:41,040
so that means,
53
00:03:41,840 --> 00:03:48,400
for a specific query request response
pair the Indexer can only be slashed once.
54
00:03:51,120 --> 00:03:56,880
Which effectively means that a
single mistake can't be compounded
55
00:03:56,880 --> 00:04:00,400
by someone just repeatedly
creating disputes for that mistake.
56
00:04:01,760 --> 00:04:09,040
Furthermore, there's going to be a
statute of limitations on how long ago a
57
00:04:09,040 --> 00:04:11,760
fault may have occurred, for
an Indexer to be disputed.
58
00:04:12,400 --> 00:04:13,840
At least according to this proposal.
59
00:04:14,640 --> 00:04:17,120
The reason for that is twofold.
60
00:04:17,120 --> 00:04:21,840
One is that attackers very commonly
will just exit the protocol immediately,
61
00:04:22,480 --> 00:04:25,440
so we didn't want to create
any asymmetry between,
62
00:04:26,320 --> 00:04:30,720
sort of the assumptions that an attacker
can expect versus an honest Indexer.
63
00:04:31,520 --> 00:04:35,920
And then the second is that if
there was no statute of limitations,
64
00:04:36,800 --> 00:04:40,560
the longer an Indexer participated in
the protocol, the more likely
65
00:04:40,560 --> 00:04:45,520
they would be to accumulate some
fault at some point in their past.
66
00:04:46,320 --> 00:04:49,600
And so we also don't want
to punish Indexers for being
67
00:04:49,600 --> 00:04:53,760
long time honest participants in the
protocol by saying that ‘hey you're just
68
00:04:54,320 --> 00:04:58,080
constantly accumulating risk
of being slashed for something
69
00:04:58,080 --> 00:05:00,400
you did at some point in the past’.
70
00:05:01,200 --> 00:05:04,160
So we think the statute of
limitations kind of achieves that.
71
00:05:05,120 --> 00:05:10,080
This proposal has it set at
basically two thawing periods so, 56
72
00:05:10,080 --> 00:05:15,920
epochs, mainly to give the Arbitrator
a nice buffer to still do its job.
73
00:05:18,960 --> 00:05:21,680
This should be pretty
straightforward, but disputes can't be
74
00:05:21,680 --> 00:05:29,280
decided in anyone's favor if the data
isn't available to evaluate the dispute.
75
00:05:29,280 --> 00:05:35,760
So, hopefully we'll get to some of
this in the tooling, but if a....
76
00:05:36,640 --> 00:05:41,600
So as you know, the attestation structure just
has like the content hash of a query in it.
77
00:05:42,240 --> 00:05:45,280
It doesn't have the entire query body itself.
78
00:05:45,280 --> 00:05:49,680
That wouldn't really be practical to store
on-chain or emit in a Solidity event.
79
00:05:50,320 --> 00:05:53,680
So the dispute mechanism kind
of relies on the fact that these
80
00:05:55,040 --> 00:06:01,840
query requests are posted to IPFS, so
that the Arbitrator can actually
81
00:06:04,400 --> 00:06:07,600
can find the query when
it's time to settle a dispute.
82
00:06:08,720 --> 00:06:12,000
Furthermore, this is kind of the case
for a lot of parts of the protocol, but
83
00:06:12,000 --> 00:06:13,840
the subgraph has to be available.
84
00:06:13,840 --> 00:06:18,000
If the subgraph manifest and mappings
and schema aren't available on
85
00:06:18,000 --> 00:06:21,440
IPFS, then again, the Arbitrator
can't really recreate the
86
00:06:21,440 --> 00:06:25,040
work of the query to decide whether
the work was done correctly or not.
87
00:06:29,040 --> 00:06:34,160
This clause is about setting
a cap on the number of times
88
00:06:34,160 --> 00:06:40,480
that an Indexer can be slashed
for queries during an allocation
89
00:06:40,480 --> 00:06:44,960
and the reason that this is important is that
an Indexer only submits one proof of indexing
90
00:06:44,960 --> 00:06:51,920
per allocation and thus, one
instance of possible slashing
91
00:06:53,040 --> 00:06:56,160
for that allocation, whereas we fully expect
92
00:06:56,160 --> 00:06:59,840
Indexers to serve millions
of queries per allocation.
93
00:07:00,400 --> 00:07:06,320
And so, if each of those queries was
slashable and even if the slashing percentage
94
00:07:06,320 --> 00:07:10,960
was like really small, like it could have been
like a fraction of a fraction of a percent,
95
00:07:13,200 --> 00:07:17,040
that would still be potentially
enough to wipe out an Indexer's
96
00:07:17,040 --> 00:07:21,040
entire stake in a single allocation,
due to a determinism bug for
97
00:07:21,040 --> 00:07:26,320
example that, let's say for the
sake of argument that the Arbitrator
98
00:07:26,320 --> 00:07:33,920
wasn't aware of and that, we felt,
would put an undue disincentive
99
00:07:33,920 --> 00:07:37,680
for Indexers to serve queries and
so we wanted to create some clarity
100
00:07:37,680 --> 00:07:44,560
around that there is a sort of bounded downside
risk for serving queries during an allocation.
101
00:07:46,400 --> 00:07:48,080
This also piggybacks on some of the work that
102
00:07:48,080 --> 00:07:53,440
Ariel presented yesterday of
separating the slashing percentages
103
00:07:53,440 --> 00:07:58,880
in the protocol for indexing and query disputes, so today
it's just a single parameter across the protocol.
104
00:08:00,000 --> 00:08:06,400
An indexing dispute and a query dispute are
punished equally, but in the future if that
105
00:08:06,400 --> 00:08:12,320
proposal is accepted, then we'll
be able to parameterize the query
106
00:08:12,320 --> 00:08:17,760
slashing percentage lower than the indexing
slashing percentage for this reason.
107
00:08:21,360 --> 00:08:26,160
Valid proofs of indexing for a given
epoch, it's the correct behavior for
108
00:08:26,160 --> 00:08:29,040
an Indexer to submit a proof
of indexing for the first
109
00:08:29,040 --> 00:08:32,800
block of the epoch, in which
they closed an allocation.
110
00:08:34,880 --> 00:08:41,920
Right now, the closed allocation
transaction doesn't have an epoch argument
111
00:08:41,920 --> 00:08:45,760
or parameter, so there's no way
to like fail that transaction
112
00:08:45,760 --> 00:08:50,000
if for some reason, let's say
due to blockchain congestion,
113
00:08:50,000 --> 00:08:53,840
it actually ends up getting
mined during the next epoch.
114
00:08:54,560 --> 00:08:56,000
So, for that reason,
115
00:08:57,520 --> 00:09:02,640
if a proof of indexing is incorrect for
the epoch in which an allocation was closed
116
00:09:02,640 --> 00:09:08,000
but it is correct for the previous
epoch, then that dispute would be a draw.
117
00:09:08,000 --> 00:09:12,160
There's basically
a little grace window there.
118
00:09:12,160 --> 00:09:20,880
In the future we might add that argument to the
close allocation method in a future proposal,
119
00:09:21,920 --> 00:09:27,200
in which case some of this
stuff could be more automated.
120
00:09:28,880 --> 00:09:33,600
And then the last one, this doesn't really
impact the Indexers, but this is just more about,
121
00:09:33,600 --> 00:09:37,520
the Arbitrator is expected to settle disputes
in a timely manner, and it kind of
122
00:09:38,320 --> 00:09:43,520
defines what that means and so the
rest of this GIP is just rationale for
123
00:09:43,520 --> 00:09:47,680
some of the stuff that we just talked
through but I guess I'll pause there.
124
00:09:47,680 --> 00:09:50,320
Yesterday we didn't get a
lot of time for questions so,
125
00:09:52,560 --> 00:09:57,360
are there any questions specifically
on the contents of this charter?
126
00:09:58,360 --> 00:09:59,520
Zoro/Oliver
127
00:09:59,520 --> 00:10:03,840
I know you had some feedback in the forums,
haven't totally incorporated that in here yet.
128
00:10:04,480 --> 00:10:07,600
Is there anything you want to
actually discuss with respect to this?
129
00:10:15,760 --> 00:10:17,840
Or anyone else, if they have
any questions or comments.
130
00:10:19,600 --> 00:10:20,880
Hopefully this is super clear.
131
00:10:26,960 --> 00:10:30,160
Cool! All right, well if there's no
questions there, then I think we can hand
132
00:10:30,160 --> 00:10:35,600
it off to Ford, who will be walking you
guys through how to use the Indexer agent.
133
00:10:36,800 --> 00:10:38,240
It looks like we actually do have a question.
134
00:10:41,360 --> 00:10:45,840
Is there a rule that Fishermen and
Arbitrators should be independent?
135
00:10:47,680 --> 00:10:50,240
There isn't a rule like that in
the charter right now and it's
136
00:10:51,440 --> 00:10:53,920
impossible to enforce at the protocol level.
137
00:10:58,160 --> 00:11:01,760
Yeah, I guess I would have to understand
the motivation behind the question. If
138
00:11:01,760 --> 00:11:07,360
it's that you wouldn't want the
same, like Indexers to be assigned
139
00:11:07,360 --> 00:11:11,360
to, I guess your concern is that you
don't want a Fisherman to basically
140
00:11:11,360 --> 00:11:16,240
submit a dispute and then
just arbitrate themselves.
141
00:11:16,960 --> 00:11:20,000
It's really impossible to enforce
that, so there's sort of a degree of,
142
00:11:22,240 --> 00:11:25,680
I guess trust that the Arbitrator
has right now in terms of
143
00:11:25,680 --> 00:11:29,040
its role that's been assigned
by decentralized governance.
144
00:11:29,040 --> 00:11:33,120
So one of the things that the charter sort
of does outline is removal of the Arbitrator.
145
00:11:33,680 --> 00:11:39,840
And so, you know, a sort of
obvious fault of the Arbitrator would be to
146
00:11:39,840 --> 00:11:44,720
settle disputes incorrectly and that
would absolutely be grounds for removal
147
00:11:44,720 --> 00:11:49,280
by decentralized governance,
in this case The Graph Council.
148
00:11:53,280 --> 00:11:55,440
It looks like Zoro was muted by Martin.
149
00:11:56,240 --> 00:11:58,080
Oliver, did you have anything
you wanted to add here?
150
00:11:59,280 --> 00:12:03,680
Nothing to add as such, I mean I
left some of my feedback in the
151
00:12:03,680 --> 00:12:06,880
forum post and there's
certainly some things
152
00:12:06,880 --> 00:12:10,320
that I've learned more since
you shared the entire charter.
153
00:12:10,320 --> 00:12:12,640
A couple of things we
can certainly dig into
154
00:12:13,280 --> 00:12:16,400
as we review certain
chapters, so I'm good for now.
155
00:12:17,120 --> 00:12:18,560
Cool! Yeah, thank you.
156
00:12:18,560 --> 00:12:21,360
So yeah, this has been posted to
the forum for about a week or so.
157
00:12:22,160 --> 00:12:24,320
Oliver had some great feedback
in there that we'll
158
00:12:24,320 --> 00:12:27,440
try and incorporate, so I encourage
you guys to take a look at that.
159
00:12:27,440 --> 00:12:32,160
Obviously this affects Indexers quite a
bit, so we look forward to your feedback.
160
00:12:32,720 --> 00:12:37,840
And with that I'll hand it off to
Ford, he'll be walking through...
161
00:12:39,040 --> 00:12:42,880
So the slashing penalty, that's going
to be covered in a separate GIP.
162
00:12:45,440 --> 00:12:48,720
I actually don't recall off the top
of my head what the number, latest
163
00:12:48,720 --> 00:12:50,240
number we've been playing around with.
164
00:12:50,240 --> 00:12:53,840
It's definitely substantially lower
than the indexing slashing percentage.
165
00:12:53,840 --> 00:12:57,200
I would guess somewhere around one percent,
but I don't think that's finalized yet.
166
00:13:00,400 --> 00:13:03,680
And yeah and feel free to keep
posting questions in chat.
167
00:13:03,680 --> 00:13:07,040
We'll move on to Ford for now but
we'll have some time for Q&A at the end
168
00:13:07,040 --> 00:13:08,800
and we can cover this stuff as well.
169
00:13:08,800 --> 00:13:12,320
So Ford will be talking through the
Indexer agent, which is used for,
170
00:13:13,440 --> 00:13:20,240
among many other things, detecting bad POIs
from other Indexers, so I'll hand off to Ford.
171
00:13:20,240 --> 00:13:21,520
All right. Thanks Brandon.
172
00:13:22,800 --> 00:13:31,280
So to start, I'm gonna just do an
overview of how proofs of indexing work,
173
00:13:33,120 --> 00:13:36,640
because that's an important
part of this overall mechanism.
174
00:13:36,640 --> 00:13:41,120
So, a proof of indexing is a
cryptographically strong digest
175
00:13:42,000 --> 00:13:47,200
that The Graph Node uses to prove that it
has done the work of indexing a specific
176
00:13:47,200 --> 00:13:51,920
subgraph deployment and that it has
the correct data for that deployment.
177
00:13:53,200 --> 00:13:55,440
What do I mean by
cryptographically strong here?
178
00:13:56,880 --> 00:14:02,160
There is no chance of
collisions between different
179
00:14:02,160 --> 00:14:05,840
proofs of indexing, so
they will be unique and
180
00:14:05,840 --> 00:14:12,320
it does not leak information, so you
cannot use a POI from another Indexer
181
00:14:12,880 --> 00:14:15,840
to try to derive one
for yourself for example.
182
00:14:17,040 --> 00:14:22,400
Each POI is specific to a deployment,
an Indexer address and a block number.
183
00:14:23,520 --> 00:14:29,280
So, a POI basically declares that
you've indexed that specific subgraph
184
00:14:29,280 --> 00:14:34,080
to that block and it also includes
your Indexer address for uniqueness.
185
00:14:36,400 --> 00:14:40,160
The information in the POI is exposed
by The Graph Node's index node
186
00:14:41,360 --> 00:14:45,040
GraphQL endpoint, so you can query that
on your infrastructure at any point
187
00:14:45,760 --> 00:14:49,680
and that is what the Indexer
agent uses to compare against
188
00:14:50,320 --> 00:14:53,600
the network POIs to try to identify disputes.
189
00:14:55,680 --> 00:15:01,600
And the POI is important, it must be submitted
with an allocation close transaction,
190
00:15:02,960 --> 00:15:06,240
in order for that allocation
to be eligible for rewards.
191
00:15:07,520 --> 00:15:11,840
And that also makes it eligible to
be disputed and potentially slashed.
192
00:15:15,680 --> 00:15:20,560
So, in order to use the POI monitoring
mechanism that we've added to the
193
00:15:20,560 --> 00:15:24,720
Indexer agent, you simply need
to turn on POI dispute monitoring
194
00:15:25,680 --> 00:15:31,120
using the startup argument or using
the corresponding environment variable.
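As a sketch of that startup, with the flag and variable spellings assumed rather than checked, so confirm them against the agent's --help output:

    graph-indexer-agent start --poi-dispute-monitoring true ...

    # or via the environment
    INDEXER_AGENT_POI_DISPUTE_MONITORING=true graph-indexer-agent start ...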
195
00:15:34,160 --> 00:15:40,000
And what the feature does, is it monitors
POIs that have been submitted on the
196
00:15:40,000 --> 00:15:45,440
network and it creates
reference POIs from your Indexer
197
00:15:46,240 --> 00:15:50,240
and it compares those, in order
to identify potential disputes.
198
00:15:51,280 --> 00:15:54,800
So, if the reference POIs
that it generates do not
199
00:15:54,800 --> 00:15:59,600
match the POI that was submitted on-chain,
it'll flag that as a potential dispute.
200
00:16:01,120 --> 00:16:05,440
And then what it does is it stores
those potential disputes in a database
201
00:16:05,440 --> 00:16:09,680
table called POI disputes that
you can then later reference
202
00:16:09,680 --> 00:16:11,440
to submit disputes on-chain.
203
00:16:13,600 --> 00:16:18,480
We've also added a command
to the Indexer CLI.
204
00:16:19,040 --> 00:16:21,840
A "disputes get" command
so that you can view
205
00:16:22,880 --> 00:16:26,240
all allocations that
have been flagged as potential disputes.
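For reference, a sketch of that invocation; any filter arguments are omitted here, so see the Indexer CLI docs:

    graph indexer disputes get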
206
00:16:28,080 --> 00:16:30,800
And I've been running this on my machine
207
00:16:30,800 --> 00:16:35,840
and so I'm just gonna show you guys what
the output of some of this might look like.
208
00:16:39,520 --> 00:16:41,360
So here's my Indexer agent running.
209
00:16:42,560 --> 00:16:51,120
You can see a log here that shows that it has
monitored disputable allocations on-chain
210
00:16:51,120 --> 00:16:56,480
and has found some valid allocations
and no potential disputes in that run.
211
00:16:59,280 --> 00:17:05,040
And then I'm going to go over here into
the Indexer CLI and what I can do is I
212
00:17:05,040 --> 00:17:12,960
can call that "disputes get" command and list
any potential disputes that have been flagged.
213
00:17:12,960 --> 00:17:17,840
So you can see here, I have I think
eight disputes that have been flagged
214
00:17:18,400 --> 00:17:21,040
and you can see the information
from those allocations.
215
00:17:21,040 --> 00:17:24,080
You can see the Indexer that
had submitted that allocation,
216
00:17:25,200 --> 00:17:31,200
you can see the proof that was submitted with
that allocation, the allocation amount, etc.
217
00:17:32,560 --> 00:17:38,320
So you can look at this to look at more metadata
about the allocations that have been flagged.
218
00:17:39,840 --> 00:17:42,800
And for example one thing that
stands out here, is you can see these
219
00:17:43,440 --> 00:17:48,880
"fefefefe..." proofs of indexing that
are clearly not valid and those were
220
00:17:50,000 --> 00:17:54,000
test POIs that we've submitted
on-chain on testnet to test this.
221
00:17:56,160 --> 00:18:00,080
Of course you can also go to the
database and explore the data
222
00:18:00,080 --> 00:18:05,600
and just to show you guys
overall numbers, I've taken that
223
00:18:05,600 --> 00:18:11,040
database table and made a few
charts and so I've like broken up
224
00:18:12,320 --> 00:18:15,040
the allocations that have been
submitted on-chain overall
225
00:18:16,000 --> 00:18:21,360
and here are the potential disputes that have
been identified, so not too many on testnet.
226
00:18:23,360 --> 00:18:27,920
Broken out by deployment and this
kind of just gives you a good
227
00:18:27,920 --> 00:18:32,800
overview of what's been going on on
testnet and how you can look at this data.
228
00:18:32,800 --> 00:18:37,280
And then once you've identified
potential disputes, the next step
229
00:18:37,280 --> 00:18:40,880
would be to look a little
bit closer at them,
230
00:18:40,880 --> 00:18:44,080
identify ones you might want
to submit as disputes and then
231
00:18:44,640 --> 00:18:55,840
we have another tool that Ariel is gonna talk
about here, for actually submitting those disputes.
232
00:18:56,560 --> 00:18:59,920
But before we go to Ariel, do we
have any questions about that?
233
00:19:06,800 --> 00:19:09,840
I had one question that
I'll be the rubber duck for.
234
00:19:10,560 --> 00:19:16,400
So you mentioned that The Graph Node
has an endpoint for returning the POI.
235
00:19:17,280 --> 00:19:23,440
Does that API take the Indexer's
public key as an argument
236
00:19:23,440 --> 00:19:27,200
or is the Indexer agent the one that's
actually computing the final proof of indexing?
237
00:19:29,200 --> 00:19:33,680
Yeah, so that endpoint does take
the Indexer address as an argument
238
00:19:33,680 --> 00:19:38,160
along with the deployment
ID and a block identifier.
239
00:19:39,360 --> 00:19:46,480
And so, in normal operation your Indexer
will use your Indexer address to create POIs
240
00:19:46,480 --> 00:19:49,760
and then submit those, but when
you're checking against another
241
00:19:49,760 --> 00:19:54,640
Indexer you'll use their address to
create that POI that's unique to them.
242
00:19:57,760 --> 00:20:01,840
There's like a base POI that's stored in the
database and then that's combined with the
243
00:20:01,840 --> 00:20:05,840
Indexer address, the deployment ID and
block number to create a unique digest.
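Roughly, that lookup is a GraphQL query like the one below against Graph Node's index-node endpoint; every value shown is a placeholder:

    {
      proofOfIndexing(
        subgraph: "QmDeployment..."
        blockNumber: 1000000
        blockHash: "0xBlockHash..."
        indexer: "0xIndexerAddress..."
      )
    }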
244
00:20:20,400 --> 00:20:21,200
Any other question?
245
00:20:27,440 --> 00:20:29,200
Some people are seeing
themselves on the...
246
00:20:29,200 --> 00:20:31,040
Yeah. ...the testnet list.
247
00:20:31,920 --> 00:20:35,120
Yeah, so if there's nothing
else then I guess Ariel will
248
00:20:35,760 --> 00:20:38,560
take the stage and walk through
the submitting of disputes.
249
00:20:39,600 --> 00:20:41,840
Yeah. I'll share my screen.
250
00:20:43,120 --> 00:20:51,760
I want to show how the dispute process
works, related to the on-chain part.
251
00:20:53,120 --> 00:20:57,920
The first thing that you need to do is
to identify a potential dispute, like
252
00:20:57,920 --> 00:21:04,960
Ford showed, but then, there's a process of
submitting that dispute on-chain to a contract.
253
00:21:05,760 --> 00:21:10,400
So the disputes are currently managed by
a contract called the dispute manager,
254
00:21:11,360 --> 00:21:16,320
that's deployed in, like with these
addresses that you see here, in mainnet and
255
00:21:16,320 --> 00:21:19,600
Rinkeby and you can
256
00:21:20,880 --> 00:21:25,440
check the contract code anytime by looking
at Etherscan because it's verified.
257
00:21:27,760 --> 00:21:29,840
Yeah.
258
00:21:31,040 --> 00:21:33,200
This is the implementation,
I was looking at the proxy.
259
00:21:34,480 --> 00:21:40,640
So, the first thing you need to do is to
interact with this contract by creating
260
00:21:40,640 --> 00:21:44,080
a dispute, passing the information
related to the allocation ID,
261
00:21:44,640 --> 00:21:50,400
or the query that is wrong. For
doing that you can use any tool,
262
00:21:50,400 --> 00:21:53,920
because the contract is open and
it's out there and you can use Remix,
263
00:21:53,920 --> 00:21:59,840
you can use Etherscan but we created
a tool to do that easily. A CLI tool.
264
00:22:01,600 --> 00:22:06,560
The first things you probably want to
know are some relevant network parameters.
265
00:22:07,760 --> 00:22:12,960
The dispute process works by
slashing the Indexers
266
00:22:12,960 --> 00:22:22,000
and the percentage of the stake that would
be slashed can be checked at this URL.
267
00:22:22,640 --> 00:22:28,960
If you go to Network, you'll
see that variable here, 2.5%
268
00:22:30,160 --> 00:22:36,160
and this is set by governance and is stored
in the dispute manager contract that I show.
269
00:22:37,200 --> 00:22:42,640
So this dispute manager, whenever you present
a dispute and it's resolved through arbitration,
270
00:22:44,320 --> 00:22:52,000
will talk with this staking contract and will
slash based on the Indexer stake and this 2.5%.
271
00:22:52,000 --> 00:22:52,500
Okay.
272
00:22:54,800 --> 00:22:57,840
So, that's one of the important
variables you want to see.
273
00:22:58,720 --> 00:23:03,520
Then there's a different governance
parameter related to disputes that is the
274
00:23:04,240 --> 00:23:10,880
percentage of the slash that the
submitter of the dispute will get.
275
00:23:11,600 --> 00:23:19,600
So, when a dispute is resolved,
accepted and the Indexer is slashed, 50%
276
00:23:19,600 --> 00:23:24,640
of that slash is going to go to
the submitter of the dispute.
277
00:23:24,640 --> 00:23:27,360
That's another value that
is set by governance.
278
00:23:28,560 --> 00:23:32,320
And the last one is the bond that
you need to present as a submitter.
279
00:23:32,320 --> 00:23:34,000
If you find a dispute and you want to
280
00:23:34,720 --> 00:23:42,400
dispute that and present that in the
dispute manager, you need to deposit 10,000 GRT.
281
00:23:42,400 --> 00:23:49,840
Now, so, that's something that you
need to have before creating a dispute.
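As a quick worked example of the economics just described, using the governance parameters mentioned (2.5% slashing, 50% of the slash to the submitter, 10,000 GRT minimum deposit); the Indexer stake below is hypothetical:

```typescript
// Worked example of the dispute economics described above.
const slashingPercentage = 0.025;       // 2.5% of Indexer stake, set by governance
const fishermanRewardPercentage = 0.5;  // 50% of the slash goes to the submitter
const minimumDeposit = 10_000;          // GRT bond required to file a dispute

const indexerStake = 500_000;           // GRT (hypothetical Indexer)

const slashed = indexerStake * slashingPercentage;       // 12,500 GRT slashed
const toSubmitter = slashed * fishermanRewardPercentage; // 6,250 GRT to the Fisherman
const burned = slashed - toSubmitter;                    // 6,250 GRT burned

console.log({ minimumDeposit, slashed, toSubmitter, burned });
```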
282
00:23:51,040 --> 00:23:54,640
As I was saying before, we have
a CLI that you can install.
283
00:23:55,440 --> 00:24:02,080
Don't do it now because it's not published,
but I will publish the script after this
284
00:24:02,080 --> 00:24:07,440
meeting and probably in some hours,
but you can install with NPM.
285
00:24:08,960 --> 00:24:14,000
It's a script in TypeScript and it
will give you like different commands
286
00:24:14,000 --> 00:24:16,560
that you can use to interact with
the dispute manager contract.
287
00:24:17,760 --> 00:24:19,600
You will need this information.
288
00:24:22,000 --> 00:24:29,040
The endpoint to query for trusted
POIs, this is the one that Ford mentioned.
289
00:24:31,360 --> 00:24:37,360
So, this script will test if whatever
you are presenting is a valid
290
00:24:37,360 --> 00:24:43,280
dispute or not, so we can catch any
issue before submitting to the dispute manager.
291
00:24:45,760 --> 00:24:49,040
You need an Ethereum Node because it's
going to interact with the contract,
292
00:24:49,760 --> 00:24:53,280
you need the network's subgraph
URL because it's going to query for
293
00:24:53,280 --> 00:24:58,400
allocations and some information
to present in the script.
294
00:24:59,760 --> 00:25:01,200
You will need a private key,
295
00:25:01,200 --> 00:25:06,160
so you send the transaction to the
dispute manager contract and you need the
296
00:25:06,160 --> 00:25:08,800
funds, the 10k if you are going to dispute.
297
00:25:11,360 --> 00:25:17,600
Then, all the things are configurable,
you can use the environment you want, like you
298
00:25:17,600 --> 00:25:20,880
can use an Ethereum Node
for mainnet or for Rinkeby
299
00:25:20,880 --> 00:25:23,360
and interact with any of
these deployed contracts.
300
00:25:24,880 --> 00:25:32,480
So The Graph dispute CLI provides
these commands, a command to list the
301
00:25:33,120 --> 00:25:39,120
active disputes and it will show active
disputes and resolved disputes with the details.
302
00:25:40,160 --> 00:25:46,560
A show command where you can pass the dispute ID
and see the details of that particular dispute.
303
00:25:48,480 --> 00:25:52,880
Some commands to create indexing
disputes or query disputes.
304
00:25:55,440 --> 00:25:59,680
In these cases, the parameters
that you need to pass change a bit.
305
00:26:00,240 --> 00:26:02,320
In the case of an indexing
dispute you will pass the
306
00:26:02,320 --> 00:26:06,320
allocation ID that you
found is using a bad proof.
307
00:26:07,280 --> 00:26:10,480
In the case of a query dispute
you will present an attestation
308
00:26:10,480 --> 00:26:12,240
that the Indexer is going to send you.
309
00:26:12,240 --> 00:26:14,560
Okay. As a response.
310
00:26:16,800 --> 00:26:21,600
Then you can pass it the deposit that
I said, like the minimum deposit is 10k
311
00:26:22,320 --> 00:26:25,840
but you can pass any
number above that 10k.
312
00:26:28,880 --> 00:26:33,520
And then, it also provides some
commands that will be used for
313
00:26:33,520 --> 00:26:36,400
arbitration to reject,
accept or draw the dispute.
314
00:26:37,840 --> 00:26:44,560
This is going to be used by the Arbitrators to
basically decide what to do with that dispute.
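Under the hood, the create commands boil down to a transaction against the dispute manager contract. A rough ethers.js sketch of that step follows; the address is a placeholder and the function signature is an assumption based on the talk, so check the verified contract on Etherscan before relying on it:

```typescript
// Hypothetical sketch of filing an indexing dispute directly against the
// dispute manager contract (what the CLI automates for you).
import { Contract, Wallet, providers, utils } from "ethers";

const provider = new providers.JsonRpcProvider(process.env.ETH_NODE_URL);
const wallet = new Wallet(process.env.PRIVATE_KEY!, provider);

const disputeManager = new Contract(
  "0x0000000000000000000000000000000000000000", // placeholder: real address is on Etherscan
  // Assumed ABI fragment; verify against the actual contract.
  ["function createIndexingDispute(address allocationID, uint256 deposit)"],
  wallet
);

async function fileIndexingDispute(allocationId: string) {
  const deposit = utils.parseEther("10000"); // the 10,000 GRT bond (GRT has 18 decimals)
  // Note: the GRT token must have been approved for this deposit beforehand.
  const tx = await disputeManager.createIndexingDispute(allocationId, deposit);
  const receipt = await tx.wait();
  console.log("Dispute submitted in tx:", receipt.transactionHash);
}
```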
315
00:26:47,600 --> 00:26:54,080
So, I'm going to go a bit into the
different types of disputes and what's
316
00:26:54,080 --> 00:26:58,640
happening behind the scenes and what the
CLI is doing to help the process.
317
00:27:00,240 --> 00:27:04,400
As Brandon said, there's an arbitration
charter and the Arbitrators will
318
00:27:04,400 --> 00:27:08,160
look at the charter and decide on what
to do with a dispute based on that,
319
00:27:09,840 --> 00:27:15,440
but the script is going to help to
validate things before the dispute is
320
00:27:15,440 --> 00:27:20,160
submitted, to follow the arbitration
charter and help the Arbitrators.
321
00:27:21,040 --> 00:27:28,240
So, Ford already described
what a POI is and, like
322
00:27:28,240 --> 00:27:33,840
it's related to an allocation ID
that is closed in an epoch.
323
00:27:34,480 --> 00:27:40,080
What you need to know is
that anyone can dispute, it's like a public
324
00:27:40,080 --> 00:27:44,480
process, like anyone can call
this create dispute function.
325
00:27:45,680 --> 00:27:47,840
So anyone that is monitoring the network
326
00:27:48,880 --> 00:27:52,960
can create a dispute on an allocation ID,
but there are some conditions.
327
00:27:55,120 --> 00:28:01,840
Well, first, this is an example
of how you would call the CLI
328
00:28:02,800 --> 00:28:07,440
to create the transaction
that will submit the dispute.
329
00:28:07,440 --> 00:28:11,120
Okay, in this case I'm submitting
this allocation ID with this
330
00:28:11,120 --> 00:28:14,400
deposit and these are all the
endpoints I'm going to use.
331
00:28:14,400 --> 00:28:15,840
Okay.
332
00:28:16,640 --> 00:28:21,520
So, when you're creating these disputes there are
some things that are enforced by the contract,
333
00:28:23,280 --> 00:28:27,200
you can't create multiple disputes from
the same allocation, that's one thing.
334
00:28:28,640 --> 00:28:33,040
The allocation must exist in the staking
contract, if you pass like a random
335
00:28:33,040 --> 00:28:35,840
allocation ID it's not going
to work, it's going to revert.
336
00:28:36,880 --> 00:28:40,720
The Indexer must have some
stake, you can't dispute,
337
00:28:41,360 --> 00:28:47,280
like an Indexer that doesn't have
any funds or doesn't exist basically.
338
00:28:48,320 --> 00:28:53,200
And the deposit bond should
be above the minimum required.
339
00:28:53,200 --> 00:28:55,200
So those things are
enforced by the contract.
340
00:28:56,480 --> 00:29:03,120
But the CLI is going to test a number of
things to help, like to follow the charter.
341
00:29:04,320 --> 00:29:11,120
These things are, it's not going
to let you create a dispute on some
342
00:29:11,120 --> 00:29:16,960
allocation that was already slashed,
to basically follow this double
343
00:29:16,960 --> 00:29:19,200
jeopardy protection in the charter.
344
00:29:20,880 --> 00:29:26,880
It's also going to check if the
allocation is old and, if it's
345
00:29:28,160 --> 00:29:34,000
older than the two thawing periods in the
charter, it's going to issue a warning, so
346
00:29:34,000 --> 00:29:39,840
you don't submit that dispute, because
basically the Arbitrators will reject or draw it.
347
00:29:42,080 --> 00:29:45,680
And it's going to check data
availability, like it's going to test that the
348
00:29:46,560 --> 00:29:53,040
content is there, so the dispute can
be reproduced by the Arbitrators.
349
00:29:54,480 --> 00:29:58,160
So, this is the case for indexing rewards.
350
00:29:59,440 --> 00:30:03,840
You don't need many input variables,
like you need the allocation ID
351
00:30:03,840 --> 00:30:07,280
but this is going to test a number
of things by querying the subgraph,
352
00:30:07,280 --> 00:30:11,280
querying IPFS for the data
and checking the content.
353
00:30:15,200 --> 00:30:18,160
Then the other type of
dispute is the query disputes.
354
00:30:18,160 --> 00:30:22,880
Indexers are responding to queries
and they're going to sign attestations
355
00:30:22,880 --> 00:30:29,280
based on the response and these
attestations can be used as a receipt,
356
00:30:30,240 --> 00:30:37,760
that if the Consumer finds the
query to be invalid, it can use this
357
00:30:37,760 --> 00:30:40,240
attestation to file a dispute.
358
00:30:42,080 --> 00:30:49,120
So in this case, instead of creating an
indexing dispute, we'll create a query
359
00:30:49,120 --> 00:30:51,760
dispute and we will pass the bytes.
360
00:30:53,760 --> 00:30:59,520
Basically this is the structure of
the attestation, but in byte format
361
00:31:01,360 --> 00:31:06,240
and this will submit a query dispute
in the dispute manager contract.
362
00:31:09,200 --> 00:31:15,920
An additional piece of information that needs to be
available is the actual query that you did
363
00:31:16,880 --> 00:31:20,640
and that needs to be available in
IPFS because this whole process
364
00:31:20,640 --> 00:31:28,000
is, we need to reproduce the dispute so
we can know if it's right or wrong, so
365
00:31:29,280 --> 00:31:32,000
the request needs to be available in IPFS.
366
00:31:32,000 --> 00:31:35,680
So, some conditions enforced by the contract,
367
00:31:36,720 --> 00:31:39,840
only one dispute with the same
data and submitter can be active.
368
00:31:42,400 --> 00:31:46,800
Any of these conditions will
revert if it's not fulfilled.
369
00:31:48,000 --> 00:31:53,040
The Indexer must have available stake and the
deposit must be over the minimum required.
370
00:31:54,000 --> 00:32:01,680
Again, in this case the CLI will
help to enforce the charter
371
00:32:02,480 --> 00:32:06,240
by not submitting disputes that are
going to be drawn by the Arbitrator.
372
00:32:07,280 --> 00:32:11,680
This would help avoid getting lots of
submissions on things that are going to be drawn.
373
00:32:14,240 --> 00:32:18,640
So it will help the Indexers,
Consumers and the arbitration process.
374
00:32:20,160 --> 00:32:21,120
Any questions so far?
375
00:32:24,080 --> 00:32:27,520
We've had a couple questions that've
come up in chat, they're not specifically
376
00:32:27,520 --> 00:32:30,800
related to the tooling, maybe
we can look back on those at
377
00:32:30,800 --> 00:32:33,520
the end, if you have
more to get through here.
378
00:32:35,040 --> 00:32:42,160
Okay, then these are,
I would say,
379
00:32:42,160 --> 00:32:45,440
two commands that most people
will use to file disputes.
380
00:32:46,480 --> 00:32:50,240
After doing some analysis,
probably like using the
381
00:32:50,800 --> 00:32:56,080
agent's detection and maybe
looking at the information
382
00:32:56,080 --> 00:33:02,160
manually a bit and then the last
step will be to use this CLI
383
00:33:02,160 --> 00:33:07,680
to have reassurance about the
checks the script is doing
384
00:33:07,680 --> 00:33:09,840
and then getting the dispute submitted.
385
00:33:12,880 --> 00:33:16,400
I'm not sure if it's obvious but,
submitting a dispute means a transaction
386
00:33:16,400 --> 00:33:20,720
in the blockchain, so you'll need to pay
for some gas cost for creating the dispute.
387
00:33:23,120 --> 00:33:28,000
Then, there are some tools that can be
used by anyone, by the Arbitrators and
388
00:33:29,920 --> 00:33:36,000
anyone wanting to file a dispute,
like this command. Again, you need to pass
389
00:33:36,640 --> 00:33:43,760
a number of endpoints that this
script will use and it's going to list
390
00:33:44,880 --> 00:33:46,400
some information about the dispute.
391
00:33:47,040 --> 00:33:56,080
So, in this case, you see the dispute
ID that is created in the contract.
392
00:33:57,760 --> 00:34:03,040
The type of dispute, in this case an
indexing dispute, which is undecided because
393
00:34:03,840 --> 00:34:06,000
the Arbitrator needs to resolve this dispute.
394
00:34:06,880 --> 00:34:10,400
You will see the addresses of
the Indexer and the submitter,
395
00:34:11,920 --> 00:34:17,120
the subgraph deployment, some
information about the allocation
396
00:34:17,120 --> 00:34:18,640
that this dispute is related to,
397
00:34:19,200 --> 00:34:25,360
like you'll see the ID, when the
allocation was created, the block hash
398
00:34:26,000 --> 00:34:29,040
and the actual POI submitted by the Indexer.
399
00:34:30,240 --> 00:34:36,480
And then the script will do some checks
to see if the reference POIs match,
400
00:34:36,480 --> 00:34:40,320
like in this case you see, like
match is false and it's false again.
401
00:34:41,360 --> 00:34:46,080
These are two checks because, as
Brandon mentioned, we are checking the
402
00:34:46,080 --> 00:34:51,840
epoch when the allocation was closed and
the epoch before that to ensure that we have
403
00:34:51,840 --> 00:34:54,000
some time, like if some
404
00:34:54,000 --> 00:34:59,760
congestion in the network happens
you have more time to mine the transaction.
405
00:35:01,680 --> 00:35:05,600
So in this case, the reference
POI doesn't match with this one.
406
00:35:06,560 --> 00:35:12,160
In this case, there's a lot of leading zeros,
it's a bit strange and it's a bad proof.
407
00:35:13,040 --> 00:35:17,200
So, the Arbitrator can assume that this POI
408
00:35:17,200 --> 00:35:23,760
is really a wrong one and then,
decide to accept the dispute.
409
00:35:25,600 --> 00:35:30,960
But anyone can use this tool to list
the current disputes in the contract.
410
00:35:34,800 --> 00:35:37,280
Then there's the resolution process,
411
00:35:39,920 --> 00:35:45,280
which can end with an accept,
a rejection or a draw.
412
00:35:46,560 --> 00:35:49,760
Accepting a dispute means that
the Indexer will be slashed.
413
00:35:50,880 --> 00:35:58,080
The deposit will be returned to
the Fisherman and 50% of the
414
00:35:58,080 --> 00:36:02,800
slash will go to the submitter
and the rest will be burned.
415
00:36:04,240 --> 00:36:08,080
The other condition could be to be
rejected and that means that the
416
00:36:08,080 --> 00:36:13,280
Indexer won't be slashed and the
Fisherman will lose all the deposit.
417
00:36:14,560 --> 00:36:19,280
And the last condition will be a
draw where the Indexer is not slashed
418
00:36:19,280 --> 00:36:23,040
and the Fisherman will not
lose the deposit basically.
419
00:36:23,760 --> 00:36:26,400
So that's more, like the neutral one.
420
00:36:30,080 --> 00:36:35,200
One note is that the dispute ID that I
showed here is created within the contract,
421
00:36:36,560 --> 00:36:43,920
it's basically a hash of some fields to avoid
duplication of the disputes you present.
422
00:36:45,520 --> 00:36:48,720
So you will find the dispute ID
after you create the dispute.
423
00:36:52,080 --> 00:36:53,840
I think I'm covering most of the things.
424
00:36:54,640 --> 00:36:58,800
The package is not yet published, but
we'll publish in the following hours, okay?
425
00:37:01,680 --> 00:37:04,720
I have a question on the resolution structure.
426
00:37:07,280 --> 00:37:12,240
Is it meaningful and also feasible to
consider an escalating penalty structure,
427
00:37:12,240 --> 00:37:16,720
on the Indexer side, that might start at
a lower base for a first time offense,
428
00:37:16,720 --> 00:37:22,240
but then escalates progressively
with more offenses being recorded?
429
00:37:24,320 --> 00:37:28,080
Yeah, I think these types of
things are all in the design space.
430
00:37:28,720 --> 00:37:32,880
One thing that actually relates to that
is another question that I saw come up
431
00:37:35,040 --> 00:37:39,520
from StakeMachine, which basically
asks, if an Indexer knows that he
432
00:37:39,520 --> 00:37:45,520
submitted a bad POI, can he open a dispute on
himself and minimize the slashing effect in any case?
433
00:37:46,640 --> 00:37:49,520
The reason I think both your questions are
related, is because it relates to like,
434
00:37:49,520 --> 00:37:52,560
what are the true penalties
for being slashed?
435
00:37:53,440 --> 00:37:58,720
So, the simple answer to StakeMachine’s
question is yes. If you wanted to
436
00:37:58,720 --> 00:38:02,720
preemptively slash yourself as the Fisherman,
then in theory you're reducing
437
00:38:05,040 --> 00:38:10,240
your economic penalty
for that single incident by 50%
438
00:38:10,240 --> 00:38:11,360
with the current parameters.
439
00:38:12,480 --> 00:38:17,200
When you think of the system as a whole,
another way to look at slashing in
440
00:38:17,200 --> 00:38:20,960
disputes is that this is part of your
permanent record under this Indexer's
441
00:38:20,960 --> 00:38:25,040
account and so this is something that's
both visible to Delegators, but it's also
442
00:38:25,040 --> 00:38:30,800
visible to Consumers so, I don't
know how many of you guys were in.
443
00:38:32,560 --> 00:38:35,520
I'm trying to think the last time we did
a workshop on Indexer selection, it might
444
00:38:35,520 --> 00:38:39,440
have been even in the testnet, might
have been after the network launch, but
445
00:38:40,240 --> 00:38:44,480
the Indexer selection algorithm
that's run by both the gateway and the
446
00:38:45,200 --> 00:38:48,560
query engine, it actually
takes into account a whole
447
00:38:48,560 --> 00:38:52,480
list of factors that it basically
applies utility functions to.
448
00:38:52,480 --> 00:38:58,160
So some of those are things like performance,
throughput, latency, availability, you know,
449
00:38:58,160 --> 00:39:00,080
are you always online?
450
00:39:00,080 --> 00:39:02,720
What's your efficiency with
respect to negotiation?
451
00:39:02,720 --> 00:39:04,560
Do you accurately advertise prices?
452
00:39:05,520 --> 00:39:08,800
But some of that is also economic
security and reputation and so.
453
00:39:10,720 --> 00:39:16,640
So to your point, Zoro,
about compounding penalties,
454
00:39:17,280 --> 00:39:19,920
even if in the slashing
mechanism itself
455
00:39:19,920 --> 00:39:23,280
there's not a compounding penalty, I
would say that already today there is a
456
00:39:23,280 --> 00:39:27,120
compounding penalty with respect
to reputation, because if an
457
00:39:27,120 --> 00:39:30,640
Indexer is slashed once or twice, that
can be seen by an Indexer selection
458
00:39:30,640 --> 00:39:34,880
algorithm or a Delegator
as an honest mistake.
459
00:39:34,880 --> 00:39:38,240
If an Indexer is slashed repeatedly, I
think it's very likely that you would see
460
00:39:39,040 --> 00:39:46,080
Delegators start to exit that Indexer,
as well as Consumers start to deny
461
00:39:46,080 --> 00:39:49,120
list that Indexer completely
from receiving query volume.
462
00:39:50,240 --> 00:39:54,160
So, that's not to say that there's not
more you could do in the future, but it's
463
00:39:54,160 --> 00:39:56,320
worth noting that some of
that I think already exists,
464
00:39:56,320 --> 00:39:58,400
that dynamic already exists
today in the protocol.
465
00:40:01,200 --> 00:40:02,320
But great question.
466
00:40:05,520 --> 00:40:09,360
So a few more questions we
had that came in chat.
467
00:40:09,360 --> 00:40:14,240
I think this is a really interesting one, Josh
asked, assuming an Indexer is acting honestly,
468
00:40:15,040 --> 00:40:19,040
what are examples of situations that
may result in an Indexer being disputed?
469
00:40:20,800 --> 00:40:23,680
There's a couple that I can list off
the top of my head and I think Ford
470
00:40:23,680 --> 00:40:26,800
might be able to elaborate on some
more that we've seen, but one
471
00:40:26,800 --> 00:40:30,720
example could be like your Ethereum provider
or Ethereum node that you're running,
472
00:40:31,280 --> 00:40:35,360
being faulty, so either like,
right now The Graph Node relies on
473
00:40:35,360 --> 00:40:38,560
calls to eth_getLogs to make
sure that it's getting events.
474
00:40:38,560 --> 00:40:42,240
You know, we've seen cases in the
past where either events were missed,
475
00:40:42,800 --> 00:40:48,640
or events were duplicated by the
Ethereum Node or Ethereum Node
476
00:40:48,640 --> 00:40:52,960
provider and so that could be a cause
outside of Graph Node that leads
477
00:40:52,960 --> 00:40:57,840
to determinism issues, because you're basically
double processing or missing an event.
478
00:40:59,200 --> 00:41:03,120
Another one that we talked about yesterday,
this will be a new kind of case, is that
479
00:41:04,000 --> 00:41:07,040
Graph Node, starting with the 1.0
release, is going to adopt
480
00:41:07,680 --> 00:41:11,840
SemVer conventions, so the idea
is that the protocol at any given
481
00:41:11,840 --> 00:41:15,440
point will have like an official
major version of Graph Node
482
00:41:15,440 --> 00:41:19,200
but that minor versions are meant to be
backwards compatible with one another
483
00:41:19,200 --> 00:41:24,640
with respect to the proofs of indexing
and queries that would be produced.
484
00:41:25,520 --> 00:41:29,360
So you could imagine like a bug that the
core developers introduced, where they just
485
00:41:29,360 --> 00:41:33,600
accidentally don't... You know, they
introduce something that doesn't comply
486
00:41:33,600 --> 00:41:38,080
with those SemVer conventions, so an Indexer
upgrades to a newer minor version thinking,
487
00:41:38,080 --> 00:41:41,440
‘hey my proofs of indexing and
attestations will be the same’,
488
00:41:41,440 --> 00:41:45,600
when in fact, an inconsistency was
introduced, so that would be another one.
489
00:41:46,480 --> 00:41:49,840
I think we've seen things in the
past with, kind of like transient,
490
00:41:49,840 --> 00:41:55,680
like block reorgs and network events putting
the database into inconsistent states.
491
00:41:56,560 --> 00:42:00,000
Ford, do you have any other color to add on
like past determinism bugs that we've seen?
492
00:42:00,000 --> 00:42:03,200
Yeah, I mean that's
like a good intro.
493
00:42:03,840 --> 00:42:09,760
Another big class of things that could
lead to a POI not matching a trusted
494
00:42:09,760 --> 00:42:12,480
reference is configuration issues.
495
00:42:13,920 --> 00:42:18,880
Maybe your Indexer agent is
talking to a mainnet subgraph
496
00:42:19,680 --> 00:42:24,880
and getting the incorrect epoch value
and so it submits for the wrong block
497
00:42:24,880 --> 00:42:29,840
number, or other small issues
like that, that often will lead to
498
00:42:30,480 --> 00:42:34,560
the POI being valid but not lining
up with the arbitration charter
499
00:42:35,360 --> 00:42:37,200
and not being for the correct epoch.
500
00:42:39,040 --> 00:42:45,760
So, it's definitely becoming
more important to be very prudent
501
00:42:45,760 --> 00:42:50,240
about your versioning of your software
and making sure you're checking
502
00:42:50,240 --> 00:42:52,560
everything in your configs,
especially on mainnet.
503
00:42:55,360 --> 00:42:59,040
Yeah, that's a great point and we've
gotten pretty good at I'd say debugging
504
00:42:59,040 --> 00:43:01,760
and root-causing a lot of
these determinism bugs.
505
00:43:01,760 --> 00:43:06,880
I know Ford and Ariel spent a lot of
time actually, when implementing the
506
00:43:06,880 --> 00:43:12,480
Indexer agent cross checking, specifically
looking at that last class of determinism bugs.
507
00:43:13,760 --> 00:43:17,360
So one thing I forgot to, I guess,
drill down on in the arbitration charter,
508
00:43:17,360 --> 00:43:21,520
was just, what do you do when you see
yourself being disputed on-chain, like
509
00:43:21,520 --> 00:43:22,960
what should your next steps be?
510
00:43:23,840 --> 00:43:27,120
The arbitration charter has
a placeholder link right now
511
00:43:27,120 --> 00:43:30,320
that points to a spot in the forums.
And that's basically
512
00:43:30,320 --> 00:43:35,120
where the Arbitrator, which today is a
multi-sig set to Ariel, Dave and Jannis,
513
00:43:37,920 --> 00:43:41,920
that's where that root cause analysis
can happen. So like if you were,
514
00:43:41,920 --> 00:43:44,720
you know, if you believe that
the proof of indexing was
515
00:43:45,280 --> 00:43:47,280
incorrect because you thought the epoch
516
00:43:47,280 --> 00:43:54,800
was wrong or you think there's a determinism
bug in The Graph Node or you think
517
00:43:54,800 --> 00:43:58,400
there was an issue with your Ethereum Node
provider, that's the place where we can
518
00:43:58,400 --> 00:44:03,600
share logs, we can kind of investigate,
it's meant to be a collaborative process.
519
00:44:04,320 --> 00:44:08,400
We recognize that the network is
still kind of in this bootstrapping
520
00:44:08,400 --> 00:44:10,960
early phase and so the idea
is to work with people
521
00:44:10,960 --> 00:44:15,760
as much as possible, that are trying
to behave honestly in the network.
522
00:44:20,240 --> 00:44:23,840
I think, so Alex is asking, will
these cases of determinism bugs
523
00:44:24,720 --> 00:44:26,480
be settled as a draw or rejected?
524
00:44:26,480 --> 00:44:29,200
They will be settled as a draw,
because it's also not the Fisherman's
525
00:44:29,760 --> 00:44:33,920
fault if... From the Fisherman's
perspective they are correct
526
00:44:33,920 --> 00:44:36,640
that there is an
inconsistent proof of indexing.
527
00:44:39,280 --> 00:44:41,920
One follow-up question
to your earlier comment.
528
00:44:43,360 --> 00:44:47,200
Is there a chance an Indexer might
not even know that there's a dispute
529
00:44:47,200 --> 00:44:50,560
case being logged against him
and all of a sudden the first
530
00:44:50,560 --> 00:44:52,720
time he hears about it
is the decision itself?
531
00:44:55,840 --> 00:44:59,840
Certainly that's a possibility, I
mean being an Indexer in The Graph is
532
00:44:59,840 --> 00:45:03,840
much more of an engaged, I would say
active activity than being an Indexer
533
00:45:03,840 --> 00:45:07,280
or excuse me, being a validator in like
let's say, like a blockchain network
534
00:45:07,280 --> 00:45:10,880
where you could, like spin up a
Raspberry Pi and like set it and forget it.
535
00:45:12,160 --> 00:45:15,200
So we've already seen this with
like the subgraph migration,
536
00:45:15,200 --> 00:45:18,880
I would say there was like a certain
set of Indexers that were like
537
00:45:18,880 --> 00:45:21,600
ready and saw that the subgraphs
were migrating and like
538
00:45:21,600 --> 00:45:25,200
took action immediately and then we saw
other Indexers were a little bit slower to
539
00:45:25,200 --> 00:45:27,920
respond, because they're sort of
not actively engaged in the network.
540
00:45:28,720 --> 00:45:34,320
So I'd say at the phase that
we're at now it is primarily
541
00:45:34,320 --> 00:45:36,240
the Indexer's responsibility to be active
542
00:45:36,240 --> 00:45:40,320
and engaged and make sure that
they're participating with the
543
00:45:40,320 --> 00:45:42,320
mechanisms of the protocol responsibly.
544
00:45:43,120 --> 00:45:45,440
In the future there should be
a lot more tooling available.
545
00:45:45,440 --> 00:45:48,720
So someone already asked in the
chat whether there was a way to
546
00:45:48,720 --> 00:45:51,520
query the disputes outside of
looking at the contracts directly.
547
00:45:51,520 --> 00:45:56,880
There is a disputes entity in the network
subgraph which a lot of community members have
548
00:45:56,880 --> 00:45:59,840
already been using to build
their sort of third-party
549
00:46:00,880 --> 00:46:02,560
dashboards and network explorers.
550
00:46:02,560 --> 00:46:08,240
I fully expect that as we see more
usage of the mechanism, we'll see a lot
551
00:46:08,240 --> 00:46:12,720
more of that incorporated into those
explorers. It's also on the roadmap for
552
00:46:12,720 --> 00:46:16,640
some of the web app products that Edge
& Node is working on, although it
553
00:46:16,640 --> 00:46:22,880
likely won't be part
of the MVP release that's happening at the
554
00:46:22,880 --> 00:46:27,840
end of the phase three migration.
555
00:46:34,960 --> 00:46:36,960
So, Joseph from Figma asks,
556
00:46:36,960 --> 00:46:40,880
is it only the query fees that are
slashed and not the Indexer's deposit?
557
00:46:40,880 --> 00:46:45,920
No, the Indexer's stake is slashed
in the case of disputes.
558
00:46:51,360 --> 00:46:54,400
And I believe, to your follow-up
question, what about rebate pool rewards?
559
00:46:54,400 --> 00:46:58,960
So for the Indexer, once rewards
are settled, I believe they're all part of
560
00:46:58,960 --> 00:47:02,240
the same pool of funds from
the perspective of the protocol.
561
00:47:02,240 --> 00:47:06,080
So at the point that once they're
in there they're slashable,
562
00:47:06,080 --> 00:47:10,240
I don't think the part of the rebate
pool that goes to like Delegators,
563
00:47:10,240 --> 00:47:12,080
that wouldn't be slashable, for example.
564
00:47:20,320 --> 00:47:25,120
Yes, right now, so someone asked, is this over
the allocated stake on the specific subgraph?
565
00:47:25,840 --> 00:47:30,480
Today, the way that it works in the mechanism
is that it is over the total Indexer stake.
566
00:47:31,680 --> 00:47:37,360
Actually, just as an aside, I do believe
there are valid reasons for
567
00:47:37,360 --> 00:47:39,520
shifting that to the allocated stake.
568
00:47:40,320 --> 00:47:44,480
That's kind of on my personal
backlog of proposals to work on, but
569
00:47:45,040 --> 00:47:46,320
today the way that it works in the
570
00:47:46,320 --> 00:47:51,840
contracts is that it's the total
stake owned by the Indexer.
571
00:48:08,880 --> 00:48:11,120
Ariel, Ford, do you see any other
questions we haven't addressed yet?
572
00:48:14,800 --> 00:48:19,920
There's one question about
whether or not an Indexer can
573
00:48:19,920 --> 00:48:24,800
compare their POI with a reference prior
to submitting their allocation close?
574
00:48:26,080 --> 00:48:27,200
That's a good question.
575
00:48:27,200 --> 00:48:33,680
It would be prudent to have
some extra checks, however
576
00:48:33,680 --> 00:48:37,360
you would need to work with another
577
00:48:37,360 --> 00:48:43,360
trusted Indexer and use their POI
or ask them to compare because
578
00:48:44,240 --> 00:48:47,840
this is not something that I would
recommend you expose to other Indexers
579
00:48:47,840 --> 00:48:51,600
because then they'll be able
to spoof the proof of indexing.
580
00:48:55,280 --> 00:48:57,360
By sharing your proof of
indexing with other Indexers
581
00:48:57,360 --> 00:49:00,400
you're basically sharing your
reward with other Indexers.
582
00:49:00,400 --> 00:49:01,840
Yeah, exactly!
583
00:49:03,360 --> 00:49:08,640
But it's something to think about and
I hope that that'll not be necessary as we
584
00:49:09,360 --> 00:49:13,280
hone in on determinism and harden
all of our various systems,
585
00:49:14,480 --> 00:49:18,080
but you may want to work with
some other trusted Indexers to
586
00:49:18,080 --> 00:49:20,160
do some validation in
these earlier stages.
587
00:49:23,280 --> 00:49:27,040
One question that sounds like we might have missed
was, how long does it take to resolve a dispute?
588
00:49:28,640 --> 00:49:32,160
So, the expectation that's set in
the arbitration charter is that the
589
00:49:32,800 --> 00:49:36,800
Arbitrator should attempt to settle
a dispute within one bond period.
590
00:49:37,360 --> 00:49:43,280
That's kind of logical since an attacker would
presumably exit immediately and then we
591
00:49:43,280 --> 00:49:45,440
only have one thawing period to slash them.
592
00:49:47,040 --> 00:49:51,120
In practice that might not always be possible
so this isn't the kind of thing that's gonna,
593
00:49:51,120 --> 00:49:55,360
like an Arbitrator is not gonna be
removed for not always meeting that goal,
594
00:49:56,080 --> 00:49:59,360
specifically because the Arbitrator
needs to be able to recreate the work
595
00:50:01,440 --> 00:50:05,760
that's implied in the dispute and
so sometimes that's going to be a
596
00:50:05,760 --> 00:50:09,440
quick and easy process, sometimes
that might be a longer process.
597
00:50:10,240 --> 00:50:14,960
As I mentioned earlier, the initial
Arbitrators that are set in the multi-sig are,
598
00:50:16,320 --> 00:50:21,040
or that are part of the Arbitrator multi-sig
rather, are Ariel, Dave and Jannis.
599
00:50:22,240 --> 00:50:29,360
And so, they're sort of proactively
running, kind of their Indexer to
600
00:50:32,080 --> 00:50:35,760
basically be in a position where
they can reproduce work very quickly,
601
00:50:35,760 --> 00:50:39,120
but if a dispute for example, were to
come up for like some subgraph that
602
00:50:39,120 --> 00:50:42,720
somehow they're not already indexing,
then it might take longer as they
603
00:50:42,720 --> 00:50:47,680
index this new subgraph from scratch
to kind of get to the block where that
604
00:50:47,680 --> 00:50:53,840
proof of indexing dispute
or query dispute is relevant.
605
00:51:05,280 --> 00:51:07,840
Any other questions before we wrap up?
606
00:51:13,600 --> 00:51:17,440
I guess I'll close with some final
thoughts on another mechanism.
607
00:51:17,440 --> 00:51:23,200
This isn't something we'll have time to go
in depth on today, but it's something that
608
00:51:23,200 --> 00:51:25,840
maybe would be a good topic for a future workshop.
609
00:51:25,840 --> 00:51:29,520
It's just the role of the subgraph
oracle in securing the network.
610
00:51:29,520 --> 00:51:33,520
So it kind of, your last question kind
of made me think of this which is just,
611
00:51:34,400 --> 00:51:42,480
what if a malicious Indexer
submits a subgraph that's impossible for
612
00:51:42,480 --> 00:51:46,480
the Arbitrator to index
and reproduce the work on,
613
00:51:46,480 --> 00:51:50,800
or is exceedingly difficult for
some known set of reasons.
614
00:51:51,680 --> 00:51:53,680
So that's the role of
the subgraph oracle.
615
00:51:54,560 --> 00:51:56,560
There will be a subgraph oracle charter
616
00:51:57,680 --> 00:52:03,840
GIP at some point in the future to
clarify the processes that that role
617
00:52:05,280 --> 00:52:08,720
should take, in a similar
way to what we've done for the Arbitrator.
618
00:52:09,600 --> 00:52:13,200
The main action that the
subgraph oracle takes,
619
00:52:13,200 --> 00:52:16,480
as many of you know, is that
they have the ability to disable
620
00:52:16,480 --> 00:52:20,560
indexing rewards on specific subgraphs.
621
00:52:22,160 --> 00:52:25,920
The reason that this ends up acting
as a kind of penalty of sorts is that
622
00:52:26,640 --> 00:52:32,400
if a attacker does signal a bunch on
a subgraph that, for example no one
623
00:52:32,400 --> 00:52:35,840
can index or no one can
reasonably be expected to index
624
00:52:36,400 --> 00:52:42,560
and then they also allocate a bunch on
that as an Indexer, they have the
625
00:52:42,560 --> 00:52:45,840
opportunity cost of the
indexer allocated stake
626
00:52:45,840 --> 00:52:48,960
but then also as a Curator
they're paying this curation tax.
627
00:52:49,680 --> 00:52:56,400
And so when that subgraph gets
disabled for indexing rewards,
628
00:52:56,400 --> 00:52:59,840
effectively the curation tax that
they've paid is now worthless.
629
00:53:00,400 --> 00:53:04,960
And so, part of the role
of the oracle is disabling
630
00:53:04,960 --> 00:53:07,520
subgraphs on a short
enough time frame
631
00:53:07,520 --> 00:53:13,840
to make those sorts of
attacks unprofitable.
632
00:53:16,720 --> 00:53:20,480
StakeMachine is asking, what about
serving bad data for queries?
633
00:53:20,480 --> 00:53:24,080
So that's the second big category
of disputes that we talked about.
634
00:53:24,640 --> 00:53:28,160
I think it was the first category that, or
the second category that Ariel showed in
635
00:53:28,160 --> 00:53:32,560
the tooling, but that's the
query attestation disputes
636
00:53:32,560 --> 00:53:37,920
so, there's a couple
different types of those,
637
00:53:37,920 --> 00:53:41,840
but yeah, absolutely, there's
a dispute process for those.
638
00:53:44,160 --> 00:53:49,280
I do have a couple of thoughts going
back to the arbitration charter and
639
00:53:50,720 --> 00:53:54,320
considerations to add to
the role of Arbitrators.
640
00:53:54,320 --> 00:53:59,840
One would be a consideration
for introducing a postmortem
641
00:53:59,840 --> 00:54:04,160
process on a dispute, following
the arbitration decision.
642
00:54:05,760 --> 00:54:13,680
A review of the likelihood of that
sort of dispute recurring in the future
643
00:54:14,240 --> 00:54:24,240
and whether mechanisms or protocol
improvements can be introduced to address that gap.
644
00:54:24,240 --> 00:54:31,360
And here would be the Arbitrator's role
to do sort of like a mini review of that
645
00:54:31,360 --> 00:54:35,280
and then possibly engage
the community on ideas.
646
00:54:37,760 --> 00:54:39,200
Yeah, I think that's a great idea.
647
00:54:39,200 --> 00:54:42,800
I think a really natural place for that
would be, I guess, like the forum thread
648
00:54:42,800 --> 00:54:50,080
where the Arbitrators intend to engage
with the honest Indexer that made a mistake
649
00:54:50,080 --> 00:54:57,280
and yeah, the final step in that could be to
put some kind of postmortem, but also like,
650
00:54:59,120 --> 00:55:05,840
I guess an opinion, so to speak, of why they
decided the outcome the way that they did.
651
00:55:06,720 --> 00:55:13,920
The other thought was the idea of an appeal
process, which I don't think I've seen
652
00:55:13,920 --> 00:55:18,720
in the arbitration charter and I don't
know if that is something to consider,
653
00:55:18,720 --> 00:55:21,600
but the idea would be that an
Indexer has an opportunity to
654
00:55:22,160 --> 00:55:24,080
appeal the decision of the arbitration.
655
00:55:25,760 --> 00:55:29,360
Not necessarily based on content,
but maybe based on process.
656
00:55:32,880 --> 00:55:34,400
Yeah, so that's a
really interesting idea.
657
00:55:35,440 --> 00:55:39,040
One thought I had here that I didn't
put into the arbitration charter because
658
00:55:39,760 --> 00:55:43,120
mainly the charter was
about non-code changes but
659
00:55:43,120 --> 00:55:46,720
I think it could be interesting to put
like a waiting period on the slashing,
660
00:55:46,720 --> 00:55:49,040
that's something we could
explore in the future. So like,
661
00:55:49,040 --> 00:55:53,920
the Arbitrator makes their decision and the
Indexer's stake at that point can't exit the
662
00:55:53,920 --> 00:55:59,600
protocol, so it's sort of like frozen,
but maybe they're not slashed right away.
663
00:56:00,480 --> 00:56:04,640
And that would actually give The Graph
Council, for example, time to act as an
664
00:56:04,640 --> 00:56:09,120
appeals body if for some reason the
Arbitrator needed to be removed,
665
00:56:10,000 --> 00:56:13,520
if they were making, like
a string of bad decisions.
666
00:56:14,720 --> 00:56:19,440
So I think these are really great
suggestions, that could either be worked into
667
00:56:19,440 --> 00:56:30,320
this charter or future proposals for
improving and hardening the arbitration mechanism.
668
00:56:30,320 --> 00:56:32,240
Do you have any other thoughts, Oliver?
669
00:56:33,200 --> 00:56:35,280
I think you've posted a couple
other things in the forum as well.
670
00:56:35,280 --> 00:56:38,800
Yeah, I think most of the other
items that I've posted on the forum
671
00:56:38,800 --> 00:56:42,720
have since been addressed by just
getting deeper insight into the charter
672
00:56:42,720 --> 00:56:47,600
and also as far as, I had one comment
on the penalty and what levels,
673
00:56:47,600 --> 00:56:53,360
but Ariel walked us through that today
as well, so I think what I've brought up
674
00:56:53,360 --> 00:56:57,760
were the ones that I think are
still maybe valid ideas to consider.
675
00:56:57,760 --> 00:57:00,160
Everything else I think is addressed.
676
00:57:01,160 --> 00:57:01,840
Cool!
677
00:57:01,840 --> 00:57:02,560
Thank you.
678
00:57:03,360 --> 00:57:07,600
So Josh asked, where did the reference POIs
come from that the Indexers are comparing to?
679
00:57:10,160 --> 00:57:14,480
Ariel, I assume he means the reference
ones that showed up in your tooling.
680
00:57:17,600 --> 00:57:19,040
If I understand the question correctly.
681
00:57:20,960 --> 00:57:22,800
It seems Ariel might be frozen but...
682
00:57:24,000 --> 00:57:25,840
I thought he was just being stoic.
683
00:57:27,360 --> 00:57:32,560
In Ariel's script you can
supply a trusted Indexer endpoint.
684
00:57:34,800 --> 00:57:41,600
A Graph Node has what we call the index
node status endpoint, which is a GraphQL
685
00:57:41,600 --> 00:57:44,000
endpoint that includes the
proof of indexing query.
686
00:57:46,160 --> 00:57:49,200
So in the Indexer agent
monitoring tool I showed,
687
00:57:50,240 --> 00:57:53,840
that would query your own
infrastructure and compare.
688
00:57:54,880 --> 00:57:58,400
In Ariel's dispute script
you can point that to
689
00:57:59,520 --> 00:58:02,560
any Graph Node that you have
access to that you trust.
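For illustration, a POI query against such a status endpoint could look roughly like this; the exact field names and argument types of the status API are assumptions here, so treat it as a sketch rather than a verified schema:

```typescript
// Hypothetical sketch: fetch a proof of indexing from a trusted Graph Node's
// index node status endpoint. Placeholders mark values you would fill in.
async function fetchPoi(statusEndpoint: string): Promise<string | null> {
  const query = `{
    proofOfIndexing(
      subgraph: "Qm..."       # placeholder: subgraph deployment ID
      blockNumber: 12345678   # block the allocation was closed on
      blockHash: "0x..."      # placeholder: hash of that block
      indexer: "0x..."        # placeholder: Indexer address the POI is unique to
    )
  }`;
  const res = await fetch(statusEndpoint, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await res.json();
  return data?.proofOfIndexing ?? null;
}
```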
690
00:58:06,800 --> 00:58:08,800
Cool, so it looks like we're
at the top of the hour here.
691
00:58:10,560 --> 00:58:14,640
If you guys have any other questions, please
find us in Discord forums, you kind of know
692
00:58:14,640 --> 00:58:19,040
how to reach us, but yeah, thanks for
everyone, for your questions, for showing up
693
00:58:19,040 --> 00:58:21,840
and yeah, good luck with the
rest of the migration workshops.
1
00:00:00,960 --> 00:00:08,560
Welcome everyone and we'll have today Jannis
and Zac to go over the Scalar and Vector Node
2
00:00:09,440 --> 00:00:12,080
and with that I'll give it to Jannis.
3
00:00:22,720 --> 00:00:26,160
All right. So, yeah, today we're gonna
4
00:00:27,040 --> 00:00:32,560
talk about Scalar and how it
works on a more technical level,
5
00:00:32,560 --> 00:00:40,960
how it integrates with the Indexer components
and your infrastructure, what kind of changes have
6
00:00:40,960 --> 00:00:44,400
to be made to integrate it on the Indexer side.
7
00:00:45,440 --> 00:00:49,680
Now this is currently for
testnet only, where we're
8
00:00:49,680 --> 00:00:53,600
stabilizing Scalar and particularly
the Vector integration,
9
00:00:54,320 --> 00:01:00,640
making some performance improvements,
improving a few other things that we found that
10
00:01:00,640 --> 00:01:04,160
don't work at the scale that would
be necessary for the network.
11
00:01:05,200 --> 00:01:12,160
But we're making good progress
there, we're testing bursts and yeah.
12
00:01:12,160 --> 00:01:16,880
So the purpose of this is to
introduce all of you Indexers
13
00:01:16,880 --> 00:01:20,240
or those who are interested
but are not Indexers,
14
00:01:21,120 --> 00:01:29,040
to Scalar in more detail and also to kind of
prepare you for ultimately the mainnet launch of Scalar.
15
00:01:31,600 --> 00:01:36,480
So yeah, first of all, so for some
of the Indexers who were part of the
16
00:01:36,480 --> 00:01:41,680
last index office hours this will be
mostly with the same content again,
17
00:01:41,680 --> 00:01:43,760
although there could be
different questions at the end,
18
00:01:43,760 --> 00:01:48,800
I will reserve some time for Q&A.
Martin, is this workshop an hour long?
19
00:01:48,800 --> 00:01:50,160
I forgot to check.
20
00:01:50,160 --> 00:01:54,160
Yes. Cool! So, yeah, I
should be able to get through
21
00:01:54,160 --> 00:01:58,640
everything in about 30
minutes or so and then we can,
22
00:02:00,240 --> 00:02:04,000
yeah, talk about Scalar, Vector, etc.
23
00:02:05,440 --> 00:02:09,280
I don't know, I don't think we'll need 30 minutes
to go through all the questions, but maybe there is
24
00:02:09,280 --> 00:02:10,560
a lot of questions, so we'll see.
25
00:02:13,120 --> 00:02:18,960
If you also want to read up on Scalar,
a good resource is the blog on thegraph.com
26
00:02:18,960 --> 00:02:22,320
and there's a Scalar blog
post that
27
00:02:22,320 --> 00:02:27,680
introduces its design, its
benefits and talks a bit about
28
00:02:27,680 --> 00:02:32,000
the collaboration with Connext who built the
Vector Protocol that Scalar is built on.
29
00:02:33,600 --> 00:02:36,320
So, I can definitely recommend reading that.
30
00:02:37,440 --> 00:02:40,480
We'll go into a bit more detail
today, but we'll start with
31
00:02:41,120 --> 00:02:46,800
a kind of similar high-level but much
more compact introduction to what it is.
32
00:02:46,800 --> 00:02:51,840
So, in The Graph Network
there's two kinds of rewards,
33
00:02:52,400 --> 00:02:55,920
there's indexing rewards for
performing the work of indexing
34
00:02:55,920 --> 00:03:01,120
subgraphs and whenever you
close an allocation as an
35
00:03:01,120 --> 00:03:05,520
Indexer you can collect rewards
for that, assuming you can prove
36
00:03:05,520 --> 00:03:09,680
that you did the indexing
work, with what's called a proof of indexing.
37
00:03:10,240 --> 00:03:12,000
Most of you probably heard of that.
38
00:03:12,880 --> 00:03:16,400
And the other type of rewards are query fees,
39
00:03:16,400 --> 00:03:22,960
that Indexers can earn by
serving queries to a client and
40
00:03:23,520 --> 00:03:27,440
in order for these payments
to work, we can't really
41
00:03:27,440 --> 00:03:31,280
use Ethereum layer one as is, we
can't create a transaction
42
00:03:31,280 --> 00:03:32,800
for every single payment obviously.
43
00:03:33,840 --> 00:03:36,480
There's a variety of different
scaling solutions, but
44
00:03:37,120 --> 00:03:39,120
some have drawbacks like not being able to,
45
00:03:41,280 --> 00:03:44,400
for instance, collect the fees within
46
00:03:44,400 --> 00:03:47,280
a certain time, before a certain
time period has elapsed etc.
47
00:03:49,040 --> 00:03:58,400
And so last year we started to work
on state channels for this and we basically
48
00:03:58,400 --> 00:04:05,040
went through three different iterations to reach
this point that we're at today, where we
49
00:04:05,040 --> 00:04:10,640
have Scalar, and Scalar really
is the result of
50
00:04:10,640 --> 00:04:13,040
kind of taking all the learnings
from these different attempts
51
00:04:13,760 --> 00:04:20,320
to make things scale and make things
robust and make them fault tolerant etc.
52
00:04:21,040 --> 00:04:24,560
So Scalar is built on a lot of knowledge
that we've acquired over the last year,
53
00:04:24,560 --> 00:04:31,040
a lot of experience that we've gained from working
with state channels and so, like one takeaway
54
00:04:31,040 --> 00:04:41,280
that we realized is necessary
is that the majority of
55
00:04:41,280 --> 00:04:45,600
payments, or query fees, that you send
56
00:04:45,600 --> 00:04:52,000
between a client and an Indexer can't really go
through state channels, because state channels
57
00:04:52,000 --> 00:04:55,840
need to be updated, they need to be
synchronized between participants.
58
00:04:58,080 --> 00:05:01,280
So that would usually require
too much work to be
59
00:05:01,280 --> 00:05:06,480
feasible to do within a query,
especially a short
60
00:05:06,480 --> 00:05:09,120
living query that doesn't require
a lot of computational work
61
00:05:09,120 --> 00:05:12,160
and in addition, otherwise you
could parallelize some of that.
62
00:05:12,720 --> 00:05:14,880
But so, one thing we learned
is that the overhead needs
63
00:05:14,880 --> 00:05:21,600
to be really low, because even a
few milliseconds in each
64
00:05:21,600 --> 00:05:26,160
query add up pretty quickly and
so that's not really an option.
65
00:05:27,200 --> 00:05:30,880
Also the syncing that's necessary
between two participants in
66
00:05:31,600 --> 00:05:38,160
a traditional state channel,
messages might not be delivered
67
00:05:38,160 --> 00:05:42,880
between the client and the
Indexer, like state channel updates
68
00:05:42,880 --> 00:05:47,040
in particular and so these
two sides can run out of sync
69
00:05:47,040 --> 00:05:50,960
and that's tricky to handle because you
then always have to kind of re-sync and
70
00:05:52,160 --> 00:05:56,640
if you do that with, we did that
in the testnet, we still had a
71
00:05:56,640 --> 00:06:00,240
solution based on the state channels protocol and
72
00:06:01,600 --> 00:06:07,120
that required hundreds, thousands,
tens of thousands of messages
73
00:06:07,120 --> 00:06:12,640
to be passed around all the time to recover
state channels that had run out of sync.
74
00:06:13,200 --> 00:06:16,880
So that's another thing we
realized, that we don't want to
75
00:06:17,520 --> 00:06:22,960
use these state channels for every
single payment, every single
76
00:06:22,960 --> 00:06:27,120
query fee transaction and instead we want to
77
00:06:27,120 --> 00:06:29,760
interact with the state channels
on a less frequent basis.
78
00:06:31,600 --> 00:06:36,560
So Scalar is essentially a
framework for microtransactions for
79
00:06:36,560 --> 00:06:39,840
query fees, that is based
on the sections framework,
80
00:06:40,400 --> 00:06:44,320
but it uses the sections framework
more like a transport layer for,
81
00:06:45,600 --> 00:06:47,920
I think of it as a transfer layer but
it's not really a transfer layer, it's
82
00:06:47,920 --> 00:06:52,000
more like a settlement layer, where
two parties come to an agreement
83
00:06:52,000 --> 00:07:03,360
of what the overall query fees were
and they resolve accumulated query fees
84
00:07:06,160 --> 00:07:10,320
and then part ways again like
the party that got query fees or
85
00:07:10,320 --> 00:07:13,360
collected query fees can then
go and take them on-chain.
86
00:07:14,320 --> 00:07:21,600
And so the way Scalar is designed,
is you have queries starting at the
87
00:07:21,600 --> 00:07:23,760
top here, let me see if I can zoom in.
88
00:07:23,760 --> 00:07:28,320
So you have queries, every
query is associated with a
89
00:07:30,560 --> 00:07:37,280
query fee transaction and those are
kind of collected within receipts.
90
00:07:37,280 --> 00:07:42,480
And you can think of receipts
as like lanes of parallelism.
91
00:07:42,480 --> 00:07:45,360
If you have 20 parallel
requests between a client and
92
00:07:45,360 --> 00:07:50,000
an Indexer you might need
20 receipts to be able to,
93
00:07:50,000 --> 00:07:51,920
because you can only use
one receipt at a time,
94
00:07:53,120 --> 00:07:58,400
you would then create different
receipts in parallel and
95
00:07:58,400 --> 00:08:03,200
send those over and basically,
along with
96
00:08:03,200 --> 00:08:06,160
every query there's an updated
receipt sent to the Indexer
97
00:08:06,160 --> 00:08:12,160
with the latest query fees added, on top of
what was already collected within the receipt.
98
00:08:12,880 --> 00:08:18,720
And so the Indexer basically just
has to check, is this receipt unique?
99
00:08:19,280 --> 00:08:22,800
Does this relate to something I already know?
100
00:08:22,800 --> 00:08:25,840
We'll talk a little bit more about
what context it needs in a bit.
101
00:08:26,800 --> 00:08:33,600
And the receipt is generated and
sent by a gateway or client and
102
00:08:34,240 --> 00:08:37,840
all the Indexer needs to
know, is this a valid receipt?
103
00:08:37,840 --> 00:08:41,120
And it's higher than the
amount I received last time.
104
00:08:42,000 --> 00:08:46,640
Just that the query
fee tally doesn't suddenly go down
105
00:08:47,200 --> 00:08:50,160
or something, so it
just needs to know, there's
106
00:08:50,160 --> 00:08:54,800
more coming in, I'll take that and I'll
kind of cache it and put it on the side.
107
00:08:55,600 --> 00:08:58,480
So that's what the receipts are,
just like bundling up query fees
108
00:08:59,360 --> 00:09:01,680
in a very efficient way,
basically in memory,
109
00:09:01,680 --> 00:09:03,840
but kind of cached in the
database for later.
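A minimal sketch of that receipt check, with made-up shapes rather than the actual indexer-service types: per receipt ID, keep the highest tally seen, reject anything that goes backwards, and cache the rest:

```typescript
// Illustrative receipt handling: the fee tally per receipt must only go up.
interface Receipt {
  id: string;        // unique receipt ID (one "lane" of parallelism)
  totalFees: bigint; // cumulative query fees collected in this receipt
  signature: string; // gateway/client signature over the receipt
}

const seen = new Map<string, bigint>(); // receipt ID -> highest tally seen

function acceptReceipt(r: Receipt): boolean {
  // A real implementation would verify the signature first.
  const previous = seen.get(r.id) ?? 0n;
  if (r.totalFees < previous) {
    return false; // the tally must never go down
  }
  seen.set(r.id, r.totalFees); // cache the latest value for later resolution
  return true;
}
```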
110
00:09:05,360 --> 00:09:08,560
And a nice thing about these
receipts also is that
111
00:09:08,560 --> 00:09:12,160
they are pretty fault tolerant,
especially when a client
112
00:09:12,160 --> 00:09:15,280
or a gateway crashes, it
can just come up again,
113
00:09:15,280 --> 00:09:19,040
create new receipts and send
those over and the Indexer
114
00:09:19,040 --> 00:09:23,520
will say okay, I don't know this
yet, I'll add it to my list, it's
115
00:09:23,520 --> 00:09:26,640
definitely better than receiving
nothing and over time that
116
00:09:26,640 --> 00:09:29,360
receipt will collect, will
likely collect more query fees
117
00:09:29,360 --> 00:09:32,480
because now the client is back up
again and we'll send more queries
118
00:09:33,120 --> 00:09:38,720
and so, no trouble for the Indexer to
119
00:09:38,720 --> 00:09:40,880
also use that receipt in
this, except that receipt.
120
00:09:43,920 --> 00:09:51,920
Then these receipts eventually need to
be rolled up into these lower-level
121
00:09:51,920 --> 00:09:55,440
state channels and the way
that happens is through an
122
00:09:55,440 --> 00:09:57,360
intermediate construction called the transfer.
123
00:09:58,080 --> 00:09:59,680
It's a pretty common construction of state
124
00:09:59,680 --> 00:10:02,480
channels, where you create something
inside the state channel.
125
00:10:03,680 --> 00:10:06,480
Different projects use
different terminology: you create an app
126
00:10:06,480 --> 00:10:09,680
that's installed inside the state
channel and that kind of
127
00:10:10,560 --> 00:10:13,760
represents a game of state transitions, something like that.
128
00:10:15,200 --> 00:10:19,520
And in this case, these apps
are called transfers and they
129
00:10:20,240 --> 00:10:24,480
are really just created
once and then resolved once.
130
00:10:24,480 --> 00:10:26,640
So they are created by a client or gateway,
131
00:10:28,720 --> 00:10:33,600
to prepare kind of a body
to put all these receipts in
132
00:10:34,160 --> 00:10:38,000
or to associate all these
receipts with and the Indexer
133
00:10:38,000 --> 00:10:41,680
ultimately will resolve the
transfer and the way it does that,
134
00:10:41,680 --> 00:10:44,800
is it will bundle up or take all
the receipts that were created
135
00:10:45,520 --> 00:10:48,880
in association with the transfer,
so that's the context it needs.
136
00:10:48,880 --> 00:10:53,120
It needs to know, does the
receipt relate to a transfer that
137
00:10:53,120 --> 00:10:56,240
I have seen or that was created
with me as the counterparty?
138
00:10:57,040 --> 00:11:02,480
And then it can collect all these receipts
that it has seen and cached in the database,
139
00:11:03,440 --> 00:11:06,640
put them in the transfer and
say, I resolve this transfer
140
00:11:06,640 --> 00:11:10,240
with these receipts and the
counterparty can check that
141
00:11:11,600 --> 00:11:14,880
and then the transfer can be resolved.
142
00:11:14,880 --> 00:11:18,160
And when the transfer is
resolved, what happens is,
143
00:11:19,600 --> 00:11:22,560
essentially the state channel
ultimately, aside from these
144
00:11:22,560 --> 00:11:24,720
transfers, consists of two balances.
145
00:11:26,720 --> 00:11:28,800
There's a balance for one
side of the channel and a
146
00:11:28,800 --> 00:11:30,160
balance for the other side of the channel.
147
00:11:30,160 --> 00:11:36,000
And when the transfer is resolved, the amount
that's part of the transfer or that the
148
00:11:36,000 --> 00:11:38,320
transfer is resolved for, so
basically the sum of all the
149
00:11:39,840 --> 00:11:42,960
query fees in the receipts gets moved from
150
00:11:42,960 --> 00:11:44,960
one side to the other side in the channel.
151
00:11:44,960 --> 00:11:46,720
And that's what happens.
152
00:11:47,360 --> 00:11:53,120
So afterwards, one side has more, one
side has less and that's it really.
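In other words, resolving a transfer just moves the sum of the receipt tallies from one side of the channel balance to the other; a toy sketch, with made-up types:

```typescript
// Illustrative transfer resolution: receipt fees move across the channel.
interface ChannelBalance {
  payer: bigint;   // client/gateway (or router) side
  indexer: bigint; // Indexer side
}

function resolveTransfer(
  balance: ChannelBalance,
  receiptTallies: bigint[] // final tally of each receipt in the transfer
): ChannelBalance {
  const total = receiptTallies.reduce((sum, fees) => sum + fees, 0n);
  return {
    payer: balance.payer - total,
    indexer: balance.indexer + total,
  };
}

// e.g. resolveTransfer({ payer: 100n, indexer: 0n }, [10n, 25n, 5n])
// -> { payer: 60n, indexer: 40n }
```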
153
00:11:54,720 --> 00:11:58,320
And then there's a bit more
that the Indexer would do.
154
00:11:58,320 --> 00:12:02,400
The Indexer would ultimately
take this balance,
155
00:12:02,400 --> 00:12:04,320
all these query fees
that it has gained, that
156
00:12:04,320 --> 00:12:06,400
have been moved into its
state channel balance.
157
00:12:08,320 --> 00:12:10,400
Take them on-chain into a rebate pool
158
00:12:10,400 --> 00:12:16,400
that is associated with the subgraph,
that it received these query fees
159
00:12:16,400 --> 00:12:21,840
for, and those fees
are then distributed and ultimately,
160
00:12:21,840 --> 00:12:24,880
after a dispute period the
Indexer can claim
161
00:12:25,600 --> 00:12:32,400
the resulting fees that it itself
has earned and can withdraw
162
00:12:32,400 --> 00:12:35,120
those from the staking contract.
163
00:12:36,960 --> 00:12:43,040
One thing to note here is
that there isn't just a single
164
00:12:43,040 --> 00:12:45,920
channel between every
client and every Indexer,
165
00:12:45,920 --> 00:12:50,800
but there will actually be fewer
channels than there appear to be.
166
00:12:50,800 --> 00:12:53,600
Instead there is a router sitting in between
167
00:12:54,960 --> 00:12:58,800
the, let's say, consumer or
client or gateway and the Indexer.
168
00:12:58,800 --> 00:13:01,520
And this router,
169
00:13:02,160 --> 00:13:08,800
long term, the plan for that is to become
a network of routers with an incentivization for
170
00:13:08,800 --> 00:13:10,560
people to run the routers and operate them.
171
00:13:11,200 --> 00:13:12,880
Routers can for instance take a fee,
172
00:13:13,520 --> 00:13:15,280
earn as part of their operations
173
00:13:17,840 --> 00:13:22,480
and whoever interacts with the
router can get a quote for the fees that
174
00:13:22,480 --> 00:13:25,600
the router takes and can decide
which routers to use etc.
175
00:13:25,600 --> 00:13:28,320
There's a lot there that we're
not going to cover today,
176
00:13:28,880 --> 00:13:35,200
that I'm also not the expert on,
but what this results in, in this
177
00:13:35,200 --> 00:13:39,680
construction, is that you have a
single channel for every client with
178
00:13:39,680 --> 00:13:44,640
the router and a channel between
the router and each Indexer.
179
00:13:44,640 --> 00:13:50,000
So if you have, like n Indexers and one
client you have n plus one channels.
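To make the saving concrete, here is the channel-count arithmetic, generalizing the one-client example above to m clients (my extrapolation, for illustration):

```typescript
// Channel counts with and without a router in between (illustrative).
const directChannels = (clients: number, indexers: number) => clients * indexers;
const routedChannels = (clients: number, indexers: number) => clients + indexers;

console.log(directChannels(1, 5), routedChannels(1, 5));     // 5 vs 6: the "n plus one" case
console.log(directChannels(100, 5), routedChannels(100, 5)); // 500 vs 105: where the router pays off
```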
180
00:13:52,240 --> 00:13:57,680
So there's a few more consequences to that
and that we'll talk about in a moment.
181
00:13:59,200 --> 00:14:02,240
So these transfers exist in both
these channels and we'll talk about
182
00:14:02,240 --> 00:14:05,520
like how they are kept in sync
and all of that in a little bit.
183
00:14:07,600 --> 00:14:11,120
I'm not able to fully follow the
channel, I see that Zac's fielding some
184
00:14:11,120 --> 00:14:17,680
questions, anything that we want to talk
about, bring up later again, we can.
185
00:14:18,640 --> 00:14:21,840
So, we've talked about the
nomenclature a little bit,
186
00:14:23,200 --> 00:14:26,080
queries obviously you know,
receipts, we've talked about the
187
00:14:26,080 --> 00:14:30,320
transfers and the state channel.
188
00:14:30,320 --> 00:14:34,240
And one thing to also note about
the transfers perhaps is that
189
00:14:35,120 --> 00:14:39,840
they are created, so like the receipts
are created against a transfer,
190
00:14:39,840 --> 00:14:42,880
the transfers are created against
an allocation made by the Indexer.
191
00:14:43,440 --> 00:14:47,440
So there's a direct link between the
allocations created by the Indexer and
192
00:14:48,000 --> 00:14:51,440
the transfers and the receipts,
so that ties everything together
193
00:14:51,440 --> 00:14:56,720
and associates the fees with
the Indexer, allocating some
194
00:14:56,720 --> 00:14:58,720
stake towards the subgraph
that's being queried.
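A hedged data-model sketch of that linkage (field names are illustrative, not the actual Vector or indexer-agent schema):

```typescript
// Receipts are created against a transfer, transfers against an allocation;
// the allocation ties the fees to the Indexer's stake on the queried
// subgraph. All shapes here are hypothetical.
interface Allocation { id: string; subgraphDeployment: string; stakedGRT: bigint }
interface Transfer   { id: string; routingId: string; allocationId: string }
interface Receipt    { id: string; transferId: string; fees: bigint }
```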
195
00:15:00,640 --> 00:15:06,320
Cool! So, we've talked about the
participants as well a little bit,
196
00:15:07,360 --> 00:15:10,560
so the client obviously
sends queries to Indexers,
197
00:15:10,560 --> 00:15:14,880
each query comes with an updated
receipt that includes the latest query fees.
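As a minimal sketch of that receipt fast path, assuming a hypothetical receipt shape (this is illustrative, not the actual Scalar implementation):

```typescript
// Hypothetical sketch of the client-side receipt "fast path"; all names
// and shapes here are illustrative, not the real Scalar/Vector types.
interface Receipt {
  id: string;          // stable receipt ID, reused across queries
  transferId: string;  // the transfer this receipt was created against
  fees: bigint;        // cumulative query fees, monotonically increasing
  signature: string;   // client signature over (id, transferId, fees)
}

// Each query bumps the cumulative fee and re-signs; only the copy with the
// highest fee value matters to the Indexer.
function updateReceipt(
  prev: Receipt,
  queryFee: bigint,
  sign: (message: string) => string
): Receipt {
  const fees = prev.fees + queryFee;
  return { ...prev, fees, signature: sign(`${prev.id}:${prev.transferId}:${fees}`) };
}
```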
198
00:15:14,880 --> 00:15:18,960
Indexer receives those, just
does some sanity checking,
199
00:15:20,080 --> 00:15:25,680
verifies that the receipt is valid,
caches it on the side and resolves later.
200
00:15:26,320 --> 00:15:29,440
So the way it's currently implemented
in the Indexer agent is that
201
00:15:30,640 --> 00:15:34,560
transfers are resolved at about the
same time that an allocation is closed.
202
00:15:34,560 --> 00:15:38,240
So whenever you close an allocation,
there's a little bit of a period
203
00:15:38,240 --> 00:15:41,760
like a gap and after about, I think
10 minutes is currently what's the
204
00:15:41,760 --> 00:15:47,360
default, the transfers corresponding
to the allocation are then resolved.
205
00:15:49,200 --> 00:15:52,560
That doesn't take long and
then once they are resolved,
206
00:15:53,920 --> 00:15:56,800
then there's, I think, an additional
delay, or maybe it's immediate.
207
00:15:57,840 --> 00:16:01,520
The fees collected through all
these transfers associated with
208
00:16:01,520 --> 00:16:06,320
the allocation get withdrawn
into the staking contract.
209
00:16:08,320 --> 00:16:15,280
And so, that's similar to how it
was done until now, where
210
00:16:15,280 --> 00:16:19,840
after an allocation was closed, the
Indexer could then call to collect
211
00:16:19,840 --> 00:16:23,680
query fees accumulated
in the state channels.
212
00:16:26,720 --> 00:16:28,480
Cool! So we've talked about that.
213
00:16:29,920 --> 00:16:34,080
And then we have the router,
which is, aside from just being
214
00:16:34,080 --> 00:16:38,080
this bridge and reducing the
number of channels that you create,
215
00:16:38,880 --> 00:16:43,120
it essentially also makes the
collateralization of these state channels
216
00:16:43,680 --> 00:16:49,840
and transfers more efficient and we'll
talk about that a little bit as well.
217
00:16:52,000 --> 00:16:54,640
Cool! We've talked about
the channel always being
218
00:16:54,640 --> 00:16:57,920
created between a client and
router or Indexer and router.
219
00:16:59,920 --> 00:17:05,920
That has a consequence for these
transfers that we've looked at
220
00:17:05,920 --> 00:17:08,800
and we will look at this one,
this image here real quick.
221
00:17:09,440 --> 00:17:12,080
So, okay, that was not
what I wanted to do.
222
00:17:19,600 --> 00:17:20,560
I think you can still see it.
223
00:17:20,560 --> 00:17:24,240
So, if you imagine a client
or gateway having a channel
224
00:17:24,240 --> 00:17:25,840
at the router and a router
having a state channel with
225
00:17:25,840 --> 00:17:30,480
the Indexer, then when the
gateway creates a transfer to
226
00:17:31,440 --> 00:17:38,080
associate receipts with, then it
will do that in its channel with
227
00:17:38,080 --> 00:17:43,360
the router, and the router is responsible
for kind of copying that transfer
228
00:17:43,360 --> 00:17:50,000
or creating a copy of it, in the state
channel with the corresponding Indexer.
229
00:17:50,000 --> 00:17:52,640
So the transfer is created with
the Indexer as a counterparty,
230
00:17:52,640 --> 00:17:55,680
but it's not really like an
end-to-end single transfer,
231
00:17:55,680 --> 00:18:00,160
instead there's two of them and
the router keeps them in sync.
232
00:18:01,760 --> 00:18:04,800
And if you imagine like a multi-hop
story, where there's multiple routers
233
00:18:04,800 --> 00:18:10,000
between, that'll be similar and there's
an event in Vector that tells you
234
00:18:10,000 --> 00:18:14,160
whether a transfer was fully
set up end-to-end between
235
00:18:14,880 --> 00:18:19,120
the client or you as the
creator and the other side.
236
00:18:23,040 --> 00:18:26,560
Let's just talk just briefly
about the state channel balances,
237
00:18:27,120 --> 00:18:30,720
just to recap that because I know
I'm talking pretty fast this way.
238
00:18:32,240 --> 00:18:36,640
So let's assume we have an
initial state channel construction
239
00:18:36,640 --> 00:18:39,360
where we have a client router
channel, where the client has,
240
00:18:39,360 --> 00:18:43,120
say 10 GRT and the router
has 0 and we have a channel
241
00:18:43,120 --> 00:18:45,920
where the router has 20 GRT
and the Indexer has 0.
242
00:18:46,720 --> 00:18:51,360
And there is a single transfer
between this client and the Indexer
243
00:18:51,360 --> 00:18:56,560
for 6 GRT with 2 receipts maybe,
doesn't really matter at this level.
244
00:18:57,920 --> 00:19:01,760
And this transfer is resolved,
that means the gateway
245
00:19:02,720 --> 00:19:08,320
will have to pay 6 GRT or like
send 6 GRT to the router and
246
00:19:08,320 --> 00:19:13,360
the router will have to update the
balances in its channel with the Indexer,
247
00:19:13,360 --> 00:19:19,040
so that the router has also 6 less than before,
so 20 goes down to 14, the Indexer has 6.
248
00:19:19,680 --> 00:19:24,960
So in total, router still has 20, the gateway
has 6 less and the Indexer has 6 more.
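The same bookkeeping as a tiny sketch, using the numbers from the example (illustrative only):

```typescript
// Two balances per channel, one for each side.
type Channel = { a: bigint; b: bigint };

// Resolving a transfer of `amount` moves balance from side a to side b.
const applyResolution = (ch: Channel, amount: bigint): Channel =>
  ({ a: ch.a - amount, b: ch.b + amount });

let clientRouter: Channel  = { a: 10n, b: 0n }; // client: 10 GRT, router: 0
let routerIndexer: Channel = { a: 20n, b: 0n }; // router: 20 GRT, Indexer: 0

clientRouter  = applyResolution(clientRouter, 6n);  // client: 4, router: 6
routerIndexer = applyResolution(routerIndexer, 6n); // router: 14, Indexer: 6
// Net effect: the router still holds 20 in total, the client has 6 less,
// and the Indexer has 6 more.
```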
249
00:19:27,840 --> 00:19:29,440
Pretty basic but the router
250
00:19:29,440 --> 00:19:32,320
or the network of routers in the future
will make sure that that's the case.
251
00:19:35,520 --> 00:19:41,120
So let's talk a little bit about collateralization
and the overall flow of query fees just to...
252
00:19:43,280 --> 00:19:46,960
There's a zoom button somewhere in these images.
253
00:19:49,920 --> 00:19:52,560
Maybe. Maybe not.
254
00:19:54,000 --> 00:19:55,520
Okay, let's do this.
255
00:19:57,520 --> 00:20:03,440
So, this is basically the
full picture of how everything fits together,
256
00:20:04,480 --> 00:20:08,960
we have these state channels in green,
I have the router sitting between.
257
00:20:10,640 --> 00:20:15,280
We're kind of managing these two,
or managing this one in particular,
258
00:20:17,440 --> 00:20:22,480
this one as well and these channels
need to be collateralized so that,
259
00:20:28,240 --> 00:20:32,640
for instance, if you create a channel
and you create a transfer in that
260
00:20:32,640 --> 00:20:36,800
and ultimately you want to update the
balances when the transfer is resolved,
261
00:20:37,520 --> 00:20:41,920
there needs to be a balance, say, on
the client side, so that the router and
262
00:20:41,920 --> 00:20:46,320
client balances can be updated: the client
has less afterwards and the router has more.
263
00:20:46,960 --> 00:20:53,440
Same with the router Indexer, the
router will transfer query fees
264
00:20:53,440 --> 00:20:57,600
to the Indexer's balance and for
that it needs to have some balance
265
00:20:57,600 --> 00:21:02,560
of its own locked up in the
channel, so that can work.
266
00:21:03,600 --> 00:21:10,640
And what that means is, in order
to keep these channels alive and
267
00:21:10,640 --> 00:21:14,000
sufficiently collateralized, the
client needs to provide collateral
268
00:21:14,000 --> 00:21:16,320
and can deposit it into the state channel.
269
00:21:17,520 --> 00:21:22,160
And in a similar way the router has to also
provide collateral into its state channel.
270
00:21:25,040 --> 00:21:29,680
So these red lines are kind of the
on chain transactions or can be the
271
00:21:29,680 --> 00:21:34,320
on-chain transactions where there's
collateral provided into the state channels.
272
00:21:35,920 --> 00:21:38,400
Another transaction that happens on-chain
273
00:21:38,960 --> 00:21:42,400
is the Indexer ultimately withdrawing
query fees: based on the
274
00:21:42,400 --> 00:21:46,080
balance it has and the amounts
of query fees it has collected
275
00:21:46,080 --> 00:21:49,200
for different allocations, it
will withdraw the corresponding
276
00:21:49,200 --> 00:21:51,440
amounts, that's something
the agent does automatically.
277
00:21:52,640 --> 00:21:56,240
So those are the on-chain transactions
and the rest is all off-chain.
278
00:21:56,240 --> 00:22:03,280
And so, then we have these dark
gray lines, that's the transfers
279
00:22:03,280 --> 00:22:09,680
and all the syncing that happens goes
on there between these three parties.
280
00:22:11,840 --> 00:22:15,360
So that's kind of the base layer
for Scalar and then we have the,
281
00:22:15,360 --> 00:22:20,080
kind of the fast path in light gray
where the client sends queries and
282
00:22:20,080 --> 00:22:24,240
these updated receipts to the Indexer and the
Indexer just caches these, if they make sense.
283
00:22:26,400 --> 00:22:33,120
That path is extremely fast and
requires no database interactions,
284
00:22:33,120 --> 00:22:34,720
either on the client, or on the Indexer.
285
00:22:34,720 --> 00:22:37,760
Sure, the Indexer will store the
receipts, but it can do that out of band
286
00:22:39,600 --> 00:22:44,720
and even if you have multiple Indexer
services they can do that in parallel.
287
00:22:44,720 --> 00:22:50,080
They can periodically flush and
do that with rules that make sure
288
00:22:50,080 --> 00:22:55,040
that they don't override receipts that have a
higher amount of query fees in them already.
289
00:22:55,840 --> 00:22:58,320
Because if you have a load
balancer that's sitting here with,
290
00:22:58,320 --> 00:23:03,520
let's say 10 Indexer services
and the same receipt is used
291
00:23:03,520 --> 00:23:06,240
multiple times, but because the
load balancer doesn't know
292
00:23:06,240 --> 00:23:09,520
anything about the receipts, the
same receipt with different values
293
00:23:09,520 --> 00:23:13,760
will go to different instances
over time and so you want to
294
00:23:13,760 --> 00:23:20,320
make sure that you don't override a
higher value of query fees in the receipts
295
00:23:22,160 --> 00:23:22,800
by accident.
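A minimal sketch of such a flush rule, assuming a hypothetical receipt schema:

```typescript
// Keep only the highest-fee copy of each receipt when flushing to storage.
interface StoredReceipt { id: string; fees: bigint; signature: string }

function mergeReceipt(store: Map<string, StoredReceipt>, incoming: StoredReceipt): void {
  const existing = store.get(incoming.id);
  // Behind a load balancer, different Indexer service instances see the
  // same receipt ID with different cumulative values; never let a lower
  // value overwrite a higher one.
  if (!existing || incoming.fees > existing.fees) {
    store.set(incoming.id, incoming);
  }
}
```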
296
00:23:22,800 --> 00:23:25,200
So, it's a few things the Indexer needs to
297
00:23:25,200 --> 00:23:28,400
take care of but none of this is
in the critical path of queries
298
00:23:29,520 --> 00:23:34,400
and so this path is really efficient
and really crash tolerant because
299
00:23:34,400 --> 00:23:37,200
if the client goes away, it can
create new receipts, send
300
00:23:37,200 --> 00:23:43,120
those over, those receipts
are really small as well.
301
00:23:43,120 --> 00:23:50,240
I'm not sure how many bytes, it was
something like 160 bytes or so in total.
302
00:23:50,240 --> 00:23:52,720
So, pretty small payload
to send along as well.
303
00:23:55,040 --> 00:24:01,840
And so that's the construction
of Scalar in about 20 minutes.
304
00:24:03,680 --> 00:24:11,200
So, how does this affect
the Indexer infrastructure?
305
00:24:12,480 --> 00:24:15,360
So this is particularly
for all you Indexers
306
00:24:15,360 --> 00:24:17,840
and we've covered this in the
Indexer office hours already.
307
00:24:18,400 --> 00:24:19,600
I will do it again here.
308
00:24:22,080 --> 00:24:25,440
See if this loads. Okay,
there's something.
309
00:24:25,440 --> 00:24:31,440
Cool! So, there's only one new
component on the Indexer side,
310
00:24:31,440 --> 00:24:33,600
but there is a new component,
which is a Vector Node
311
00:24:33,600 --> 00:24:37,520
that you have to deploy alongside
the agent and servers and
312
00:24:37,520 --> 00:24:45,760
The Graph Nodes and that takes care of all
the interactions with the Vector Protocol.
313
00:24:45,760 --> 00:24:49,360
Managing your channel, managing
your transfers, allowing you to
314
00:24:50,000 --> 00:24:54,640
look up the transfers that are created and so on.
315
00:24:54,640 --> 00:24:58,640
And also I think it stores a history of the
transfers, I'm not entirely sure but probably.
316
00:25:00,400 --> 00:25:04,880
So that is a new component; it comes
in the form of a Docker image.
317
00:25:04,880 --> 00:25:09,040
You can probably run it in like bare metal
too, but I personally haven't tried that.
318
00:25:10,960 --> 00:25:12,480
And it essentially has one port,
319
00:25:13,200 --> 00:25:18,480
8000 that both the Indexer agent and
service will use for different purposes.
320
00:25:20,000 --> 00:25:25,520
So this component is new and this
port is used for a variety of things.
321
00:25:25,520 --> 00:25:31,840
So the agent will subscribe to events
for incoming transfers, for instance.
322
00:25:31,840 --> 00:25:34,400
So whenever a client creates
a transfer with this Indexer
323
00:25:34,400 --> 00:25:37,760
as the counterparty, Indexer
agent will receive notification
324
00:25:37,760 --> 00:25:44,240
or event and can then, I forget
what exactly it does, wait for
325
00:25:44,240 --> 00:25:47,280
resolutions for instance, when
transfers are resolved it will
326
00:25:50,240 --> 00:25:54,080
kind of keep a list of all the
transfers that are associated with an
327
00:25:54,080 --> 00:25:56,720
allocation and that have
been resolved and kind of
328
00:25:56,720 --> 00:26:00,320
keep track of the states there, in
a simple way to know when all the
329
00:26:00,320 --> 00:26:04,960
transfers for an allocation are
done resolving and when it's ready
330
00:26:04,960 --> 00:26:10,320
to withdraw any fees into
the staking contract.
331
00:26:11,680 --> 00:26:15,440
Agent also initiates the
resolution of the transfers.
332
00:26:16,880 --> 00:26:21,120
Naturally, whenever an allocation
closes, that's what it wants to do next.
333
00:26:22,640 --> 00:26:27,200
It also withdraws the query fees
from the state channel balance,
334
00:26:27,200 --> 00:26:29,200
after resolving all the
transfers to the rebate pool
335
00:26:30,320 --> 00:26:35,760
and then both agent and service use this Vector Node
to look up transfers for incoming query fees.
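Putting those agent duties in order, a rough sketch; the function names are placeholders, not the real @graphprotocol/indexer-agent API:

```typescript
// Hypothetical sketch of the agent-side flow when an allocation closes.
interface AgentDeps {
  transfersFor: (allocationId: string) => Promise<string[]>; // routing IDs
  resolveTransfer: (routingId: string) => Promise<void>;
  withdrawToRebatePool: (allocationId: string) => Promise<void>;
}

async function onAllocationClosed(allocationId: string, deps: AgentDeps): Promise<void> {
  // After a short gap (~10 minutes by default, per the talk), resolve every
  // transfer associated with the closed allocation.
  const routingIds = await deps.transfersFor(allocationId);
  await Promise.all(routingIds.map(deps.resolveTransfer));

  // Once all transfers are resolved, withdraw the collected query fees
  // into the rebate pool in the staking contract on-chain.
  await deps.withdrawToRebatePool(allocationId);
}
```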
336
00:26:36,560 --> 00:26:43,680
And you can do that as well, there's
an API to talk to this Vector Node,
337
00:26:43,680 --> 00:26:46,080
where you can look up individual transfers etc.
338
00:26:47,280 --> 00:26:53,040
The Indexer agent and service
logs have added fields in log
339
00:26:53,040 --> 00:26:56,960
messages and also dedicated
new log messages that
340
00:26:56,960 --> 00:27:01,920
include a so-called routing ID and that
routing ID is kind of like a global,
341
00:27:01,920 --> 00:27:09,440
or more or less global, unique identifier that allows
you to look up the transfers associated with...
342
00:27:10,160 --> 00:27:14,960
Well, so, if you think about these two
channel approach, or these two channel
343
00:27:14,960 --> 00:27:18,480
construction where you have,
client, router and the counterparty.
344
00:27:18,480 --> 00:27:24,160
You have these two transfer copies, and
this routing ID identifies the same,
345
00:27:25,520 --> 00:27:30,080
the two copies that belong together in
each of the different state channels.
346
00:27:30,080 --> 00:27:32,960
So they each have like a unique
transfer ID that's different,
347
00:27:32,960 --> 00:27:34,560
they have a routing ID
that they have in common.
348
00:27:35,520 --> 00:27:38,400
And so, for debugging purposes
the API is really nice.
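A small illustration of the transfer-ID versus routing-ID relationship (hypothetical shapes):

```typescript
// Each copy of a transfer has its own transferId, but the copies in the
// client-router and router-Indexer channels share one routingId.
interface TransferCopy {
  transferId: string;
  routingId: string;
  channel: 'client-router' | 'router-indexer';
}

// For debugging: group copies by routingId to find the pair that belongs together.
function pairByRoutingId(copies: TransferCopy[]): Map<string, TransferCopy[]> {
  const groups = new Map<string, TransferCopy[]>();
  for (const copy of copies) {
    const group = groups.get(copy.routingId) ?? [];
    group.push(copy);
    groups.set(copy.routingId, group);
  }
  return groups;
}
```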
349
00:27:41,120 --> 00:27:44,240
Cool thing also, you can run
this Vector Node in the browser,
350
00:27:44,240 --> 00:27:47,840
so, there's all kinds of
future possibilities there.
351
00:27:49,360 --> 00:27:54,720
Cool! There's one intricacy that I've commented
on in the Indexer office hours too because
352
00:27:57,040 --> 00:27:59,840
this isn't super obvious the way it's done.
353
00:27:59,840 --> 00:28:08,000
The Indexer agent, the way it subscribes to
the Vector Node is it posts a URL of itself
354
00:28:08,000 --> 00:28:13,200
with a special port, 8001 by default,
to the Vector Node because the
355
00:28:13,200 --> 00:28:16,480
subscriptions are not done with
WebSockets, they are done over HTTP.
356
00:28:17,440 --> 00:28:21,520
So the Indexer agent runs its own
kind of mini server that's internal,
357
00:28:21,520 --> 00:28:25,360
should be internal to your
infrastructure, that the Vector Node
358
00:28:25,360 --> 00:28:28,400
can talk to, and then it
will just make POST requests
359
00:28:28,400 --> 00:28:32,480
with those state channel events,
transfer events for instance,
360
00:28:32,480 --> 00:28:38,240
to the Indexer agent through that URL, so
you have to have this internal service.
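A hedged sketch of that internal event endpoint: the agent listens on its own small HTTP server (port 8001 by default, per the talk) and the Vector Node POSTs events to it. The payload shape here is hypothetical:

```typescript
// Minimal internal HTTP server receiving Vector Node event callbacks.
// Keep this inside your infrastructure; it should never be public.
import * as http from 'http';

const server = http.createServer((req, res) => {
  if (req.method === 'POST') {
    let body = '';
    req.on('data', chunk => { body += chunk; });
    req.on('end', () => {
      // e.g. a transfer-created or transfer-resolved event (shape assumed)
      const event = JSON.parse(body);
      console.log('vector event:', event);
      res.writeHead(200);
      res.end();
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8001); // the default port mentioned in the talk
```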
361
00:28:38,240 --> 00:28:40,720
The Vector Node itself does
not have to be exposed
362
00:28:41,440 --> 00:28:47,040
directly, it will connect to a
set of NATS services that are
363
00:28:47,760 --> 00:28:53,920
currently run by Connext and those
can handle all the communication
364
00:28:53,920 --> 00:28:58,080
between all the different nodes
in the network, currently.
365
00:28:59,520 --> 00:29:01,680
There is one other thing.
366
00:29:02,560 --> 00:29:05,120
So, the channel-messages-inbox endpoint that
367
00:29:05,920 --> 00:29:11,040
the indexer CLI checks when you run
graph indexer status, that's gone.
368
00:29:11,040 --> 00:29:17,040
The indexer CLI still needs to be
updated, to get rid of that test.
369
00:29:17,920 --> 00:29:19,360
It could do other tests instead.
370
00:29:21,280 --> 00:29:27,680
One thing we're adding to the
Indexer service, is a way to
371
00:29:27,680 --> 00:29:31,600
look up the version of Vector being used
and so if that's set up correctly,
372
00:29:32,160 --> 00:29:33,840
this one will look it up from the Vector Node.
373
00:29:34,880 --> 00:29:37,760
If you get that information you know
the Vector Node is up and running and
374
00:29:38,720 --> 00:29:43,920
can give you its version, so then, that
will replace that check for this endpoint.
375
00:29:46,320 --> 00:29:47,680
And so that's really it.
376
00:29:48,800 --> 00:29:52,000
All the transfer and receipt
management is kind of tied in
377
00:29:52,000 --> 00:29:55,280
with the allocations, handled
automatically by the agent and service,
378
00:29:56,400 --> 00:30:02,720
so apart from updating the infrastructure
like this, there's nothing to do really.
379
00:30:04,320 --> 00:30:08,800
And so, this is available
on testnet right now and
380
00:30:08,800 --> 00:30:15,920
I can only hope that as many
Indexers as possible update to that,
381
00:30:16,480 --> 00:30:19,520
so there's a lot of
testing that can be done
382
00:30:19,520 --> 00:30:24,000
and that we can test at the
scale that mainnet requires.
383
00:30:29,760 --> 00:30:36,720
So, before we go into
questions, two things.
384
00:30:36,720 --> 00:30:43,200
So one is that we have this mainnet
and testnet configuration docs page
385
00:30:44,000 --> 00:30:48,960
in the Indexer repo and there you
can find the latest releases to use
386
00:30:48,960 --> 00:30:54,000
for Vector Node and Graph Node
obviously, Indexer agent etc.
387
00:30:55,040 --> 00:31:00,160
And there is a section for the testnet
as well, of course, that, where is it?
388
00:31:02,160 --> 00:31:02,660
There we go.
389
00:31:03,280 --> 00:31:08,880
And that will also have some details
on what new things you need to
390
00:31:08,880 --> 00:31:12,800
pass to the agent and the service to
be able to talk to the Vector Node.
391
00:31:14,240 --> 00:31:17,520
There is a single router right
now and all of the participants
392
00:31:17,520 --> 00:31:20,160
in the Vector Protocol
have this identifier,
393
00:31:20,880 --> 00:31:23,920
that's constructed from the public key
394
00:31:25,200 --> 00:31:33,760
of these participants, so in your case
that would be the, I think it's the Indexer
395
00:31:33,760 --> 00:31:37,840
address, it could be the operator address,
I'm not entirely sure right now but it works.
396
00:31:39,600 --> 00:31:42,560
So this is the router that you
currently have to configure,
397
00:31:43,360 --> 00:31:46,320
so that it will use that router to
create these channels between the,
398
00:31:48,800 --> 00:31:53,120
it'll use that router as the counterparty
to set up the Indexer-router state channel.
399
00:31:56,160 --> 00:31:59,760
Same way that the service needs
to know those two things as well,
400
00:31:59,760 --> 00:32:06,400
the Vector Node URL internally, in
your cluster and the router ID as well.
401
00:32:07,040 --> 00:32:12,160
And then Vector itself has
a few configuration options.
402
00:32:12,800 --> 00:32:15,280
I'm not entirely sure if
this one is really needed,
403
00:32:15,280 --> 00:32:17,440
because the mnemonic is also included here.
404
00:32:18,960 --> 00:32:24,960
I think for now it's
required that you use the same
405
00:32:24,960 --> 00:32:27,840
mnemonic as you also use for the
agent, so the operator mnemonic.
406
00:32:29,520 --> 00:32:33,600
So that would also mean that the
public identifier here is based on the
407
00:32:33,600 --> 00:32:37,840
public key of the Indexer agent
mnemonic or the operator, if
408
00:32:37,840 --> 00:32:42,960
you have a separate one
from the Indexer key and it
409
00:32:42,960 --> 00:32:50,240
needs to talk to Ethereum, needs
to talk to the NATS services
410
00:32:50,960 --> 00:32:53,840
there's a few, just to make sure that there's
enough redundancy. I think we can
411
00:32:53,840 --> 00:32:59,040
also run our own. I'm not sure if they are
required to all be the same or whether that’s,
412
00:32:59,040 --> 00:33:02,800
I think there's some extensions
to the NATS software, that
413
00:33:02,800 --> 00:33:05,600
have been made for Connext
or Vector in this case,
414
00:33:06,880 --> 00:33:10,720
but it might be possible
to run your own, I haven't
415
00:33:10,720 --> 00:33:13,600
asked about that, but that
might be worth doing.
416
00:33:15,760 --> 00:33:18,400
So this is the configuration, it's
basically a JSON object that you have
417
00:33:18,400 --> 00:33:20,880
to pass in as an environment variable right now.
418
00:33:22,000 --> 00:33:26,240
I believe they've also added being
able to pass like a file path
419
00:33:26,240 --> 00:33:31,040
in for this and then you
can mount this data as, for
420
00:33:31,040 --> 00:33:34,720
instance like a Docker secret or
Kubernetes secret or something like that.
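For a sense of what that looks like, a hedged sketch of assembling such a configuration; the key names are illustrative, so check the configuration docs page mentioned below for the real ones:

```typescript
// Hypothetical sketch of building the Vector Node config described above.
const vectorConfig = {
  // Per the talk, currently the same mnemonic as the agent/operator:
  mnemonic: process.env.OPERATOR_MNEMONIC,
  // It needs to talk to Ethereum...
  chainProviders: { '1': 'http://ethereum:8545' },
  // ...and to the NATS messaging services (currently run by Connext);
  // this URL is a placeholder, not a confirmed endpoint.
  messagingUrl: 'https://messaging.example.org',
};

// Passed in as an environment variable (or, reportedly, as a file path
// that can be mounted as a Docker or Kubernetes secret):
process.env.VECTOR_CONFIG = JSON.stringify(vectorConfig);
```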
421
00:33:38,160 --> 00:33:44,960
That's what's changed there and then also
we've just added an overview of Scalar.
422
00:33:45,600 --> 00:33:49,520
It's pretty much the same as what we've just
gone through, added to the Indexer repo as well,
423
00:33:49,520 --> 00:33:53,120
so that can be found here,
“Indexer/docs/Scalar.md”
424
00:33:54,160 --> 00:33:57,680
and points to the blog post, if
you want to read that and then
425
00:33:57,680 --> 00:33:59,760
also covers the things that
we've just talked about,
426
00:34:01,040 --> 00:34:05,200
the query fee flow, the state
channels and like how the
427
00:34:05,200 --> 00:34:10,560
transfers work. Just some basics and
the infrastructure as well, the ports,
428
00:34:10,560 --> 00:34:18,000
what changes are involved in
adding Scalar on the Indexer side.
429
00:34:19,840 --> 00:34:25,440
One thing that's worth noting is, Vector
Node also stores these transfers etc in a
430
00:34:25,440 --> 00:34:29,120
Postgres database and I would
strongly recommend keeping that
431
00:34:29,120 --> 00:34:34,400
separate from the Indexer database,
because you never know; especially
432
00:34:34,400 --> 00:34:38,480
during testnet right now, you
may need to wipe that database
433
00:34:39,200 --> 00:34:41,760
or whatever, so you definitely
want to keep that separate.
434
00:34:45,600 --> 00:34:54,720
And that I think is the end of my
presentation and the rest will be questions.
435
00:34:54,720 --> 00:35:01,520
I think we can start with three
questions that we went through in the
436
00:35:01,520 --> 00:35:06,960
Indexer office hours too, so, one thing
is, does the Vector Node need to be scaled?
437
00:35:09,120 --> 00:35:12,560
So, similar to the Indexer service, if you
438
00:35:12,560 --> 00:35:15,840
want to be able to serve more queries,
you'll spin up more Indexer services.
439
00:35:17,200 --> 00:35:22,960
The Vector Node itself doesn't see
a lot of traffic; that's part of the
440
00:35:22,960 --> 00:35:26,320
reasons why Scalar is designed the
way it is, with these layers, with the
441
00:35:26,320 --> 00:35:33,440
receipts being the only critical
path, the thing that's interacted with
442
00:35:33,440 --> 00:35:36,880
very frequently, whereas
the Vector Node is only touched
443
00:35:36,880 --> 00:35:42,560
when creating transfers, a few times per allocation
maybe, and resolving transfers, the same thing.
444
00:35:43,440 --> 00:35:47,920
So the interactions here will be pretty
low compared to the Indexer service.
445
00:35:49,120 --> 00:35:52,240
So, it's not necessary to
scale Vector Node horizontally,
446
00:35:53,280 --> 00:35:57,280
it may even break things, I'm not
sure. Two Vector Nodes running
447
00:35:57,280 --> 00:36:04,480
against the same database for the
same mnemonic Ethereum account,
448
00:36:04,480 --> 00:36:09,760
is probably not gonna fly, so don't do
that, it just makes your life easier.
449
00:36:11,200 --> 00:36:14,240
We're testing on testnet right now,
450
00:36:14,240 --> 00:36:18,320
so it's live there and there's
currently a bit of a performance issue
451
00:36:18,320 --> 00:36:22,080
that we're working out
with creating transfers,
452
00:36:24,320 --> 00:36:28,400
so that should be resolved
sometime today, hopefully.
453
00:36:29,280 --> 00:36:31,120
Don't want to be too optimistic,
but it's looking good.
454
00:36:32,640 --> 00:36:38,480
So, we'll keep stabilizing there and
hopefully that shouldn't take long.
455
00:36:39,120 --> 00:36:42,640
And the timeline for Scalar
on mainnet is that we
456
00:36:44,000 --> 00:36:48,480
intend to make it possible to
receive query fees using Scalar
457
00:36:48,480 --> 00:36:53,840
on mainnet in about a
similar timeline that we're
458
00:36:53,840 --> 00:36:56,080
looking at for the migration
of subgraphs to mainnet.
459
00:36:57,920 --> 00:37:00,400
So very, very soon.
460
00:37:02,640 --> 00:37:03,520
Okay. So,
461
00:37:04,640 --> 00:37:10,080
that's it for me for now. The
next three days I think we'll want
462
00:37:10,080 --> 00:37:15,680
to test a bit more on testnet, so I
don't think we'll be quite ready
463
00:37:15,680 --> 00:37:18,400
to run this on mainnet in three days.
464
00:37:20,720 --> 00:37:24,160
But shortly after.
465
00:37:26,080 --> 00:37:28,400
May 4, May 5, we'll see.
466
00:37:30,200 --> 00:37:33,440
What other questions are there?
467
00:37:41,040 --> 00:37:44,080
Zac definitely covered
most of them but...
468
00:37:48,560 --> 00:37:53,600
We were seeing nobody sending questions, and now
there's a new one; Zac's been killing it.
469
00:37:53,600 --> 00:37:54,600
Cool!
470
00:37:55,360 --> 00:37:59,680
What about Indexer
operational question, Jannis?
471
00:37:59,680 --> 00:38:04,000
I did put it in the chat there, it was a pretty banal
question to be honest, it's not that interesting,
472
00:38:04,000 --> 00:38:08,400
but we've got metric endpoints for
pretty much every piece of the stack.
473
00:38:09,360 --> 00:38:14,320
When I first ran the, sort
of the repo implementation
474
00:38:14,880 --> 00:38:21,120
of Vector in dev mode, it sort of
sets up a whole bunch of additional
475
00:38:21,120 --> 00:38:27,040
containers around Vector to let you monitor
things and various other bits and bobs.
476
00:38:27,040 --> 00:38:31,520
Do you know if Vector itself has a
metrics endpoint that we can expose?
477
00:38:31,520 --> 00:38:35,520
Or maybe it's a bit more convoluted
than that, I don't have the skills to go
478
00:38:35,520 --> 00:38:39,360
into what the repo does and figure
out exactly what it's doing.
479
00:38:39,360 --> 00:38:43,360
Is it using prometheus, or is it doing,
like log parsing or something like that?
480
00:38:49,280 --> 00:38:55,360
I could be wrong, I don't think there
is a committed endpoint yet, although
481
00:38:55,360 --> 00:39:00,160
I think that will, I'm
hoping that will come and
482
00:39:00,160 --> 00:39:03,520
what we can do, but haven't
done yet, for instance,
483
00:39:03,520 --> 00:39:08,240
is to add metrics, for instance, for
the transfers created per allocation etc.,
484
00:39:08,800 --> 00:39:12,320
in the agent for instance, where
you get notified of all that.
485
00:39:14,000 --> 00:39:16,480
And so we could provide
some insights there.
486
00:39:17,120 --> 00:39:22,720
The logs, logged by Vector Node
are, just like the Indexer agent and
487
00:39:22,720 --> 00:39:28,000
service, logged using Pino,
so that's JSON, which
488
00:39:28,000 --> 00:39:33,520
Google Cloud or Elasticsearch,
for instance, will parse
489
00:39:33,520 --> 00:39:36,240
and you'll be able to filter
by the different fields
490
00:39:36,880 --> 00:39:43,120
pretty easily. It's a good question,
I'll check with Connext there.
491
00:39:43,760 --> 00:39:46,320
Chris writes that it looks
like it exposes prometheus
492
00:39:46,320 --> 00:39:48,800
endpoint, it might be a
default prometheus endpoint.
493
00:39:50,320 --> 00:39:53,920
It's scraping, I've just taken a
quick look, it's scraping port 8000
494
00:39:53,920 --> 00:39:55,840
on the Vector Node, which
seems strange to me.
495
00:39:56,720 --> 00:39:59,280
Unless port 8000 is doing something
different in their container,
496
00:39:59,280 --> 00:40:02,640
but thanks Chris, there's something to
start looking at, we can figure out from
497
00:40:02,640 --> 00:40:04,800
there what it's doing and
maybe work it out ourselves.
498
00:40:05,840 --> 00:40:14,400
I definitely would like more insights into the
inner workings and the health of the Vector Node.
499
00:40:14,400 --> 00:40:19,440
Now it's not like they're in the critical
path. If transfers, for instance,
500
00:40:20,240 --> 00:40:23,920
fail to resolve, I think we retry,
or certainly we can retry in the
501
00:40:23,920 --> 00:40:30,400
agent and it's not gonna
eat a lot of resources so,
502
00:40:30,400 --> 00:40:34,560
it also shouldn't crash a lot,
for instance due to memory issues,
503
00:40:35,280 --> 00:40:38,240
like running out of memory or anything.
504
00:40:38,240 --> 00:40:42,480
So, I think that situation
would probably be improved.
505
00:40:45,120 --> 00:40:47,280
I mean, I'm not sure 8000
is really the right one.
506
00:40:49,040 --> 00:40:54,240
I think the router, my understanding
is the router is basically,
507
00:40:54,240 --> 00:40:57,280
a thin wrapper around a
regular node or something.
508
00:40:57,840 --> 00:41:02,480
So, I think port 8000 might serve
a similar purpose in the router and
509
00:41:02,480 --> 00:41:08,160
not the node, as it does in the node so,
may not be the right port to describe there.
510
00:41:12,560 --> 00:41:16,160
And there is a Connext
Discord. There is also,
511
00:41:16,160 --> 00:41:20,720
I can forward a link to those who are
interested in potentially becoming routers.
512
00:41:22,880 --> 00:41:28,720
They've sent me a link where you can basically
sign up to, if you're interested in knowing more.
513
00:41:29,680 --> 00:41:36,800
I can forward that to those who are interested,
because that could be convenient
514
00:41:36,800 --> 00:41:40,480
to run it alongside the Indexer, especially
if it's not super high traffic, for instance.
515
00:41:42,000 --> 00:41:46,960
And there's a way to earn, take a
fee, for instance, for any operations
516
00:41:46,960 --> 00:41:50,880
and although, of course, if you pick your
own fee, there's nothing you gain from that.
517
00:41:59,840 --> 00:42:06,480
Would the gateway need to be aware of a local
router instance, if we decided to run one?
518
00:42:08,480 --> 00:42:11,840
I think that would all be communication
that happens through NATS.
519
00:42:11,840 --> 00:42:17,200
So NATS is like this high
performance, low overhead
520
00:42:17,200 --> 00:42:19,200
alternative to HTTP for messaging.
521
00:42:20,000 --> 00:42:23,440
Well, it's probably not like a
one-to-one alternative, it does
522
00:42:24,720 --> 00:42:29,840
subscriptions, as far as I know, and
has retryability etc. built in.
523
00:42:30,640 --> 00:42:33,040
And all the communication
between the different nodes
524
00:42:33,040 --> 00:42:36,880
and the router, slash routers
as well, happens through NATS.
525
00:42:38,640 --> 00:42:42,560
And so, that's why, for instance,
I'm not sure exactly how it works but
526
00:42:42,560 --> 00:42:48,720
the Vector Node will connect to NATS and it
doesn't need to expose a port publicly for that,
527
00:42:48,720 --> 00:42:52,640
same with the routers, you don't
need a direct line of communication.
528
00:42:53,360 --> 00:42:53,860
Okay.
529
00:42:54,240 --> 00:42:58,800
Does that mean that service discovery
is effectively facilitated by NATS?
530
00:43:00,320 --> 00:43:02,080
Yes. Cool!
531
00:43:03,280 --> 00:43:03,780
Thanks.
532
00:43:07,440 --> 00:43:14,800
And NATS was kind of more critical
in our first attempt with Connext.
533
00:43:17,280 --> 00:43:23,840
Pretty much a year ago, give or take a
month, we worked with Connext, with their previous
534
00:43:24,480 --> 00:43:27,760
solution, which wasn't
called Vector but Indra.
535
00:43:28,560 --> 00:43:31,680
And our initial approach
there was to basically create
536
00:43:32,640 --> 00:43:36,480
a transfer for every single
query fee transaction, I think.
537
00:43:37,440 --> 00:43:41,440
And it was a lot more critical
that NATS, well, NATS
538
00:43:41,440 --> 00:43:45,600
itself is efficient, but also that
the equivalent of the Vector Node
539
00:43:45,600 --> 00:43:52,880
was efficient, and that all the transfer
creations and any updates were
540
00:43:52,880 --> 00:43:58,320
super efficient. That's no longer the
case, thanks to this new design of Scalar,
541
00:43:59,040 --> 00:44:02,160
we don't need to interact with the
underlying state channel very often.
542
00:44:04,240 --> 00:44:07,840
It's a really nice separation of concerns.
543
00:44:08,400 --> 00:44:09,200
I would say so.
544
00:44:11,840 --> 00:44:16,800
How would the migration happen? Would it happen
both with Vector and without Vector Nodes?
545
00:44:16,800 --> 00:44:18,320
Would they work simultaneously on mainnet?
546
00:44:20,800 --> 00:44:27,600
Yeah, so, until this lands on mainnet,
what's out there, will still work.
547
00:44:31,280 --> 00:44:37,040
So the migration, there's kind of two parts
to the migration, there's the indexing
548
00:44:37,040 --> 00:44:42,480
which doesn't change at all, whether
Scalar is involved in the queries or not.
549
00:44:43,920 --> 00:44:48,000
So that can start right away
and then for query fees,
550
00:44:51,360 --> 00:44:54,480
the thinking there is that
projects probably won't
551
00:44:54,480 --> 00:45:00,080
start testing their subgraphs
on mainnet without Scalar being
552
00:45:00,080 --> 00:45:05,040
deployed so, I think that
migration will test Scalar,
553
00:45:05,600 --> 00:45:12,920
make sure it works and then deploy to mainnet,
have it ready for the traffic to come.
554
00:45:21,520 --> 00:45:24,320
Any comments regarding recent
unofficial subgraph deployments?
555
00:45:25,920 --> 00:45:32,160
Not really. So from my perspective
it's a decentralized network,
556
00:45:32,800 --> 00:45:39,360
subgraphs might be unofficial
if they are not created by
557
00:45:39,360 --> 00:45:46,400
the original project, but it's not
something that's locked down anyway.
558
00:45:52,960 --> 00:45:56,720
When Indexers feel comfortable,
does this mean this migration
559
00:45:56,720 --> 00:46:00,720
to Scalar will be optional, like we'd be
running in a hybrid situation for some time?
560
00:46:02,160 --> 00:46:05,840
I wouldn't say so. So it's basically up
to the client to decide which Indexers are
561
00:46:05,840 --> 00:46:13,520
considered compatible, which
Indexers it'll query and basically
562
00:46:13,520 --> 00:46:19,840
once Scalar is ready and the
first Indexers have migrated
563
00:46:19,840 --> 00:46:24,240
to Scalar on mainnet, that's when
that'll become a requirement.
564
00:46:26,400 --> 00:46:30,720
So, there will be no two solutions running in
parallel, I think that would be too complicated.
565
00:46:35,360 --> 00:46:38,880
What are the next major projects scheduled
in The Graph after mainnet migration?
566
00:46:41,440 --> 00:46:46,520
That'll be, that's a topic
for another time, I'm afraid.
567
00:46:58,320 --> 00:47:03,120
All right, one note on the, when Indexers
feel comfortable with this new setup,
568
00:47:03,760 --> 00:47:06,720
that's why we have the testnet, so
I encourage you to upgrade there.
569
00:47:07,360 --> 00:47:11,040
If you haven't, make sure
that things are working,
570
00:47:11,040 --> 00:47:15,120
we'll test and we'll work with everyone
to make sure that the setup works.
571
00:47:20,400 --> 00:47:23,840
In case anyone isn't aware, there's
a faucet now for GRT on Discord,
572
00:47:24,560 --> 00:47:29,200
so you don't need to wait for somebody
to deliver you some testnet GRT.
573
00:47:29,200 --> 00:47:34,920
You can get started right away with the deployment,
no waiting around, so come and join us.
574
00:47:43,360 --> 00:47:46,320
BTC Blockchain, Bitcoin Blockchain
migration consideration.
575
00:47:47,120 --> 00:47:50,400
Multi blockchain is being worked on.
576
00:47:55,680 --> 00:47:56,800
That's all I can say about that.
577
00:47:58,880 --> 00:48:04,320
All right, cool! It doesn't seem like
there are any more questions about Scalar,
578
00:48:06,320 --> 00:48:14,640
Vector, Vector integration etc, so maybe we can
take the last eight minutes back, wrap up early.
579
00:48:15,360 --> 00:48:16,720
Maybe I can ask one more question?
580
00:48:19,040 --> 00:48:24,000
Do you have anything scheduled right
now in terms of load testing on testnet?
581
00:48:26,480 --> 00:48:32,720
We should first roll out that upgrade; well, I've just
rolled out an upgrade and there's one issue with it.
582
00:48:34,640 --> 00:48:39,440
Goal is to have that working within
the next few hours again, and
583
00:48:39,440 --> 00:48:42,320
there may be an update necessary
on the Indexer side so new
584
00:48:42,320 --> 00:48:47,840
commits, or new Vector
pre-release to upgrade to.
585
00:48:47,840 --> 00:48:51,120
And then we'll do stress
testing once that's up again.
586
00:48:52,720 --> 00:48:54,120
Great! Thanks, that was all.
587
00:48:54,840 --> 00:49:00,000
Internal URL, are those
HTTP style? Internal URL.
588
00:49:01,520 --> 00:49:02,400
What do you mean gone?
589
00:49:06,240 --> 00:49:07,520
That’s something on the screen here?
590
00:49:13,120 --> 00:49:16,720
In the Indexer agent and Indexer service.
591
00:49:19,200 --> 00:49:21,920
You have to specify Vector
Node and Vector event service
592
00:49:22,480 --> 00:49:25,360
and it says, internal URL
in the infrastructure.
593
00:49:29,200 --> 00:49:30,720
Yes, so, like I said
594
00:49:30,720 --> 00:49:33,440
the Vector Node doesn't have to be,
or shouldn't be exposed to the public,
595
00:49:35,280 --> 00:49:41,200
because it doesn't need to be. Just like,
say, the Graph Node API endpoints, for
596
00:49:41,200 --> 00:49:45,760
instance, the only thing you should
expose is the Indexer service, and so this
597
00:49:45,760 --> 00:49:50,160
is the internal URL, like internal
to the Indexer's environment.
598
00:49:50,160 --> 00:49:53,600
So if you run a Kubernetes
cluster, that would be an internal reference to it, not exposed.
599
00:49:55,120 --> 00:49:58,840
If you're on bare metal,
not like a public port.
600
00:50:06,720 --> 00:50:11,840
Okay, cool! Because for example, for the
Postgres you don't put anything in front.
601
00:50:14,720 --> 00:50:16,160
Yeah, that's true.
602
00:50:18,560 --> 00:50:20,880
I can add that and these are HTTP URLs,
603
00:50:21,440 --> 00:50:25,840
because the Vector Node
basically, what it provides is a,
604
00:50:26,960 --> 00:50:31,720
it's not a JSON RPC endpoint,
but it's a REST style endpoint.
605
00:50:46,160 --> 00:50:49,600
All right. We can take the time back?
606
00:50:52,720 --> 00:50:53,520
Sounds good to me.
607
00:50:54,340 --> 00:50:55,920
Unless there are last minute questions?
608
00:50:57,200 --> 00:51:02,960
If not or if there are any and we're wrapping up
early, then please head over to Discord and
609
00:51:02,960 --> 00:51:05,200
we'll be happy to answer anything there.
610
00:51:08,160 --> 00:51:09,600
Great! Thanks everyone for joining.
611
00:51:10,480 --> 00:51:11,680
See you all around Discord.
612
00:51:12,560 --> 00:51:13,120
Thank you all.
613
00:51:13,920 --> 00:51:15,120
Take care.
1
00:00:04,080 --> 00:00:06,640
We've been looking forward to
this day for a really long time.
2
00:00:07,200 --> 00:00:10,560
We launched The Graph Network
last December and since then,
3
00:00:10,560 --> 00:00:11,687
Indexers and Delegators
have been working together
4
00:00:11,687 --> 00:00:16,720
to bootstrap the network
and get it ready for today.
5
00:00:16,720 --> 00:00:20,080
The Graph is, without a doubt, one of
the most complex protocols on Ethereum.
6
00:00:20,640 --> 00:00:26,640
There's staking by Indexers, delegation to help
secure and allocate resources on the network
7
00:00:26,640 --> 00:00:31,760
and curation to help organize all of the
data on The Graph in a decentralized way.
8
00:00:32,400 --> 00:00:37,760
And query fees for Consumers using state
channels, which is a technology we've been
9
00:00:37,760 --> 00:00:42,400
working on over the last several
years in partnership with Connext.
10
00:00:42,400 --> 00:00:46,480
All of these components, roles
and incentives, are tightly
11
00:00:46,480 --> 00:00:51,440
interconnected and form the basis
for the economy in The Graph Network.
12
00:00:52,160 --> 00:00:58,000
We needed to build a set of products
that could help hide the complexity
13
00:00:58,000 --> 00:01:00,080
of all of these different
components and make it
14
00:01:00,080 --> 00:01:04,320
really easy for participants to
contribute and use The Graph Network.
15
00:01:04,880 --> 00:01:06,960
Today we're launching two new products.
16
00:01:07,520 --> 00:01:09,920
Graph Explorer, which
is a whole new way
17
00:01:09,920 --> 00:01:12,880
to participate in and interact
with The Graph Network
18
00:01:12,880 --> 00:01:16,720
and Subgraph Studio, a new
way for developers to build,
19
00:01:16,720 --> 00:01:20,080
test and publish their subgraphs
on the decentralized network.
20
00:01:20,080 --> 00:01:21,600
Together, these create the
21
00:01:21,600 --> 00:01:27,440
full self-service experience, so that everybody
can participate in and contribute to The Graph.
22
00:01:28,080 --> 00:01:31,600
We've been building The Graph as a
decentralized network from the very
23
00:01:31,600 --> 00:01:36,880
beginning because we really believe that every
layer of the stack has to be decentralized,
24
00:01:36,880 --> 00:01:40,480
in order to enable the kind of
transformation that we want to see.
25
00:01:40,480 --> 00:01:43,920
Long term, our vision for
Web3 is that end users are
26
00:01:43,920 --> 00:01:47,840
going to pay for the metered usage
of the services that they consume,
27
00:01:47,840 --> 00:01:52,160
that's the only way to make sure that users
have power and control in their own hands.
28
00:01:52,160 --> 00:01:55,200
In the meantime, we're
launching a set of gateways
29
00:01:55,200 --> 00:01:58,640
that make interacting with the
network seamless for users.
30
00:01:59,600 --> 00:02:03,680
DApp Developers can pay the
query fees using these gateways
31
00:02:03,680 --> 00:02:06,960
that are geographically
distributed all across the globe.
32
00:02:06,960 --> 00:02:10,320
When an end user makes a query on
an application using The Graph,
33
00:02:10,320 --> 00:02:16,160
that query gets routed to the nearest gateway and
from there to a nearby Indexer, using an Indexer
34
00:02:16,160 --> 00:02:23,200
selection algorithm that takes factors into
account, such as performance, cost and security.
35
00:02:23,200 --> 00:02:29,280
So now developers can easily publish their
subgraphs on the decentralized network, set up API
36
00:02:29,280 --> 00:02:35,120
keys where they can pay query fees and Curators
can come signal on those subgraphs, so that
37
00:02:35,120 --> 00:02:40,480
all of the data can be well organized and it's
easily accessible to developers and end users.
38
00:02:41,040 --> 00:02:46,320
On the network you have Indexers competing to
provide the best service at the lowest price
39
00:02:46,320 --> 00:02:50,400
and so you never have to worry
about uptime for your applications.
40
00:02:50,400 --> 00:02:55,200
Indexers can granularly set their
prices per query using a cost model
41
00:02:55,200 --> 00:02:57,120
language we've developed, called Agora.
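For a flavor of what an Agora cost model looks like, here is a rough example embedded as a string; the shape follows the public docs, but verify the exact syntax there:

```typescript
// Rough example of an Agora cost model: each statement matches a query
// pattern and maps it to a price expression, with a default as catch-all.
const costModel = `
query { pairs(skip: $skip) { id } } => 0.0001 * $skip;
query { tokens { symbol } } => 0.0005;
default => 0.0001;
`;
```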
42
00:02:57,760 --> 00:03:00,960
This lets them granularly specify prices
43
00:03:00,960 --> 00:03:05,600
for different types of queries, so that
there's always an incentive to go in and
44
00:03:05,600 --> 00:03:09,280
hyper optimize any slow
parts of your application.
45
00:03:09,280 --> 00:03:12,320
This way, you just don't
have to think about
46
00:03:12,320 --> 00:03:16,480
the back end and of course, The Graph
is known for its world-class design.
47
00:03:17,120 --> 00:03:21,040
We think it's really important that
Web3 is easy for people to use.
48
00:03:21,040 --> 00:03:24,160
These releases are the
culmination of almost four years
49
00:03:24,160 --> 00:03:27,520
of work by our design,
products and engineering teams.
50
00:03:27,520 --> 00:03:29,680
We started the initial designs
51
00:03:29,680 --> 00:03:34,560
for these products over 18 months ago
and it's just taken an incredible amount
52
00:03:34,560 --> 00:03:41,920
of refinement to get the entire product experience
to be something that exposes all of the power of
53
00:03:41,920 --> 00:03:46,160
the network but in a way that
anybody can easily interact with.
54
00:03:46,160 --> 00:03:50,240
And to give you a first look, I'm
going to pass it over to Nena Djaja.
55
00:03:51,360 --> 00:03:54,400
Hi, I'm Nena, Product
Engineering Lead at Edge & Node.
56
00:03:54,960 --> 00:03:58,400
I'm going to demo the brand new
products we're launching today,
57
00:03:58,400 --> 00:04:01,120
Graph Explorer and Subgraph Studio.
58
00:04:01,120 --> 00:04:06,320
When you land here, you can view the list of
all the published subgraphs on the network.
59
00:04:06,320 --> 00:04:10,000
You can see what their signal
is, sort by the Highest Signal
60
00:04:10,000 --> 00:04:14,240
and decide which ones are the
ones you're most interested in.
61
00:04:14,240 --> 00:04:17,120
You can view the details of
a subgraph by going here.
62
00:04:18,000 --> 00:04:22,000
For example, you can see the creator
of them, their website, who are the
63
00:04:22,000 --> 00:04:25,920
Indexers and Curators and might
be especially interesting,
64
00:04:25,920 --> 00:04:27,840
query fees and curation over time.
65
00:04:29,120 --> 00:04:31,920
If you decide you wanna signal
on a subgraph, you go here.
66
00:04:32,960 --> 00:04:37,520
You can select a specific version you want
to signal on, decide on the amount of GRT
67
00:04:38,320 --> 00:04:42,400
and with a few very simple clicks you
can provide your signal to the subgraph.
68
00:04:43,200 --> 00:04:46,480
Here you can view all the
participants in the network.
69
00:04:46,480 --> 00:04:48,320
Let's check out the Indexers table.
70
00:04:49,200 --> 00:04:50,880
You can preview a couple
71
00:04:50,880 --> 00:04:55,840
of parameters that might help you decide, in case
you want to delegate some of your GRT to them.
72
00:04:56,560 --> 00:04:59,680
And if you select one of them you
can see the detailed information,
73
00:05:00,720 --> 00:05:05,520
stake and income over time, delegation
that's available for this Indexer and
74
00:05:05,520 --> 00:05:07,520
other parameters that can
help you make a decision.
75
00:05:08,480 --> 00:05:14,000
If you decide you want to delegate to this
Indexer, you go here, input the amount of GRT
76
00:05:14,000 --> 00:05:19,360
you'd like to delegate and with a few simple
clicks you can complete your delegation.
77
00:05:20,320 --> 00:05:22,720
You can view the network
activity and epochs here.
78
00:05:23,440 --> 00:05:28,400
For example, you can view information such as
total stake in the protocol, total token supply,
79
00:05:28,400 --> 00:05:33,440
indexing rewards, query fees per epoch
and different protocol parameters.
80
00:05:34,080 --> 00:05:36,560
We built Subgraph Studio
for Subgraph Developers
81
00:05:37,280 --> 00:05:39,680
because we love you and we want
you to build more subgraphs.
82
00:05:40,320 --> 00:05:42,240
When you go here, you land
83
00:05:42,240 --> 00:05:45,440
on the subgraphs page if you have any,
if not, it's very easy to create one.
84
00:05:46,720 --> 00:05:49,920
You click here, you Create a
Subgraph and then what you need
85
00:05:49,920 --> 00:05:51,840
to do is develop it
on your local machine.
86
00:05:54,000 --> 00:06:00,400
When you're ready to publish your subgraph to
the decentralized network, you go back to the UI,
87
00:06:00,400 --> 00:06:05,840
click on Publish, select Testnet or
Mainnet and simply publish your subgraph.
88
00:06:07,120 --> 00:06:12,480
Then you can go back into the explorer and see
your subgraph live on the decentralized network.
89
00:06:13,200 --> 00:06:17,440
In order to query subgraphs for your
DApp, you need to create an API key.
90
00:06:18,240 --> 00:06:19,840
You do that in studio.
91
00:06:20,800 --> 00:06:22,000
It's very simple.
92
00:06:22,000 --> 00:06:26,240
You go here, follow the steps
and there you have your API key.
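A hedged example of using such an API key from a DApp; the URL follows the gateway pattern from the docs, but confirm the exact host and path there (requires Node 18+ or a browser for global fetch):

```typescript
// Query a published subgraph through the gateway with an API key.
const API_KEY = 'YOUR_API_KEY';         // created in Subgraph Studio
const SUBGRAPH_ID = 'YOUR_SUBGRAPH_ID'; // the published subgraph's ID

async function querySubgraph(query: string): Promise<unknown> {
  const res = await fetch(
    `https://gateway.thegraph.com/api/${API_KEY}/subgraphs/id/${SUBGRAPH_ID}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    }
  );
  return res.json();
}

// Example: ask the Indexer which block the subgraph has indexed up to.
querySubgraph('{ _meta { block { number } } }').then(console.log);
```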
93
00:06:28,880 --> 00:06:33,840
After creating an API key, you can restrict
it by certain subgraphs or certain domains.
94
00:06:35,200 --> 00:06:38,080
Then you would go to billing
and deposit some GRT.
95
00:06:39,760 --> 00:06:47,840
We use the Polygon Bridge to make
GRT deposits faster and cheaper.
96
00:06:49,200 --> 00:06:55,200
After you're done with that, you'll have your
balance and you can now use your API key.
97
00:06:56,480 --> 00:06:59,520
It's just that easy to publish
subgraphs to the network
98
00:06:59,520 --> 00:07:01,840
and participate, no
matter what your role is.
99
00:07:02,400 --> 00:07:06,640
We hope you love using these products
and we can't wait to see what you build.
100
00:07:08,080 --> 00:07:11,280
Blockchains are the future
of the Internet and The Graph
101
00:07:11,280 --> 00:07:15,840
is the decentralized indexing
and query layer for this future.
102
00:07:15,840 --> 00:07:19,440
Subgraphs are open APIs
that developers build.
103
00:07:20,000 --> 00:07:22,720
Examples of this are
like uniswap.info.
104
00:07:23,280 --> 00:07:26,320
So uniswap.info is a
subgraph and that's how
105
00:07:26,320 --> 00:07:32,640
Uniswap pulls trade data from the Ethereum
blockchain to serve that data to users of
106
00:07:32,640 --> 00:07:38,000
uniswap.info and then you see Coinmarketcap,
Coingecko, which are some of the most used
107
00:07:38,000 --> 00:07:40,160
websites across the crypto space.
108
00:07:40,160 --> 00:07:42,480
They actually pull that data from
109
00:07:42,480 --> 00:07:46,000
Uniswap and so these subgraphs are
kind of like lego building blocks
110
00:07:46,000 --> 00:07:51,600
on top of one another and we've seen so much
transparency and so much innovation across
111
00:07:51,600 --> 00:07:57,280
so many segments within the blockchain space,
including DeFi which is decentralized finance,
112
00:07:57,280 --> 00:08:01,760
as well as NFTs which is digital art and
now we're seeing an emergence in DAOs,
113
00:08:01,760 --> 00:08:06,080
which is kind of creating
a new work culture or a new
114
00:08:06,080 --> 00:08:09,120
governance structure,
which is really exciting.
115
00:08:09,120 --> 00:08:13,920
The Graph has taken this progressive
decentralization path and in December of
116
00:08:13,920 --> 00:08:20,560
last year the decentralized network launched and
so The Graph is making open data a public good
117
00:08:20,560 --> 00:08:25,760
and so anyone across the globe can interact
with this, Graph’s decentralized network.
118
00:08:25,760 --> 00:08:29,840
And we've been really bootstrapping the
supply side of that network so there are over
119
00:08:29,840 --> 00:08:33,360
150 different Indexers
on The Graph Network,
120
00:08:33,360 --> 00:08:36,400
which each are their own
business, their own organization.
121
00:08:36,400 --> 00:08:39,840
Some are just one person
running a node but others have
122
00:08:39,840 --> 00:08:43,680
whole teams and they're spending
a lot of time doing devops.
123
00:08:43,680 --> 00:08:44,640
Also Delegators.
124
00:08:44,640 --> 00:08:47,520
So there's over seven thousand
different Delegators which,
125
00:08:47,520 --> 00:08:50,720
Delegators, it's almost
like a passive income role,
126
00:08:50,720 --> 00:08:55,280
anyone across the world can be a
Delegator on The Graph Network.
127
00:08:55,280 --> 00:09:00,160
You choose an Indexer, the Indexer does the
heavy lifting and then the Indexer splits
128
00:09:00,160 --> 00:09:04,400
the amount that they earn, the amount of
GRT they earn with you as the Delegator.
129
00:09:04,400 --> 00:09:11,120
GRT is a work utility token, so this means
that GRT is used; the creation of GRT
130
00:09:11,120 --> 00:09:16,160
was so that it could be used in The Graph
Network and earned in The Graph Network.
131
00:09:16,160 --> 00:09:23,200
Right now, DApp developers have actually taken
the cost on for their users to reduce friction
132
00:09:23,200 --> 00:09:30,560
to pay for queries on those DApps and so who
earns that GRT that's being spent on these DApps,
133
00:09:30,560 --> 00:09:35,440
it's the Indexers, Delegators and the
Curators within The Graph ecosystem.
134
00:09:35,440 --> 00:09:38,160
The Indexers, that's
the most technical role.
135
00:09:38,160 --> 00:09:46,880
So Indexers stake GRT on subgraphs and they serve
queries in a quick and efficient way and then the
136
00:09:46,880 --> 00:09:49,680
Delegators, this is the
least technical role.
137
00:09:49,680 --> 00:09:52,080
So the Delegators are
just delegating stake
138
00:09:52,080 --> 00:09:55,920
to the Indexers and the Indexers, they're
doing that heavy lift and then splitting
139
00:09:55,920 --> 00:10:01,440
that stake with the Delegators and then the
Curators are minting signal on these subgraphs
140
00:10:01,440 --> 00:10:07,120
and then the Indexers look at that signal
to then stake on those different subgraphs.
141
00:10:07,120 --> 00:10:12,320
So this upcoming product launch is really
bringing The Graph into its full form.
142
00:10:12,880 --> 00:10:18,560
As I mentioned, we've taken the progressive
decentralization approach and this upcoming
143
00:10:18,560 --> 00:10:23,040
launch is really helping to
reinvent the Internet as we know it.
144
00:10:23,920 --> 00:10:26,320
Now there's the Explorer and with
145
00:10:26,320 --> 00:10:31,040
the Explorer anyone across The Graph ecosystem,
there's so many different roles, can all
146
00:10:31,040 --> 00:10:35,440
interact and see everything that's happening
within The Graph ecosystem transparently.
147
00:10:36,000 --> 00:10:40,640
There's the Subgraph Studio and then
curation is going live and we interact
148
00:10:40,640 --> 00:10:45,280
with curation from centralized companies all
the time, with services like Apple Music,
149
00:10:45,840 --> 00:10:50,560
but the difference is that there's Apple
and Google deciding who's the Curator.
150
00:10:50,560 --> 00:10:51,600
Is it the algorithms?
151
00:10:51,600 --> 00:10:53,040
Is it different people?
152
00:10:53,040 --> 00:10:56,240
They're hiring those people
and with The Graph ecosystem
153
00:10:56,240 --> 00:11:01,360
curation is decided by the free market, as
opposed to a centralized company in the middle
154
00:11:01,360 --> 00:11:05,680
and I think that's really powerful and
we're going to see how that plays out.
155
00:11:05,680 --> 00:11:09,840
So these Curators are, I call
them open data alpha finders,
156
00:11:09,840 --> 00:11:14,000
so anyone who's like an investor in crypto
or anyone who's a trader in crypto will
157
00:11:14,000 --> 00:11:18,160
make a great Curator because what the
Curators are doing is they're looking for
158
00:11:18,160 --> 00:11:23,280
the up and coming subgraphs and they're trying
to be the earliest to those subgraphs to mint
159
00:11:23,280 --> 00:11:29,280
signal on a bonding curve, so that they can
earn if they're correct about those subgraphs.
160
00:11:29,280 --> 00:11:33,520
So it's kind of the same approach as you
take to investing in different projects.
161
00:11:33,520 --> 00:11:34,960
Do you believe in this subgraph?
162
00:11:34,960 --> 00:11:37,920
Do you think that this is
going to be a widely used subgraph
163
00:11:37,920 --> 00:11:40,560
or is this subgraph going
to have a lot of queries?
164
00:11:40,560 --> 00:11:44,320
And then the Indexers see that
signal that the Curators mint
165
00:11:44,320 --> 00:11:48,800
and then they know to index that subgraph
and serve queries quickly and efficiently.
166
00:11:49,360 --> 00:11:52,480
So we've seen this impact
already, the DeFi boom.
167
00:11:52,480 --> 00:11:53,680
You know, I look back,
168
00:11:53,680 --> 00:11:57,440
from the traditional finance space,
I look back to the financial crisis
169
00:11:57,440 --> 00:12:02,480
and that crisis might not have happened if we had
transparency, if that was built on blockchains.
170
00:12:02,480 --> 00:12:04,880
A lot of those bonds were
there, they were owned,
171
00:12:04,880 --> 00:12:07,680
the banks just couldn't
identify where those bonds were.
172
00:12:07,680 --> 00:12:13,440
Within the decentralized finance
space, the DeFi space,
we have transparency and
173
00:12:13,440 --> 00:12:16,800
so there's so much innovation that
happened with like flash loans.
174
00:12:16,800 --> 00:12:19,040
That's something that has
never before happened,
175
00:12:19,040 --> 00:12:22,400
it's never been possible in
any other industry and because
176
00:12:22,400 --> 00:12:28,960
of these open APIs, because of these subgraphs,
we saw this big movement occur within DeFi.
177
00:12:29,760 --> 00:12:34,160
And then a lot of capital flowed in, then
this kind of DeFi bubble popped a little bit
178
00:12:34,160 --> 00:12:39,200
and then we saw a massive emergence
of DeFi again afterwards and so
179
00:12:39,200 --> 00:12:45,600
I think that it's just so powerful to
have open APIs as opposed to closed APIs.
180
00:12:45,600 --> 00:12:48,800
An example of a closed API
is like Facebook or LinkedIn.
181
00:12:48,800 --> 00:12:52,640
Even though that's our data, we don't
own that data, Facebook owns that data,
182
00:12:52,640 --> 00:12:55,040
so I can't port the
Facebook data to my own
183
00:12:55,040 --> 00:12:57,840
application, nor can I port
it to like a Crunchbase,
184
00:12:57,840 --> 00:13:00,160
because LinkedIn and
Facebook own that data.
185
00:13:00,960 --> 00:13:07,120
With subgraphs the data is open, so
anyone can use that data, anyone can
186
00:13:07,120 --> 00:13:11,920
port that data to their own application,
like lego building blocks for innovation.
187
00:13:12,640 --> 00:13:18,320
Decentralization is really important for
protocols, it's important for developers to
188
00:13:18,320 --> 00:13:21,200
know that what they're building
on will always be there.
189
00:13:21,200 --> 00:13:25,840
Also, decentralization within The Graph
Network means that there's redundancy.
190
00:13:25,840 --> 00:13:29,680
So not only is there one Indexer
that's helping to index these
191
00:13:29,680 --> 00:13:32,800
different subgraphs, there are
now multiple different Indexers.
192
00:13:32,800 --> 00:13:39,280
So if one goes down, there are many others that
can pick up that slack to index those subgraphs,
193
00:13:39,280 --> 00:13:43,280
which is really great for developers
who had previously maybe been running
194
00:13:43,280 --> 00:13:49,600
a secondary Indexer in-house or a secondary
node in-house, in case there was ever downtime.
195
00:13:49,600 --> 00:13:55,200
The idea is that we're reducing the barrier to
entry for these developers to be able to build
196
00:13:55,200 --> 00:13:59,040
applications on blockchains, so any
DApp developer can be a Curator
197
00:13:59,040 --> 00:14:02,320
within The Graph ecosystem
and they can actually earn
198
00:14:02,320 --> 00:14:07,120
GRT for being Curators and providing
that service to the ecosystem.
199
00:14:07,120 --> 00:14:11,200
By saying, this is a subgraph that
I believe in, I built this subgraph
200
00:14:11,200 --> 00:14:15,120
and now I'm going to stake my GRT because
I believe in this subgraph so much
201
00:14:15,120 --> 00:14:20,080
and then many other Curators can see that
signal and they can actually follow suit
202
00:14:20,080 --> 00:14:25,280
and join in on signaling on that subgraph
because they trust that DApp developer's
203
00:14:25,280 --> 00:14:27,920
signal and then they
can also earn within
204
00:14:27,920 --> 00:14:30,640
The Graph ecosystem if
they're correct on that bet.
205
00:14:31,200 --> 00:14:33,440
As a Curator, because
curation happens on a
206
00:14:33,440 --> 00:14:37,600
bonding curve, you can make money
but you can also lose money.
207
00:14:37,600 --> 00:14:42,720
So make sure that you are carefully
choosing which subgraph you are curating on.
208
00:14:42,720 --> 00:14:45,120
The applications you know
and use on your iPhone,
209
00:14:45,120 --> 00:14:47,840
that's what people are building
in a decentralized way.
210
00:14:47,840 --> 00:14:53,040
We call them DApps and those DApps
are the future of the Internet.
211
00:14:53,040 --> 00:14:55,840
So a lot of people across
the world use smartphones,
212
00:14:55,840 --> 00:14:58,960
so they're very familiar
with applications.
213
00:14:58,960 --> 00:15:02,560
So that's what this decentralized
world will look like as well,
214
00:15:02,560 --> 00:15:07,440
so you don't need to be technical to understand
which application you enjoy using.
215
00:15:07,440 --> 00:15:10,880
So I would say there's not a
huge technical barrier to entry,
216
00:15:10,880 --> 00:15:13,440
you can come in and
you can be a Curator.
217
00:15:13,440 --> 00:15:16,320
Anyone across the world can
come in and be a Curator.
218
00:15:16,320 --> 00:15:20,000
Curators can have an expertise
in any different area.
219
00:15:20,000 --> 00:15:24,800
For example, if you're an artist
or a creator you can specialize in
220
00:15:24,800 --> 00:15:30,640
applications that are within the NFT space
and you can curate on those applications
221
00:15:30,640 --> 00:15:35,840
and so I envision that we'll actually have
Curators that are subject matter experts.
222
00:15:35,840 --> 00:15:44,240
Some in DeFi, some in NFTs, some in DAOs and I'm
sure many of you listening are in Web2 jobs or in
223
00:15:44,240 --> 00:15:51,280
the traditional finance space and I would say
I welcome you to join this movement and join
224
00:15:51,280 --> 00:15:57,120
this ecosystem because The Graph is fundamentally
changing the way we think about work and not just
225
00:15:57,120 --> 00:16:01,040
The Graph, many different protocols
within the blockchain ecosystem
226
00:16:01,040 --> 00:16:04,160
are fundamentally shifting
how we think about work.
227
00:16:04,160 --> 00:16:07,840
And you can be a Delegator on
The Graph ecosystem earning
228
00:16:07,840 --> 00:16:15,040
passive income, while also providing Uniswap
liquidity and earning passively for doing that.
229
00:16:15,600 --> 00:16:20,880
And so I think we're no longer going to need to
only work for centralized companies or banks or
230
00:16:20,880 --> 00:16:26,720
Web2 companies and instead we can work
for ideas and work for protocols.
231
00:16:27,520 --> 00:16:33,520
If you are a Subgraph Developer that has interest
in migrating from The Graph's hosted service to
232
00:16:33,520 --> 00:16:38,640
the decentralized network, feel free to
get in touch with anyone at Edge & Node,
233
00:16:38,640 --> 00:16:43,360
The Graph Foundation, StreamingFast
or anyone within The Graph ecosystem.
234
00:16:43,360 --> 00:16:48,000
There's a Discord, there's a Telegram channel,
so please reach out, we're here to help.
235
00:16:48,000 --> 00:16:52,480
And now I'll hand it over to
Eva Beylin to share everything
236
00:16:52,480 --> 00:16:54,320
that's happening within
The Graph ecosystem.
237
00:16:56,160 --> 00:16:59,680
The Graph is the first decentralized
data market in the world.
238
00:16:59,680 --> 00:17:04,640
This means that data like your wallet
balances or maybe looking at your smartphone,
239
00:17:04,640 --> 00:17:08,240
all of that can be indexed on The
Graph once that data is on a blockchain
240
00:17:08,240 --> 00:17:11,840
and the best part is that people
can contribute to that data economy.
241
00:17:11,840 --> 00:17:16,720
Whereas in Web2 it's centralized companies or
administrators that earn the revenue from that
242
00:17:16,720 --> 00:17:21,040
data service, on The Graph anyone can participate
as an Indexer, Curator or Delegator
243
00:17:21,040 --> 00:17:23,200
and contribute to the
global graph of data.
244
00:17:23,200 --> 00:17:26,000
In Web2, APIs are
centralized and closed.
245
00:17:26,000 --> 00:17:28,800
For the average user this means
it's very hard to access data
246
00:17:28,800 --> 00:17:31,280
like your bank records
or your medical records.
247
00:17:31,280 --> 00:17:33,920
For an analyst or a trader it
means it's very difficult to find
248
00:17:33,920 --> 00:17:37,760
any industry data and for a developer that
means it's difficult to create any kind of
249
00:17:37,760 --> 00:17:42,640
front-end application and query data, whether
that's on the Web2 Internet or on a blockchain.
250
00:17:42,640 --> 00:17:45,680
With The Graph we actually
incentivize Open Source APIs,
251
00:17:45,680 --> 00:17:47,520
called subgraphs and we incentivize
252
00:17:47,520 --> 00:17:50,880
people to build them because by
deploying these Open Source subgraphs,
253
00:17:50,880 --> 00:17:54,000
anyone can then also earn
fees for their contributions.
254
00:17:54,000 --> 00:17:57,040
So if you're a Subgraph Developer
you can deploy on the mainnet
255
00:17:57,040 --> 00:18:00,400
and also curate on that subgraph to
earn a portion of your query fees.
256
00:18:00,400 --> 00:18:03,840
What's most exciting for end users
is that you can use an app and then
257
00:18:03,840 --> 00:18:06,480
also earn for the usage
of your favorite app.
258
00:18:06,480 --> 00:18:09,840
So any query or opening of an
application that your friend
259
00:18:09,840 --> 00:18:11,920
makes, maybe you can earn
a portion of that fee.
260
00:18:12,640 --> 00:18:16,880
Retrieving blockchain data is hard and The Graph
solves this problem in a decentralized way.
261
00:18:16,880 --> 00:18:20,400
Instead of applications needing to
rely on one server or one Indexer,
262
00:18:20,400 --> 00:18:24,800
they rely on a network of Indexers, which are
essentially servers that are willing to provide
263
00:18:24,800 --> 00:18:28,960
querying and indexing services to them at
any point in time, anywhere in the world.
264
00:18:28,960 --> 00:18:32,880
This means that end users can rely on their
applications to never be taken down and they
265
00:18:32,880 --> 00:18:35,680
can always access their
information on-chain.
266
00:18:35,680 --> 00:18:38,480
Tim Berners-Lee had a vision
for the semantic web,
267
00:18:38,480 --> 00:18:42,320
a way of organizing and structuring the data
that's on the Internet to make it much more
268
00:18:42,320 --> 00:18:47,040
easily accessible for developers or end users
and The Graph is bringing that vision to life.
269
00:18:47,040 --> 00:18:50,400
The Graph is essentially the
semantic web for all blockchain data
270
00:18:50,400 --> 00:18:53,440
and the way you access that
data is by querying a subgraph.
271
00:18:53,440 --> 00:18:56,240
So now applications and developers
don't need to build their own
272
00:18:56,240 --> 00:19:01,280
proprietary databases or Indexers, they can
rely on the Open Source code of their peers.
273
00:19:01,280 --> 00:19:03,440
This makes innovation
much more efficient,
274
00:19:03,440 --> 00:19:07,280
allows us to collaborate a lot more
and enables an open data economy.
275
00:19:07,280 --> 00:19:11,920
This launch of The Graph Explorer and the
Subgraph Studio is monumental for developers.
276
00:19:11,920 --> 00:19:13,840
This means that now they
can collaborate a lot
277
00:19:13,840 --> 00:19:16,720
more easily and don't need
to build redundant APIs,
278
00:19:16,720 --> 00:19:21,680
they can also contribute to each other's work and
be a lot more collaborative than they are today.
279
00:19:21,680 --> 00:19:24,480
This also increases the
pace of innovation. Instead
280
00:19:24,480 --> 00:19:28,320
of requiring people to work in
silos or in independent teams,
281
00:19:28,320 --> 00:19:33,200
people are incentivized now to work together
and even stake and curate on each other's work.
282
00:19:33,200 --> 00:19:36,640
DApp developers are also
incentivized to use open APIs to ensure that
283
00:19:36,640 --> 00:19:38,960
they are no longer a
central point of failure.
284
00:19:38,960 --> 00:19:40,880
No DApp wants to hear
from an end user that
285
00:19:40,880 --> 00:19:45,040
they weren't able to access the app because of
downtime or because another service provider
286
00:19:45,040 --> 00:19:49,840
took them down and now end users can be sure that
their apps and their data are accessible globally.
287
00:19:49,840 --> 00:19:52,720
This is the first time in history
that end users can participate
288
00:19:52,720 --> 00:19:56,640
in a protocol that they actually
use and also earn rewards for it.
289
00:19:56,640 --> 00:20:01,440
So you can earn rewards from curating on your
favorite subgraph for your favorite DApp and
290
00:20:01,440 --> 00:20:07,120
earn rewards that you can then restake on another
subgraph or even become an Indexer or Delegator.
291
00:20:07,120 --> 00:20:12,240
This is revolutionary, you can either earn passive
income on The Graph or you can become a full-time
292
00:20:12,240 --> 00:20:15,840
contributor, as a Delegator,
Indexer or Curator.
293
00:20:15,840 --> 00:20:17,600
You should add that
to your LinkedIn, your
294
00:20:17,600 --> 00:20:21,920
Twitter, your resume, because all of us are
contributing to the global graph of data.
295
00:20:22,640 --> 00:20:25,280
In Web2, what we currently
know as the Internet,
296
00:20:25,280 --> 00:20:28,000
there are a lot of negative-sum
and zero-sum games,
297
00:20:28,000 --> 00:20:31,920
where companies exploit their users or
other companies and develop monopolies
298
00:20:31,920 --> 00:20:34,800
and oligopolies to continue
optimizing for their bottom line.
299
00:20:35,440 --> 00:20:39,760
In Web3 with The Graph, we're incentivizing
collaboration and positive-sum games.
300
00:20:39,760 --> 00:20:43,440
So you can be an Indexer, Curator,
Delegator or Subgraph Developer.
301
00:20:43,440 --> 00:20:46,320
You can even be multiple roles at
the same time but you don't need
302
00:20:46,320 --> 00:20:49,120
to extract or exploit from
your peers to create value.
303
00:20:49,760 --> 00:20:52,480
Web3 is a revolution in ownership.
304
00:20:52,480 --> 00:20:55,280
People can now own their data,
they can own their assets
305
00:20:55,280 --> 00:20:57,360
and they can even own
their contributions.
306
00:20:57,360 --> 00:20:58,960
Anyone can contribute to The Graph,
307
00:20:58,960 --> 00:21:03,360
whether it's a few hours a week or in a
full-time capacity, there's a role for everyone.
308
00:21:03,360 --> 00:21:06,080
You can be non-technical and
a Curator and all you really
309
00:21:06,080 --> 00:21:10,240
need to know is which applications are
providing you as a user the most value.
310
00:21:10,240 --> 00:21:14,400
As a non-technical user you can also be a
Delegator and delegate the responsibility
311
00:21:14,400 --> 00:21:16,480
of indexing to another Indexer.
312
00:21:16,480 --> 00:21:19,040
You're also contributing to
the security of the network.
313
00:21:19,040 --> 00:21:22,480
Other ways you can contribute to the
ecosystem is by applying for grants.
314
00:21:22,480 --> 00:21:25,840
Whether you have an idea for a
protocol upgrade, new tooling,
315
00:21:25,840 --> 00:21:30,640
new subgraphs to deploy or DApps you'd like
to see built on Web3, we're here to help.
316
00:21:31,200 --> 00:21:34,320
Even community initiatives
such as translating content,
317
00:21:34,320 --> 00:21:39,680
guides for bringing new developers into crypto
or learning more about the open data economy.
318
00:21:39,680 --> 00:21:43,760
We're willing to fund whatever it is that
brings The Graph ecosystem to the mainstream.
319
00:21:43,760 --> 00:21:47,440
If you're a developer that knows
TypeScript, JavaScript or GraphQL,
320
00:21:47,440 --> 00:21:49,440
get involved with subgraph development.
321
00:21:49,440 --> 00:21:54,320
It can take anywhere from one week to a few weeks
to build but with a lifetime of opportunity
322
00:21:54,320 --> 00:21:57,840
a subgraph can make a huge difference in
bringing richer data to applications.
323
00:21:58,560 --> 00:22:02,560
If you're interested in getting involved in
governance of The Graph, such as upgrading the
324
00:22:02,560 --> 00:22:07,760
protocol or setting subgraph standards, contribute
to The Graph Improvement Proposal process.
325
00:22:07,760 --> 00:22:10,800
The GIP process is hosted
on Radicle and we go
326
00:22:10,800 --> 00:22:14,240
through a community Snapshot vote
to gather community sentiment.
327
00:22:14,240 --> 00:22:17,680
Then the proposal is summarized
and The Graph Council votes.
328
00:22:17,680 --> 00:22:22,160
Head to the forum to share any ideas you
have, formulate that into a proposal,
329
00:22:22,160 --> 00:22:26,320
gain buy-in from the community and you can have
an impact on the evolution of the protocol.
330
00:22:28,080 --> 00:22:31,360
We're at the beginning of a
journey to change how humans
331
00:22:31,360 --> 00:22:34,080
cooperate and organize on the Internet.
332
00:22:34,080 --> 00:22:37,920
Today's launches are a giant
step forward in that direction.
333
00:22:38,480 --> 00:22:40,080
On the web of today,
334
00:22:40,080 --> 00:22:44,640
the software and information is controlled
by whoever runs the server and that means
335
00:22:44,640 --> 00:22:50,400
that corporations and even governments end up
making decisions around what information is
336
00:22:50,400 --> 00:22:56,080
considered to be true, but there's just no way for
a small group of people to have all the answers
337
00:22:56,080 --> 00:23:01,600
and so it's really important that we open
up the systems that we use to coordinate and
338
00:23:01,600 --> 00:23:07,040
make decisions, to make sure that we're able to
leverage the sum total of all of the knowledge,
339
00:23:07,040 --> 00:23:10,080
heart and expertise that
people have to offer.
340
00:23:10,080 --> 00:23:12,720
With The Graph we
have the foundation
341
00:23:12,720 --> 00:23:18,720
to help organize and serve all of the world's
public information in a decentralized way.
342
00:23:18,720 --> 00:23:23,840
It starts with a set of protocols
that define different roles and ways
343
00:23:23,840 --> 00:23:28,000
that people can interact,
all in a verifiable way.
344
00:23:28,000 --> 00:23:32,960
Once that data exists on a blockchain,
a storage network or any crypto network,
345
00:23:32,960 --> 00:23:38,480
it can be indexed on The Graph and surfaced to
end users in a way that they can trust.
346
00:23:38,480 --> 00:23:42,880
Decentralized protocols create the
right kind of incentives and structure,
347
00:23:43,760 --> 00:23:49,040
so that people can work towards a common
goal in a way that was never before possible.
348
00:23:49,040 --> 00:23:52,720
We believe that protocols are
the successor to the corporation
349
00:23:52,720 --> 00:23:56,480
as the primary means of
scaling human coordination.
350
00:23:56,480 --> 00:24:00,800
There are people in all areas of the
economy that have deep insight on some
351
00:24:00,800 --> 00:24:07,280
area of specialization or knowledge and
today, the main way that they're able to
352
00:24:07,840 --> 00:24:10,160
monetize and contribute their knowledge
353
00:24:10,160 --> 00:24:13,840
is through proprietary
contributions to corporations.
354
00:24:14,400 --> 00:24:20,000
But we believe that most of this knowledge and
information should actually be public goods.
355
00:24:20,560 --> 00:24:25,520
That information wants to be free
and that people want to contribute
356
00:24:25,520 --> 00:24:29,840
their skills and their knowledge
to humanity as a whole.
357
00:24:29,840 --> 00:24:32,240
Everybody's got something to contribute
358
00:24:32,240 --> 00:24:38,560
and the way we've structured society today is
very limiting in giving people opportunity, but
359
00:24:38,560 --> 00:24:43,840
on top of these decentralized networks and
protocols we're opening the space back up.
360
00:24:43,840 --> 00:24:48,240
So that everyone can find their passion,
what they want to do for work and
361
00:24:48,240 --> 00:24:54,240
contribute their best selves to this global
infrastructure that we can all use together.
362
00:24:54,240 --> 00:24:56,560
Thank you so much for being
on this journey with us.
363
00:24:56,560 --> 00:25:00,640
It's been truly remarkable to see
this whole community come together.
364
00:25:00,640 --> 00:25:02,880
You know, from Subgraph Developers,
365
00:25:03,440 --> 00:25:09,520
building incredible subgraphs in every area of
the crypto economy, people starting podcasts,
366
00:25:09,520 --> 00:25:14,720
really sharing interesting stats and helping
to onboard the next generation of Delegators
367
00:25:14,720 --> 00:25:20,320
and Curators, people have really made this
their own and now there's a whole new set of
368
00:25:20,320 --> 00:25:23,520
tools that we can use to take
this community to the next level.
369
00:25:23,520 --> 00:25:26,640
With the products launched
today, we now have everything
370
00:25:26,640 --> 00:25:28,880
we need to really bring
The Graph to life.
371
00:25:28,880 --> 00:25:41,840
So let's do it, let's build
that decentralized future.
372
00:26:03,360 --> 00:26:06,960
Hi, I'm Rodrigo with Edge & Node
and I know a lot of you have
373
00:26:06,960 --> 00:26:09,840
questions you've been meaning to ask
and we're here to answer them for you.
374
00:26:10,800 --> 00:26:12,400
So let's just get right into it.
375
00:26:12,400 --> 00:26:17,360
So this launch is a huge event
in the history of The Graph and
376
00:26:17,360 --> 00:26:23,840
what does this mean to you personally
and to the future of decentralization?
377
00:26:24,720 --> 00:26:30,080
To me personally, it means we've built one of
the most complex DApps that exists in Web3,
378
00:26:30,080 --> 00:26:32,640
that allows many different kinds
of participants to interact.
379
00:26:32,640 --> 00:26:36,960
So, whether you're an Indexer, Delegator or
Curator, you can all use The Graph Explorer DApp
380
00:26:37,600 --> 00:26:40,000
to do whatever needs to
be done for the network.
381
00:26:41,360 --> 00:26:44,400
Yeah, I think we've had the
vision for a lot of this stuff,
382
00:26:44,400 --> 00:26:51,040
really just for years and so actually
seeing it live is just incredible and
383
00:26:52,880 --> 00:26:59,200
for me it's really about empowering people to
contribute to this thing that is a larger whole.
384
00:26:59,200 --> 00:27:05,040
These public goods that are part of this global
public infrastructure and I think the products
385
00:27:05,040 --> 00:27:12,480
themselves, took a lot of time to kind of get to
where they are today but finally, we feel that
386
00:27:12,480 --> 00:27:17,040
it's really ready and I think it's just a giant
step forward for the entire Ethereum ecosystem.
387
00:27:17,840 --> 00:27:22,160
Absolutely, it's been four years in the
making and it's so amazing just to see
388
00:27:22,160 --> 00:27:26,560
the products go live today and I know
there's so much more that's coming up
389
00:27:26,560 --> 00:27:30,880
but I just feel like this is such
a tremendous milestone, for not just
390
00:27:30,880 --> 00:27:34,160
people within The Graph ecosystem but
really for the future of the Internet.
391
00:27:35,200 --> 00:27:41,920
So what is it that people can do
today that they couldn't do before?
392
00:27:43,040 --> 00:27:46,720
The first thing that they can do
with this new launch is curation.
393
00:27:46,720 --> 00:27:52,000
So, curating data is a critical part
of Web2, as well as Web3, because it
394
00:27:52,000 --> 00:27:55,760
allows people to know the quality and
accuracy of whatever they're curating.
395
00:27:55,760 --> 00:27:59,840
So our Curators have been with us for
several months since the Curator Program
396
00:27:59,840 --> 00:28:03,040
in 2020 and we're excited for them to be
able to now participate in the network.
397
00:28:04,080 --> 00:28:08,640
Yeah, we're really pioneering, like a
brand new way of delivering a service.
398
00:28:08,640 --> 00:28:12,880
People are used to SaaS products which
have very standard ways of operating.
399
00:28:12,880 --> 00:28:18,000
You know, you add a credit card, you maybe
get charged a monthly recurring fee and
400
00:28:18,000 --> 00:28:20,960
The Graph is a decentralized network
and so it just works differently.
401
00:28:22,000 --> 00:28:29,200
Being able to deposit funds on Ethereum to pay
for query fees for example, is really a new thing
402
00:28:29,840 --> 00:28:35,120
but I think we've made that a really smooth
experience and so it's just as easy for developers
403
00:28:35,120 --> 00:28:39,920
to get started and to pay for this service where
you're just paying for the metered usage
404
00:28:39,920 --> 00:28:45,920
of the services that you're consuming but it's
running on global open public infrastructure.
405
00:28:45,920 --> 00:28:51,920
Yeah and starting today, developers of subgraphs
can actually migrate those subgraphs over to the
406
00:28:51,920 --> 00:28:56,480
decentralized network permissionlessly, so
they don't need to interact with anyone from
407
00:28:56,480 --> 00:29:01,440
the team to ask permission for them to be able
to do that and I think that's such a powerful
408
00:29:01,440 --> 00:29:05,200
thing and I think it's going to
lead to this explosive innovation on
409
00:29:05,200 --> 00:29:07,760
The Graph Network beyond
just the hosted service.
410
00:29:09,040 --> 00:29:14,240
Yeah and on a personal level I'm sure the team is
really excited to no longer have to be responsible
411
00:29:14,240 --> 00:29:18,000
for operating all of this
giant infrastructure.
412
00:29:18,000 --> 00:29:21,360
I think it's a lot of stress
for any team to have to
413
00:29:21,360 --> 00:29:26,960
try to maintain 100% uptime and with decentralized
networks you just don't have to do that.
414
00:29:26,960 --> 00:29:31,520
It's not any one team's burden to like carry
the infrastructure for the entire space.
415
00:29:32,560 --> 00:29:36,080
That was something that really stuck with me
when I first met you and joined The Graph,
416
00:29:36,080 --> 00:29:39,120
was the vision has always been
to decentralize and so we recognize
417
00:29:39,120 --> 00:29:41,600
that the hosted service was
never the long-term solution
418
00:29:41,600 --> 00:29:44,640
and we were part of that problem until
we actually launched the network itself.
419
00:29:45,200 --> 00:29:49,280
Yeah that's right, the hosted service
was always just a stepping stone
420
00:29:49,280 --> 00:29:55,280
to allow us to really prove out the
developer tools, the subgraphs themselves,
421
00:29:55,280 --> 00:29:59,440
make sure the developers could start
building but we've been working towards
422
00:29:59,440 --> 00:30:04,400
the decentralized network and now towards this
fuller kind of vision from the very beginning.
423
00:30:05,040 --> 00:30:10,720
Can you talk about some of the unique challenges
of building something in a decentralized
424
00:30:10,720 --> 00:30:13,840
way, as like a protocol,
versus building it in a
425
00:30:13,840 --> 00:30:16,800
centralized way, as like
a firm or a company would?
426
00:30:17,920 --> 00:30:24,720
Yeah, I mean I think it starts with like the
economic design, coming up with the incentive
427
00:30:24,720 --> 00:30:30,320
structures for all of the different roles
and doing that in a way that's robust to
428
00:30:32,480 --> 00:30:36,320
erratic behavior because you really
can't predict what people are going to do
429
00:30:36,320 --> 00:30:39,280
and these systems, they tend
to become really complex
430
00:30:40,960 --> 00:30:45,680
when they're kind of out of your control that
way and so that's kind of the first step.
431
00:30:47,600 --> 00:30:50,640
Another thing in this network
for example is security.
432
00:30:51,200 --> 00:30:54,160
So making sure that you
can verify that either
433
00:30:54,800 --> 00:30:59,360
the indexing work was done correctly or
that a query response is done correctly
434
00:30:59,360 --> 00:31:04,000
and be able to launch a dispute
and have that dispute be resolved,
435
00:31:04,000 --> 00:31:07,520
if an Indexer for example
is acting maliciously.
436
00:31:07,520 --> 00:31:10,400
So there are all of these types
of things that you have to do
437
00:31:11,440 --> 00:31:15,440
to create a network that's secure,
that's fast and performant.
438
00:31:16,400 --> 00:31:17,200
That's another thing.
439
00:31:17,200 --> 00:31:20,720
When you have all of the
servers just run and controlled
440
00:31:20,720 --> 00:31:27,040
by a single company, it's easy to
build something that is fast to start
441
00:31:27,040 --> 00:31:31,840
but actually, decentralized networks can build
something that's a lot more performant at scale
442
00:31:32,480 --> 00:31:37,840
because you have so many different entities
that can independently add servers,
443
00:31:38,560 --> 00:31:43,200
optimize things in a way that like central
top-down coordination can't really do.
444
00:31:43,200 --> 00:31:48,480
And so it's that social scalability of
decentralized networks that actually can
445
00:31:48,480 --> 00:31:53,600
allow them to be even faster and more
reliable than a centralized service.
446
00:31:53,600 --> 00:31:57,680
Yeah, I would add that a company is
very limited in what it allows people
447
00:31:57,680 --> 00:32:00,960
to do and the ownership that
they can have in their impact,
448
00:32:00,960 --> 00:32:04,240
whereas a protocol allows a number
of companies to come together
449
00:32:04,240 --> 00:32:08,320
around a shared mission and have their
own hero's journey in that ecosystem.
450
00:32:08,320 --> 00:32:11,760
And we're really trying to drive towards that
goal where we have Edge & Node, as well as
451
00:32:11,760 --> 00:32:16,720
StreamingFast, The Graph Foundation, Protofire,
other organizations that are all contributing what
452
00:32:16,720 --> 00:32:22,080
they can to this mission and that makes a much
larger positive-sum game than we've had in Web2.
453
00:32:23,440 --> 00:32:24,880
Yeah, that's so right.
454
00:32:24,880 --> 00:32:29,840
There's the technological decentralization,
making sure that the servers can be run by
455
00:32:29,840 --> 00:32:34,880
anybody, that it's permissionless, and then there's
the organizational decentralization and all of the
456
00:32:34,880 --> 00:32:40,640
time investments that it takes to actually scale
an ecosystem of independent companies and teams
457
00:32:40,640 --> 00:32:47,360
and participants that can all contribute
while still working towards a shared mission.
458
00:32:47,360 --> 00:32:50,080
I would say, instead of
all of it funneling up to
459
00:32:50,080 --> 00:32:54,560
one centralized company, now people are
able to be compensated in protocols,
460
00:32:54,560 --> 00:32:57,760
commensurate with the value that they're
bringing because there's no intermediary
461
00:32:57,760 --> 00:33:03,040
that's intercepting those funds and I think in
protocols, like I believe that tokens are the
462
00:33:03,040 --> 00:33:08,880
next evolution of the business model away from
SaaS but within protocols what you need to ensure
463
00:33:08,880 --> 00:33:13,840
is that you have revenue that, instead
of flowing to the top to a centralized company,
464
00:33:13,840 --> 00:33:18,960
flows peer-to-peer to the participants
in that ecosystem and that is just so powerful.
465
00:33:19,600 --> 00:33:23,840
Where do you think we are in terms of
this progressive decentralization of
466
00:33:24,800 --> 00:33:26,800
moving from Web2 to Web3?
467
00:33:26,800 --> 00:33:30,400
So, the infrastructure is still
being built out right now,
468
00:33:31,200 --> 00:33:36,320
we are seeing applications already getting
built on top that are pretty great.
469
00:33:36,320 --> 00:33:40,800
So in DeFi and NFTs and governance I
think we're seeing applications that
470
00:33:40,800 --> 00:33:45,520
already work and are better than what
you could do before crypto and Web3,
471
00:33:46,240 --> 00:33:51,840
but we're still building out this Web3 platform
and so I think that like the real kind of
472
00:33:53,520 --> 00:33:56,160
societal transformation
hasn't even started yet.
473
00:33:57,680 --> 00:34:00,480
This year, with
The Graph Network and other
474
00:34:00,480 --> 00:34:06,880
crypto protocols really maturing, we're
starting to really get to a more mature
475
00:34:06,880 --> 00:34:12,240
place with this foundation but I think
we've got a really long way to go.
476
00:34:12,240 --> 00:34:18,560
First, to even really define how to build a DApp
in a way that is really easy for developers
477
00:34:18,560 --> 00:34:25,840
to do and kind of defining that full protocol
stack and also coming up with the new types of
478
00:34:25,840 --> 00:34:28,640
interactions that users
are going to have to learn.
479
00:34:29,760 --> 00:34:35,360
What it means to have your self-sovereign
identity and reputation, to be able to take
480
00:34:35,360 --> 00:34:40,800
your data with you between different applications
and to kind of build out this full Web3 vision.
481
00:34:41,840 --> 00:34:47,280
And I think that we are seeing a mass
exodus of talent from Web2 into Web3.
482
00:34:47,280 --> 00:34:52,240
So many people have already left and more
are looking at how they can get involved in
483
00:34:52,240 --> 00:34:56,720
the blockchain space and I think that's going to
lead to a lot more innovation a lot more quickly.
484
00:34:58,480 --> 00:35:02,000
There’s that old saying, follow
where the developers go, right?
485
00:35:02,000 --> 00:35:05,280
So they kind of are ahead of
the curve, of what's coming.
486
00:35:05,280 --> 00:35:08,640
Totally, yeah and like what they
are doing on nights and weekends.
487
00:35:09,440 --> 00:35:13,600
There's so many Devs, that
are still maybe clocking
488
00:35:13,600 --> 00:35:16,240
in the day jobs but
they're not thinking about
489
00:35:16,960 --> 00:35:21,040
how to scale these web companies
on the weekends anymore.
490
00:35:21,040 --> 00:35:26,400
That stuff is played out, they've seen where
it leads and really the excitement, the stuff
491
00:35:26,400 --> 00:35:31,600
that they're doing in their spare time is
building on Ethereum, building in crypto.
492
00:35:31,600 --> 00:35:38,720
And we're already, like Tegan was saying,
seeing just a tremendous wave of Devs actually
493
00:35:38,720 --> 00:35:40,640
quitting their jobs and
jumping in full time.
494
00:35:41,200 --> 00:35:44,480
There was some recent news about
an expansion of the core dev team.
495
00:35:44,480 --> 00:35:46,320
Is there anything you guys
wanted to mention about that?
496
00:35:46,880 --> 00:35:50,880
Yeah, so we're really excited to share
that the core dev team that works on
497
00:35:50,880 --> 00:35:52,640
The Graph Protocol has
been decentralized.
498
00:35:53,200 --> 00:35:57,120
Now we have Edge & Node, as well as StreamingFast
and are looking for more contributors
499
00:35:57,120 --> 00:36:01,280
to help upgrade the protocol and continue
maintaining it in perpetuity.
500
00:36:01,280 --> 00:36:04,320
Our current governance is a
6-10 multisig where we have five
501
00:36:04,320 --> 00:36:06,480
different stakeholder
groups represented and so
502
00:36:06,480 --> 00:36:11,280
part of our decentralized governance journey is to
progressively decentralize those decision makers,
503
00:36:11,280 --> 00:36:14,560
which we look to do with more core devs and
contributors entering the space.
504
00:36:16,080 --> 00:36:19,120
That's right, it makes a huge
difference to have multiple
505
00:36:19,120 --> 00:36:21,840
teams actually building
the core software.
506
00:36:22,400 --> 00:36:25,440
It really allows different
voices to be considered and
507
00:36:25,440 --> 00:36:28,160
helps with that true kind
of decentralization.
508
00:36:28,160 --> 00:36:31,920
There's also a tremendous
amount of work ahead of us.
509
00:36:31,920 --> 00:36:34,400
For example, with our multi
blockchain expansion where
510
00:36:34,400 --> 00:36:36,000
there are all of these
different networks
511
00:36:36,000 --> 00:36:42,080
that we want to support on The Graph and so having
additional engineers that have great expertise on
512
00:36:42,080 --> 00:36:45,600
those different chains is just going to allow
us as a community to get there a lot faster.
513
00:36:46,800 --> 00:36:51,120
The most shocking thing to me about working for a
protocol is just how much there is to do and it's
514
00:36:51,120 --> 00:36:55,680
really not possible for it to be done with one
team, with one subject matter expertise.
515
00:36:55,680 --> 00:36:59,200
And so it really is a journey
for the masses to come on board
516
00:36:59,200 --> 00:37:02,880
and for us to build this Graph of data that
can actually be scalable for the world.
517
00:37:03,440 --> 00:37:07,280
Yeah, and for those that
are listening that
518
00:37:07,280 --> 00:37:14,400
don't know, so StreamingFast recently joined The
Graph ecosystem as a core developer team and they
519
00:37:14,400 --> 00:37:18,880
were previously building in a centralized way
and joined The Graph ecosystem focusing on The
520
00:37:18,880 --> 00:37:24,720
Graph mission, which I think just speaks to the
power of protocols and us all aligning together.
521
00:37:24,720 --> 00:37:26,720
We were on similar paths before
522
00:37:26,720 --> 00:37:31,360
and now we all have the same path in front of us
and I think that's really exciting and powerful.
523
00:37:32,560 --> 00:37:39,440
Maybe for the less familiar out there, what
is a protocol versus a traditional company?
524
00:37:39,440 --> 00:37:43,920
So a protocol is an Internet
native way of coordinating
525
00:37:43,920 --> 00:37:49,840
participants in an open system and so the key
differences between say a protocol and a company
526
00:37:50,560 --> 00:37:53,920
is that a company has a
specific set of owners,
527
00:37:55,040 --> 00:37:58,000
they're typically hierarchical in
their organizational structure.
528
00:37:59,120 --> 00:38:02,480
In order to contribute to a
company you typically have to
529
00:38:02,480 --> 00:38:06,000
get a job as an employee and all of these
types of things add a lot of friction.
530
00:38:06,640 --> 00:38:13,120
Whereas with protocols you define a set of
rules that govern some kind of ecosystem and
531
00:38:13,120 --> 00:38:17,840
there are different roles that people can
play to achieve some kind of common goal
532
00:38:18,560 --> 00:38:25,520
and those rules are open and transparent and so
anybody can participate and there's a native way
533
00:38:25,520 --> 00:38:30,480
of then transferring value and making
decisions and evolving these systems over time.
534
00:38:31,040 --> 00:38:36,480
But because they're open, transparent
and have these types of properties
535
00:38:36,480 --> 00:38:42,400
they can scale to be way bigger than
what a single company is able to do
536
00:38:42,400 --> 00:38:48,160
and so in the future we think that there's
going to be a protocol for every mission.
537
00:38:48,160 --> 00:38:51,760
Any kind of goal that you can
have, a protocol is going to
538
00:38:51,760 --> 00:38:55,120
be the best way to align
people to achieve that goal.
539
00:38:55,120 --> 00:38:58,720
Yeah, and what's really powerful is
now with the advent of the Internet
540
00:38:58,720 --> 00:39:03,600
you no longer need companies that are thousands of
people wide and instead you can have small teams
541
00:39:03,600 --> 00:39:05,360
that have communities
that are millions,
542
00:39:05,360 --> 00:39:09,920
if not billions of people wide and I think
that's what these protocols will grow into.
543
00:39:10,960 --> 00:39:14,640
Companies also incentivize their
contributors very differently than protocols.
544
00:39:14,640 --> 00:39:19,760
Whereas they're maximizing for their bottom
line growth and maybe a specific revenue target,
545
00:39:19,760 --> 00:39:23,680
with protocols it's contributors that
are putting in work and then receiving
546
00:39:23,680 --> 00:39:26,160
a reward commensurate to the
effort that they're putting in.
547
00:39:26,160 --> 00:39:29,360
And that could include putting their capital
to work, it could be running a machine,
548
00:39:29,360 --> 00:39:34,080
but it doesn't need to be driven by some
shareholder objective from a centralized company.
549
00:39:34,800 --> 00:39:40,320
Yeah and I think that shift from shareholder
capitalism to stakeholder capitalism is a
550
00:39:40,320 --> 00:39:44,160
really big shift because the interests
of the shareholders are sometimes but
551
00:39:44,160 --> 00:39:48,720
not always aligned with the interests of
the different participants in an ecosystem.
552
00:39:48,720 --> 00:39:53,840
And so it's actually better if you can have
a governance process that is for the entire
553
00:39:53,840 --> 00:39:58,480
ecosystem with equal representation between
the different stakeholder groups to make sure
554
00:39:58,480 --> 00:40:02,560
that the system is continuing to
serve all of the participants.
555
00:40:02,560 --> 00:40:06,560
I think all of us have worked at one time or
another in our careers at companies so,
556
00:40:07,120 --> 00:40:12,080
could you maybe speak personally about your
experience of the difference of what it was
557
00:40:12,080 --> 00:40:15,920
like working at a traditional company versus
working in the environment that we work in?
558
00:40:15,920 --> 00:40:18,160
It's remote, it's coordinating
lots of different people,
559
00:40:18,720 --> 00:40:20,000
the unique challenges there.
560
00:40:20,560 --> 00:40:24,480
Yeah, well we still work at a company,
at Edge & Node which is like a
561
00:40:24,480 --> 00:40:27,200
very kind of high
trust kind of environment.
562
00:40:27,200 --> 00:40:31,280
We complement each
other really well and
563
00:40:31,280 --> 00:40:36,080
that I think works great at a small scale
but Eva can speak a lot to what it's like,
564
00:40:38,400 --> 00:40:42,000
as a director of The Graph Foundation,
overseeing like a much larger ecosystem.
565
00:40:42,560 --> 00:40:46,720
Yeah, I'd say the biggest difference for
me is that in a company you're coordinating
566
00:40:46,720 --> 00:40:50,880
intra-company, so between
teams, maybe trusted colleagues.
567
00:40:50,880 --> 00:40:56,000
With a protocol your job is to facilitate as
much efficiency and innovation between teams
568
00:40:56,000 --> 00:40:58,960
that technically are aligned in the same
mission but have different interests and
569
00:40:58,960 --> 00:41:03,840
also different expertise and so there's a bit
of a larger coordination overhead.
570
00:41:03,840 --> 00:41:06,960
But that's exactly why we're building
on decentralized infrastructure because
571
00:41:08,000 --> 00:41:11,600
things like DAOs, decentralized
autonomous organizations or multi-sigs
572
00:41:11,600 --> 00:41:14,240
allow for that to be much
easier for us to execute.
573
00:41:14,240 --> 00:41:15,760
How do people come to consensus?
574
00:41:15,760 --> 00:41:20,720
How are disagreements handled when
you're dealing with disparate opinions from people who
575
00:41:20,720 --> 00:41:24,560
aren't necessarily in the same company?
576
00:41:25,280 --> 00:41:28,720
Yeah, so at The Graph Foundation
we have two sets of governance.
577
00:41:28,720 --> 00:41:32,000
One is around grants and
funding ecosystem projects.
578
00:41:32,000 --> 00:41:35,840
So we go to The Graph Council and make
proposals about what are the most interesting
579
00:41:35,840 --> 00:41:39,040
projects that we should be working on
and which grants we should be funding.
580
00:41:39,040 --> 00:41:42,880
Separately, The Graph Council
oversees Graph Protocol decisions,
581
00:41:42,880 --> 00:41:47,680
so anything related to a Graph Improvement
Proposal or an upgrade would then be passed by
582
00:41:47,680 --> 00:41:52,960
The Graph Council, after receiving sentiment from
our trusted community of Delegators and Indexers.
583
00:41:52,960 --> 00:41:58,000
Yeah and a lot of this comes down
to enfranchising people in a process.
584
00:41:58,000 --> 00:42:02,240
When they know that their voice
can be heard then they participate
585
00:42:02,240 --> 00:42:04,560
and the best ideas can
kind of rise to the top.
586
00:42:04,560 --> 00:42:08,320
So for example, in The Graph
a lot of the early discussions
587
00:42:08,320 --> 00:42:11,600
might happen in like a Discord
which is a public chat.
588
00:42:12,640 --> 00:42:15,760
Ideas that people want to move
forward with often then move into
589
00:42:15,760 --> 00:42:21,760
a forum, where again you can have very substantive
discussions and I think it’s very easy for
590
00:42:23,680 --> 00:42:27,840
people that are engaging in good faith
to really be able to kind of evaluate
591
00:42:27,840 --> 00:42:33,120
what are the higher quality arguments versus
others when the stuff is kind of done in public.
592
00:42:33,120 --> 00:42:35,840
So it's that ability for
these people to participate
593
00:42:36,480 --> 00:42:40,960
that allows the best ideas to kind of
rise to the top and then ultimately,
594
00:42:43,200 --> 00:42:45,840
just because you have protocols,
it doesn't mean you have leaders.
595
00:42:45,840 --> 00:42:49,440
Now the nice thing is like within The Graph
ecosystem there are leaders all over the
596
00:42:49,440 --> 00:42:54,560
world that work for like different companies but
that have kind of like risen in stature in the
597
00:42:54,560 --> 00:43:00,320
community because they're engaged, they're
thoughtful, they're helping others and so
598
00:43:00,320 --> 00:43:05,280
that also kind of allows the community
to more quickly get to consensus.
599
00:43:05,840 --> 00:43:10,000
So there may be some people that aren't
familiar with Web3 and the terminology
600
00:43:10,000 --> 00:43:15,600
that we throw around a bit, so could you explain
to us, what is Web3 and why is it so important?
601
00:43:15,600 --> 00:43:19,360
Yeah, so Web3, for me is
the decentralized web,
602
00:43:19,360 --> 00:43:22,560
it's everything that exists
within this decentralized space.
603
00:43:22,560 --> 00:43:26,720
So you hear of NFTs, you hear
of DAOs, you hear of DeFi,
604
00:43:26,720 --> 00:43:30,080
all of that is underneath
this big Web3 umbrella.
605
00:43:31,040 --> 00:43:33,440
Yeah. Well okay, so why
are NFTs important?
606
00:43:34,400 --> 00:43:37,600
So I'm really excited about
NFTs because it's kind
607
00:43:37,600 --> 00:43:43,920
of brought this funding into a new community
that maybe has been previously left out.
608
00:43:43,920 --> 00:43:47,440
So for those listening that don't
know, NFTs, it's digital art.
609
00:43:47,440 --> 00:43:52,640
And so many artists and creators were
actually compensated a lot for the work
610
00:43:52,640 --> 00:43:56,160
they've been building and I think
that's really powerful because
611
00:43:56,160 --> 00:44:02,880
creators and artists have previously maybe been
left out of the financial system and financial inclusion.
612
00:44:02,880 --> 00:44:07,120
And so that to me was just really
exciting to see, a lot of the funding flow
613
00:44:07,120 --> 00:44:11,760
from DeFi to NFTs and I think also
within NFTs, what's exciting is that
614
00:44:12,400 --> 00:44:15,680
you can own this version of your
art and you know it's yours.
615
00:44:16,240 --> 00:44:22,880
Also that artists can receive a piece
of the sale, the secondary market sale.
616
00:44:22,880 --> 00:44:27,360
So for example what happens in traditional
art markets is that you'll, as an artist,
617
00:44:27,360 --> 00:44:32,240
sell a painting for maybe ten thousand dollars and
then the art collector sells that for 10 million
618
00:44:32,240 --> 00:44:36,160
and you don't get any of that money
and within the NFT space you actually
619
00:44:36,160 --> 00:44:39,280
can collect a portion of that
because it's within that code.
620
00:44:39,280 --> 00:44:41,600
You spoke about DAOs, why are DAOs important?
621
00:44:44,560 --> 00:44:50,160
DAOs are a revolutionary tool set and
framework for how humans can coordinate.
622
00:44:50,160 --> 00:44:55,360
Now, collaborative coordination mechanisms
like co-ops have existed for a while or
623
00:44:55,360 --> 00:45:01,040
credit unions, but what DAOs allow us to do
is automate most of that and enable much more
624
00:45:01,040 --> 00:45:06,160
anonymous or independent contributions
than we've been able to do in the past.
625
00:45:06,160 --> 00:45:10,480
So some of the top DAOs in the Web3
space either have fully anonymous
626
00:45:10,480 --> 00:45:15,840
members or are able to maintain privacy while
also contributing very highly to their space.
627
00:45:15,840 --> 00:45:20,080
Yaniv, Tegan, can you tell us
about why DeFi is important?
628
00:45:20,800 --> 00:45:22,720
So DeFi was really exciting
629
00:45:22,720 --> 00:45:27,760
for me because we are able to see things that
can't happen in the traditional finance space,
630
00:45:27,760 --> 00:45:33,280
happen within DeFi and the moment that really lit
that up for me was when the flash loans happened.
631
00:45:33,280 --> 00:45:37,040
That is not something that can happen
in the traditional space and so
632
00:45:37,040 --> 00:45:41,200
you see all this innovation and the moment
you see that one light of innovation
633
00:45:41,200 --> 00:45:45,360
you're likely going to see a waterfall
effect of innovation afterwards
634
00:45:45,360 --> 00:45:47,680
and that's really what we
saw within the DeFi space.
635
00:45:48,320 --> 00:45:52,640
Decentralized finance is important because a
lot of people have previously been left out
636
00:45:52,640 --> 00:45:56,320
of the traditional financial
system with credit scores,
637
00:45:56,320 --> 00:46:02,240
a lot of people aren't able to access capital
and the DeFi space breaks that open for everyone
638
00:46:02,240 --> 00:46:08,800
to be able to access this space and it's really
exciting for me coming from traditional finance.
639
00:46:10,480 --> 00:46:15,600
All right, so you told us about what's
been going on, the product launch.
640
00:46:16,320 --> 00:46:19,760
So how about you tell us
a bit about what's next?
641
00:46:20,480 --> 00:46:25,200
I'm personally really excited for us to welcome
this new set of stakeholders, Curators, seeing the
642
00:46:25,200 --> 00:46:29,120
subgraphs live on the network and that means that
we can actually do a lot more interesting things
643
00:46:29,120 --> 00:46:32,720
with governance, with different stakeholder
groups that have very different interests.
644
00:46:32,720 --> 00:46:35,760
So we're going to see a lot more
products and exciting announcements
645
00:46:35,760 --> 00:46:38,240
coming up from The Graph over the
next several months related to that.
646
00:46:39,200 --> 00:46:43,760
Yeah, so now that we have all of these
tools, we just need to organize all the data.
647
00:46:44,560 --> 00:46:47,680
It's going to start with all
of the data that's already
648
00:46:47,680 --> 00:46:50,320
on crypto networks,
every different blockchain
649
00:46:50,320 --> 00:46:55,840
and storage network and then from there just
all of humanity's knowledge and information.
650
00:46:56,960 --> 00:47:00,640
Yeah, I'm really looking forward to seeing the
subgraphs migrate over to the decentralized
651
00:47:00,640 --> 00:47:05,280
network but I'm also excited to see a new
segment of developers spin up their first
652
00:47:05,280 --> 00:47:08,640
subgraph directly on The
Graph's decentralized network
653
00:47:08,640 --> 00:47:12,000
and like Yaniv said, organizing
all of the world's information.
654
00:47:12,000 --> 00:47:15,280
I believe blockchains are the future
of the Internet and I think The Graph
655
00:47:15,280 --> 00:47:18,400
is core infrastructure that's
paving the way for that future.
656
00:47:19,520 --> 00:47:21,120
Thanks for your time today.
657
00:47:21,120 --> 00:47:22,720
We really enjoyed
everything you had to say
658
00:47:22,720 --> 00:47:24,960
and we're really looking
forward to what's coming next.
659
00:47:25,760 --> 00:47:30,160
And thanks to everyone who tuned in today,
come and visit us at thegraph.com,
660
00:47:31,120 --> 00:47:35,760
visit our Discord or Telegram and learn
about how you can become a Delegator,
661
00:47:35,760 --> 00:47:45,200
an Indexer or Curator and help participate
in building this decentralized ecosystem.
1
00:00:00,560 --> 00:00:07,467
Data powers the world around us. It's how we're able to
communicate, trade, understand what's happening and make decisions.
2
00:00:08,000 --> 00:00:12,334
On The Graph, data is grouped into open APIs called subgraphs.
3
00:00:12,500 --> 00:00:18,834
Anyone can access data from subgraphs with just a few
keystrokes, using a convenient query language called GraphQL.
4
00:00:19,800 --> 00:00:23,040
As data keeps getting added onto The Graph, individuals need
5
00:00:23,040 --> 00:00:29,434
to organize and surface the most relevant data
in a decentralized way. That's where Curators come in.
6
00:00:30,434 --> 00:00:37,400
Curators signal on quality subgraphs by depositing GRT,
The Graph's native token, in return for curation shares.
7
00:00:38,034 --> 00:00:44,234
Curation shares are minted on a bonding curve, which means
that the earlier you signal, the more shares you get.
8
00:00:44,534 --> 00:00:51,100
It also means that when you go to withdraw your GRT, you could
end up with more or less Graph tokens than you started with.
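To make the bonding curve mechanics concrete, here is a minimal JavaScript sketch, assuming a simple linear price curve; the formula and numbers are illustrative only, not The Graph's actual curve parameters.

// Illustrative bonding curve: the price of the next share rises
// as more shares are minted. Hypothetical formula for demonstration.
function sharePrice(totalShares) {
  return 1 + 0.001 * totalShares; // price in GRT of the next share
}

// Buy shares one at a time until the deposited GRT runs out.
function mintShares(totalShares, grtDeposited) {
  let shares = 0;
  while (grtDeposited >= sharePrice(totalShares + shares)) {
    grtDeposited -= sharePrice(totalShares + shares);
    shares += 1;
  }
  return shares;
}

console.log(mintShares(0, 100));    // early Curator: 95 shares
console.log(mintShares(5000, 100)); // later Curator: 16 shares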
9
00:00:51,600 --> 00:00:58,334
When Curators signal on a subgraph, that creates a reward
to incentivize Indexers to index that subgraph.
10
00:00:58,640 --> 00:01:03,234
For their work, Curators earn a cut of query fees
on the subgraphs they're curating.
11
00:01:03,800 --> 00:01:09,680
By accurately assessing which subgraphs are most
valuable and likely to produce query fees, Curators are able
12
00:01:09,680 --> 00:01:15,134
to efficiently allocate resources on the network
and help organize the data for the crypto economy.
1
00:00:00,320 --> 00:00:04,880
In this video you'll learn how to query a
subgraph from a front-end web application.
2
00:00:05,520 --> 00:00:08,240
While what we're covering will
show you how to query from web,
3
00:00:08,240 --> 00:00:11,680
as well as JavaScript mobile
applications, like React Native,
4
00:00:11,680 --> 00:00:16,160
the techniques and ideas that we'll be
covering can also be applied to native iOS,
5
00:00:16,160 --> 00:00:18,240
Android and Flutter
applications.
6
00:00:18,960 --> 00:00:22,160
There are various GraphQL clients
available in the ecosystem
7
00:00:22,160 --> 00:00:24,000
and I'll be going over two of them today.
8
00:00:24,880 --> 00:00:28,480
The Apollo client is one of the more
mature libraries in the ecosystem
9
00:00:28,480 --> 00:00:32,320
and supports both web, as well
as native iOS and native Android.
10
00:00:33,600 --> 00:00:36,367
URQL is somewhat newer
and also more lightweight.
11
00:00:36,634 --> 00:00:39,434
It supports web applications,
as well as React Native.
12
00:00:40,200 --> 00:00:41,500
This is what we'll be using today.
13
00:00:42,800 --> 00:00:46,700
Many applications in the Web3 space
use The Graph as their API layer.
14
00:00:47,040 --> 00:00:52,720
For instance, Uniswap uses The Graph for
analytics, historical and token data and metadata.
15
00:00:54,000 --> 00:00:56,880
If we open the network tab,
inspect the network traffic
16
00:00:56,880 --> 00:00:59,920
and refresh the page, we
see that API calls are being
17
00:00:59,920 --> 00:01:02,800
made to The Graph using
the GraphQL query language,
18
00:01:02,800 --> 00:01:04,560
which we'll be covering
in just a moment.
19
00:01:07,760 --> 00:01:10,934
There are a couple of scenarios in
which you might be querying a subgraph.
20
00:01:14,480 --> 00:01:17,680
You can query any subgraph in
the Graph Explorer by opening
21
00:01:17,680 --> 00:01:19,760
the subgraph and then
clicking on Query.
22
00:01:23,360 --> 00:01:26,480
Here you'll be given a Query URL
in which you'll need to replace
23
00:01:26,480 --> 00:01:29,040
the API key placeholder with an
API key from your account.
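For reference, a Query URL for the decentralized network generally takes a form like the one below, where [api-key] is the placeholder you swap for a key from your account; treat the exact host and path as illustrative:

    https://gateway.thegraph.com/api/[api-key]/subgraphs/id/<SUBGRAPH_ID>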
24
00:01:32,080 --> 00:01:36,167
You can also query any subgraph which
you've created in the Subgraph Studio.
25
00:01:39,840 --> 00:01:43,800
Here you'll be given a Temporary Query
URL, which can be used for testing.
26
00:01:44,160 --> 00:01:46,080
This is the approach
we'll be taking today.
27
00:01:51,360 --> 00:01:57,200
To get started, we'll first create a new empty
React application, using “npx create-react-app”
28
00:01:59,920 --> 00:02:04,900
Next, we'll change into the new directory
and install “urql”, as well as “graphql”
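Concretely, the setup just described comes down to these commands (the app name graph-query-demo is arbitrary):

    npx create-react-app graph-query-demo
    cd graph-query-demo
    npm install urql graphql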
29
00:02:12,400 --> 00:02:15,840
Now we can open the project in our
text editor to start writing some code.
30
00:02:16,400 --> 00:02:20,500
The file that we'll be
working in is src/App.js
31
00:02:22,960 --> 00:02:27,440
We'll first import “createClient” from “urql”,
which will allow us to create our graphql client.
32
00:02:29,280 --> 00:02:33,200
Next, we'll import “useEffect” and “useState”
hooks from “react” to manage state,
33
00:02:33,200 --> 00:02:35,267
as well as to call lifecycle methods.
34
00:02:36,880 --> 00:02:42,067
Now we can define our subgraph "APIURL", which
was given to us by the Subgraph Studio.
35
00:02:51,200 --> 00:02:54,880
Next, we'll define our query by
creating a variable called "query",
36
00:02:54,880 --> 00:02:58,240
using template literals and
the "query" keyword.
37
00:02:59,200 --> 00:03:01,440
Here we can paste in
any graphql query.
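As a sketch, the query variable might look like the following. The fields (tokens, id, contentURI, metadataURI) match the token fields used later in this video, but the exact fields and the first: 5 limit depend on your subgraph's schema and needs:

    const query = `
      query {
        tokens(first: 5) {
          id
          contentURI
          metadataURI
        }
      }
    `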
38
00:03:02,080 --> 00:03:06,000
To test out queries, we can go to the
Subgraph Studio and click on Playground.
39
00:03:11,840 --> 00:03:16,600
Here I'll make a couple of updates to create the query
that I'd like to use and then copy it to my clipboard.
40
00:03:23,600 --> 00:03:27,800
In the app, I can now just paste in the code
copied from the Subgraph Studio playground.
41
00:03:31,920 --> 00:03:36,320
Now we'll create the graphql "client",
setting the "url" as the "APIURL".
42
00:03:37,520 --> 00:03:39,920
Using this "client" we
can now query our API.
43
00:03:41,120 --> 00:03:44,080
To do so, I'll create a
function called "fetchData"
44
00:03:44,080 --> 00:03:49,840
in the main "App" component that calls "client.query",
passing in the "query" and calling "toPromise" on the result.
45
00:03:53,520 --> 00:03:55,760
When the "response" comes
back, we'll just "log" it out.
46
00:03:58,400 --> 00:04:04,000
To invoke the function, we'll use a "useEffect" hook, which
will call the function as soon as the application loads.
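Putting these steps together, the relevant part of src/App.js might look roughly like this, using the query variable defined earlier. The APIURL value is a placeholder for your own Temporary Query URL, and note that newer urql releases also expect an exchanges option when creating the client:

    import { createClient } from 'urql';
    // useState is used in the next step to hold the tokens.
    import { useEffect, useState } from 'react';

    // Placeholder: paste the Temporary Query URL from your Subgraph Studio.
    const APIURL = 'https://api.studio.thegraph.com/query/<ID>/<NAME>/<VERSION>';

    const client = createClient({ url: APIURL });

    function App() {
      async function fetchData() {
        // Run the query and log the raw response.
        const response = await client.query(query).toPromise();
        console.log('response:', response);
      }

      // Call fetchData once, as soon as the component mounts.
      useEffect(() => {
        fetchData();
      }, []);

      return null; // UI comes in the next step
    }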
47
00:04:05,434 --> 00:04:07,900
To test this out we can
now run "npm start".
48
00:04:14,160 --> 00:04:18,800
When we inspect the console, we can see that the
data has been returned and logged out to the console.
49
00:04:30,160 --> 00:04:32,720
Next, let's display some
of this data in our UI.
50
00:04:35,520 --> 00:04:40,960
To do so, we'll create a "tokens" array in the local
state, using "useState", setting the initial value
51
00:04:40,960 --> 00:04:46,240
as an empty array and a "setTokens" function,
that we can call to update the "tokens" array.
52
00:04:48,400 --> 00:04:54,134
In our "fetchData" function we can now call
"setTokens" passing in the "response.data.tokens".
53
00:04:56,000 --> 00:04:58,400
In our UI, we can now
map over the "tokens".
54
00:05:10,160 --> 00:05:15,267
Here we'll render links, so that we can view
the "contentURI", as well as the "metadataURI".
55
00:05:27,360 --> 00:05:32,400
When the UI updates, we should now see links for
the Content, as well as the Metadata for each token.
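For completeness, a rough sketch of the finished component described above (same hypothetical field names as earlier; styling omitted):

    function App() {
      const [tokens, setTokens] = useState([]);

      async function fetchData() {
        const response = await client.query(query).toPromise();
        // Store the returned tokens in local state.
        setTokens(response.data.tokens);
      }

      useEffect(() => {
        fetchData();
      }, []);

      return (
        <div>
          {tokens.map((token, index) => (
            <div key={index}>
              <a href={token.contentURI}>Content</a>{' '}
              <a href={token.metadataURI}>Metadata</a>
            </div>
          ))}
        </div>
      );
    }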
1
00:00:00,149 --> 00:00:04,534
The Graph’s decentralized network needs
to process a tremendous amount of data.
2
00:00:04,734 --> 00:00:09,434
This data is processed and served by a network
of computers, called Indexers.
3
00:00:09,634 --> 00:00:12,934
Indexing is a way of organizing
data for efficient retrieval.
4
00:00:13,734 --> 00:00:18,267
Let’s say you wanna find a job as
a barista, in Italy, on the decentralized web.
5
00:00:18,534 --> 00:00:20,867
There could be millions of job openings worldwide.
6
00:00:21,150 --> 00:00:26,000
If you had to flip through them one by one to find a match,
that would take you a very long time.
7
00:00:26,340 --> 00:00:28,334
That’s where indexing comes in.
8
00:00:28,619 --> 00:00:32,841
If someone with a computer kept track of all
the job openings, sorted by country and with
9
00:00:32,841 --> 00:00:36,200
support for search, you could quickly
find that job you’re looking for.
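The analogy maps to code directly: an index groups data up front so lookups don't have to scan everything. A toy JavaScript illustration:

    // Group postings by country once; later lookups avoid a full scan.
    const jobs = [
      { title: 'Barista', country: 'Italy' },
      { title: 'Sommelier', country: 'France' },
    ];

    const byCountry = new Map();
    for (const job of jobs) {
      if (!byCountry.has(job.country)) byCountry.set(job.country, []);
      byCountry.get(job.country).push(job);
    }

    console.log(byCountry.get('Italy')); // [{ title: 'Barista', ... }]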
10
00:00:36,734 --> 00:00:42,167
Indexers compete in a query market to provide
the best indexing service at the lowest price.
11
00:00:42,700 --> 00:00:47,267
Query fees are paid using micropayments
that cost small fractions of a cent.
12
00:00:47,620 --> 00:00:52,140
This creates an incentive for independent
Indexers to provide this useful service and
13
00:00:52,140 --> 00:00:55,233
we don’t need to rely on big
corporations to control our data.
14
00:00:55,533 --> 00:00:59,700
Indexers need to stake GRT in order
to provide the service to the network.
15
00:01:00,070 --> 00:01:05,000
This creates economic security because if
they misbehave, they can lose their GRT.
16
00:01:05,470 --> 00:01:11,567
To create more of an incentive, anyone can become
a Delegator and delegate their GRT to an Indexer.
17
00:01:11,640 --> 00:01:18,800
With the extra stake, Indexers can earn more fees and rewards
and pass a portion of those earnings back to their Delegators.
18
00:01:19,090 --> 00:01:24,330
By delegating GRT to Indexers, you can help
secure the network and make sure that there
19
00:01:24,330 --> 00:01:29,050
are plenty of Indexers to process and serve
all of the data for the crypto economy.
1
00:00:01,600 --> 00:00:05,034
In this video, we'll be looking at
the features of the new Graph Explorer.
2
00:00:05,680 --> 00:00:08,960
To get started,
visit thegraph.com/explorer
3
00:00:12,720 --> 00:00:17,040
Here, you'll see a list of all of the subgraphs
currently deployed to the decentralized network.
4
00:00:18,640 --> 00:00:24,320
You can view subgraph details by clicking on
any subgraph. Here you can view metadata about
5
00:00:24,320 --> 00:00:29,067
the subgraph, including the Query URL,
Subgraph ID and Deployment ID.
6
00:00:32,960 --> 00:00:36,560
You can also view the Query Fees,
as well as the Curation of the subgraph.
7
00:00:39,760 --> 00:00:43,360
There are also tabs for viewing the
Indexers and Curators of a subgraph.
8
00:00:45,440 --> 00:00:50,634
By clicking on an address, you can see all of the information
about their participation in the network.
9
00:00:52,720 --> 00:00:58,080
Curators are Subgraph Developers, data Consumers
or community members who signal to Indexers
10
00:00:58,080 --> 00:01:00,767
which subgraphs should be
indexed by The Graph Network.
11
00:01:02,067 --> 00:01:08,267
Curators earn a portion of query fees for the subgraphs
they signal on, incentivizing the highest quality data sources.
12
00:01:09,600 --> 00:01:14,200
To signal on a subgraph, click Signal and then
enter the amount that you would like to approve.
13
00:01:27,760 --> 00:01:32,240
In the Network section, you can view general
information about The Graph Network itself.
14
00:01:34,640 --> 00:01:41,734
In the Activity tab, you can view the amount of GRT staked,
Indexing Rewards, as well as information about the Active Epoch.
15
00:01:45,840 --> 00:01:49,900
In the Epochs tab, you can view
information about past and present epochs.
16
00:01:55,634 --> 00:02:00,367
In the Participants section, you can view detailed information
about all the participants in the network.
17
00:02:02,160 --> 00:02:07,067
Here there are separate sortable and searchable
tabs for Indexers, Curators and Delegators.
18
00:02:13,200 --> 00:02:17,267
To learn more about The Graph Network,
check out our docs at thegraph.com/docs
1
00:00:01,200 --> 00:00:06,640
In this video, you'll learn how to set up API keys
and enable billing in the Subgraph Studio.
2
00:00:07,680 --> 00:00:11,040
To get started,
visit thegraph.com/studio
3
00:00:15,120 --> 00:00:18,400
Next, click Connect Wallet to connect the
wallet that you'll be working with.
4
00:00:28,640 --> 00:00:31,120
Next, click on Billing
to view the billing dashboard.
5
00:00:32,960 --> 00:00:35,200
Here, you can see your
account balance, as well as
6
00:00:35,200 --> 00:00:38,400
any costs that you've incurred
during the current billing period.
7
00:00:39,280 --> 00:00:43,840
To enable billing you must have both GRT,
as well as Ether available in your wallet.
8
00:00:48,160 --> 00:00:49,440
Here, click on Deposit.
9
00:00:52,240 --> 00:00:54,960
Billing is handled on the
Matic sidechain to decrease
10
00:00:54,960 --> 00:00:57,840
both transaction times, as
well as transaction costs.
11
00:00:58,720 --> 00:01:03,634
In this step we can move GRT to the
Matic Network directly in the Studio UI.
12
00:01:17,440 --> 00:01:22,160
Once the GRT is successfully moved to Matic,
we can now switch to the Matic Network.
13
00:01:23,760 --> 00:01:26,480
To view the details of the Matic Network, visit
14
00:01:35,360 --> 00:01:39,280
Here, you can view the details that we'll
need to import into our MetaMask wallet,
15
00:01:39,280 --> 00:01:43,434
including the network name, the
Chain ID and the RPC address.
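For reference, the Matic mainnet (now Polygon) Chain ID is 137; the network name and RPC URL come from the page mentioned above, and public RPC endpoints change over time, so check the current official documentation rather than relying on any hard-coded value.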
16
00:02:07,040 --> 00:02:11,360
Once the network is configured in your MetaMask
wallet, we can now move on to the next step.
17
00:02:16,240 --> 00:02:20,667
We can now move our GRT from our wallet
balance into our billing balance.
18
00:02:45,760 --> 00:02:48,034
We're now ready to create an API key.
19
00:03:00,800 --> 00:03:03,680
Once the API key is created,
we can now scope it down
20
00:03:03,680 --> 00:03:06,880
to both Authorized Subgraphs,
as well as Authorized Domains.
21
00:03:08,240 --> 00:03:12,080
To scope an API key to a subgraph
we first need the Subgraph ID.
22
00:03:14,000 --> 00:03:18,720
Subgraph IDs are available in The Graph
Explorer detail view for the specific subgraph.
23
00:03:31,280 --> 00:03:36,034
Now that the API key has been configured,
we can start using it to query our subgraph.
1
00:00:00,160 --> 00:00:05,700
So many of the activities we participate in
happen or are organized online on digital platforms.
2
00:00:07,034 --> 00:00:11,600
But today, most of that software and data
is owned and controlled by giant corporations,
3
00:00:11,840 --> 00:00:17,367
making it hard for small teams and individuals to
compete, have a say and impact the world around them.
4
00:00:18,640 --> 00:00:24,234
Crypto networks are creating a radically better
foundation for society, software and information.
5
00:00:24,400 --> 00:00:28,867
All data is stored and processed on
open networks with verifiable integrity,
6
00:00:28,880 --> 00:00:35,067
making it possible for developers to build and publish
Open Source code that's guaranteed to run as advertised.
7
00:00:35,760 --> 00:00:41,267
The Graph is a protocol for organizing open data and
making it easily accessible to applications.
8
00:00:42,160 --> 00:00:46,334
Curators help organize data through open APIs called subgraphs.
9
00:00:46,960 --> 00:00:52,767
A network of computer operators, called
Indexers, processes subgraph data for efficient retrieval.
10
00:00:53,280 --> 00:00:56,700
What Google does for the web, The Graph does for blockchains.
11
00:00:57,120 --> 00:01:02,134
By creating a solid foundation for data, The Graph
is paving the way for a flourishing
12
00:01:02,134 --> 00:01:06,134
ecosystem of decentralized applications, or DApps, that can help
13
00:01:06,134 --> 00:01:09,800
change the way people cooperate and organize on the Internet.