
# justinmeiners/another_math_example.md

Last active Jul 20, 2019

### The Problem

Let's suppose you are writing an autocomplete/search feature in which an input field sends requests to the server to get suggestions. The tricky part is accounting for latency, so I'm going to model this as a timeline of events. This is a choice of mathematical structure that I will use to represent the problem. You can visualize it by thinking of points on a number line.

### Components

There are three kinds of events:

• Input events
• Network request events
• Network response events

Let's represent events as tuples `(t, p, e)` where `t` is the time, `p` is the payload, and `e in {INPUT, REQ, RESP}` is the event type. The code will of course have no such timeline or tuple; it's just helpful for thinking about the abstract structure of the problem, in this case the events over time.

On the server we produce search results, which are a function of the query. Let's label this `query(s)` which returns a set of results. So a response event will look like `(t, query(p), RESP)`.
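To make the model concrete, here is a small sketch in Python. The names (`query`, the toy corpus, the sample times) are all illustrative, not part of any real API; the point is just that events are `(t, p, e)` tuples and a response carries `query(p)`.

```python
# Event types, matching the e in {INPUT, REQ, RESP} of the model.
INPUT, REQ, RESP = "INPUT", "REQ", "RESP"

def query(s):
    # Toy stand-in for the server-side search: results are a set of strings.
    corpus = {"cat", "car", "cart", "dog"}
    return {w for w in corpus if w.startswith(s)}

# A user types "c", then "ca"; each keystroke triggers a request,
# and each request eventually produces a response.
timeline = [
    (0.0, "c",  INPUT),
    (0.1, "c",  REQ),
    (0.5, "ca", INPUT),
    (0.6, "ca", REQ),
    (0.9, query("c"),  RESP),
    (1.2, query("ca"), RESP),
]
```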

### Invariants

What guarantees can we make about these events?

Since the user can type in inputs as fast as they want, we have no control over the time between inputs.

We know each request has a corresponding response with a greater time.

In reality we have no idea how long it will take a response to be returned after a request, but we can make a reasonable assumption that the difference between times is bounded. In other words `0 < t_resp - t_req < M` for some constant `M`.
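A sketch of checking the latency invariant `0 < t_resp - t_req < M` over a recorded timeline. To keep it short, the pairing assumes responses come back in request order, which the model does not guarantee in general; it's a simplifying assumption for illustration only.

```python
def check_latency_bound(timeline, M):
    """timeline: list of (t, payload, kind) tuples sorted by t.
    Returns True if every request's response arrived within (0, M)."""
    req_times  = [t for (t, _, e) in timeline if e == "REQ"]
    resp_times = [t for (t, _, e) in timeline if e == "RESP"]
    if len(req_times) != len(resp_times):
        return False  # some request has no response yet
    # Simplifying assumption: pair the i-th request with the i-th response.
    return all(0 < tr - tq < M for tq, tr in zip(req_times, resp_times))
```

For example, a timeline whose two responses arrive 8 and 5 time units after their requests satisfies the bound for `M = 10` but not for `M = 5`.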

### Desired Results

Now that we have thought about it a bit, we can decide what behaviour we want.

If we are concerned about how frequently requests are made, we can do some kind of rate limiting: if a sufficient amount of time has not passed, then no request is made. But we always need to capture the last input, otherwise a fast typist will find they never receive answers. Mathematically, this means we want a minimum bound on the time between requests.

Here is one way to accomplish that: after each input event `(t, p, INPUT)`, schedule a request `(t + M, p, REQ)`. If another input event occurs within the interval `(t, t + M)`, cancel the scheduled request and schedule a new one.

This of course adds some input latency, so perhaps we want an alternative...

Which response should we display? Since responses can arrive out of order, the one whose originating request has the greatest time.
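One way to sketch that rule: tag each response with the time of the request that produced it, and only display a response if its request is at least as new as the one currently shown. The names and the `(t_resp, t_req, results)` shape are illustrative assumptions, not part of the model above.

```python
def display_sequence(responses):
    """responses: list of (t_resp, t_req, results) in arrival order.
    Returns the results actually shown, skipping stale responses."""
    shown = []
    latest_req = float("-inf")
    for t_resp, t_req, results in responses:
        if t_req >= latest_req:
            latest_req = t_req
            shown.append(results)
    return shown
```

For example, if the response to an older request arrives after the response to a newer one, it is skipped rather than overwriting the fresher results.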

etc, etc, etc

....

See how this analysis is helpful beyond writing code? We aren't trying to write code in math notation. It's just thinking about one aspect of the problem in depth: identifying constraints, defining vague terms, deciding what outcomes we want. All of which is hard to do once you are in the code itself.