# 1st generation
A request arrives in Fastly with the backend set to point to our preflight app. When the response comes back, we decorate the request with headers (including a flag to indicate preflight has happened successfully), then restart. On the second pass the request goes through to our router, or is served from cache if a cached response exists.

Note - code examples simplified to exclude error handling etc.
sub vcl_recv {
  if (req.http.preflighted) {
    // set backend to router, then look up the cache
    return(lookup);
  } else {
    // set backend to preflight, then
    return(pass);
  }
}

sub vcl_fetch {
  if (req.backend == preflight) {
    set req.http.thing = beresp.http.thing;
    // flag so the restarted request takes the router path
    set req.http.preflighted = "1";
    return(restart);
  }
}
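For reference, the backend selection the comments above elide might look roughly like this; the backend names, hostnames, and ports here are illustrative assumptions, not our real configuration:

```vcl
// Hypothetical backend declarations - names and hosts are made up.
backend preflight {
  .host = "preflight.example.internal";
  .port = "443";
}

backend router {
  .host = "router.example.internal";
  .port = "443";
}

sub vcl_recv {
  // Route restarted (already-preflighted) requests to the router,
  // everything else to the preflight app.
  if (req.http.preflighted) {
    set req.backend = router;
  } else {
    set req.backend = preflight;
  }
}
```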
# 2nd generation
Same as above, but now preflight sometimes serves redirects, so in vcl_fetch we deliver these straight to the end user.
sub vcl_recv {
  if (req.http.preflighted) {
    // set backend to router, then look up the cache
    return(lookup);
  } else {
    // set backend to preflight, then
    return(pass);
  }
}

sub vcl_fetch {
  if (req.backend == preflight) {
    // deliver preflight's redirects straight to the end user
    if (beresp.status == 301 || beresp.status == 302) {
      return(deliver);
    }
    set req.http.thing = beresp.http.thing;
    // flag so the restarted request takes the router path
    set req.http.preflighted = "1";
    return(restart);
  }
}
# 3rd generation
Same as above, but now we'd like to cache the redirects. The code below works - we get a cache hit on the redirects - but it has a very damaging effect on uncacheable preflight requests: for some reason an extra restart is triggered, which causes the request to go to preflight twice. We've tried playing about with vcl_hit and vcl_miss to no avail.
sub vcl_recv {
  if (req.http.preflighted) {
    // set backend to router, then look up the cache
    return(lookup);
  } else {
    // set backend to preflight, then
    // lookup rather than pass because some requests are cacheable
    return(lookup);
  }
}

sub vcl_hit {
  if (req.backend == preflight) {
    return(deliver);
  }
}

sub vcl_fetch {
  if (req.backend == preflight) {
    // deliver (and now cache) preflight's redirects
    if (beresp.status == 301 || beresp.status == 302) {
      return(deliver);
    }
    set req.http.thing = beresp.http.thing;
    // flag so the restarted request takes the router path
    set req.http.preflighted = "1";
    return(restart);
  }
}
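One way to observe where the extra restart comes from (a debugging sketch, not a fix; the header name is made up) is to surface the built-in req.restarts counter on the response, so each delivered response reports how many restarts the request went through:

```vcl
sub vcl_deliver {
  // req.restarts is a standard Varnish/Fastly variable counting how many
  // times this request has been restarted; X-Debug-Restarts is a
  // hypothetical debug header for inspection from the client side.
  set resp.http.X-Debug-Restarts = req.restarts;
}
```

Comparing this header between a cacheable and an uncacheable preflight request should confirm whether the duplicate preflight fetch really corresponds to an extra restart, and at which count it occurs.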