For explanation: essentially we'd generate a cache UUID based on the `cacheKey` of your entire Babel config data and on all of the plugins' `cacheKey`s, along with other stuff like the plugin options. Then we can take that cache key and fetch the cached data from wherever.
Each function in the `cached` block will be wrapped with logic to record its inputs and outputs, and that data will be stored in the cacheable output. After a transform, we'd take Babel's normal output object, plus the recorded data from the calls to the `cached` functions, and write it all to some cache storage backend keyed on the final overall `cacheKey`.
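A minimal sketch of that wrapping step, assuming a `cached` block of plain functions (`wrapCached` is a made-up name, not part of Babel):

```javascript
// Wrap each function in the `cached` block so every call's name, arguments,
// and result are recorded alongside the transform output.
function wrapCached(cachedFns) {
  const calls = [];
  const wrapped = {};
  for (const [name, fn] of Object.entries(cachedFns)) {
    wrapped[name] = (...args) => {
      const result = fn(...args);
      calls.push({ name, args, result }); // persisted with the cache entry
      return result;
    };
  }
  return { wrapped, calls };
}
```

The `calls` array is what would get serialized next to Babel's output object.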
When loading data from the cache, we'll load the config and build the key as usual, and then before transforming anything we'll try to load from the cache based on the config's key. Once the data is loaded from the cache, Babel itself will replay each call to a `cached` function, based on whatever the previous cached run recorded. If they all return matching values, we'll consider it a cache hit.
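The replay step could look something like this sketch (again, hypothetical names; a real implementation would need a stricter equality check than JSON comparison):

```javascript
// Re-run each recorded call against the current `cached` functions.
// Any missing function or mismatched result means the entry is stale.
function isCacheValid(cachedFns, recordedCalls) {
  return recordedCalls.every(({ name, args, result }) => {
    const fn = cachedFns[name];
    if (!fn) return false;
    // Compare via JSON for the sketch; real code would be more careful.
    return JSON.stringify(fn(...args)) === JSON.stringify(result);
  });
}
```

Note that this only re-runs the small `cached` functions, not the transform itself.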
This seems like it gives optimal flexibility. I think the vast majority of plugins can be fully qualified based on their `cacheKey`, but plugins that depend on unrelated content have no way to signal that to Babel. By exposing the generic `cached` block, plugins have a general way to say "if this returns the wrong thing, consider me invalid".
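For instance, a config that depends on an external file might declare that dependency in its `cached` block like this (a sketch under the proposed design; `cacheKey`, `cached`, and the file name are all assumptions, not real Babel config options):

```javascript
const fs = require("fs");

module.exports = {
  cacheKey: "my-config-v1",
  cached: {
    // If theme.json changes, the recorded mtime won't match on replay,
    // so this cache entry is treated as invalid.
    themeMtime: () => fs.statSync("./theme.json").mtimeMs,
  },
  plugins: [/* ... */],
};
```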
Oh interesting, so this would require that anything that could invalidate the cache be put in a function in the `cached` object of my Babel config, and then I call that function using `this.cached.nameOfThing`? Doesn't this mean that you have to parse and traverse the code again just to see whether the cache is still valid?