Cache
Cached handlers
To cache an event handler, use the defineCachedHandler function. It works like defineHandler but accepts a second parameter for the cache options.
```ts
import { defineCachedHandler } from "nitro/runtime";

export default defineCachedHandler((event) => {
  return "I am cached for an hour";
}, { maxAge: 60 * 60 });
```
With this example, the response will be cached for 1 hour, and a stale value will be sent to the client while the cache is being updated in the background. If you want to immediately return the updated response instead, set swr: false.
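For instance, a minimal sketch (using the same defineCachedHandler API as above) that disables stale-while-revalidate, so clients wait for a fresh response once the entry expires:

```ts
import { defineCachedHandler } from "nitro/runtime";

// With swr disabled, an expired entry is revalidated before responding,
// so the client receives the updated value instead of a stale one.
export default defineCachedHandler((event) => {
  return "I am cached for an hour, revalidated on expiry";
}, { maxAge: 60 * 60, swr: false });
```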
See the options section for more details about the available options.
You can use the varies option to consider specific request headers when caching and serving the responses.
Cached functions
You can also cache a function using the defineCachedFunction
function. This is useful for caching the result of a function that is not an event handler, but is part of one, and reusing it in multiple handlers.
For example, you might want to cache the result of an API call for one hour:
```ts
import { defineHandler, defineCachedFunction } from "nitro/runtime";

export default defineHandler(async (event) => {
  const { repo } = event.context.params;
  const stars = await cachedGHStars(repo).catch(() => 0);
  return { repo, stars };
});

const cachedGHStars = defineCachedFunction(async (repo: string) => {
  const data = await fetch(`https://api.github.com/repos/${repo}`).then((res) => res.json());
  return data.stargazers_count;
}, {
  maxAge: 60 * 60,
  name: "ghStars",
  getKey: (repo: string) => repo,
});
```
In development, the stars will be cached inside .nitro/cache/functions/ghStars/<owner>/<repo>.json, with value being the number of stars:

```json
{"expires":1677851092249,"value":43991,"mtime":1677847492540,"integrity":"ZUHcsxCWEH"}
```
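The stored entry can be sketched as a small TypeScript shape (field names mirror the JSON above; the isExpired helper is a hypothetical illustration, not part of Nitro):

```ts
// Shape of a persisted cache entry, mirroring the JSON shown above.
interface CacheEntry<T> {
  expires: number;   // epoch ms after which the value is stale
  value: T;          // the cached return value
  mtime: number;     // epoch ms when the entry was written
  integrity: string; // hash of the function code, used in dev to invalidate
}

// Hypothetical helper: an entry is stale once `expires` has passed.
function isExpired(entry: CacheEntry<unknown>, now: number = Date.now()): boolean {
  return now >= entry.expires;
}

const entry: CacheEntry<number> = {
  expires: 1677851092249,
  value: 43991,
  mtime: 1677847492540,
  integrity: "ZUHcsxCWEH",
};

console.log(isExpired(entry, 1677847492540)); // false: just written
console.log(isExpired(entry, 1677851092249)); // true: past expiry
```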
In edge workers, the instance is destroyed after each request. Nitro automatically uses event.waitUntil to keep the instance alive while the cache is updated in the background after the response has been sent to the client.
To ensure that your cached functions work as expected in edge workers, always pass the event as the first argument to the function created with defineCachedFunction.
```ts
import { defineHandler, defineCachedFunction, type H3Event } from "nitro/runtime";

export default defineHandler(async (event) => {
  const { repo } = event.context.params;
  const stars = await cachedGHStars(event, repo).catch(() => 0);
  return { repo, stars };
});

const cachedGHStars = defineCachedFunction(async (event: H3Event, repo: string) => {
  const data = await fetch(`https://api.github.com/repos/${repo}`).then((res) => res.json());
  return data.stargazers_count;
}, {
  maxAge: 60 * 60,
  name: "ghStars",
  getKey: (event: H3Event, repo: string) => repo,
});
```
This way, the function will be able to keep the instance alive while the cache is being updated without slowing down the response to the client.
Using route rules
This feature enables you to add caching to routes matched by a glob pattern, directly in the main configuration file. This is especially useful for applying a global cache strategy to a part of your application.
Cache all the blog routes for 1 hour with stale-while-revalidate
behavior:
```ts
import { defineNitroConfig } from "nitro/config";

export default defineNitroConfig({
  routeRules: {
    "/blog/**": { cache: { maxAge: 60 * 60 } },
  },
});
```
If we want to use a custom cache storage mount point, we can use the base
option.
```ts
import { defineNitroConfig } from "nitro/config";

export default defineNitroConfig({
  storage: {
    redis: {
      driver: "redis",
      url: "redis://localhost:6379",
    },
  },
  routeRules: {
    "/blog/**": { cache: { maxAge: 60 * 60, base: "redis" } },
  },
});
```
Cache storage
Nitro stores the data in the cache
storage mount point.
- In production, it will use the memory driver by default.
- In development, it will use the filesystem driver, writing to a temporary dir (.nitro/cache).
To overwrite the production storage, set the cache
mount point using the storage
option:
```ts
import { defineNitroConfig } from "nitro/config";

export default defineNitroConfig({
  storage: {
    cache: {
      driver: "redis",
      /* redis connector options */
    },
  },
});
```
In development, you can also overwrite the cache mount point using the devStorage
option:
```ts
import { defineNitroConfig } from "nitro/config";

export default defineNitroConfig({
  storage: {
    cache: {
      // production cache storage
    },
  },
  devStorage: {
    cache: {
      // development cache storage
    },
  },
});
```
Options
The defineCachedHandler
and defineCachedFunction
functions accept the following options:
- base: Name of the storage mount point to use for caching. Defaults to cache.
- name: Guessed from the function name if not provided, '_' otherwise.
- group: Defaults to 'nitro/handlers' for handlers and 'nitro/functions' for functions.
- getKey: A function that receives the same arguments as the original function and returns a cache key (String). If not provided, a built-in hash function will be used to generate a key based on the function arguments.
- integrity: A value that invalidates the cache when changed. By default, it is computed from function code, used in development to invalidate the cache when the function code changes.
- maxAge: Maximum age that the cache is valid, in seconds. Defaults to 1 (second).
- staleMaxAge: Maximum age that a stale cache is valid, in seconds. If set to -1, a stale value will still be sent to the client while the cache updates in the background. Defaults to 0 (disabled).
- swr: Enable stale-while-revalidate behavior to serve a stale cached response while asynchronously revalidating it. Defaults to true.
- shouldInvalidateCache: A function that returns a boolean to invalidate the current cache and create a new one.
- shouldBypassCache: A function that returns a boolean to bypass the current cache without invalidating the existing entry.
- varies: An array of request headers to be considered for the cache. In a multi-tenant environment, you may want to pass ['host', 'x-forwarded-host'] to ensure these headers are not discarded and that the cache is unique per tenant.
Cache keys and invalidation
When using the defineCachedFunction
or defineCachedHandler
functions, the cache key is generated using the following pattern:
`${options.group}:${options.name}:${options.getKey(...args)}.json`
For example, the following function:
```ts
import { defineCachedFunction } from "nitro/runtime";

const getAccessToken = defineCachedFunction(() => {
  return String(Date.now());
}, {
  maxAge: 10,
  name: "getAccessToken",
  getKey: () => "default",
});
```
Will generate the following cache key:
nitro:functions:getAccessToken:default.json
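That key assembly can be sketched in plain TypeScript (a hypothetical illustration, not Nitro's actual implementation, which also hashes the arguments when no getKey is provided):

```ts
// Hypothetical sketch of cache-key assembly from the options above.
interface KeyOptions<Args extends unknown[]> {
  group?: string;                    // defaults to "nitro/functions" for functions
  name?: string;                     // defaults to "_" when it cannot be guessed
  getKey?: (...args: Args) => string;
}

function makeCacheKey<Args extends unknown[]>(
  options: KeyOptions<Args>,
  ...args: Args
): string {
  const group = options.group ?? "nitro/functions";
  const name = options.name ?? "_";
  // Fall back to joining the arguments when no getKey is provided
  // (the real default is a hash of the arguments).
  const key = options.getKey ? options.getKey(...args) : args.join(":");
  // Storage keys use ":" as the separator, so "/" is normalized away.
  return `${group}:${name}:${key}.json`.replace(/\//g, ":");
}

console.log(makeCacheKey({ name: "getAccessToken", getKey: () => "default" }));
// → "nitro:functions:getAccessToken:default.json"
```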
You can invalidate the cached function entry with:
```ts
import { useStorage } from "nitro/runtime";

await useStorage("cache").removeItem("nitro:functions:getAccessToken:default.json");
```
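If you need to drop every entry for a cached function rather than a single key, a hedged sketch using the getKeys and removeItem methods of the unstorage API that backs useStorage:

```ts
import { useStorage } from "nitro/runtime";

// List all keys under the function's prefix and remove them one by one.
const storage = useStorage("cache");
const keys = await storage.getKeys("nitro:functions:getAccessToken");
await Promise.all(keys.map((key) => storage.removeItem(key)));
```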