docsync
We maintain a brrr SDK in TypeScript and Python. Both provide
implementations of the same backing data structures, and those classes
carry the same docstrings. To keep them from drifting out of sync, Shun
created a tool called `docsync`. It scans for docstrings tagged with a
<docsync>SomeKey</docsync> marker using tree-sitter, and checks that
they are equal across both languages. E.g.:
/**
* A full brrr request payload.
*
* This is a low-level brrr primitive.
*
* The memo key must be generated by the instantiator of this class, and it
* must be deterministic: the "same" args and kwargs must always encode to the
* same memo key.
*
* Using the same memo key, we store the task and its argv here so we can
* retrieve them in workers.
*
* <docsync>Call</docsync>
*/
export interface Call {
...
and:
@dataclass
class Call:
    """A full brrr request payload.

    This is a low-level brrr primitive.

    The memo key must be generated by the instantiator of this class, and it
    must be deterministic: the "same" args and kwargs must always encode to the
    same memo key.

    Using the same memo key, we store the task and its argv here so we can
    retrieve them in workers.

    <docsync>Call</docsync>
    """
We hooked it up to `nix flake check` so it’s automatically checked in
CI.
It’s in
brrr @ 137527a
but we’ll probably move it out to its own repo at some point.
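A toy version of the comparison: normalize away each language’s comment
syntax, then compare line by line. This is illustrative only; the real
docsync parses sources with tree-sitter, and we assume here that the
docstring text has already been extracted:

```python
import re

def normalize(doc: str) -> list[str]:
    """Strip JSDoc markers and blank lines so a TypeScript doc comment
    and a Python docstring can be compared for equality.

    Toy sketch: the real docsync extracts docstrings with tree-sitter;
    this regex-based cleanup is just for illustration."""
    out = []
    for line in doc.splitlines():
        # Drop leading `/**`, trailing `*/`, and leading `* ` markers.
        line = re.sub(r"^/\*\*|\*/$|^\*\s?", "", line.strip()).strip()
        if line:
            out.append(line)
    return out
```

Run `normalize` over both variants of a tagged docstring; any
difference is then a CI failure.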
Hosting Nix NYC meetup 3/18
We’ll be hosting the
next Nix NYC meetup, 3/18/26. See you there!
UNIX_EPOCH + 1 second
Yesterday, Ben noticed this blog’s contents weren’t refreshing, even
if you explicitly clicked refresh; seeing changes required a hard
refresh. Let’s look at the headers:
$ curl -D /dev/stderr -s -o /dev/null https://電.anterior.app/auth/login.html
HTTP/2 200
content-type: text/html
content-length: 11233
date: Fri, 27 Feb 2026 20:42:51 GMT
cache-control: max-age=86400
accept-ranges: bytes
last-modified: Thu, 01 Jan 1970 00:00:01 GMT
vary: accept-encoding
x-cache: Miss from cloudfront
via: 1.1 a086f9674a01c7542c440ffacd39476a.cloudfront.net (CloudFront)
x-amz-cf-pop: JFK52-P9
x-amz-cf-id: 7_XCBzHLxLFTjlJuOa1cG0WLhZv_yQ_pZfYopz23SUWy0KJGkgn4IQ==
x-frame-options: DENY
content-security-policy: connect-src 'self' https://anterior-master-platform.s3.us-east-2.amazonaws.com/artifacts/ https://anterior-master-platform.s3.us-east-2.amazonaws.com/uploads/; default-src 'none'; font-src 'self'; form-action 'self' https://anterior-master-platform.s3.us-east-2.amazonaws.com/uploads/; img-src 'self'; manifest-src 'self'; media-src 'self'; script-src-elem 'self'; style-src-elem 'self'; upgrade-insecure-requests ; worker-src 'self';
x-content-type-options: nosniff
strict-transport-security: max-age=31536000; includeSubDomains; preload
What’s that `Last-Modified` header? That’s the time to which all files
are set when stored in the /nix/store:
$ nix eval --raw --expr 'builtins.toFile "foo" "hello\n"' | xargs -r date -u -Iseconds -r
1970-01-01T00:00:01+00:00
Unfortunately, even when you click refresh, a browser will send the
`If-Modified-Since` header, and the server will say: nope, nothing
changed since you last loaded this page; 304 Not Modified. And the
browser won’t get the new content.
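The revalidation decision is just a date comparison; here is a minimal
sketch of what a Last-Modified-trusting server does (the function and
names are illustrative, not static-web-server’s actual code):

```python
from email.utils import parsedate_to_datetime

def status_for(if_modified_since: str, last_modified: str) -> int:
    """Answer 200 only if the file is strictly newer than the client's
    cached copy; otherwise revalidate with 304. Illustrative sketch."""
    ims = parsedate_to_datetime(if_modified_since)
    mtime = parsedate_to_datetime(last_modified)
    return 200 if mtime > ims else 304
```

Since every file in the /nix/store carries the same mtime, a redeploy
never makes the file “newer”, and the client keeps getting 304s.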
So the solution would seem to be: stop static-web-server from sending
the `Last-Modified` header when it has that value? A grep through their
source code finds this:
// If the file's modified time is the UNIX epoch, then it's likely not valid and should
// not be included in the Last-Modified header to avoid cache revalidation issues.
let modified = meta
.modified()
.ok()
.filter(|&t| t != std::time::UNIX_EPOCH)
.map(LastModified::from);
They already thought of it. So why isn’t it working for us? Taking a
closer look at that timestamp from the nix store: apparently it’s
*1 second* after the epoch. Not exactly the epoch.
Sure enough, the Nix source code
confirms:
const time_t mtimeStore = 1; /* 1 second into the epoch */
Nooo. What’s easier, patching Nix, or patching static-web-server?
Let’s try our hand at editing some Rust through sed through Nix, in an
overlay on our monorepo’s nixpkgs instance:
overlays = [
(self: super: {
...
static-web-server = super.static-web-server.overrideAttrs {
prePatch = ''
${self.gnused}/bin/sed \
-i \
-e 's/\(\.filter.*t\) != .*UNIX_EPOCH/\1 > (std::time::UNIX_EPOCH + std::time::Duration::from_secs(1))/' \
src/response.rs
'';
# Some tests which implicitly relied on the above behavior now
# break. Force an mtime update to fix.
postUnpack = ''
find . -exec touch -m {} +
'';
};
})
];
Rebuild the web server and run it locally to test:
$ curl -D /dev/stderr -s -o /dev/null http://localhost:12345/auth/login.html
HTTP/1.1 200 OK
content-length: 11233
content-type: text/html
accept-ranges: bytes
vary: accept-encoding
cache-control: max-age=86400
date: Fri, 27 Feb 2026 20:57:34 GMT
Change a CSS rule, do a regular refresh, and: it works :)
Excess Verbiage
ECS: Task Protection vs stopTimeout
AWS struggle of the day: graceful exit of ECS tasks handling
long-running async jobs.
The clearest signal that ECS wants you to terminate is a SIGTERM,
eventually followed by a SIGKILL. The maximum grace period ECS grants
you is 2 minutes, which is too short for our long-running async
tasks. :(
It seems we are not alone. For such cases, ECS introduced task
termination protection: tasks can self-identify as protected, escaping
downscaling until they’re done. This definitely solves the problem for
fleets with <1✕ sustained job/worker load, notably auto-scaling fleets
whose workers don’t handle jobs in parallel. But if your workers do
support concurrent jobs, it’s unlikely they’ll ever be completely out
of work. And until they get a signal, they don’t know whether or not
they’re “old”. :((
We settled on workers just scheduling themselves to gracefully exit
every hour, so even in times of sustained load there will be task
rescheduling events which will give ECS the opportunity to upgrade the
tasks. But it’s convoluted, and it’s a hack on top of another hack.
Wouldn’t it be nicer if you could just set a delay of 2 hours between
SIGTERM and SIGKILL, instead of 2 minutes?
nix flake archive
Our new favorite nix command is
`nix flake archive`: copy all flake inputs to your store, and/or to a binary cache. Goes
very nicely with
`nix copy`
to ensure private substituters always have all your flake inputs
cached.
To pipe this into `nix copy` (or Cachix’s `cachix push`), use:
nix flake archive --json \
| jq '.. | .path? | strings' \
| xargs nix copy --to ...
# or: cachix push my-cachix-bin
The implementation
is surprisingly simple.
nix building a flake app
Does anyone know how you’re supposed to just build a flake
app (not program) without running it? Best we could come up with is:
nix eval --raw --impure --expr \
'let
f = builtins.getFlake "git+file://${toString ./.}";
prg = f.apps.${builtins.currentSystem}.foobar.program;
in
builtins.head (builtins.attrNames (builtins.getContext prg))' \
| xargs -r nix-store -r
Surely there has to be a better way...
codegen flake module
We open sourced our
codegen flake module
for declaring auto generated files in your flake.
Usage is as simple as:
$ nix run .#codegen
and:
$ nix flake check
validating JWTs in CF edge functions
We installed an edge function in CloudFront to validate that any JWTs
were signed by a known key. Copied almost verbatim from the CloudFront
docs.
We explicitly whitelisted certain subdirectories from this check,
`/auth/*` among others, to allow unauthenticated users to log in.
That’s why we host this page on `/auth/login.html` ☺
The benefit: an extremely small surface area for the code which does
JWT validation. This severely limits the impact of the large number of
potential bugs in the origin.
flake module: checkBuildAll
When you publish a flake, a sane baseline sanity check is usually: do
my exposed packages at least build? The checkBuildAll flake module does
that:
inputs.anterior-tools.url = "github:anteriorcore/tools";
...
flake-parts.lib.mkFlake { inherit inputs; } {
imports = [
inputs.anterior-tools.flakeModules.checkBuildAll
...
Now, `nix flake check` builds everything exposed through your flake’s
`packages`.
From our
nix tools repo.
NY Nix Meetup
We’ll be at the
NY Nix Meetup this Wednesday. Looking forward to it!
brrr: high performance workflow scheduler
We also open sourced
brrr: a
library-only, high performance, bring-your-own-infra workflow
scheduler. Crucial feature: no central orchestrator → no single point
of failure.
TypeScript and Python implementations provided. Nix powered demo in
the repo. Under active development.
elasticmq and dynamodb in services-flake
Shun submitted patches to include
elasticmq and
dynamodb-local
in
services-flake.
They both got merged, so you can now easily use them in
process-compose:
services.dynamodb-local.mydynamodb.enable = true;
services.elasticmq.myelasticmq.enable = true;
PRs
#639
and
#640.
package-lock2nix
We open sourced
package-lock2nix, a tool to build NPM projects with a package-lock.json directly in
Nix. Full package-lock.json parsing is done at eval time,
meaning no separate `*2nix` command stage to run. Just `nix build`
your project directly, and manage the package-lock.json file itself
with regular build tools like npm.
Released under AGPLv3 (but we’re open to other licenses).
Anterior dev log
Launched the anterior dev log. We’re hosting it under /auth/login.html
because that’s the only path that our edge functions allow through
unauthenticated.
We chose a non-ASCII app name to test the system’s handling of Unicode.