
Static IPv4 address on outbound connections from a Cloud Run Job (using Tailscale)

The full example for this writeup is at github.com/OmniTroid/tailscale-docker if you want to look at that first.

I ran into a bit of an engineering challenge recently. A scheduled job had to access an API behind a strict firewall that only permits a whitelist of IPv4 addresses. Our monitoring and infrastructure are built around Google Cloud Run for scheduled tasks. The ideal solution would route these requests through some exit node with the correct IPv4 address to keep the firewall happy, all handled transparently from the job's perspective (a plain requests.get() from Python, specifically). I would really, really prefer not to do any network magic in the scheduled job itself, because it is complex enough on its own.

We already use Tailscale internally, so it made sense to build the solution around it.

The first step was to set up the exit node itself. For this, I used Debian 13 and configured it as an exit node according to the Tailscale docs.
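
In practice that boils down to roughly the following on the exit node (a sketch; follow the current exit-node docs for details, and remember to approve the node as an exit node in the admin console afterwards):

# Allow the node to forward traffic for other devices
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise this machine as an exit node
sudo tailscale up --advertise-exit-node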

Once that's done (and tested), we need to get Tailscale running inside the Cloud Run container. The approach is straightforward: run tailscaled in userspace networking mode with a SOCKS5 proxy, then route all application traffic through it. The Dockerfile is minimal:

FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y --no-install-recommends \
    curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Grab the Tailscale binaries from the official image (version pinned; see note below)
COPY --from=docker.io/tailscale/tailscale:v1.94.1 /usr/local/bin/tailscaled /usr/local/bin/tailscaled
COPY --from=docker.io/tailscale/tailscale:v1.94.1 /usr/local/bin/tailscale /usr/local/bin/tailscale

WORKDIR /app
COPY start.sh ./

CMD ["/app/start.sh"]

You'll need a .env file with your Tailscale auth key and exit node:

TS_AUTHKEY=tskey-auth-...
TS_EXITNODE=your-exit-node.tailnet-name.ts.net.
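
For local testing the .env can be passed straight to docker, and for Cloud Run the same values go onto the job as environment variables. A rough sketch, where the image path, region and job name are all placeholders (a secret manager is a nicer home for the auth key):

# Local smoke test
docker build -t tailscale-egress .
docker run --rm --env-file .env tailscale-egress

# Cloud Run Job with the same values as env vars
gcloud run jobs deploy my-scheduled-job \
  --image europe-north1-docker.pkg.dev/my-project/repo/tailscale-egress \
  --region europe-north1 \
  --set-env-vars TS_AUTHKEY=tskey-auth-...,TS_EXITNODE=your-exit-node.tailnet-name.ts.net
gcloud run jobs execute my-scheduled-job --region europe-north1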

To generate the auth key, go to the Tailscale admin console under Settings -> Keys -> Generate auth key with these settings:

Warning: the auth key expires after 90 days. When it does, all jobs using it will silently fail to authenticate. Consider using the Tailscale API to automate key rotation before expiry.

Tailscale auth key settings
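
If you do automate rotation, new keys can be minted through the Tailscale API and pushed into the job's configuration. A rough sketch, with the tailnet name, API token and tag all placeholders (check the keys endpoint docs for the exact capability fields your setup needs):

# Mint a new ephemeral, preauthorized auth key via the Tailscale API
curl -s -u "tskey-api-...:" \
  -H "Content-Type: application/json" \
  --data-binary '{
    "capabilities": {
      "devices": {
        "create": {
          "reusable": true,
          "ephemeral": true,
          "preauthorized": true,
          "tags": ["tag:cloudrun"]
        }
      }
    },
    "expirySeconds": 7776000
  }' \
  "https://api.tailscale.com/api/v2/tailnet/example.com/keys"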

And the entrypoint:

#!/bin/sh
set -e

# Start tailscaled with userspace networking and a local SOCKS5 proxy;
# --state=mem: keeps all state in memory, so the node is ephemeral
tailscaled --tun=userspace-networking --socks5-server=localhost:1055 --state=mem: &
sleep 3
tailscale up --auth-key="$TS_AUTHKEY"
tailscale set --exit-node="$TS_EXITNODE"

# socks5h:// makes the proxy (i.e. the exit node) do the DNS resolution
export ALL_PROXY=socks5h://localhost:1055/
# ... your application here

A couple of things to note. --state=mem: tells tailscaled to treat the node as explicitly ephemeral, so no state is persisted; exactly what you want for a serverless container. The --exit-node must be set after tailscale up because the connection to the tailnet needs to be established first. And critically, you want socks5h:// (note the h), not socks5://. The h variant sends the hostname to the proxy, so DNS is resolved on the exit node. This matters; more on that shortly.
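
A quick way to check what public address the remote end actually sees, from inside the container, is to hit any what's-my-IP echo service through the proxy (ifconfig.me here, purely as an example):

# Prints the public address the connection appears to come from
ALL_PROXY=socks5h://localhost:1055/ curl -s https://ifconfig.me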

One important note: pin to Tailscale v1.94.1. There's a regression in v1.94.2 that breaks DERP peer registration after a rebind in Cloud Run's ipvlan network, causing the SOCKS5 proxy to silently fail. I filed tailscale/tailscale#19069 for this.

The IPv6 problem

With Tailscale pinned and working, the proxy routes traffic through the exit node, but the remote end sees the exit node's IPv6 address instead of the expected static IPv4 address. The whole point of this exercise is to present a specific IPv4 address to the remote firewall.

The issue is that the exit node has both IPv4 and IPv6 connectivity. When a hostname resolves to both A and AAAA records, the exit node's Go resolver picks IPv6, connects over IPv6, and the remote server sees the exit node's IPv6 address, not the elastic IP we configured.
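
You can see the dual records directly on the exit node; the hostname here is a stand-in for whatever API you're actually calling:

# Both record types come back, and a dual-stack dialer will prefer the AAAA answer
dig +short A api.example.com
dig +short AAAA api.example.com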

You might think "just prefer IPv4 in gai.conf". It doesn't work. Go's resolver doesn't read gai.conf. Neither does the no-aaaa option in resolv.conf help when systemd-resolved sits in the middle.

The solution that actually works: dnsmasq on the exit node with filter-AAAA. This strips AAAA records from DNS responses at the resolver level, before Go ever sees them. Point /etc/resolv.conf at 127.0.0.1, disable systemd-resolved's stub listener, and configure dnsmasq with an upstream:

# Strip AAAA records from every answer; forward queries to a public upstream
filter-AAAA
server=1.1.1.1
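
Wiring that up on the exit node looks roughly like this, assuming Debian with systemd-resolved and the two lines above dropped into /etc/dnsmasq.d/filter-aaaa.conf (paths and details may differ on your box):

sudo apt-get install -y dnsmasq

# Free up port 53 on loopback by disabling systemd-resolved's stub listener
sudo sed -i 's/^#\?DNSStubListener=.*/DNSStubListener=no/' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved

# Point local resolution at dnsmasq instead of the stub
sudo rm -f /etc/resolv.conf
echo 'nameserver 127.0.0.1' | sudo tee /etc/resolv.conf

sudo systemctl restart dnsmasq

# Sanity check: the AAAA query should now come back empty
dig +short AAAA api.example.com @127.0.0.1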

With this in place, any DNS resolution on the exit node, whether from Go's resolver, glibc, or anything else, returns only A records. The exit node connects via IPv4, and the remote server sees 51.21.228.146. Done.

The full picture

The final setup has a few moving parts but each one is there for a reason:

  1. Cloud Run container runs tailscaled in userspace networking mode with --state=mem: and socks5h:// proxy
  2. Exit node (Debian on AWS) runs tailscaled as an exit node with dnsmasq filtering AAAA records
  3. Application traffic flows: app -> SOCKS5 proxy -> DERP relay -> exit node -> internet (IPv4 only)
  4. Tailscale version is pinned to v1.94.1 to avoid the DERP re-registration bug in v1.94.2

The socks5h:// protocol is important because it delegates DNS resolution to the exit node, where dnsmasq ensures only IPv4 addresses are returned. If you use socks5:// instead, the container resolves DNS itself, Cloud Run's resolver happily returns AAAA records, and the exit node then either fails to connect to those IPv6 addresses or connects over IPv6, presenting the wrong source address to the firewall.