I Built a Tag Manager Replacement in One Afternoon with AI


Here’s the situation: a client of mine is moving from Adobe Analytics to Amplitude. When their Adobe contract ends, they lose access to Adobe Launch — not just for analytics, but for all their marketing tags too. Developers will own instrumentation going forward. There’s no tag management system in place to replace it.

So I went looking for one. I evaluated every open-source option I could find. And then I built my own. In an afternoon.

(Okay — an afternoon of coding, with Claude Opus 4.6 doing most of the typing. It was also ten years of absorbing domain expertise, two years of working with AI coding tools, nine months of notes, architecture diagrams, and muttering to myself about consent state machines. But the implementation? One afternoon, one architecture doc, and a conversation with Claude.)

The problem isn’t new. The tools are just old.

I’ve been in Adobe DTM/Launch/Tags for nearly a decade, Google Tag Manager for about five years. I have war stories. I know both systems well enough to know exactly where they fall short — and if you’ve been in this space for any length of time, you’ve lived these too. For this post I’m going to assume Adobe Analytics instead of Adobe Experience Platform Customer Journey Analytics, since that’s what my clients (and many others) still use.

Let’s start with Launch. Launch doesn’t do branches (no, libraries are not branches). It saves every change to something called “Latest.” When you add a rule’s “Latest” to a library and build, a new Revision is created. Revisions get published. But here’s the thing — if you’re not careful, you might accidentally include a half-finished change someone else added (saved as Latest) without building a new Revision. I’ve watched it happen on large teams, enough to convince me that Launch becomes untrustworthy with more than one person working on a property.

GTM is better here — workspaces act more like branches and provide some isolation, and when you publish one, you get prompted to merge those changes into other workspaces — but the publishing workflow is unintuitive, and Preview Mode is the closest thing you get to validation.

Consent management is — let’s be honest — a disaster in both. GTM’s Consent Mode is a gtag API that was bolted on after the fact and, since it’s Google, favors advertisers and publishers. The API is poorly documented and difficult to implement correctly (do you want “basic” consent mode or “advanced” consent mode?). Then there’s Adobe: Aside from the Adobe.OptIn API, which only addresses Adobe products, they have essentially punted on consent entirely, leaving practitioners to cobble together fragmented point solutions. Maybe you’re on AEP and the consent interface is better, but all of my clients (and most of Adobe’s customers) still use good ol’ Adobe Analytics. If you’ve ever tried to implement OneTrust or Cookiebot through either platform, you already know. We deserve better, friends.

Neither system offers schema validation. Nothing stops a developer from pushing addToCart instead of add_to_cart and silently breaking a funnel that you only find out about days later when marketing asks why people stopped adding things to their cart. Nothing validates that price is a number or that currency is an ISO 4217 code. AEP offers schema validation, but you’re learning an Adobe-proprietary data model (XDM) implemented through a kludgy UI that takes months to learn. (I passed the AEP Foundations exam on my second try btw!) The data quality problems this gap creates are invisible until they’re expensive.

Then there’s server side. The backlash against big tech greedily hoovering up everything it can about you has led to a hard pivot away from third-party cookies. Server-side collection is becoming table stakes, not to circumvent consent but to reclaim your actual first-party data. Adobe’s offering (Event Forwarding) is a paid SKU and requires migrating to AEP (or at least Web SDK) and shoehorning all your data through XDM. GTM Server Side is pretty solid, but to get the most benefit you have to route all your data collection through GA4 beacons. Big vendor lock-in play, and still no schema validation. And why are we calling things “tags” on the server?!

Lastly, both systems are proprietary and closed-source. GTM’s inner workings are opaque. The Launch core extension is open source, but the actual guts of Launch are closed. When something goes wrong, you reach for a third-party browser extension and hope for the best. So, other than Launch, GTM, and Tealium (which has pivoted hard to being a CDP), my client has no recourse. (Sorry, Ensighten).

What I went looking for

In the years since Launch … launched, the industry has moved beyond “tag management.” Segment, RudderStack, Snowplow, Amplitude — they all think in terms of events, not tags or hits. Even GA4 and AEP are event-based. An event is a structured, typed unit of data that gets routed to destinations. A tag is a vendor script that gets injected into a page. The mental model is fundamentally different, and it matters. But the tooling for the people who actually implement Adobe and Google Analytics? It hasn’t fundamentally changed since Obama was in office.

I evaluated everything I could find:

walkerOS came closest to what I wanted. It’s open-source, TypeScript-native, and has an elegant architecture: sources emit events, a collector validates and routes them, destinations transform and deliver them. The consent management is genuinely good — events queue while consent is pending and flush when it’s resolved. I spent real time with it. But it exposes its API through two objects on window. One of them is called elb, which confused me, but it’s configurable. The “mapping” DSL for destination transforms felt like unnecessary abstraction. And it’s maintained by a two-person team in Hamburg. Great work — and not quite what I was looking for.

RudderStack is the most mature open-source option — 65+ destinations, proper consent management, battle-tested at scale. But it’s a full CDP, and self-hosting requires Kubernetes or Docker Compose with Postgres. For my client’s use case, it’s like renting a semi to move a couch.

Jitsu is solid — if your primary concern is getting events into a data warehouse. Consent management is thin, and the warehouse-first orientation doesn’t map to my client’s immediate needs.

Snowplow is excellent but expensive. It was open source, but they changed the license in 2024. The open-source fork (OpenSnowcat) exists but the community is fragmented. Hard to bet on.

Nothing quite fit: lightweight, git-native, privacy-first, schema-validated, not tied to a CDP, with pluggable destinations and first-class support for modern frameworks.

The realization

The thing I actually needed didn’t exist. Not as a product, not as an open-source project, not as a half-finished GitHub repo with a promising README and no commits in six months.

What I needed was simple in principle: a lightweight, vendor-agnostic, git-native event collection layer with schema validation, consent as a state machine, pluggable destinations, and first-class support for modern frameworks like Astro. I’d been sketching this in notes for nine months — architecture diagrams, consent flow charts, destination interfaces. It wasn’t a whim. It was a spec waiting for implementation.

A year ago, I would have filed it under “someday/maybe” in my GTD. The spec was solid but the implementation would have taken weeks — maybe months — of nights and weekends. I’m not a software engineer by trade. I’m an analytics practitioner who can code.

But this is 2026, and code is being commoditized before our eyes. I’ve spent the last two years learning how to work with AI — starting with Claude 3.5 Sonnet, Windsurf (tear), then with Claude Code, some time with OpenAI’s Codex models, enough time with Gemini CLI to almost break my keyboard when the stupid replace tool call failed for the third time. A dozen different AI workflows. Hundreds of hours learning what you can trust, what you can’t, and when to step in. It’s a dance — a vibe, if you will — and I’ve gotten decent at it.

I wrote an architecture doc, handed it to Claude, and directed the implementation. The AI wrote the code. I steered — reviewing modules, catching the places where it needed domain expertise it couldn’t have, handling git operations and deployment. Ten years of practitioner knowledge shaped the spec. Two years of learning to work with AI made the afternoon possible.

The result is Junction.

Junction

Junction is an event collection and routing layer — the plumbing between your site and your analytics/marketing destinations. One global (window.jct), one config file in your repo, and you’re off.

The core ideas are simple:

Events are the primitive. You don’t “fire tags.” You emit typed events — product:added, page:viewed, order:completed — and destinations decide what to do with them. Your instrumentation is decoupled from vendor-specific concerns. (If you’ve ever had to rip out a vendor and rewire every tag that touched it, you understand why this matters.)
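
To make the mental model concrete, here’s roughly what that looks like in code. This is a sketch, not Junction’s actual API — the emit helper and its signature are illustrative:

```typescript
// Illustrative sketch: events are "entity:action" names plus a typed payload.
type EventName = `${string}:${string}`;

interface EmittedEvent {
  entity: string;
  action: string;
  data: Record<string, unknown>;
}

// Hypothetical emit helper; in Junction this role is played by window.jct.
function emit(name: EventName, data: Record<string, unknown>): EmittedEvent {
  const [entity = "", action = ""] = name.split(":");
  return { entity, action, data };
}

// Vendor-agnostic instrumentation: no GA calls, no pixel snippets.
const added = emit("product:added", {
  product_id: "sku-123",
  price: 12.99,
  currency: "USD",
});
```

The site code only knows the event name and payload; which vendors receive it is someone else’s (the config’s) problem.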

Config lives in git. Your entire analytics configuration is a TypeScript file in your repository. API keys, destination registration, consent settings, event contracts — all versioned, all reviewable via pull request, all deployable through CI/CD. No more Launch “Latest.” No more GTM container versions. If someone wants to change how your analytics work, they open a PR. And if an AI agent is doing the work, it can edit a config file directly instead of clicking through a browser UI designed for humans to find the right data element in an extension config.

Consent is a state machine. Not an afterthought, not a bolt-on. Events queue in memory while consent is pending. When the user interacts with your CMP and consent resolves, the queue flushes and replays events to newly-permitted destinations with updated user properties. Every destination declares which consent categories it requires. DNT and GPC are respected by default.

Schemas validate your data. Event contracts are defined per entity+action pair using Zod. If a developer pushes a product:added event without a product_id, or with price as a string, the contract catches it before the event reaches any destination. In strict mode, the event is dropped. In lenient mode, it passes through with a warning. Either way, you know about it. (Compare this to the current state of affairs, where you find out three weeks later when someone asks why the funnel report looks weird.)

Destinations are just TypeScript. No mapping DSL, no configuration UI. A destination is an object with init(), transform(), and send(). Writing one for a new vendor takes about an hour. The transform function is where walkerOS’s “mapping” concept lives, except it’s a regular function — you can use whatever logic you want.
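
Here’s the shape of that interface as described above — the type names and the toy vendor are illustrative, not copied from Junction’s source:

```typescript
type JctEvent = { entity: string; action: string; data: Record<string, unknown> };

// The three-method destination shape: init, transform, send.
interface Destination<Config> {
  init(config: Config): void | Promise<void>;
  transform(event: JctEvent): unknown; // plain TypeScript, any logic you want
  send(payload: unknown): void | Promise<void>;
}

// Toy destination for a hypothetical vendor that wants "Add_To_Cart" naming.
const exampleVendor: Destination<{ apiKey: string }> = {
  init(config) {
    // A real destination would load the vendor SDK here and hand it config.apiKey.
  },
  transform(event) {
    const name =
      event.entity === "product" && event.action === "added"
        ? "Add_To_Cart"
        : `${event.entity}_${event.action}`;
    return { name, properties: event.data };
  },
  send(payload) {
    // A real destination would call the vendor's HTTP API or SDK with the payload.
  },
};
```

The transform is just a function, so renames, filters, and enrichment are ordinary code you can unit-test.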

Here’s what the config looks like:

import { z } from "zod";
import { amplitude } from "./destinations/amplitude";
import { meta } from "./destinations/meta";

export const config = {
  name: "my-site",
  environment: import.meta.env.MODE,
  consent: {
    defaultState: { necessary: true },
    queueTimeout: 30_000,
    respectDNT: true,
    respectGPC: true,
  },
  destinations: [
    {
      destination: amplitude,
      config: { apiKey: import.meta.env.AMPLITUDE_KEY, mode: "client" },
      consent: ["analytics"],
    },
    {
      destination: meta,
      config: { pixelId: "000000000000000" },
      consent: ["marketing"],
    },
  ],
  contracts: [
    {
      entity: "product",
      action: "added",
      version: "1.0.0",
      mode: "strict",
      schema: z.object({
        product_id: z.string().min(1),
        name: z.string().min(1),
        price: z.number().nonnegative(),
        currency: z.string().length(3),
      }),
    },
  ],
  debug: import.meta.env.DEV,
};

If you’ve spent much time in Launch or GTM, you can read this immediately. The difference is that it’s code, it’s type-safe, and it lives in your repo. One config file replaces an entire TMS UI — at least, that’s the idea.

The part where it actually works

I integrated Junction into this blog — the one you’re reading right now — as a test drive. It took about ten minutes. One dependency (zod), a handful of files copied into src/lib/, and a single component added to my Astro layout.

It’s running on this page. Right now. Open your devtools and check it out.

I built a console destination that logs every event to the browser console during development — think GTM Preview Mode or Analytics Debugger, except it’s just your devtools and it doesn’t require you to install anything. My blog uses Plausible, so I built a Plausible destination that bridges custom events to my existing setup. And I wired up automatic tracking for page views (with Astro View Transitions support), outbound link clicks, blog tag clicks, and a blog:read event that fires when someone actually reads a post — 75% scroll depth and 30 seconds on page.

That blog:read event is an engagement signal that, in Adobe Analytics, would require customizing a plugin and wiring up a prop and a success event. It’s the difference between “someone landed on this page” and “someone read this article.” Twenty lines of code, validated against a Zod contract. Last I checked, Enhanced Measurement just fires an event on 90% scroll, and you don’t get any Zod contract. Who is Adobe or Google to tell you what “engagement” means on your site? Why are we still contorting ourselves to satisfy their goofy, one-size-fits-all abstractions? It’s 2026. We can do better.
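
The trigger for that event boils down to two thresholds. A sketch of the decision logic (the scroll and timer wiring is omitted; the names are mine, not Junction’s):

```typescript
// blog:read fires once per page view, after BOTH thresholds are met.
const SCROLL_THRESHOLD = 0.75; // 75% of the page scrolled
const TIME_THRESHOLD_MS = 30_000; // 30 seconds on page

function shouldFireBlogRead(
  maxScrollDepth: number, // 0..1, deepest point reached so far
  msOnPage: number,
  alreadyFired: boolean
): boolean {
  if (alreadyFired) return false; // at most once per page view
  return maxScrollDepth >= SCROLL_THRESHOLD && msOnPage >= TIME_THRESHOLD_MS;
}
```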

What’s next

Junction is open-source on GitHub. The current state: Amplitude, GA4, and Meta Pixel destinations. Consent state machine with event queuing. Zod-based schema validation. Astro integration with View Transitions support. A WinterCG-compatible edge gateway that runs on Cloudflare Workers, Deno Deploy, or Vercel Edge.
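
For a sense of what “WinterCG-compatible” means here: a single fetch handler built on web-standard Request/Response, with no runtime-specific APIs. The route and payload shape below are illustrative, not Junction’s actual gateway API:

```typescript
// Minimal WinterCG-style gateway sketch: one fetch handler, web-standard
// APIs only, so the same object can run on Cloudflare Workers, Deno Deploy,
// or Vercel Edge.
const gateway = {
  async fetch(request: Request): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("method not allowed", { status: 405 });
    }
    // Payload: a batch of events, e.g. [{ entity, action, data }, ...].
    const events = (await request.json()) as unknown[];
    // A real gateway would validate each event against its contract here,
    // then fan out to server-side destinations (GA4 MP, Meta CAPI, etc.).
    return new Response(JSON.stringify({ received: events.length }), {
      headers: { "content-type": "application/json" },
    });
  },
};

// On Cloudflare Workers, this object would be the module's default export.
export default gateway;
```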

On the roadmap: a rules engine for declarative auto-tracking (think Launch rules, but as code), pre-built CMP adapters for OneTrust and Cookiebot, a debug panel, and more destinations.

If you’re interested — try it, break it, tell me what’s missing. If you’re evaluating tools for a migration and the choice feels like “proprietary TMS or heavyweight CDP,” there’s a middle ground now.

One more thing

There’s a voice in my head — maybe you have one too — that says who are you to build this? There are teams of engineers at Google and Adobe working on this problem. What makes you think you can do better?

AI is rapidly commoditizing code. The latest coding models (Opus 4.6, GPT-5.3-Codex) can produce high-quality code with little steering. Some companies have started leaning into this, effectively turning software engineers into agent wranglers. Code is by no means worthless, but it’s no longer the bottleneck. You don’t need to learn Python or TypeScript or Next.js to ship something that solves your specific use case. The bottlenecks are (a) knowing what to build, and (b) getting past the voice that says “you can’t do this.” Knowing what to build is years of watching consent implementations fail, of staring blankly at the network tab in devtools, knowing the thing you just tested in GTM Preview Mode worked there but for some reason doesn’t work “Live.” The bottleneck is domain expertise, and knowing that you — not your stakeholders, not the vendors — are the only thing standing in the way of what you actually need.

Over $1 trillion in SaaS company value has been wiped out so far this year. A few days ago someone calculated when the singularity will occur (if you like unhinged, this is a fun read). The tools we pay five and six figures to license today are being rebuilt by practitioners who know their domains and have learned to work with AI. I’m not saying Junction is better than Launch or GTM — not yet. But it exists, it works, it took an afternoon, and it does things neither of those products can do.

If you’re a practitioner sitting on a spec — a solution you’ve been sketching in notebooks, a tool you wish existed — the barrier to building it has never been lower. The hardest part isn’t the code anymore. It’s shoving a sock in the critic’s mouth and shipping the thing.