Data Sovereignty in Practice

Case Studies

Real stories from a real operating ecosystem. Every case study below happened. Every number is exact. This is what data sovereignty looks like when it's not theoretical.

245
Active Workflows
8
Live Brands
400+
Database Tables
0
Vendor Lock-In
Case Study 01 — Security & Protection

The Sweep That Found 575 Silent Failures

Your system was working. It was also silently breaking.

The Guardian — data protection archetype
The Situation

A multi-brand ecosystem running 363 automated workflows processed payments, managed events, sent emails, and tracked brand assets across seven websites. Everything appeared functional. Revenue was flowing. Events were listing. Emails were sending.

The Discovery

A systematic audit revealed what dashboards couldn't see: 575 database nodes had no error handling — a single bad record could crash an entire pipeline. Four API keys were hardcoded into workflows; one of them, an expired Stripe credential, had been silently dropping real payments for days. Twenty webhook URLs were baked into workflow code instead of configuration. Fifteen webhook paths were duplicated, causing unpredictable routing.

The Fix

Every node was hardened with graceful error handling. Every hardcoded secret was replaced with a configuration variable. Every duplicate path was resolved. Every write operation was protected with type safety. The system went from appearing healthy to actually being resilient.
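The two core fixes, graceful error handling around every write and secrets lifted into configuration, can be sketched in a few lines of Python. This is illustrative only; the function and variable names are not the system's actual code:

```python
import logging
import os

log = logging.getLogger("pipeline")

# Secrets live in configuration, not in workflow code.
# (The environment variable name here is a placeholder.)
STRIPE_KEY = os.environ.get("STRIPE_API_KEY")

def safe_write(table: str, record: dict, write_fn):
    """Attempt a database write; log and skip a bad record instead of
    letting one failure crash the whole pipeline."""
    try:
        return write_fn(table, record)
    except Exception as exc:
        log.error("write to %s failed for %s: %s", table, record.get("id"), exc)
        return None  # downstream steps treat None as 'skipped, already logged'
```

The same wrapper pattern applies at every write node, which is how one malformed record becomes a logged skip rather than a halted pipeline.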

A single expired credential was dropping real payments for days before anyone noticed. That's not a tool problem — it's a visibility problem.
By The Numbers
575
Nodes Hardened
320
Writes Protected
4
Leaked Secrets Found
15
Conflicts Resolved
The Lesson

Security isn't paranoia — it's knowing where the doors are and who has the keys. Systems that look healthy and systems that are healthy are two different things.

Course Day 1: Security Basics
Case Study 02 — Portability & Sovereignty

The Migration That Took Two Hours

When your email platform sunsets, you move in hours — not months.

Data Satchel — portable systems
The Situation

The ecosystem relied on ConvertKit (now Kit) for email marketing across multiple brands. Eight workflows triggered subscriber actions, welcome sequences, and event notifications through Kit's API. Nine configuration variables pointed to Kit endpoints. Then Kit's pricing model changed and the platform no longer fit.

The Migration

Because every integration was built through a configuration layer — not hardcoded into business logic — the migration was mechanical, not architectural. Eight workflows were rewired from Kit nodes to MailerLite API calls. Nine Kit-specific variables were replaced. New subscriber groups were created. Old references were cleaned. The entire migration completed in a single working session with zero subscriber data loss and zero downtime.

Why It Worked

The system was built on the principle that tools are replaceable but data and workflows are not. Subscriber lists were structured in a portable database. Workflow logic was separated from platform-specific API calls. When the tool changed, only the connection layer needed rewriting — not the business logic underneath.
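A minimal sketch of that separation, with hypothetical class names, looks like this:

```python
from dataclasses import dataclass

# Hypothetical names: the point is the boundary, not the vendor.
@dataclass
class Subscriber:
    email: str
    group: str

class EmailProvider:
    """Connection layer: only subclasses know vendor specifics."""
    def add_subscriber(self, sub: Subscriber) -> bool:
        raise NotImplementedError

class MailerLiteProvider(EmailProvider):
    def add_subscriber(self, sub: Subscriber) -> bool:
        # Real code would call MailerLite's API here.
        return True

def welcome_flow(provider: EmailProvider, email: str) -> bool:
    """Business logic never names a vendor; swapping platforms is a
    configuration change, not a rewrite."""
    return provider.add_subscriber(Subscriber(email, "welcome"))
```

Under this structure, a platform change means writing one new provider class; `welcome_flow` itself never changes.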

"Built to survive change" isn't a philosophy poster. It's a two-hour migration instead of a two-month panic.
By The Numbers
8
Workflows Migrated
9
Variables Replaced
0
Data Lost
0
Downtime
The Lesson

Platforms sunset. Pricing changes. APIs deprecate. If your workflows are documented, your data is portable, and your logic is separated from your tools — you adapt. Everyone else starts over.

Manifesto: Portable Across Platforms
Case Study 03 — Review Rituals & Dashboards

The Dashboard That Runs Before Coffee

15 minutes a week prevents 5 hours of crisis a month.

Data Cathedral — organized information space
The Situation

Seven brands. Dozens of upcoming events. Brand assets in various stages of approval. Content calendars across multiple platforms. Website uptime to monitor. Financial tracking to review. No single person can hold all of that in their head — and they shouldn't have to.

The System

Every morning at 7:00 AM, an automated brief pulls live data from five sources: upcoming events with venue and ticket status, brand asset requests awaiting review, content publishing schedule, team deliverables and deadlines, and website health across all seven domains. It compiles everything into a single digest delivered to the team channel before anyone opens their laptop.
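The compile step can be sketched as a loop over source functions, where any one source failing degrades to a note in the brief instead of cancelling it. The source names and contents below are placeholders:

```python
from datetime import date

# Hypothetical source functions; the live system queries real data.
def upcoming_events():
    return ["Valentine's Mix & Match (venue confirmed, tickets live)"]

def site_health():
    return ["all seven domains responding"]

SOURCES = {"Events": upcoming_events, "Site health": site_health}

def morning_brief(today: date) -> str:
    """Compile one digest from every source; a failing source becomes
    a line in the brief rather than a silent omission."""
    lines = [f"Daily brief for {today.isoformat()}"]
    for name, fetch in SOURCES.items():
        try:
            items = fetch()
        except Exception as exc:
            items = [f"(source unavailable: {exc})"]
        lines.append(f"{name}:")
        lines.extend(f"  - {item}" for item in items)
    return "\n".join(lines)
```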

The Weekly Layer

On top of the daily brief, a weekly audit workflow scans every active automation for errors, credential health, and execution failures. It catches problems before they compound — an expired API key, a misconfigured table reference, a workflow that stopped running and no one noticed. The tortoise beats the hare: consistent, small-effort review prevents heroic, high-stress recovery.

Your ecosystem already ran the health check before your coffee was ready.
By The Numbers
5
Data Sources
7:00 AM
Daily Brief
363
Workflows Audited
15 min
Weekly Ritual
The Lesson

Systems decay without maintenance. The difference between a system that lasts and one that crumbles is a 15-minute weekly ritual — not heroic effort, but consistent small attention.

Course Day 7: Review Ritual
Case Study 04 — Delivery Systems

One Command, Seven Brands, 26 Pages

Shipping isn't stressful when your delivery system is a checklist.

Village Network — connected infrastructure
The Situation

Seven distinct brands, each with its own domain, visual identity, navigation, and content. RoseCourt runs community events. Grove House manages land consultation. Witch Haven Grove sells artisan goods. Mirror Mirror teaches data sovereignty. Each site has different colors, fonts, page structures, and audiences. Managing seven separate codebases would be unsustainable for a small team.

The Architecture

Instead of seven codebases, the ecosystem uses a shared builder system: a brand registry defines each brand's identity (colors, fonts, links, legal, metadata), and page builders generate HTML from templates using that registry. Every page is generated, not hand-coded. A single deployment script builds all 26 pages across all seven brands and pushes them to the edge network. Total output: 688 kilobytes. Total deploy time: under a minute.
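The registry-plus-builder idea fits in a few lines. The brand entries and template below are simplified placeholders, not the real registry:

```python
# Hypothetical brand registry: one entry per brand, one shared template.
BRANDS = {
    "rosecourt": {"name": "RoseCourt", "color": "#a03050", "legal": "© RoseCourt"},
    "mirrormirror": {"name": "Mirror Mirror", "color": "#202040", "legal": "© Mirror Mirror"},
}

PAGE_TEMPLATE = """<!doctype html>
<html><head><title>{name}</title>
<style>body {{ --brand: {color}; }}</style></head>
<body><footer>{legal}</footer></body></html>"""

def build_pages(registry: dict) -> dict:
    """Generate one HTML page per brand from the shared template;
    a change to the template or registry updates every brand."""
    return {slug: PAGE_TEMPLATE.format(**brand) for slug, brand in registry.items()}
```

Because the legal footer lives in the registry, changing it once regenerates it everywhere on the next build.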

What This Enables

When a legal footer changes, it updates everywhere. When a new event is created, the site rebuilds with fresh data. When a new brand launches, it plugs into the existing registry and immediately inherits the full deployment pipeline. The Valentine's Mix & Match event went from event creation to live website listing to automated reminder system — all in one session.

Nobody gets lost on a well-marked trail. The markers eliminate anxiety. Your delivery system is the same — clear steps, clear handoffs, clear confirmation.
By The Numbers
7
Brands Deployed
26
Pages Generated
688K
Total Output
<60s
Deploy Time
The Lesson

Most delivery anxiety comes from complexity, not difficulty. When shipping is a documented, repeatable process — a script, not a judgment call — it stops being stressful and starts being routine.

Course Day 5: Delivery System
Case Study 05 — Automation & Error Handling

The Payment That Almost Disappeared

Nine real payments, silently rejected. One missing setting fixed it all.

Trust Jar — containment and verification
The Situation

The payment pipeline was straightforward: Stripe processes a checkout, fires a webhook to the automation engine, and the engine logs the transaction to the database with customer details, amount, and event type. It had been working for months. Then it stopped — but only for certain payments.

The Discovery

Nine real customer payments were processed by Stripe (money collected successfully) but silently rejected by the database. The automation engine tried to write "checkout.session.completed" into a dropdown field that only accepted predefined options. The database rejected the unknown value. No error was surfaced. No alert was triggered. Revenue was captured but never recorded — invisible to reporting, follow-up workflows, and customer acknowledgment.

The Fix

A single configuration flag — typecast — tells the database to automatically create new dropdown options when it encounters unknown values instead of rejecting them. Once enabled, the nine failed records were recoverable and every future payment would be captured correctly. The fix took minutes. Finding it took a systematic audit of every write operation across 181 workflows — which uncovered 320 nodes with the same vulnerability.
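`typecast` matches the name of Airtable's write-API option, so assuming an Airtable-style records API, the fix is one extra field in the request body. This sketch builds only the payload; the endpoint and credentials are omitted:

```python
import json

def payment_payload(fields: dict) -> bytes:
    """Request body for a record write. With typecast enabled, an
    unknown single-select value (like a new Stripe event type) creates
    a new option instead of being silently rejected."""
    return json.dumps({"records": [{"fields": fields}], "typecast": True}).encode()

# The body would be POSTed to the table's records endpoint with an
# Authorization header; those details are left out of this sketch.
```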

The payment was processed. The money was collected. The record was never created. That's the difference between a system that runs and a system that's resilient.
By The Numbers
9
Payments Lost
320
Nodes Vulnerable
181
Workflows Scanned
1
Setting Fixed It
The Lesson

Automation without error handling is a liability. The most dangerous failures are the ones that don't look like failures — where everything appears to work while data quietly disappears.

Course Day 3: Engines + Automation
Case Study 06 — Databasing & Structure

Four Bases, 400 Tables, One Source of Truth

A spreadsheet is a graveyard. A database is a living system.

Data Cabinet — structured information storage
The Situation

Seven brands needed to track events, contacts, financial transactions, brand assets, content calendars, course enrollments, product inventory, email subscribers, team tasks, and more. The temptation was obvious: one giant spreadsheet per brand. Or worse, scattered notes in various apps that nobody could find when they needed them.

The Architecture

Instead, four structured databases organize everything by domain: RoseCourt handles events and community (113 tables), Mirror Mirror manages courses and content (143 tables), Grove House tracks land projects and members, and a shared operations base handles cross-brand workflows. Each table has typed fields — dates are dates, currencies are currencies, relationships link records across tables. A contact in the events table automatically connects to their payment history, their feedback responses, and their email preferences.
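A toy version of that linking, with hypothetical field names, shows why typed fields and explicit relationships matter:

```python
from dataclasses import dataclass

# Hypothetical minimal schema: typed fields and explicit links,
# the same idea the four bases implement at scale.
@dataclass
class Contact:
    id: str
    email: str

@dataclass
class Payment:
    contact_id: str    # link to a Contact, not a copied name string
    amount_cents: int  # currency as a typed value, not free text

def payments_for(contact: Contact, payments: list) -> list:
    """Follow the link: one contact record reaches its whole history."""
    return [p for p in payments if p.contact_id == contact.id]
```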

What This Enables

When the Valentine's Mix & Match event was created, the database didn't just store a name and date. It linked to the venue record, the pricing tiers, the ticket sales tracker, the attendee contact list, the email reminder sequences, the post-event feedback forms, and the financial P&L report — all automatically. One record, connected to everything it touches. That's the difference between data you have and data you can use.

The goal isn't more data. It's findable, connected, structured data that works for how you actually think and operate.
By The Numbers
4
Databases
400+
Tables
363
Workflows Connected
7
Brands Unified
The Lesson

Good database structure is invisible — things just work. Bad structure creates daily friction you stop noticing until you try to find something important and can't. Start with three tables. Grow from there.

Course Day 2: Databasing for Real Life
Case Study 07 — Archival Intelligence

From Rare Herbal Books to Living Product Intelligence

How Mirror Mirror can preserve, structure, and activate rare herbal collections for long-term research and Witch Haven Grove innovation.

Archive horizon — structured archival knowledge
The Problem

Rare herbal texts are often locked in low-discoverability formats. Without structured metadata, citations stay buried in old pages instead of informing modern product development. Mirror Mirror needed a model to convert archival assets into a usable, citation-backed knowledge system.

The Benchmark

Kew digitized 5.4M specimens with barcoding and open data portals. BHL serves 62M+ pages with API-first architecture. Wellcome implemented IIIF for deep zoom and cross-platform access. NLM and NAL maintain major botanical and herbal holdings as live research assets. These institutions proved the model works at scale.

The Mirror Mirror Pipeline

Digitize with preservation-grade imaging and persistent IDs. Structure with herbal-aware metadata: taxa, preparations, indications, provenance. Publish via searchable collections and IIIF/API endpoints. Convert archival evidence into governed, citation-backed ingredient dossiers for Witch Haven Grove product R&D. Every claim is tiered by evidence confidence and linked to its source.
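The dossier stage described above might be modeled like this; the field names and tier labels are illustrative, not the pipeline's actual schema:

```python
from dataclasses import dataclass

# Hypothetical record shapes for a citation-backed ingredient dossier.
@dataclass
class Claim:
    text: str
    tier: str        # e.g. "historical", "traditional", or "clinical"
    source_id: str   # persistent ID of the digitized source

@dataclass
class IngredientDossier:
    taxon: str
    claims: list

def product_facing(dossier: IngredientDossier) -> list:
    """Only clinical-tier, source-linked claims are candidates for
    product copy; everything else stays research-internal."""
    return [c for c in dossier.claims if c.tier == "clinical" and c.source_id]
```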

Historical references are clearly separated from modern clinical claims. All product-facing insights are provenance-linked and require compliance review before external messaging.
By The Numbers
7
Benchmark Institutions
20
Cited Sources
4
Pipeline Stages
6
90-Day Priorities
The Lesson

The most valuable library isn't the biggest — it's the one where every reference can be traced from shelf to product label with full citation integrity. Start with 25 high-value works. Build the pipeline before scaling the collection.

Applied: Archive-to-Product Pipeline
Sources & References (20)
  1. Royal Botanic Gardens, Kew — Digitisation Project
  2. Kew — Collections Digitisation
  3. Kew — Digitisation Progress Report (Q1 2025/26)
  4. Biodiversity Heritage Library — About
  5. BHL — Datasets on AWS Cloud
  6. BHL — Digital Imaging Specifications
  7. BHL — API v3 Documentation
  8. BHL Blog — Travelling Plants Collaborative Project
  9. Wellcome Collection — Catalogue API
  10. Wellcome Collection — IIIF Documentation
  11. Smithsonian — Open Access FAQ
  12. Library of Congress — JSON/YAML APIs
  13. Library of Congress — IIIF Image Services
  14. Europeana — APIs
  15. NLM — Digital Projects
  16. NLM — Digital Collections
  17. NLM — Web Service API
  18. NLM — Incunabula Collection (Herbarius latinus, Ortus sanitatis)
  19. NAL — Botany Collections
  20. NAL — Rare Books Exhibit

Build Your Own

These case studies aren't from a client portfolio. They're from our own ecosystem — the same system we teach you to build in the Data Sovereignty course.

View Courses · Watch the Webinar