Single-pane dashboard for Op Doc, skills, connectors, and live state.
Most people treat AI like a search box. I treat it like a co-worker with a written Operating Doc, role-specific personas, persistent memory, and a couple dozen specialized skills I built to extend its reach.
This dashboard is the live control surface for that system. Same file I open on my desktop and phone to remind myself what's available, fire workflows, check memory, and review the rules I co-authored with Claude.
Interactive picker. Tells me whether a task is Trivial, Standard, or Full-tier — and what that means for how Claude responds.
The elicitation flow that fires on Full-tier work, with compression rules and a flow diagram.
Standing rules — what Claude labels, what overrides exist, how confidence is reported.
Every skill in the system — custom-built, Anthropic library, workflow shortcuts. Click any for triggers and details.
External tool integrations — Gmail, Calendar, Notion, Spotify, Cloudflare, browsers, computer-use, more.
Role-specific lenses (Sheldon, House, Hitch, Jocko, Chief). Each has its own logic and voice.
Reusable prompt library — copy a structured prompt and paste it into a fresh chat.
Snapshot of current auto-memory: feedback rules, project status, references.
Scheduled reviews, the amendment log, snapshot routines.
The full story — origin, architecture, the Anthropic features that power it, and lessons learned.
Data analyst, 8-11 years experience, based in Cincinnati. This system started as a way to stop re-explaining myself to a fresh chat every day. It turned into a deeper experiment: what happens when you treat a human + AI partnership like an org you're designing — with rules, roles, memory, and review cycles?
If you're a recruiter or hiring manager who landed here from my resume or LinkedIn, this is what I mean when I say I work with AI. Not "I use ChatGPT for emails." This.
Inside my Claude session, these fire skills directly. Here on the public site, they copy the trigger phrase to your clipboard so you can see exactly what the command is. The skills themselves only execute on my machine — that's where the memory, files, and connectors live.
Paths like Claude\Context\01_Core\ appear throughout. Those are real paths in my OneDrive. They're load-bearing for me; for you they're just labels.
If you want depth, that tab has the narrative — origin, architecture, what I learned. The other tabs are the working surfaces; they make more sense after the story.
If two rounds cover everything, run two. Don't pad to feel thorough.
Every clickable option must be a thing Evan might actually pick. If only two real options exist, show two.
If the opener makes goal and stakes clear, collapse Round 3 into a single confirmation. Don't re-ask what's already been answered.
If a round isn't earning its keep on this specific request, name it and skip it. Compliance theater is worse than honest skipping.
Don't ask questions Claude doesn't actually need answered.
Before each clarifying question, the test is: would the response change if Claude knew the answer?
"It would be nice to know" doesn't qualify. The bar is "Claude genuinely can't proceed without this."
Why this rule exists: protocol rounds have a real cost (time, friction, context-shifting). Each question must justify that cost. Asking for "nice to have" context erodes the value of the protocol — it becomes ceremony instead of useful elicitation.
flowchart TD
Start(["User opens conversation"]) --> Tier{"Full-tier trigger fires?"}
Tier -->|No| Std["Standard or Trivial response"]
Tier -->|Yes| Override{"Override condition?"}
Override -->|"Skip signal or full context given"| SkipP["Skip protocol, go to output"]
Override -->|No| R1["Round 1 — Tier, Mode, Triggers pre-populated"]
R1 --> Intent{"Intent unambiguous?"}
Intent -->|No| IntentCheck["Reflect read, confirm or correct"]
IntentCheck --> R2
Intent -->|Yes| R2["Round 2 — Trigger-specific follow-ups"]
R2 --> R3["Round 3 — Goal, Stakes, anything-else"]
R3 --> R4{"Need more rounds?"}
R4 -->|"Material gap open"| R4Round["Round 4 plus, conditional"]
R4 -->|"Can proceed"| Output
R4Round --> R4Check{"Round count over 4?"}
R4Check -->|"Yes — friction"| ExitFriction["Exit, name skip, proceed"]
R4Check -->|No| Output
ExitFriction --> Output
Output["Output Round — tier, mode, confidence, pre-mortem, two-pass"] --> Feedback
Feedback["Feedback Round — pressure-test, missed angle, format, ship"] --> Ship{"Ship?"}
Ship -->|No| Feedback
Ship -->|Yes| Done(["Done"])
SkipP --> Done
classDef startNode fill:#d4dce8,stroke:#4a5c8a,color:#1a1a1a
classDef decision fill:#f0d9c8,stroke:#b85c38,color:#1a1a1a
classDef round fill:#ffffff,stroke:#555,color:#1a1a1a
classDef exit fill:#d8e5d8,stroke:#6b8e6b,color:#1a1a1a
class Start,Done startNode
class Tier,Override,Intent,R4,R4Check,Ship decision
class R1,R2,R3,R4Round,IntentCheck,Output,Feedback round
class Std,SkipP,ExitFriction exit
Conversation enters or exits the protocol.
A branch point. Follow the labeled arrow.
A protocol step that runs as a turn in the conversation.
Protocol short-circuits — go straight to output.
Did the request hit a Full-tier trigger (money, career, published, standing rules, foundational, irreversible, "this matters")? If no → Standard or Trivial response, no protocol. If yes → continue.
Did Evan say "just respond" or "skip the protocol"? Did the opener already include tier, mode, triggers, goal, and stakes? Are we mid-Full-tier on a follow-up turn? Any of these → skip the protocol, go to output.
Claude pre-populates its read of tier, mode, and which triggers fired. Evan confirms or corrects.
Claude reflects back what it understood Evan is actually asking for. If unclear, Evan corrects before Round 2.
Money asks magnitude and reversibility. Career asks stage and who's affected. Published asks audience. Etc. Only triggers that fired generate questions.
What does Evan want to walk away with (goal)? What's actually on the line (stakes)? Plus an open "anything else I should know" field.
Conditional. Only runs if a previous answer surfaced something that materially changes the response, or if a load-bearing assumption is still unstated. The stopping rule above governs whether the question runs.
If round count exceeds 4 without convergence, the protocol has stopped helping. Exit, name what's being skipped, proceed.
The actual response — tier announcement, mode, confidence labels, citations, pre-mortem, two-pass review.
Claude offers options: pressure-test, missed angle, change format, expand a section, or ship. Loop until ship.
Hard on bad ideas. Neutral on neutral ones. Supportive on good ones.
Say "hold" or "update" and why.
Trivial → Standard → Full. If unclear, default to higher tier.
Confidence and assumptions are part of the answer, not subtext.
No percentages. False precision hides bad calls. Threshold for flagging: anything below High.
"Holding because…" or "Updating because…" so the change is visible as earned vs. performed.
If it's not a real option, don't include it.
If a rule isn't earning its keep on this task, say so. Compliance theater is worse than honest skipping.
Permanent standing question on every non-trivial answer.
State the strongest version of the position first, then attack that version.
Empirical or quantitative claims need an inline source. Speculation flag is its own citation.
First pass: best answer. Second pass: hostile reviewer. Revise. Ship.
"If this fails 12 months from now, what's the most likely reason?" Then address it.
"What would guarantee failure?" Then avoid those things.
Write the prior before the research. The delta is the calibration signal.
Captured at render time. Grouped by type.
Say operating system or op system in chat to re-render with current state. All standing files live under your portable Claude\Context folder. Live (ephemeral) auto-memory lives separately in AppData.
C:\Users\egbur\OneDrive\Documents\Claude\Context\
├── 01_Core\                          ← Standing rules + skills + history
│   ├── Operating_Doc.md              ← The Constitution itself
│   ├── Operating_Doc_Log.md          ← In-flight findings between reviews
│   ├── Skills_Guide.md               ← Skills reference (legacy companion)
│   ├── Handoff_Protocol.md
│   ├── Session_Continuity_Protocol.md
│   ├── Implementation_Log.md         ← Major changes log
│   ├── Skills\                       ← Custom skills, one folder per skill
│   │   ├── operating-system\
│   │   │   ├── SKILL.md
│   │   │   └── references\
│   │   │       └── artifact_template.html
│   │   ├── persona-protocol\SKILL.md
│   │   ├── md-to-word\
│   │   │   ├── SKILL.md
│   │   │   └── references\
│   │   │       ├── build_reference_doc.py
│   │   │       └── reference.docx
│   │   └── ... (other skills)
│   ├── Memory_Snapshots\YYYY-MM-DD\  ← Periodic auto-memory backups
│   └── Handoffs\YYYY-MM-DD_topic.md  ← Session handoffs
├── 02_Knowledge\                     ← Reference material
├── 03_Projects\                      ← Active projects
├── 04_Templates\                     ← Project templates
└── 99_Archive\                       ← Old / superseded material

Live auto-memory (ephemeral, session-bound):
C:\Users\egbur\AppData\Roaming\Claude\local-agent-mode-sessions\<session>\spaces\<space>\memory\MEMORY.md
To refresh the dashboard, say operating system in chat. The skill reads current state from disk and rebuilds the artifact. Say snapshot memory for a portable backup.

To add a new skill:
1. Create Claude\Context\01_Core\Skills\<new-skill-name>\SKILL.md with YAML frontmatter (name, description with trigger phrases) followed by the skill body — instructions Claude follows when triggered.
2. Add a references\ folder for supporting files (templates, scripts, data).
3. Package it with the skill-creator skill, or use Customize → Skills → Package locally.
4. Upload the .skill bundle via Customize → Skills.
5. Add an entry to the SKILLS array in the operating-system artifact template (name, oneLine, triggers, runPhrase, whenToUse, optimal, gotchas, example).
6. Say operating system. The Skills tab now shows the new card.

To add a persona:
1. Edit Claude\Context\01_Core\Skills\persona-protocol\SKILL.md.
2. Repackage and re-upload persona-protocol via Customize → Skills.
3. Add an entry to the PERSONAS array in the operating-system artifact template.
4. Say operating system. Personas tab + Live State KPI both update.

To add a connector:
1. Add an entry to the CONNECTORS array in the operating-system artifact template (manual curation — auto-detect is a v2 enhancement).
2. Say operating system. Connectors tab + Live State KPI update.
To remove one, delete its entry from the CONNECTORS array and say operating system.

Auto-memory updates automatically as Claude saves info during conversations. It's stored under AppData and is session-bound.
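For concreteness, an entry in the SKILLS array might look like the sketch below. The field names (name, oneLine, triggers, runPhrase, whenToUse, optimal, gotchas, example) are the ones the template expects; the values here are hypothetical placeholders, not the real md-to-word skill's metadata.

```javascript
// Illustrative only — field names match the SKILLS array schema;
// the values are hypothetical, not copied from the real skill.
const SKILLS = [
  {
    name: "md-to-word",
    oneLine: "Convert a Markdown file to a styled Word document.",
    triggers: ["md to word", "convert to docx"],
    runPhrase: "md to word",
    whenToUse: "Any time a Markdown deliverable needs to ship as .docx.",
    optimal: "Single file, standard headings, no embedded HTML.",
    gotchas: "Custom styles require the reference.docx template.",
    example: "md to word on the quarterly report draft",
  },
];

// The dashboard renders one card per entry, roughly:
const cards = SKILLS.map((s) => `${s.name} — ${s.oneLine}`).join("\n");
console.log(cards);
```

The dashboard reads this array at render time, which is why step 5 above is required before a new skill shows up as a card.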
Periodic snapshots via the memory-snapshot skill copy the live auto-memory to Memory_Snapshots\YYYY-MM-DD\ — portable backup. Run weekly (Sunday recommended) or before any major Op Doc review.
Dashboard's memory view reflects state at render time. Re-render via operating system to refresh.
The Operating Doc lives at Claude\Context\01_Core\Operating_Doc.md. After editing it, say operating system — the Op Doc version, last-amended date, and recent amendments all refresh from the source.

When to refresh:
How to refresh:
1. Say operating system, op system, or a close variant in chat.
2. The operating-system skill fires. It reads current state from disk (Op Doc, MEMORY.md, skills folder) and rebuilds the embedded data.
3. It calls mcp__cowork__update_artifact to replace the rendered HTML.

Scenario: you add a new Notion connector, create a new "Researcher" skill, add a new "Skeptic" persona, and document the changes as a constitutional amendment.
1. Add the Notion connector to the CONNECTORS array in the artifact template.
2. Create Claude\Context\01_Core\Skills\researcher\SKILL.md. Package + upload via Customize → Skills. Add researcher to the SKILLS array.
3. Edit persona-protocol\SKILL.md — add the Skeptic persona, update the tables, repackage + upload. Add skeptic to the PERSONAS array.
4. Edit Operating_Doc.md — add an amendment entry covering all three additions, bump the version to v1.5, update the top-of-doc metadata.
5. Say snapshot memory to capture state before the next batch of work.
6. Say operating system in chat.

Source path: Claude\Context\01_Core\Skills\operating-system\references\artifact_template.html
SKILL.md: Claude\Context\01_Core\Skills\operating-system\SKILL.md
When the operating-system skill fires, it reads the template, customizes the embedded data (version, memory, amendments, etc.) for the current state, and renders. Edit the template directly to change the structure or visual style; edit the data customization step in SKILL.md to change what gets pulled in at render time.
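The data-customization step amounts to splicing fresh state into the template before re-rendering. A minimal sketch, assuming a placeholder token in the template — the /*__DATA__*/ marker and the function name here are hypothetical; the real SKILL.md describes this step in prose rather than code:

```javascript
// Illustrative sketch of the render step. The /*__DATA__*/ placeholder and
// renderDashboard name are assumptions, not the actual template mechanism.
function renderDashboard(templateHtml, state) {
  // state: { version, memory, amendments, skills, personas, connectors, ... }
  const payload = `const STATE = ${JSON.stringify(state)};`;
  return templateHtml.replace("/*__DATA__*/", payload);
}
```

The resulting HTML would then be handed to mcp__cowork__update_artifact, which swaps it into the rendered artifact in place.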
Filesystem-only changes are invisible to Claude's skill loader. Always package + upload.
If a Write or copy fails with permission-denied, retry after a few seconds, or write via bash heredoc to /tmp first then copy back.
If wc -c or tail shows truncated content, cross-check with the Read tool before assuming corruption.
Adding a connector in Customize doesn't automatically update the dashboard — the CONNECTORS array in the template is hand-maintained for now.
The artifact's amendment count reflects what's in the embedded AMENDMENT_DATA array, which currently shows the 3 most recent. The Op Doc may have more historical entries — refresh the array if you want the full count.
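Keeping the embedded array to the three most recent entries is a one-liner at render time — a hypothetical sketch, since the actual trimming logic isn't shown here:

```javascript
// Hypothetical — trims the full amendment history to the three most recent
// entries before embedding as AMENDMENT_DATA. Oldest-first input assumed.
const trimForEmbed = (history) => history.slice(-3);
```

Widening (or dropping) that slice is what it would take for the dashboard's count to match the Op Doc's full history.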
In March 2026 I noticed I was solving the same coordination problem over and over: a fresh Claude chat didn't know my goals, my voice, my rules, or my work history. Every conversation burned the first ten minutes re-establishing context.
So I sat down with Claude and we co-authored an Operating Doc — a document defining how we'd work together. Principles, decision tiers, what Claude must label vs. assume, when to ask vs. proceed, how to handle disagreement.
That document is the spine of this whole system. Everything else — the skills, the memory, the personas, this dashboard — derives from it. It's been actively evolving ever since — April and May brought new amendments, new layers, and new tools on top of the spine. Constant updating is the design, not a side effect.
Versioned plain-text doc at Claude\Context\01_Core\Operating_Doc.md. Currently v1.5. Source of truth for every other layer.
Has an Amendment Log appended on every rule change, plus a scheduled review cycle (monthly for three months, then quarterly) so it stays alive instead of going stale.
Auto-memory. Claude's built-in file-based memory at ~\.claude\projects\…\memory\. Holds user facts, feedback rules, project state, external system references. Loaded automatically each conversation.
Portable snapshots. Periodic copies into Claude\Context\…\Memory_Snapshots\ — backup and portability across machines.
Each entry is structured (type, description, why, how to apply) so future Claude can act on it, not just read it. Memory isn't journaling — it's load-bearing context.
A skill is a packaged set of instructions Claude loads on demand when a trigger fires. They extend Claude into specific domains without bloating every conversation with context.
I have skills for my job search (resume tailoring, interview prep, recruiter outreach), my finances (journal entries, reconciliations, SOX testing), my data work (SQL across dialects, viz, validation), and dozens of personal-system skills (memory hygiene, session continuity, voice consistency, persona activation).
Some come from Anthropic's public library. Most are custom — built for the way I think and work.
MCP servers, web connectors, and desktop integrations that let Claude actually touch the world: Gmail, Calendar, Notion, Spotify, Cloudflare, browsers, direct computer-use control. The Connectors tab catalogs every one I have wired up, what it's good for, and the gotchas.
Five named modes I activate by saying their name: Sheldon (precision and structure), House (skeptical diagnostician), Hitch (warm strategist), Jocko (discipline, no excuses), Chief (the meta-coordinator who convenes the others).
They aren't characters — they're modes of thinking I want available on demand. Same Claude underneath; different lens on top.
A single-file HTML artifact. No build step, no framework. Vanilla JS, Mermaid for diagrams, pure CSS.
Inside my Claude session, "Run" buttons fire trigger phrases through the Cowork host. Out here on the public site, they fall back to clipboard-copy so the system stays usable on mobile or as a reference even without the host.
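The dual-mode button logic can be sketched like this. The hostAvailable flag and the shape of the returned action are assumptions for illustration; the real artifact's bridge to the Cowork host isn't shown here.

```javascript
// Sketch of the dual-mode "Run" button. hostAvailable and the action
// descriptor are hypothetical names, not the artifact's actual API.
function runButtonAction(runPhrase, hostAvailable) {
  if (hostAvailable) {
    // Inside the Cowork session: fire the trigger phrase directly.
    return { mode: "fire", phrase: runPhrase };
  }
  // Public site: fall back to copying the trigger phrase.
  return { mode: "copy", phrase: runPhrase };
}

// In the browser, the "copy" branch would end with:
//   navigator.clipboard.writeText(phrase);
```

The same button markup serves both contexts; only the handler's branch changes, which is what keeps the public site usable as a reference.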
Source of truth lives in OneDrive. A sync script pulls it into the deploy repo and pushes to Cloudflare Pages — total deploy time under 30 seconds.
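The sync could look roughly like the sketch below. The paths, project name, and commit message are placeholders (the actual script isn't reproduced here); wrangler pages deploy is Cloudflare's real Pages deploy command.

```javascript
// Hypothetical sketch of the sync script — paths and project name are
// placeholders, not the real repo layout.
function buildDeploySteps(sourceHtml, repoDir) {
  return [
    // 1. Pull the artifact from OneDrive into the deploy repo.
    `copy "${sourceHtml}" "${repoDir}\\index.html"`,
    // 2. Commit the change.
    `cd "${repoDir}" && git add -A && git commit -m "sync dashboard"`,
    // 3. Push to Cloudflare Pages.
    `cd "${repoDir}" && npx wrangler pages deploy . --project-name=operating-system`,
  ];
}
```

Run in sequence (e.g. via child_process.execSync), three commands like these account for the sub-30-second deploy.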
Loadable instruction packages — the foundation of customizing Claude per task type.
File-based persistent memory across conversations.
Open standard for connecting Claude to external tools and services.
First-party integrations for common services (Gmail, Calendar, Drive, etc.).
Persistent HTML artifacts you can edit and re-render in place — what this dashboard runs on.
The CLI that built and deploys this site. Same CLI I use for all my coding work.
The leverage isn't in the model. It's in the system around the model — rules, memory, workflows, review cycles. Same as building any team.
Every rule in the Operating Doc came out of a conversation where Claude pushed back, suggested alternatives, or pointed out edge cases I'd missed. The doc is better because of it.
The dashboard exists because I couldn't remember which skill did what or when to use it. The Operating Doc exists because I couldn't remember how I'd decided to handle X. Make the system load-bearing so you don't have to be.
Standing documents rot. The Amendment Log plus monthly reviews are how I keep this system from becoming a museum piece.
Start with one page of plain-English rules for how you want to work with AI. Don't worry about skills or dashboards yet.
Use that document for a week. Notice what you keep re-explaining. Each repeated explanation is a candidate for a skill, a memory entry, or a clarification of the rules.
Iterate. The dashboard came months into this for me — it was never the goal, just the visualization that emerged once the system was big enough to need one.
Hosted on Cloudflare Pages. Same domain registrar pattern as my other live projects (mymovietracker.com, helpgroceryshop.com).
When I update the artifact in my Cowork session, a one-command sync ships it here. Source repo lives at C:\Users\egbur\Code\operating-system\.