---
version: "1.0.0"
name: start
description: Initialize the productivity + journal system and open the dashboard with graph-aware file structure. Use when setting up the plugin for the first time, bootstrapping working memory from your existing task list, scaffolding the work/life memory tree, or decoding the shorthand (nicknames, acronyms, project codenames) you use in your todos.
type: skill
updated: 2026-04-26
tags:
  - skill
  - graph
  - bootstrap
  - spine
---
# Start Command
If you see unfamiliar placeholders or need to check which tools are connected, see [[CONNECTORS.md|CONNECTORS]].
Initialize the task and memory systems, then open the unified dashboard.
**Graph convention.** Every file written by this skill follows the vault's Obsidian-style graph convention: YAML frontmatter at top, path-based wiki-links for cross-references, and a `## Linked from` footer regenerated by the transform script. Full spec in [[memory/README.md|Memory README]] § "Graph convention".
Every file creation, edit, or move in this flow goes through preview → confirm → write, per [[memory-management/SKILL.md|Memory Management Skill]] § Universal preview rule. Scaffolding (creating empty directories plus placeholder `README.md`, `CLAUDE.md`, `TASKS.md`, and `dashboard.html`) runs without per-file prompting since it's pure structural setup, but any content-bearing write (memory files, task files, wiki-scrape output) is batched into a preview and applied only on `y`.
## Instructions
### 1. Check What Exists
Check the working directory for:
- `TASKS.md` — task list
- `CLAUDE.md` — working memory
- `memory/` — curated knowledge base (with `README.md` as the map)
- `journal/` — immutable daily log
- `dashboard.html` — the visual UI
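The existence check can be sketched in a few lines of Python (the `check_existing` helper and its path table are illustrative, not part of the plugin):

```python
from pathlib import Path

# Expected components, relative to the working directory.
EXPECTED = {
    "TASKS.md": "task list",
    "CLAUDE.md": "working memory",
    "memory/README.md": "knowledge-base map",
    "journal": "immutable daily log",
    "dashboard.html": "visual UI",
}

def check_existing(root: str = ".") -> dict[str, bool]:
    """Return {relative path: exists} for each expected component."""
    base = Path(root)
    return {rel: (base / rel).exists() for rel in EXPECTED}
```

Everything reported missing feeds directly into step 2.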
### 2. Create What's Missing
If `TASKS.md` doesn't exist: Create it with the standard template (see [[task-management/SKILL.md|Task Management Skill]]). Place it in the current working directory.
If `dashboard.html` doesn't exist: Create it in the current working directory.
If `memory/README.md` doesn't exist: That file is the authoritative architecture map. Ensure it exists — it should be shipped with the plugin. If missing, stop and surface the issue; don't invent a substitute layout.
If `memory/` sub-tree is missing: Create the directory scaffold described in [[memory/README.md|Memory README]]:
- `memory/me.md`, `memory/glossary.md`, `memory/all-entities.md`, `memory/preferences.md`
- `memory/work/companies/`, `memory/work/_current.md` (stub)
- Per-company scaffold (when a company is first referenced):
  - `memory/work/companies/{slug}/{people,projects,tasks/{active,waiting,done}}/`
  - `memory/work/companies/{slug}/{slug}.md` (the company entity uses its slug as filename) and `memory/work/companies/{slug}/glossary.md`
- `memory/life/{people/{family,friends,flatmates,teachers},places,orgs,topics,events,health/{providers,conditions,log.md},home}/`
Create empty directories; only create files when they're first needed (lazy). Do not create `memory/context/` — that structure was flattened; `preferences.md` lives at `memory/preferences.md` and the timeline lives in `me.md`.
If `journal/` is missing: Create `journal/`, `journal/<current-year>/`, and `journal/<current-year>/<current-month>/`. Do not create an entry file yet — that happens on the first `/journal` call.
If `CLAUDE.md` doesn't exist: After scaffolding, begin the memory bootstrap workflow (see below).
### 3. Open the Dashboard
Do NOT use `open` or `xdg-open` — in Cowork, the agent runs in a VM and shell open commands won't reach the user's browser. Instead, tell the user: "Dashboard is ready at `dashboard.html`. Open it from your file browser to get started."
### 4. Orient the User
If everything was already initialized:
    Dashboard open. Your tasks, memory, and journal are all loaded.

    - /productivity:journal to log today
    - /productivity:update to sync tasks and check memory
    - /productivity:update --comprehensive for a deep scan of all activity
If memory hasn't been bootstrapped yet, continue to step 5.
### 5. Bootstrap Memory (First Run Only)
Only do this if `CLAUDE.md` doesn't exist yet.
First, seed `memory/me.md` with three short questions:
    Quick setup:
    1. Your name?
    2. City you live in?
    3. Current employer + role? (if applicable)

    I'll seed me.md and your current company folder.
From the answers:

- Write `memory/me.md` with the basics.
- If employed: create `memory/work/companies/{slug}/{slug}.md` with role + start date, and point `memory/work/_current.md` at it.
Then decode workplace shorthand from the task list. The best source of workplace language is the user's actual task list. Real tasks = real shorthand.
Ask the user:
    Where do you keep your todos or task list? This could be:
    - A local file (e.g., TASKS.md, todo.txt)
    - An app (e.g., Asana, Linear, Jira, Notion, Todoist)
    - A notes file

    I'll use your tasks to learn your workplace shorthand.
Once you have access to the task list:
For each task item, analyze it for potential shorthand:
- Names that might be nicknames
- Acronyms or abbreviations
- Project references or codenames
- Internal terms or jargon
For each item, decode it interactively:
    Task: "Send PSR to Todd re: Phoenix blockers"

    I see some terms I want to make sure I understand:
    1. **PSR** - What does this stand for?
    2. **Todd** - Who is Todd? (full name, role)
    3. **Phoenix** - Is this a project codename? What's it about?
Continue through each task, asking only about terms you haven't already decoded.
### 5b. Bootstrap Education Institutions (Optional)
After seeding `me.md`, if the user has mentioned their educational background (university, master's program, prior schools), offer to create institution files:
    I noticed you mentioned {school/university}. I can create institution files for:
    - {current school} (current)
    - {prior bachelor's} (graduated YYYY)
    - {prior high school}
    - {T4EU alliance, Erasmus program, etc.}

    Each will live in `life/orgs/{slug}.md` and be linked from your education topic.
    This creates an anchor point for location, dates, and people you knew there.

    Create them now, or fill organically? [y / n]
If approved:
- Create one `life/orgs/{slug}.md` stub per institution with `type: org`, `country:`, `city:`, `dates:` (the enrollment period), and a Summary.
- Create or update `life/topics/education.md` — an aggregating topic file that lists all schools as a timeline with wiki-links.
If the user skips this, these files will be created organically on first journal mention.
### 6. First-Time Company-Wiki Scrape (Optional but Recommended)
External sources are ephemeral inputs. Nothing from the wiki is stored on disk as a link, ID, or "last synced" field. See [[memory/README.md|Memory README]] → "External sources — ephemeral inputs". Every file written in this step is plain prose.
After task decoding, if a company-wiki connector is available (Notion, Confluence, Google Docs, etc.), offer:
    I can scrape your company's wiki to bootstrap the company folder —
    products, teams, people, past tasks, glossary. One-time, takes a few
    minutes, produces a few hundred files. Or skip and we'll fill it
    organically.

    Hand me the company home page and I'll go from there.
If they agree, run the first-time wiki scrape:
A. Scope calibration (one question):
    Which team are you on at {company}, and which 1–2 teams do you
    collaborate with most? I'll write richer summaries for projects
    your team owns and stub the people you work with most often.
B. Walk the wiki:
Starting from the user-provided home page, traverse all reachable pages, all levels, while classifying each page into one of:
| Class | Action |
|---|---|
| Company overview (home, "about us", mission) | Extract into `work/companies/{co}/{co}.md` |
| Product / project page | Stub as `work/companies/{co}/projects/{slug}.md` with a well-written Summary paragraph. Rich detail for own-team projects; thinner for others. |
| Team page / team roster | Add a section to `work/companies/{co}/{co}.md` under "Teams". Own team gets its own detailed subsection. |
| Individual person profile | Stub as `work/companies/{co}/people/{name}.md` — only for own team + adjacent teams. Others: skip (chat sync will discover them over time). |
| Glossary, acronym lists, product-nickname pages | Harvest into `work/companies/{co}/glossary.md` |
| Social / rituals / recurring events (book club, demo day) | One line each in `work/companies/{co}/{co}.md` → Rituals |
| User's past / closed tasks (tasks ever assigned to them, now done) | Stub as `work/companies/{co}/tasks/done/{YYYY-MM-slug}.md` with Title + Description + Completed date — seeds the CV record |
| Active tasks assigned to the user right now | SKIP. Those get pulled weekly via /productivity:update, not in this one-time scrape. |
| Handbook, policies, onboarding, benefits docs | SKIP. Reference-only; fetch on demand if ever needed. |
| Confidential (HR, compensation, performance reviews, customer-specific, financial) | SKIP. Do not touch. |
Track visited page IDs to avoid loops when pages cross-link.
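The loop-safe traversal can be sketched with a visited-ID set and a queue (`fetch_page` and `classify` are hypothetical stand-ins for the connector API and the classification table above):

```python
from collections import deque

def walk_wiki(home_id, fetch_page, classify):
    """Breadth-first walk of the wiki, loop-safe via a visited-ID set.
    fetch_page(page_id) -> (page, child_ids) and classify(page) -> label
    are stand-ins for whatever the wiki connector actually provides."""
    visited = set()
    classified = []
    queue = deque([home_id])
    while queue:
        page_id = queue.popleft()
        if page_id in visited:          # pages cross-link; visit once
            continue
        visited.add(page_id)
        page, child_ids = fetch_page(page_id)
        classified.append((page_id, classify(page)))
        queue.extend(child_ids)
    return classified
```

Each page is fetched and classified exactly once, even when pages link back to the home page or to each other.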
C. Preview before writing:
Show the user a classification report:
    Wiki scrape preview for {company}:
    - 1 company README
    - 47 project files (8 own-team — rich; 39 others — summary)
    - 23 people files (own team + adjacent)
    - 183 glossary entries
    - 62 past-task files → tasks/done/
    - Skipped: 41 handbook/policy pages, 14 confidential, 8 active tasks

    Approve or adjust?
The user can drop categories, adjust scope, or exclude specific pages. Nothing is written until they approve.
D. Write everything in TWO passes (graph-aware):
The scrape produces a connected graph from day one — not a flat star around the company README. This requires two passes, not one.
D1. First pass — write each file with frontmatter + structural wiki-links.
For every classified page, write the corresponding file:
- Each file gets YAML frontmatter (`type:`, `title:`, `created:`, `updated:`, `tags:`, plus type-specific fields per [[memory/README.md|Memory README]] § Graph convention).
- Structural cross-references — fields on a page like Owner / Team / Lead / Status — are wiki-linked at write time. E.g., a project page with "Owner: Sarah Chen" becomes `**Owner:** [[memory/work/companies/{co}/people/sarah-chen.md|Sarah Chen]]` in the file, even if Sarah's file hasn't been written yet (Obsidian wiki-links may point to files that don't exist yet; the link resolves once the file is created).
- Free-prose body text is left as-is for now.
- No external URLs, no wiki IDs, no "last synced" fields anywhere.
- Each project / person file gets a `## Log` entry: `- <today> — stubbed from company wiki during onboarding scrape.`
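A D1 write for a single project page might look like this sketch (the `write_project_stub` helper, its field order, and exact formatting are illustrative assumptions, not the plugin's actual writer):

```python
from datetime import date
from pathlib import Path

def write_project_stub(root, co, slug, title, summary,
                       owner_slug=None, owner_name=None):
    """Write one project file: YAML frontmatter, a structurally
    wiki-linked Owner field, a Summary, and a Log line. The Owner link
    is written even if the person's file doesn't exist yet."""
    today = date.today().isoformat()
    lines = [
        "---",
        "type: project",
        f"title: {title}",
        f"created: {today}",
        f"updated: {today}",
        "tags: [project]",
        "---",
        "",
        f"# {title}",
        "",
    ]
    if owner_slug and owner_name:
        lines += [f"**Owner:** [[memory/work/companies/{co}/people/"
                  f"{owner_slug}.md|{owner_name}]]", ""]
    lines += [
        "## Summary",
        summary,
        "",
        "## Log",
        f"- {today} — stubbed from company wiki during onboarding scrape.",
    ]
    path = Path(root) / "memory/work/companies" / co / "projects" / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return path
```

Note that no wiki URL or page ID appears anywhere in the output — the file is plain prose plus internal links, per the ephemeral-inputs rule.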
D2. Second pass — entity cross-link sweep.
This is the pass that turns prose into a real semantic graph. Without it, the scrape produces files that all only link to the company README — the exact TOC-shaped failure mode described in [[memory/README.md|Memory README]] § Rule 4.
After D1 finishes:

- Build the slug registry. Walk every file written in D1 and collect `(display name, all aliases, file path)` tuples. Include the H1 title and any `aliases:` from frontmatter. The registry is one row per entity.
- Walk every file's body. For each plain-text occurrence of a registered display name or alias (case-insensitive, word-boundary match), replace it with a wiki-link to the corresponding file. Skip:
  - Mentions inside YAML frontmatter (already structured).
  - Mentions inside the `## Linked from` section (auto-generated).
  - Self-references (a file should not wiki-link itself).
  - Mentions inside fenced code blocks.
- The first occurrence per paragraph is enough — additional mentions in the same paragraph stay as plain text per the convention.
- Disambiguate carefully. If the registry has two entities with the same display name (e.g. two "Sarah"s), don't auto-link — flag for the user instead. Better to leave a plain mention than create a wrong edge.
- Run `python3 system/scripts/transform.py`. It regenerates `## Linked from` from the new wiki-links so every entity's inbound list reflects the cross-references just added.
After D2, an `audit.py` run should report 0 issues for the scraped cluster — every project, person, and how-to page is connected to the others by real semantic edges, not just to the company README.
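The core of the D2 sweep — first-mention-per-paragraph linking with word-boundary matching — can be sketched as follows (illustrative only; frontmatter/fence skipping and duplicate-name flagging happen in the caller and are simplified away here):

```python
import re

def link_entities(paragraph: str, registry: dict[str, str]) -> str:
    """Wiki-link the first plain-text mention of each registered entity
    in one paragraph. `registry` maps display name / alias -> file path.
    Longer names are tried first, so "Sarah Chen" wins over a bare
    "Sarah". Skipping frontmatter, code fences, and the '## Linked from'
    footer — and the same-name disambiguation flag — are left to the
    caller in this sketch."""
    for name in sorted(registry, key=len, reverse=True):
        path = registry[name]
        if f"[[{path}" in paragraph:    # entity already linked here
            continue
        pattern = re.compile(rf"\b{re.escape(name)}\b", re.IGNORECASE)
        paragraph = pattern.sub(
            lambda m: f"[[{path}|{m.group(0)}]]", paragraph, count=1
        )
    return paragraph
```

`count=1` implements the first-occurrence-per-paragraph rule, and keeping `m.group(0)` as the link alias preserves the original casing of the mention.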
E. Chat people-discovery (if chat connector available):
After the wiki scrape, run a narrow chat scan:
- Query DM partners (last 90 days), @mentions of the user, and people the user @mentioned.
- Dedupe against people already stubbed from the wiki.
- For each new person, propose a stub with name + role + team (from the chat tool's profile API, not from messages).
- User confirms in batch. Each confirmed person gets `work/companies/{co}/people/{name}.md` with no chat ID stored.
Chat is only for discovering people — never for extracting messages, tasks, or any other content.
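The dedupe step can be sketched as a conservative name-normalization pass (the `propose_new_people` helper is illustrative; a real pass might also compare roles or emails, and anything ambiguous goes to the user):

```python
def propose_new_people(chat_names: list[str],
                       existing_stub_names: list[str]) -> list[str]:
    """Return chat-discovered names not already stubbed from the wiki.
    Matching is by casefolded, whitespace-normalized display name — a
    deliberately conservative dedupe."""
    def norm(name: str) -> str:
        return " ".join(name.split()).casefold()

    known = {norm(n) for n in existing_stub_names}
    seen, proposals = set(), []
    for name in chat_names:
        key = norm(name)
        if key in known or key in seen:   # already stubbed or proposed
            continue
        seen.add(key)
        proposals.append(name)
    return proposals
```

Only the surviving names are turned into stub proposals for batch confirmation.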
### 7. Write Memory Files
**Preview before write.** Build the full list of proposed files (`CLAUDE.md` contents + every memory file + task file + `index.md`) and show a batched diff with `Apply? [y / n / edit]`. On `y`, write everything in one pass. On `edit`, the user can drop files, tweak descriptions, or reclassify before applying.
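The preview gate might look like this sketch (the `apply_batch` helper and its prompt wording are illustrative; a real `edit` path would re-enter the preview loop rather than abort):

```python
from pathlib import Path

def apply_batch(proposed: dict[str, str], confirm) -> int:
    """Preview every proposed file, then write all of them only on an
    explicit 'y'. `confirm(prompt) -> str` stands in for user input.
    Returns the number of files written (0 on 'n' or 'edit')."""
    for rel, content in sorted(proposed.items()):
        print(f"--- {rel} ({len(content.splitlines())} lines) ---")
        print(content)
    answer = confirm(f"Apply {len(proposed)} files? [y / n / edit] ")
    if answer.strip().lower() != "y":
        return 0                      # nothing touched on disk
    for rel, content in proposed.items():
        path = Path(rel)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content, encoding="utf-8")
    return len(proposed)
```

The key property is atomicity from the user's point of view: either the whole batch lands or nothing does.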
From everything gathered, create:
`CLAUDE.md` — see the Working Memory Format in [[memory-management/SKILL.md|Memory Management Skill]] for the full template. It has a Work section (current company) and a Life section. Target ~100 lines.
`memory/` files (lazy — only create the ones you actually have content for):

- `memory/glossary.md` — cross-domain decoder ring
- `memory/work/companies/{co}/{co}.md` — company overview, role, team, tools, rituals
- `memory/work/companies/{co}/people/{name}.md` — teammates
- `memory/work/companies/{co}/projects/{name}.md` — products / projects (well-summarized if from the wiki scrape)
- `memory/work/companies/{co}/glossary.md` — company-specific jargon
- `memory/work/companies/{co}/tasks/{active,waiting,done}/{slug}.md` — task files (see [[task-management/SKILL.md|Task Management Skill]])
- Life files only when the user mentions them or confirms during bootstrap
`memory/all-entities.md` — list every entity file created, alphabetized, with a one-line summary. A fast-lookup TOC.
Full tree and decision rules: [[memory/README.md|Memory README]].
### 8. Report Results
    Productivity + journal system ready:
    - Tasks: TASKS.md dashboard; task files in work/companies/{co}/tasks/
    - Memory: X projects, Y people, Z glossary terms, N past tasks
    - Journal: journal/ scaffolded, no entries yet
    - Dashboard: open in browser

    Next steps:
    - /productivity:journal to log today (2–10 lines of free text)
    - /productivity:update weekly — pulls this week's tasks from the wiki
    - /productivity:update --comprehensive monthly — chat people-discovery sweep
### 9. Transform Script
After a batch of writes, run `python3 system/scripts/transform.py` from the workspace root to normalize frontmatter, refresh wiki-links, and regenerate the `## Linked from` sections across the vault. The script is idempotent — safe to run anytime.
## Notes
- If memory is already initialized, this just opens the dashboard
- Nicknames are critical — always capture how people are actually referred to
- If a source isn't available, skip it and note the gap
- The first-time wiki scrape is a one-time operation — after it runs, weekly updates only pull the tasks page
- Nothing external is ever stored on disk (no URLs, no IDs, no index files) — if you leave the company, the local memory loses nothing
- Memory grows organically through natural conversation and `/journal` after bootstrap
## Linked from
- [[CLAUDE.md|Working Memory]]
- [[journal/SKILL.md|Journal Skill]]
- [[memory-management/SKILL.md|Memory Management Skill]]
- [[memory/README.md|Memory README]]
- [[task-management/SKILL.md|Task Management Skill]]
- [[update/SKILL.md|Update Skill]]