Workflows
End-to-end workflows for using vskill — installing skills others have published, authoring your own in Skill Studio, evaluating them across models, submitting to the registry, and keeping installed skills up to date. Each section here is a short summary that links into the detail page for that step.
Overview
There are five canonical workflows. They split cleanly into two roles: consumers, who install and update skills, and authors, who create, evaluate, and submit them. Every workflow passes through the same security and provenance checks; the difference is what you start with and what you end with.
- Install — pull a published skill into all detected agents. Consumer.
- Author — write a SKILL.md in Skill Studio with AI assistance. Author.
- Evaluate — A/B and benchmark your skill across models. Author.
- Submit — push to GitHub and publish to verified-skill.com. Author.
- Update — pull a newer published version into your install. Consumer.
Install
Run `npx vskill@latest install <skill>` and vskill scans the package, runs LLM intent analysis, and materializes the SKILL.md into every detected agent directory (`~/.claude/skills/`, `~/.cursor/skills/`, and so on). The lockfile records the SHA-256 hash and trust tier so subsequent runs can verify integrity.
See the full step-by-step in Getting Started → Install your first skill →. For where files end up on disk, see the folder map at Getting Started → Agent platforms →.
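The lockfile integrity check can be sketched roughly as follows. The `LockEntry` shape and `verifyIntegrity` name here are illustrative assumptions, not the real vskill schema; only the SHA-256 mechanism comes from the text above.

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of one lockfile record; the real vskill schema may differ.
interface LockEntry {
  name: string;
  version: string;
  sha256: string;    // hash recorded at install time
  trustTier: string; // e.g. "verified"
}

// Recompute the hash of the materialized SKILL.md and compare it to the
// locked record, so a file modified after install fails verification.
function verifyIntegrity(entry: LockEntry, skillMd: string): boolean {
  const actual = createHash("sha256").update(skillMd, "utf8").digest("hex");
  return actual === entry.sha256;
}
```

Because the comparison is against a hash captured at install time, any post-install edit to the file is detected on the next run, regardless of how it happened.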
Author
Run `npx vskill@latest studio` to open the local IDE. The AI-assisted generator scaffolds the frontmatter, body, and benchmark suite from a one-line description; the editor then lets you iterate on SKILL.md with live preview. Studio is a single Node process serving the SPA on one port and the REST/SSE API on another; the browser only ever talks to localhost.
For the architecture and the runtime topology diagram, see Getting Started → Skill Studio →.
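To make the generator step concrete, here is a minimal sketch of scaffolding a SKILL.md from a one-line description. The function name and frontmatter fields are hypothetical; the real generator is AI-assisted and its output schema is documented in Submitting Skills.

```typescript
// Hypothetical scaffold resembling what Studio's generator might emit;
// field names here are assumptions, not the verified-skill.com schema.
function scaffoldSkillMd(name: string, description: string): string {
  return [
    "---",
    `name: ${name}`,
    `description: ${description}`,
    "---",
    "",
    `# ${name}`,
    "",
    "<!-- TODO: instructions for the agent go here -->",
  ].join("\n");
}
```

In practice you would iterate on the generated body in the editor with live preview rather than shipping the scaffold as-is.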
Evaluate
Studio's eval engine runs your skill's benchmark suite against any provider you have configured — Claude, GPT, Llama, Gemini, local Ollama, LM Studio. The A/B compare primitive runs the same prompts with and without your skill enabled and a blind LLM judge ranks the outputs as EFFECTIVE, MARGINAL, INEFFECTIVE, or DEGRADING. SSE streams partial results to the UI as cases complete.
For the provider matrix and adapter contracts, see the eval engine notes in CLI Reference →. For the publish-readiness contract that the comparator feeds into, see the submit workflow below.
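The four verdicts can be thought of as buckets over the judge's preference delta between the with-skill and without-skill runs. The thresholds below are illustrative assumptions; the real comparator's scoring is not specified here, only the verdict labels are.

```typescript
type Verdict = "EFFECTIVE" | "MARGINAL" | "INEFFECTIVE" | "DEGRADING";

// Hypothetical mapping from blind-judge scores (0..1 per arm) to a verdict.
// The cutoff values are assumptions for illustration only.
function classify(withSkill: number, withoutSkill: number): Verdict {
  const delta = withSkill - withoutSkill; // positive = skill helped
  if (delta > 0.15) return "EFFECTIVE";
  if (delta > 0.0) return "MARGINAL";
  if (delta >= -0.05) return "INEFFECTIVE"; // within noise of no effect
  return "DEGRADING";                       // skill made outputs worse
}
```

The important property is that DEGRADING is a possible outcome: the A/B design can show that a skill actively hurts a model, not just that it fails to help.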
Submit
A submission is the durable record of a skill release: `git push` to your public GitHub repo, and verified-skill.com's scanner picks up the new version within ~10 minutes (or immediately when the optional internal-broadcast key is set). Other users see the new version via SSE notification or on their next `vskill outdated` check.
Full step-by-step including the SKILL.md schema, frontmatter, scan rules, and verification timeline: Submitting Skills →. The publish/submit sequence diagram lives at Submitting → Submission flow →.
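An outdated check reduces to comparing the locked version against the latest version the registry reports. A minimal sketch, assuming dotted numeric versions (the real vskill check may also compare hashes and trust tiers):

```typescript
// Hypothetical version comparison for an outdated check; assumes simple
// "major.minor.patch" numeric versions with no prerelease tags.
function isOutdated(locked: string, latest: string): boolean {
  const a = locked.split(".").map(Number);
  const b = latest.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((b[i] ?? 0) > (a[i] ?? 0)) return true;
    if ((b[i] ?? 0) < (a[i] ?? 0)) return false;
  }
  return false; // identical versions are up to date
}
```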
Update
Run `vskill update` to pull newer published versions of every skill in your lockfile. Each candidate is re-scanned before installation; if scan results differ from the locked record, the update is held for review rather than applied silently. Studio's sidebar surfaces a count badge when updates are available; the right-panel detail view exposes a per-skill update button with changelog preview.
For the lockfile schema and update detection details, see Getting Started → How it works →.
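The hold-for-review decision described above can be sketched as a pure function: apply only when the fresh scan matches the locked record. The `ScanResult` fields here are illustrative assumptions about what a scan record contains.

```typescript
// Hypothetical scan record; the real lockfile schema is documented in
// Getting Started → How it works.
type ScanResult = { verdict: string; trustTier: string };

// An update is applied only when the fresh scan matches what was locked
// at install time; any drift holds the update for manual review.
function updateAction(locked: ScanResult, fresh: ScanResult): "apply" | "hold" {
  const unchanged =
    locked.verdict === fresh.verdict && locked.trustTier === fresh.trustTier;
  return unchanged ? "apply" : "hold";
}
```

Holding rather than failing keeps the decision with the user: a skill whose trust tier changed may still be wanted, but never installed silently.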
Related resources