
SKILLS MASTER CLASS

AIDB Operators Cut — Companion Experience
Nufar Gaspar × Nathaniel Whittemore
Level 1: APPRENTICE
Understanding the Skill Landscape
What Are Skills?

At their core, skills are folders — not just markdown files — containing instructions, scripts, and resources that give AI agents new capabilities.

  • Skills can run code, trigger external systems, call APIs, and orchestrate multi-step workflows
  • Two modes: agents discover & invoke them autonomously, OR humans trigger them manually (/research this topic). Skills serve both.
  • The problem they solve: system prompts become bloated → performance degrades. Skills load the right knowledge dynamically, only when needed.
The Portability Breakthrough

Custom GPTs were locked inside ChatGPT. Skills are portable, human-readable markdown folders. Build once, use across tools. No proprietary format, no vendor lock-in. Anyone can read, edit, and maintain a skill — no engineering required.

This is what custom GPTs should have been.

The Ecosystem

44+ AI tools already support skills: Claude Code, Cursor, Windsurf, GitHub Copilot, Codex, Gemini CLI, Notion, and many more. This is becoming a universal standard.

Two Fundamental Types
| Type | What It Does | Durability |
|------|--------------|------------|
| Capability Uplift | Enables new functions the model can't do well on its own | May become obsolete as models improve |
| Encoded Preference | Sequences existing capabilities according to YOUR workflow | Gets more valuable over time |

Spend most time on preference skills. They encode how YOUR team works.

Progressive Disclosure
| Layer | What Loads | When Loaded |
|-------|------------|-------------|
| 1. Description | ~100 tokens in the system prompt | Always |
| 2. SKILL.md body | Full instructions | Only when triggered |
| 3. Folder contents | Scripts, assets, references | Only when needed |
⚠ Security Warning
Third-party skills are code that runs with your agent's permissions. Always audit before installing. Treat skill installation like installing a browser extension.
Level 2: BUILDER
Anatomy of an Effective Skill
When to Build a Skill
  • You do something more than 3 times with an AI tool
  • You keep pasting the same instructions repeatedly
  • You want consistent, reliable output across sessions
  • You want to enforce standardization — the skill becomes the single source of truth
  • Growth mindset: Build for things you couldn't do before, not just automating what you already do
Scoping Rule
One clear job per skill. If you can't describe what it does in one sentence, it's probably two skills.
Reuse vs. Create

Growing libraries exist (Anthropic skills repo: 87K+ GitHub stars). But for your first skills: build your own. Browsing marketplaces is tedious, quality varies, and security concerns are real. Building from scratch teaches you the craft faster than adopting someone else's 70%-right skill.

The Skill Creator Tool

Claude's built-in tool — but the pattern can be emulated in any tool by building a "skill for building skills." It interviews you, runs evals, does A/B testing, and iteratively improves your skill's description for better triggering.

Skill Anatomy
name
Lowercase, hyphens, max 64 chars. Gerund form: analyzing-data, preparing-meetings
description
The most critical line. A trigger, not a summary. Write for the model asking "when should I fire?"
Pro tip: make it LOUDER rather than quieter. Write in third person. Include both what it does and when to use it.
Instructions
Favor numbered steps or bulleted lists over prose. Set degrees of freedom: tight for fragile ops, loose for creative tasks.
Output Format
Show, don't describe. Include a literal template or example.
Gotcha Section
Highest-signal content. Where does the model go wrong? "I know you'll want to do X — don't."
Constraints
What NOT to do. Sharp and specific to THIS skill.
Skip This
Identity / Role section ("Act as a senior analyst...") — legacy prompt engineering pattern. Tell the model what YOUR approach does differently, not what persona to adopt.
The 5 Skill Killers

1. Description doesn't trigger properly — too vague, too narrow, or wrong person.
   Fix: Specific, loud, third-person. "Use when..." format.
2. Over-defining the process — railroading instead of guiding.
   Fix: Set degrees of freedom. Tight for fragile ops, loose for creative.
3. Stating the obvious — wasting tokens on what the model knows.
   Fix: Challenge every paragraph: "Does Claude really need this?"
4. Missing gotcha section — not capturing failure patterns.
   Fix: Document every failure you've seen. This IS the skill's value.
5. Monolithic blob — everything in one file.
   Fix: SKILL.md under 500 lines. Move references to separate files.
Context Binding: In-Folder vs. External
| Put IN the Folder When... | Point Externally When... |
|---------------------------|--------------------------|
| Context is specific to this skill and travels with it (rubric, template, persona, examples) | Context is shared across skills or changes independently (CLAUDE.md, project docs, stakeholder list) |

Rule of thumb: "about the skill" = inside. "About you/your org" = outside.

Showcase: Meeting Prep Skill
meeting-prep/
├── SKILL.md                  # Core instructions
├── stakeholder-context.md    # External pointer
├── output-template.md        # Structured brief format
├── scenarios.md              # Meetings that go off the rails
├── examples.md               # Great vs. mediocre prep
└── skills/
    └── meeting-sim/
        └── SKILL.md          # Nested: simulate the meeting
📄
meeting-prep/SKILL.md
---
name: preparing-meetings
description: >
  Use when the user says "prep for my meeting", "meeting prep", "get ready
  for [meeting name]", "I have a meeting with...", or references any
  upcoming calendar event they want to prepare for. Also triggers when user
  asks "who am I meeting with" or "what should I know before my [meeting]".
  Researches attendees, pulls correspondence history, and prepares a
  risk-aware brief.
---

# Meeting Prep

## Steps

1. Identify all attendees from the calendar event or user input
2. Collect context for each attendee:
   - Search email history for recent correspondence
   - Check stakeholder-context.md for institutional knowledge
   - Note any open action items involving this person
   - Flag relationship dynamics (ally, skeptic, new contact)
3. Analyze the agenda:
   - For each topic, identify who cares most and why
   - Flag topics where attendees have known opposing positions
   - Identify gaps: what's NOT on the agenda but should be?
4. Run scenario analysis (read scenarios.md):
   - Which derailment patterns are likely given these attendees?
   - Prepare 1-2 sentence responses for each risk scenario
5. Generate brief using output-template.md:
   - Executive summary (3 sentences max)
   - Per-attendee context cards
   - Agenda analysis with your talking points
   - Risk scenarios and prepared responses
   - Suggested questions to ask

## Output

Follow the structure in output-template.md exactly.
Save to meetings/[date]-[meeting-name]-prep.md

## Gotcha Section

- Don't assume attendee seniority from title alone
- Don't fabricate company details — flag unknowns explicitly
- Don't prepare generic talking points — every point must reference a specific agenda item
- Don't skip the "what could go wrong" analysis
- If you can't find correspondence history, say so — don't fill the gap with assumptions
- For new contacts with no history: focus on their LinkedIn/public presence, not invented backstory
📦
Full skill folder (SKILL.md + scenarios.md + output-template.md + meeting-sim/) available in the Expansion Pack → Blueprints section below.
Level 3: ARSENAL
Your Starter Kit
4 Skills Every Knowledge Worker Should Have
Important
These are starting points, not finished products. Every skill below needs to be customized to YOUR context: your sources, your stakeholders, your quality bar, your terminology. Copy the structure, then make it yours. The trigger phrases, the gotcha sections, the output formats — all should reflect how YOU actually work.
🔍
research-with-confidence/SKILL.md
CAPABILITY
---
name: researching-with-confidence
description: >
  Use when the user says "research [topic]", "what's the latest on [topic]",
  "deep dive into [topic]", "find out about", "investigate", or asks any
  question that requires gathering information from multiple sources. Also
  triggers on "is it true that...", "verify this claim", or "fact-check".
  Runs parallel multi-angle research with confidence scoring and
  cross-source fact-checking.
---

# Research with Confidence

## Inputs (confirm with user)

- Topic: [from user]
- Time horizon: last week / last quarter / last year / all time
- Source types:
  - Tier 1: McKinsey, HBR, Gartner, peer-reviewed journals
  - Tier 2: TechCrunch, industry blogs, company announcements
  - Tier 3: Reddit, X/Twitter threads, Hacker News, forums
  - Primary: SEC filings, patent databases, government data
- Depth: quick scan / standard / deep dive

## Steps

1. Confirm topic, time horizon, source preferences, and depth
2. Launch parallel research across 4+ angles:
   - News and recent developments
   - Expert analysis and opinion
   - Criticism and counter-arguments
   - Industry-specific implications
3. Aggregate findings and cross-reference
4. Confidence score each claim:
   - HIGH: 3+ independent, quality sources corroborate
   - MEDIUM: 2 sources, or 1 high-quality source
   - LOW: single source, or contradicted by other evidence
5. Fact-check using methodology:
   - Rare facts (1-2 sources): flag for manual validation
   - Repeated facts (many sources): approve UNLESS high-stakes or sources trace to a single original (echo chamber risk)
   - Suspicious consensus: flag as "verify primary source"
6. Generate structured brief per output format

## Output

- Executive summary (3-5 sentences)
- Findings table: claim | confidence | sources | notes
- What's NOT clear yet (mandatory section)
- Relevance to user's context
- Full source list with access dates

## Gotcha Section

- Don't present a single source as definitive
- Don't conflate opinion pieces with primary data
- Don't treat recency as authority
- Don't trust aggregator sites that compile without verifying
- Check publication dates: AI content recycles outdated facts
- Reddit anecdotes are signal, not evidence
- If all sources trace to one original, that's echo chamber risk, not consensus
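The confidence-scoring rule in step 4 maps directly to code. A minimal Python sketch of that rubric follows; the `high_quality` flag is an interpretation of the "1 high-quality source" clause, not something the skill text specifies:

```python
def confidence(independent_sources: int, high_quality: bool = False) -> str:
    """Map corroborating source counts to the skill's HIGH/MEDIUM/LOW rubric."""
    if independent_sources >= 3:
        return "HIGH"          # 3+ independent, quality sources corroborate
    if independent_sources == 2 or (independent_sources == 1 and high_quality):
        return "MEDIUM"        # 2 sources, or 1 high-quality source
    return "LOW"               # single weak source, or contradicted
```

Encoding the rubric this explicitly in the skill makes it auditable: you can see at a glance which claims were promoted and why.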
😈
devils-advocate/SKILL.md
PREFERENCE
---
name: devils-advocate
description: >
  Use when the user says "devil's advocate this", "poke holes in this",
  "stress test", "what am I missing", "challenge this", "what could go
  wrong", or asks for critical review of any proposal, plan, decision, or
  document. Also triggers when user says "before I send this" or "is this
  solid". Systematically identifies hidden assumptions, blind spots, and
  biases in both the presenter and the model's own reasoning.
---

# Devil's Advocate

## Steps

1. Read the target document, decision, or proposal completely
2. Identify 3-5 hidden assumptions the author is making
3. For each assumption, construct the strongest counter-argument (steelman the opposition)
4. Find the single biggest blind spot — what's not being considered at all?
5. Check for presenter's biases: anchoring, confirmation bias, sunk cost, authority bias
6. Check for model's own biases: "I notice I'm defaulting to Y — this may reflect training patterns rather than your specific context"
7. Deliver verdict: SOLID / SHAKY / RED FLAG
8. End with constructive mitigation: "If you proceed, here's how to address these risks"

## Output

- Table: Assumption | Risk if Wrong | Severity
- Counter-argument for each
- Biggest blind spot
- Bias check (presenter + model)
- Verdict with confidence
- Mitigation recommendations

## Gotcha Section

- Don't be nihilistic — challenge constructively
- Don't challenge for the sake of challenging
- Prioritize the 2-3 critiques that could actually sink this
- Never end with "but overall this looks good" unless it genuinely does
- Explicitly flag when you notice your own model biases
☀️
morning-briefing/SKILL.md
PREFERENCE
---
name: morning-briefing
description: >
  Use when the user says "morning brief", "start my day", "daily briefing",
  "what's on today", "catch me up", or at the start of any new work session
  where no specific task is given. Also triggers on "what should I focus on
  today" or "give me my priorities". Pulls from calendar, priority files,
  and pending items to generate a structured daily brief.
---

# Morning Briefing

## Context Required

Read these files before generating:
- my-priorities.md (your current goals and focus areas)
- current-projects.md (active projects and status)
- Calendar: today's events and attendees
- Pending items from previous sessions

## Steps

1. Read all context files listed above
2. Scan today's calendar for meetings and deadlines
3. Identify the top 3 priorities for today based on urgency, deadlines, and your stated goals
4. Flag any conflicts or collisions in the schedule
5. Surface items at risk of falling through the cracks
6. Generate briefing in the format below

## Output

## Your Day — [Date]

### Top 3 Priorities
1. [Priority] — why it matters today
2. ...
3. ...

### Calendar Overview
[Time blocks with prep notes for key meetings]

### Watch List
[Items at risk, pending decisions, approaching deadlines]

### Quick Wins
[Small tasks you can knock out between meetings]

## Gotcha Section

- Don't fabricate calendar events — if you can't access the calendar, say so and focus on priorities
- Don't repeat yesterday's briefing with minor edits
- Priorities should reflect CURRENT context, not the last time you updated my-priorities.md
- If context files are stale, flag it: "your priorities file hasn't been updated in 2 weeks"
👥
board-of-advisors/SKILL.md
PREFERENCE
---
name: board-of-advisors
description: >
  Use when the user says "what would my board say", "get perspectives on
  this", "review this from multiple angles", "360 review", "what would
  [role] think about this", or asks for feedback from different viewpoints.
  Also triggers on "I need a second opinion" or "who would disagree with
  this". Simulates multi-perspective review from expert archetypes, each
  with defined lenses, biases, and blind spots.
---

# Board of Advisors

## Advisor Profiles

Each advisor has: a defined lens, known biases, and documented blind spots.
Read profiles from the advisors/ folder if available, otherwise use defaults:

- The Strategist: long-term positioning, competitive dynamics. Bias: overvalues optionality.
- The Operator: execution feasibility, resource constraints. Bias: undervalues bold bets.
- The Customer Voice: end-user impact, adoption friction. Bias: deprioritizes internal efficiency.
- The Skeptic: risk, downside, what could go wrong. Bias: status quo preference.

## Steps

1. Read the target document or decision
2. Run review through each advisor's lens independently
3. Aggregate: areas of agreement, points of tension, the question nobody asked
4. Surface the most important disagreement
5. Deliver synthesis with recommended action

## Gotcha Section

- Don't make all advisors agree — tension is the point
- Each advisor must stay in character, including biases
- Flag when an advisor's bias may be distorting their view
- The "question nobody asked" section is mandatory
📦
All 4 skill folders (Research, Devil's Advocate, Morning Briefing, Board of Advisors) available in the Expansion Pack → Blueprints section below.
Level 4: STRATEGIST
Advanced Patterns & System Thinking
Advanced Patterns
| Pattern | What It Does | Example |
|---------|--------------|---------|
| Skill Dispatcher | Meta-skill that routes requests to the right skill | Essential when your library grows past 10-15 skills |
| Skill Chaining | Output of one skill → input to another | Research → Devil's Advocate → Executive Summary |
| Loop Skills | Iterative: check → act → check again | Campaign optimization, competitive monitoring, content QA |
| Agentic Loops | Spawn sub-agents, maintain state, orchestrate | Deep research with parallel workers and aggregation |
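The dispatcher pattern can be sketched in a few lines of ordinary routing logic. This is a hypothetical Python illustration (the skill names and trigger phrases below are invented for the example); in practice the dispatcher is itself a skill whose description matches broadly and whose body tells the agent how to route:

```python
# Hypothetical skill library: skill name -> trigger phrases.
SKILLS = {
    "researching-with-confidence": ["research", "deep dive", "fact-check"],
    "devils-advocate": ["poke holes", "stress test", "what am i missing"],
    "morning-briefing": ["morning brief", "start my day", "catch me up"],
}

def dispatch(request):
    """Return the skill whose trigger phrases best match, or None."""
    request = request.lower()
    scores = {
        name: sum(phrase in request for phrase in phrases)
        for name, phrases in SKILLS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

For example, `dispatch("Can you deep dive into this market?")` routes to the research skill, while an unmatched request returns `None` so the agent can fall back to unassisted behavior.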
Loop Skills for Knowledge Workers
  • Marketing campaign optimization: Monitor ad performance → adjust bid strategy → re-check → flag when ROAS drops
  • Competitive intelligence: Scan competitor sites daily → compare to baseline → alert on meaningful changes
  • Content publishing: Draft → review against style guide → revise → check compliance → publish
  • Recruitment pipeline: Monitor candidates → send follow-ups → escalate when stale
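The check → act → re-check shape these loop skills share can be sketched as plain control flow. The ad-campaign numbers below are invented for illustration; a real loop skill would call tools instead of lambdas:

```python
def run_loop(check, act, max_iters: int = 5):
    """Repeat act() while check() reports a problem; return the status log."""
    log = []
    for _ in range(max_iters):
        status = check()
        log.append(status)
        if status == "ok":
            break
        act()
    return log

# Hypothetical campaign state: keep adjusting bids until ROAS recovers.
campaign = {"roas": 1.0}
log = run_loop(
    check=lambda: "ok" if campaign["roas"] >= 2.0 else "roas low",
    act=lambda: campaign.update(roas=campaign["roas"] + 0.6),
)
```

The iteration cap matters: a loop skill without a stopping condition can burn tokens indefinitely, so cap iterations and surface the log to the user.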
Testing & Optimization
The Litmus Test
If you find yourself having to iterate and refine the output AFTER the skill runs — editing, correcting, restructuring — then the skill itself needs improvement. A well-built skill produces output you can use directly.
  • What works on Claude 4.6 might behave differently on Claude 5. Test after model updates.
  • A/B test skill versions: is v2 actually better, or just different?
  • Eval proportionality: Personal writing skill? Quick spot-check. Customer-facing compliance skill? Formal evaluation with edge cases.
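A proportional A/B eval can start as a tiny harness: run both skill versions over the same test cases and compare pass rates. Everything here is a hypothetical stand-in — `score_fn` would be a real grader and the cases real transcripts:

```python
def ab_test(skill_a, skill_b, cases, score_fn):
    """Return pass rates for two skill versions over the same test cases."""
    def rate(skill):
        return sum(score_fn(skill(case)) for case in cases) / len(cases)
    return {"a": rate(skill_a), "b": rate(skill_b)}

# Toy example: two "skills" transform inputs; the grader checks a threshold.
result = ab_test(
    skill_a=lambda c: c * 2,
    skill_b=lambda c: c * 3,
    cases=[1, 2, 3, 4],
    score_fn=lambda out: out >= 6,
)
```

Even a harness this small answers the question the text poses: is v2 actually better, or just different?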
When to Re-Evaluate Your Skills
| Trigger | What to Do |
|---------|------------|
| Model Change | New model drops — your gotcha section might be solving problems the new model doesn't have |
| Tool Change | Moving between tools? Skills are portable but behaviors differ. Validate. |
| Results Degrade | Before blaming the model: did YOUR context go stale? |
| Before Scaling | About to share with 50 people? Run proper evals first. Treat it like an AI product. |
| Quarterly | Even if nothing seems broken. Capability skills may have been surpassed by the base model. |
Level 5: ARCHITECT
From Personal Skills to Organizational Skill Libraries
The Opportunity

Skills are the knowledge manager's long-standing pipe dream finally becoming real: standardization, execution, and knowledge bundled into a single portable artifact that humans can read and AI can execute.

Organizations doing skill hackathons and maintaining shared libraries are seeing massive uplift. Those that aren't? Their people reinvent the wheel in every AI conversation.

The Plugin Model

The future = plugins: skills + MCP connections bundled. A "sales pipeline review" plugin = the review skill + CRM connection + analytics access + your VP's expected output format.

The 5-Stage Process
1. DISCOVERY: Run work audits. Where do people repeat instructions to AI? Survey champions — they've already built skills for themselves. Map the shadow skill landscape.
2. CURATION: Prioritize by frequency × impact × standardization value. Designate skill owners — SMEs, not engineers. The best preference skills are built by the people who do the work.
3. VALIDATION: Test against real scenarios. A/B test vs. unstructured prompting. Get feedback from actual users. Test across tools.
4. BUNDLING: Package into plugins. Define org-wide vs. team vs. personal. Create onboarding bundles. Version control everything.
5. OWNERSHIP & MAINTENANCE: Champions per domain. Quarterly review. Usage tracking. Feedback loops. Deprecation process for obsolete skills.

🎮 EXPANSION PACK

📘
Blueprints
Copy-Paste Templates
Universal SKILL.md Template
📄
universal-template/SKILL.md
STARTER
---
name: your-skill-name
description: >
  THE TRIGGER — this is the most important part. Start with "Use when the
  user says..." and list the exact phrases that should activate this skill:

  Use when the user says "[phrase 1]", "[phrase 2]", "[phrase 3]", or asks
  about [topic]. Also triggers on "[alternative phrase]" or "[related
  request]". [Then 1 sentence describing what the skill actually does.]

  TIPS:
  - Be LOUD: models skip past quiet descriptions
  - Lead with trigger phrases, not with what the skill does
  - Write in third person (this gets injected into the system prompt — "I can help you" breaks things)
  - Include edge-case triggers people actually say: "before I send this", "is this solid", etc.
  - Test: say the trigger phrase — does it fire? Say 3 similar sentences — does it fire when it shouldn't?
---

# [Skill Name]

## Context Required

Read these files before running:
- [List every file the agent should read]
- [Don't assume the agent remembers anything from prior sessions]
- [Point to external files with full paths]

## Steps

# Use numbered steps, not prose. Set degrees of freedom:
# tight steps for fragile operations, loose for creative tasks.

1. [Specific, actionable step — not "analyze the situation"]
2. [Next step with clear inputs and outputs]
3. [Continue until the deliverable is complete]

## Output

# Show the EXACT format. If it's a table, show the headers.
# If it's a document, show the sections. Don't describe — show.

[Literal template with headers, structure, and length constraints]

## Gotcha Section

[This is the HIGHEST-SIGNAL content in your skill. Document every failure pattern you've seen:
- "I know you'll want to do X — don't. Here's why."
- Common assumptions the model makes incorrectly
- Edge cases that trip up the workflow
- What went wrong last time you did this task?
Start building this section from day 1 and keep adding to it.]

## Constraints

- [Rules specific to THIS skill, not general behavior]
- [What can go wrong in THIS workflow specifically?]
Meeting Prep — Full Skill Folder

5 files + 1 nested skill. The SKILL.md is shown above in Level 2. Below: the supporting files that make it non-trivial.

meeting-prep/
├── SKILL.md                  # shown in Level 2 above
├── stakeholder-context.md    # below
├── output-template.md        # below
├── scenarios.md              # below
├── examples.md               # below
└── skills/meeting-sim/
    └── SKILL.md              # below
👤
meeting-prep/stakeholder-context.md
EXTERNAL POINTER
# Stakeholder Context

This file lives OUTSIDE the skill folder (pointed to externally) because it's shared across multiple skills and changes independently. The skill references it; you maintain it.

## Template — copy and fill for each key stakeholder

### [Name]
Role: [Title, team, reporting line]
Communication style: [Direct/diplomatic, data-driven/narrative]
Known priorities: [What they care about most right now]
Relationship history: [Ally/neutral/skeptic, past friction]
Hot buttons: [Topics that trigger strong reactions]
Decision pattern: [Decides fast/deliberates/defers to boss]
Last interaction: [Date, topic, outcome]

Update this file regularly — stale context is worse than no context. The Morning Briefing and Devil's Advocate skills also reference this file.
📄
meeting-prep/output-template.md
TEMPLATE
# Meeting Prep Brief — [Meeting Name]

Date: [Date] | Time: [Time] | Duration: [Est.]

## Executive Summary
[3 sentences max: purpose, key dynamics, your primary objective]

## Attendee Cards
| Name | Role | Stance on Key Topics | Watch For |
|------|------|----------------------|-----------|
| [Name] | [Title] | [Their known position] | [Risk/opportunity] |

## Agenda Analysis
| Topic | Who Cares Most | Potential Tension | Your Talking Point |
|-------|----------------|-------------------|--------------------|
| [Topic] | [Name] | [Conflict risk] | [Your prepared point] |

## Risk Scenarios
| Scenario | Likelihood | Your Response |
|----------|------------|---------------|
| [From scenarios.md] | [H/M/L] | [1-2 sentence response] |

## Questions to Ask
1. [Question that surfaces hidden information]
2. [Question that tests a key assumption]
3. [Question nobody else will ask]

## Open Items
- [Pending decisions or action items involving these attendees]
💡
meeting-prep/examples.md
EXAMPLES
# Meeting Prep Examples — Good vs. Mediocre

## Mediocre Prep (what most people do)

Attendees: Sarah (VP Ops), Mike (Dir Eng), Lisa (PM)
Agenda: Q2 planning
Notes: Discuss priorities for next quarter.

^ No context on dynamics. No scenario prep. No specific talking points. Generic output.

## Great Prep (what this skill produces)

Executive Summary: Q2 planning with Sarah, Mike, and Lisa. Key dynamic: Sarah wants to expand the platform team; Mike wants to consolidate. Lisa is caught between — her roadmap depends on the outcome. Your objective: align on staffing before the budget discussion next Friday.

Attendee Cards:
- Sarah (VP Ops): Pushing for platform expansion since January. Cites customer churn data. WATCH: may try to reframe Mike's consolidation as "cutting corners."
- Mike (Dir Eng): Sent a detailed consolidation proposal last Tuesday (see email thread 3/14). Data-driven, won't respond to emotional arguments. WATCH: may shut down if he feels outnumbered.
- Lisa (PM): Her Q2 roadmap draft (shared 3/12) assumes current team size. If staffing changes, her timeline breaks. WATCH: may stay quiet to avoid picking sides.

Risk Scenario: Decision Reversal (MEDIUM likelihood)
Sarah wasn't in the March 8 meeting where Mike's proposal was tentatively approved. She may reopen it.
Your response: "We discussed this on March 8 — here's the decision log. Happy to review offline if there are new concerns."

^ Notice the difference: specific dynamics, named tensions, prepared responses, evidence-based attendee cards.
📅
meeting-prep/scenarios.md
REFERENCE
# Meeting Scenarios — What Can Go Off the Rails

Use this file to anticipate and prepare for common meeting derailment patterns. For each scenario, the skill should:
1. Assess likelihood given the attendees and agenda
2. Prepare a 1-2 sentence response strategy

## Scenario: Hostile Stakeholder
Signs: history of pushback, known opposing position
Risk: derails agenda, creates adversarial dynamic
Prep: acknowledge their concern upfront, have data ready, propose offline follow-up if discussion exceeds 5 min

## Scenario: Scope Creep Ambush
Signs: attendee with adjacent project, "while we're here..."
Risk: meeting loses focus, decisions get deferred
Prep: "Great point — let's capture that for a separate session. For today, let's focus on [agenda item]."

## Scenario: Decision Reversal
Signs: senior attendee who wasn't in prior meetings
Risk: previously agreed decisions get reopened
Prep: have the decision log ready, reference who agreed and when, propose "let's discuss offline if concerns remain"

## Scenario: "Let's Take This Offline"
Signs: complex topic, insufficient data, discomfort
Risk: important decisions get indefinitely deferred
Prep: propose specific follow-up: who, when, what outcome

## Scenario: Surprise Attendee
Signs: last-minute addition, unclear agenda relationship
Risk: hidden agenda, context mismatch, power dynamic shift
Prep: acknowledge their presence, ask for their perspective early, adjust talking points based on their likely priorities
💡
meeting-prep/skills/meeting-sim/SKILL.md
NESTED SKILL
---
name: simulating-meeting
description: >
  Use when the user says "simulate the meeting", "rehearse my talking
  points", "role-play the meeting", "play devil's advocate as [attendee]",
  or "what will [person] say about this". Triggers after meeting prep is
  complete. Role-plays each attendee based on their known positions and
  challenges the user's talking points from each perspective.
---

# Meeting Simulation

## Steps

1. Read the meeting prep brief for attendee profiles
2. Read stakeholder-context.md for relationship dynamics
3. For each attendee, adopt their known position and style:
   - What would they push back on?
   - What would they champion?
   - What questions would they ask?
4. Simulate the meeting flow:
   - Present each agenda item
   - Voice each attendee's likely response
   - Challenge the user's talking points from each perspective
5. After simulation, provide:
   - Talking points that held up well
   - Talking points that need strengthening
   - Questions you weren't prepared for

## Gotcha Section

- Don't make all attendees agree — tension is the point
- Stay in character for each attendee, including their biases
- If you don't have enough context on an attendee to simulate them, say so rather than inventing a generic persona
- The value is in surfacing UNEXPECTED pushback, not confirming what the user already expects
Morning Briefing Builder Prompt
morning-briefing-builder.md
INTERACTIVE
# Build Your Personalized Morning Briefing Skill

Copy this prompt into your AI tool to generate a Morning Briefing skill customized for YOUR workflow.

I want you to interview me to build a personalized Morning Briefing skill. Ask me these questions one at a time, waiting for my answer before proceeding:

1. Morning routine: What does your ideal morning work start look like? What information do you need first?
2. Sources of truth: Where do your priorities live? (Notion, Google Docs, a priorities.md file, your calendar?)
3. Communication channels: What overnight messages matter? (Email, Slack, Teams, WhatsApp?)
4. Calendar depth: Do you want just today, or a 3-day lookahead? Do you need prep notes per meeting?
5. News/industry: Do you want relevant industry news? What topics? What sources?
6. Decision style: Do you want your briefing to recommend priorities, or just present information?
7. Format: Bullet points? Structured sections? How long should the briefing be?
8. Tone: Formal executive brief or casual morning coffee companion?

After the interview, generate a complete SKILL.md file following the universal template structure (name, description, context required, steps, output, gotcha section, constraints) customized to everything I told you.
🗺
Side Quests
Skill Ideas by Role

| Role | Skill Ideas |
|------|-------------|
| Product Manager | PRD reviewer, sprint retrospective analyzer, competitive feature tracker, user story generator from interview notes |
| Marketer | Campaign brief writer, A/B test analyzer, content calendar planner, brand voice enforcer, social post generator with platform-specific formatting |
| Sales | Pre-call research, objection handler prep, deal review (win/loss analysis), proposal customizer, CRM update summarizer |
| HR | Job description generator, interview question builder (role-specific), candidate debrief synthesizer, policy Q&A responder |
| Finance | Variance analysis narrator, budget vs. actual summarizer, board deck data preparer, expense categorizer with policy checks |
| Legal | Contract clause reviewer, compliance checklist runner, regulatory update summarizer, NDA comparison tool |
Skills I Wish I'd Built Sooner
  • Status Update Writer — reads your project files, calendar, and recent work, produces a consistent weekly update. Never write one manually again.
  • Email Drafter — your tone, your structure, your sign-off preferences. Encoded preference skill that gets better as you refine it.
  • Decision Logger — captures decisions from conversations, structures them, saves to a decision log with date, context, and rationale.
  • Onboarding Guide — for new team members. Reads your team docs and produces a structured "here's how we work" brief.
  • Remember — the most important infrastructure skill. Updates your working memory at the end of meaningful conversations so every future session has context.
🔒
Secret Levels
Advanced Patterns
Sub-Agent Skills

Skills that spawn parallel workers using context: fork. Example: a deep research skill that launches 4 sub-agents — one for news, one for academic papers, one for social sentiment, one for competitor analysis — then aggregates their findings.
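One way to picture the fan-out/aggregate shape is with parallel workers and a placeholder where the real sub-agent call would go. Everything here is a sketch: `run_angle` stands in for spawning an actual sub-agent, and the angle names mirror the example in the text:

```python
from concurrent.futures import ThreadPoolExecutor

ANGLES = ["news", "academic", "social sentiment", "competitors"]

def run_angle(angle):
    # Placeholder: a real implementation would spawn a sub-agent here
    # and return its findings for this research angle.
    return {"angle": angle, "findings": f"summary of {angle}"}

def deep_research():
    """Fan out one worker per angle, then aggregate findings by angle."""
    with ThreadPoolExecutor(max_workers=len(ANGLES)) as pool:
        results = list(pool.map(run_angle, ANGLES))
    return {r["angle"]: r["findings"] for r in results}
```

The aggregation step is where the parent skill adds value: deduplicating, cross-referencing, and confidence-scoring what the workers return.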

State Management

Skills that maintain context across multiple invocations. Use intermediate checkpoint files (research/draft.md) so partial results survive session interruptions. Essential for long-running skills (20+ minute research sessions).
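The checkpoint idea reduces to "persist partial state, reload on resume." A minimal sketch, with the file name and state shape as assumptions (the text's research/draft.md would hold prose; a JSON sidecar like this tracks progress):

```python
import json
import os

CHECKPOINT = "research/draft-state.json"  # hypothetical sidecar file

def save_checkpoint(state, path=CHECKPOINT):
    """Persist partial results so an interrupted session can resume."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(state, f)

def load_checkpoint(path=CHECKPOINT):
    """Return saved state, or a fresh empty state if none exists."""
    if not os.path.exists(path):
        return {"completed_angles": [], "findings": {}}
    with open(path) as f:
        return json.load(f)
```

On resume, the skill reads the checkpoint first and skips any angle already in `completed_angles` rather than redoing the work.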

Meta-Skills: Skills That Build Skills

The recursive pattern: a Skill Reviewer that audits any skill file against the 5 Skill Killers checklist. A Skill Optimizer that rewrites descriptions for better triggering. A Memory Hygiene skill that reviews context files for staleness.
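A Skill Reviewer of this kind can start as a few heuristics over the SKILL.md text. The checks below mirror three of the 5 Skill Killers; the string tests and messages are illustrative, not a real linter:

```python
def audit_skill(skill_md: str) -> list[str]:
    """Return a list of likely problems found in a SKILL.md's text."""
    problems = []
    if len(skill_md.splitlines()) > 500:
        problems.append("monolithic blob: SKILL.md over 500 lines")
    lower = skill_md.lower()
    if "use when" not in lower:
        problems.append("description may not trigger: no 'Use when...' phrasing")
    if "gotcha" not in lower:
        problems.append("missing gotcha section")
    return problems
```

The remaining killers (railroading, stating the obvious) need judgment, which is exactly why the full reviewer is a skill run by a model rather than a script.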

Multi-Tool Orchestration

One skill, multiple connected services via MCP. Real example: Research via web → write audio script → generate voice via ElevenLabs → post to Slack. Three connections, one trigger, zero manual steps. Took 20 minutes to build. Guardrails on every write action are non-negotiable.

Enterprise Testing Pipelines

Formal evaluation framework for skills at scale: test cases with expected behaviors, automated pass/fail scoring, regression testing across model versions, A/B comparison dashboards, and usage analytics. Proportional to criticality — personal skills get spot-checks, customer-facing skills get the full treatment.

📋
Field Guide
Reference Cards & Checklists
The 5 Skill Killers — Quick Check
| # | Killer | Fix |
|---|--------|-----|
| 1 | Vague/quiet description | Loud, specific, third-person. "Use when..." format. |
| 2 | Railroading the model | Match freedom to task fragility. Loose for creative, tight for fragile. |
| 3 | Stating the obvious | Only add what Claude doesn't already know. |
| 4 | No gotcha section | Document every failure pattern. This IS the skill's value. |
| 5 | Monolithic blob | SKILL.md under 500 lines. Reference files for the rest. |
Capability vs. Preference
|  | Capability Uplift | Encoded Preference |
|--|-------------------|--------------------|
| Purpose | New function the model can't do well | YOUR way of doing something it CAN do |
| Durability | May become obsolete as models improve | Gets more valuable over time |
| Example | Research with fact-checking methodology | Your team's meeting notes format |
| Where to invest | High-impact gaps only | Most of your time should go here |
Scoping: One Skill or Two?
  • Can you describe it in one sentence? → One skill.
  • Does it have two distinct triggers? → Probably two skills.
  • Would two different people use different halves? → Two skills.
  • Is the SKILL.md approaching 500 lines? → Split it.
Context: In-Folder vs. External
| In the Folder | Point Externally |
|---------------|------------------|
| Specific to THIS skill | Shared across skills |
| Travels when copied | Changes independently |
| Rubric, template, examples | CLAUDE.md, project docs, stakeholder list |
Re-Evaluation Triggers
  • ⚠ New model version released
  • ⚠ Switching between AI tools
  • ⚠ Output quality declining
  • ⚠ About to share skill with many people
  • 📅 Quarterly review (even if nothing seems broken)
The Litmus Test
Ask Yourself
After the skill runs, do I use the output directly? Or do I edit, correct, and restructure it? If you're iterating on the output, improve the skill, not the output.