
Claude Night Market

Claude Night Market contains 16 plugins for Claude Code that automate git operations, code review, and specification-driven development. Each plugin operates independently, allowing you to install only the components required for your specific workflow.

Architecture

The ecosystem uses a layered architecture to manage dependencies and token usage.

  1. Domain Specialists: Plugins like pensive (code review) and minister (release management) provide high-level task automation.
  2. Utility Layer: Provides resource management services, such as token conservation in conserve.
  3. Foundation Layer: Implements core mechanics used across the ecosystem, including permission handling in sanctum.
  4. Meta Layer: abstract provides tools for cross-plugin validation and enforcement of project standards.

Design Philosophy

The project prioritizes token efficiency through shallow dependency chains. Progressive loading ensures that plugin logic enters the system prompt only when a specific feature is active. We enforce a “specification-first” workflow, requiring a written design phase before code generation begins.

Claude Code Integration

Plugins require Claude Code 2.1.0 or later to use features like:

  • Hot-reloading: Skills update immediately upon file modification.
  • Context Forking: Risky operations run in isolated context windows.
  • Lifecycle Hooks: Frontmatter hooks execute logic at specific execution points.
  • Wildcard Permissions: Pre-approved tool access reduces manual confirmation prompts.

Integration with Superpowers

These plugins integrate with the superpowers marketplace. While Night Market handles high-level process and workflow orchestration, superpowers provides the underlying methodology for TDD, debugging, and execution analysis.

Quick Start

# 1. Add the marketplace
/plugin marketplace add athola/claude-night-market

# 2. Install a plugin
/plugin install sanctum@claude-night-market

# 3. Use a command
/pr

# 4. Invoke a skill
Skill(sanctum:git-workspace-review)

Getting Started

This section will guide you through setting up Claude Night Market and using your first plugins.

Overview

This section covers:

  • Installing the marketplace and plugins
  • Invoking skills, commands, and agents
  • Plugin dependency structure

Prerequisites

  1. Claude Code installed and configured.
  2. A terminal.
  3. Git (for version control workflows).

Quick Overview

The Claude Night Market provides three types of capabilities:

| Type | Description | How to Use |
| --- | --- | --- |
| Skills | Reusable methodology guides | Skill(plugin:skill-name) |
| Commands | Quick actions with slash syntax | /command-name |
| Agents | Autonomous task executors | Referenced in skill workflows |

Sections

  1. Installation: Add the marketplace and install plugins
  2. Your First Plugin: Hands-on tutorial with sanctum
  3. Quick Start Guide: Common workflows and patterns

Achievement: Getting Started

Complete the installation steps to unlock the Marketplace Pioneer badge.


Installation

This guide walks you through adding the Claude Night Market to your Claude Code setup.

Prerequisites

  • Claude Code 2.1.16+ (2.1.32+ for agent teams features)
  • Python 3.9+ — required for hook execution. macOS ships Python 3.9.6 as the system interpreter; hooks run under this rather than virtual environments. Plugin packages may target higher versions (3.10+, 3.12+) via uv.
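
Because hooks run under the system interpreter, they should rely only on the standard library. A minimal sketch of a hook's I/O contract (JSON in on stdin, JSON out on stdout), consistent with the hook examples later in this guide; the exact payload fields vary by hook event:

#!/usr/bin/env python3
# Minimal hook sketch: stdlib-only, so it runs under the system Python 3.9.
import json
import sys

input_data = json.loads(sys.stdin.read())

# Fields depend on the hook event; "tool_name" here is illustrative.
tool_name = input_data.get("tool_name", "")

# "hookSpecificOutput.additionalContext" mirrors the SessionStart example
# shown later in this guide.
print(json.dumps({
    "hookSpecificOutput": {
        "additionalContext": f"Hook observed tool: {tool_name}"
    }
}))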

Step 1: Add the Marketplace

Open Claude Code and run:

/plugin marketplace add athola/claude-night-market

This registers the marketplace, making all plugins available for installation.

Achievement Unlocked: Marketplace Pioneer

Step 2: Browse Available Plugins

View the marketplace contents:

/plugin marketplace list

You’ll see plugins organized by layer:

| Layer | Plugins | Purpose |
| --- | --- | --- |
| Meta | abstract | Plugin infrastructure |
| Foundation | imbue, sanctum, leyline | Core workflows |
| Utility | conserve, conjure | Resource optimization |
| Domain | archetypes, pensive, parseltongue, memory-palace, spec-kit, minister, attune | Specialized tasks |

Step 3: Install Individual Plugins

Install plugins based on your needs:

# Git and workspace operations
/plugin install sanctum@claude-night-market

# Specification-driven development
/plugin install spec-kit@claude-night-market

# Code review toolkit
/plugin install pensive@claude-night-market

# Python development
/plugin install parseltongue@claude-night-market

Step 4: Verify Installation

Check that plugins loaded correctly:

/plugin list

Installed plugins appear with their available skills and commands.

Optional: Install Superpowers

For enhanced methodology integration:

# Add superpowers marketplace
/plugin marketplace add obra/superpowers

# Install superpowers
/plugin install superpowers@superpowers-marketplace

Superpowers provides TDD, debugging, and review patterns that enhance Night Market plugins.

Minimal Setup

For basic git workflows:

/plugin install sanctum@claude-night-market

Development Setup

For active feature development:

/plugin install sanctum@claude-night-market
/plugin install imbue@claude-night-market
/plugin install spec-kit@claude-night-market

Full Setup

For detailed workflow coverage:

/plugin install abstract@claude-night-market
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market
/plugin install conserve@claude-night-market
/plugin install pensive@claude-night-market
/plugin install spec-kit@claude-night-market

Troubleshooting

Plugin not loading?

  1. Verify marketplace was added: /plugin marketplace list
  2. Check for typos in plugin name
  3. Restart Claude Code session

Conflicts between plugins?

Plugins are composable. If you experience issues:

  1. Check the plugin’s README for dependency requirements
  2. Validate foundation plugins (imbue, leyline) are installed if using domain plugins

Next Steps

Continue to Your First Plugin for a hands-on tutorial.

Your First Plugin: sanctum

This hands-on tutorial walks you through using the sanctum plugin for git and workspace operations.

What You’ll Build

By the end of this tutorial, you’ll:

  • Review your git workspace state
  • Generate a conventional commit message
  • Prepare a pull request description

Prerequisites

  • sanctum plugin installed: /plugin install sanctum@claude-night-market
  • A git repository with some uncommitted changes

Part 1: Workspace Review

Before any git operation, understand your current state.

Invoke the Skill

Skill(sanctum:git-workspace-review)

This skill runs a preflight checklist:

  • Current branch and remote tracking
  • Staged vs unstaged changes
  • Recent commit history
  • Untracked files

What to Expect

Claude will analyze your repository and report:

Repository: my-project
Branch: feature/add-login
Tracking: origin/feature/add-login (up to date)

Staged Changes:
  M src/auth/login.ts
  A src/auth/types.ts

Unstaged Changes:
  M README.md

Untracked:
  src/auth/tests/login.test.ts

Achievement Unlocked: Skill Apprentice

Part 2: Commit Message Generation

Now generate a conventional commit message for your staged changes.

Using the Command

/commit-msg

Or invoke the skills directly:

Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)

Understanding the Output

Claude analyzes staged changes and generates:

feat(auth): add login form with validation

- Implement LoginForm component with email/password fields
- Add form validation using zod schema
- Create auth types for login request/response

Closes #42

The commit follows Conventional Commits format:

  • Type: feat, fix, docs, style, refactor, test, chore
  • Scope: Optional context (auth, api, ui)
  • Description: Imperative mood, present tense
  • Body: Bullet points explaining what changed
  • Footer: Issue references
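
The header format is mechanical enough to lint. A minimal sketch (a hypothetical helper, not part of sanctum) that checks a commit header against the type/scope/description shape above:

import re

# Types and shape taken from the list above; the helper itself is illustrative.
HEADER = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)"  # type
    r"(\([a-z0-9-]+\))?"                           # optional scope, e.g. (auth)
    r": .+"                                        # imperative description
)

def is_conventional(header: str) -> bool:
    return bool(HEADER.match(header))

assert is_conventional("feat(auth): add login form with validation")
assert not is_conventional("added login form")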

Part 3: PR Preparation

Finally, prepare a pull request description.

Using the Command

/pr

This runs the full PR preparation workflow:

  1. Workspace review
  2. Quality gates check
  3. Change summarization
  4. PR description generation

Quality Gates

Before generating the PR, Claude checks:

Quality Gates:
  [x] Code compiles
  [x] Tests pass
  [x] Linting clean
  [x] No console.log statements
  [x] Documentation updated

Generated PR Description

## Summary

Add user authentication with login form validation.

## Changes

- **New Feature**: Login form component with email/password validation
- **Types**: Auth request/response type definitions
- **Tests**: Unit tests for login validation logic

## Testing

- [x] Manual testing of form submission
- [x] Unit tests pass (15 new tests)
- [x] Integration tests pass

## Screenshots

[Add screenshots if UI changes]

## Checklist

- [x] Tests added
- [x] Documentation updated
- [x] No breaking changes

Achievement Unlocked: PR Pioneer

Workflow Chaining

These skills work together. The recommended flow:

git-workspace-review (foundation)
├── commit-messages (depends on workspace state)
├── pr-prep (depends on workspace state)
├── doc-updates (depends on workspace state)
└── version-updates (depends on workspace state)

Always run git-workspace-review first to establish context.

Common Patterns

Pre-Commit Workflow

# Stage your changes
git add -p

# Review and commit
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)

# Apply the message
git commit -m "<generated message>"

Pre-PR Workflow

# Run quality checks
make fmt && make lint && make test

# Prepare PR
/pr

# Create on GitHub
gh pr create --title "<title>" --body "<generated body>"

Next Steps

Achievements Earned

  • Skill Apprentice: Used your first skill
  • PR Pioneer: Prepared your first PR

Section Progress: 3/3 complete

Quick Start Guide

Common workflows and patterns for Claude Night Market plugins.

Workflow Recipes

Feature Development

Start features with a specification:

# (Optional) Resume persistent speckit context for this repo/session
/speckit-startup

# Create specification from idea
/speckit-specify Add user authentication with OAuth2

# Generate implementation plan
/speckit-plan

# Create ordered tasks
/speckit-tasks

# Execute tasks
/speckit-implement

# Verify artifacts stay consistent
/speckit-analyze

Code Review

Run a detailed code review:

# Full review with intelligent skill selection
/full-review

# Or specific review types
/architecture-review    # Architecture assessment
/api-review            # API surface evaluation
/bug-review            # Bug hunting
/test-review           # Test quality
/rust-review           # Rust-specific (if applicable)

Context Recovery

Get up to speed on changes:

# Quick catchup on recent changes
/catchup

# Or with sanctum's git-specific variant
/git-catchup

Context Optimization

Monitor and optimize context usage:

# Analyze context window usage
/optimize-context

# Check skill growth patterns
/analyze-growth

Skill Invocation Patterns

Basic Skill Usage

# Standard format
Skill(plugin:skill-name)

# Examples
Skill(sanctum:git-workspace-review)
Skill(imbue:diff-analysis)
Skill(conserve:context-optimization)

Skill Chaining

Some skills depend on others:

# Pensive depends on imbue and sanctum
Skill(sanctum:git-workspace-review)
Skill(imbue:review-core)
Skill(pensive:architecture-review)

Skill with Dependencies

Check a plugin’s README for dependency chains:

spec-kit depends on imbue
pensive depends on imbue + sanctum
sanctum depends on imbue (for some skills)

Command Quick Reference

Git Operations (sanctum)

| Command | Purpose |
| --- | --- |
| /commit-msg | Generate commit message |
| /pr | Prepare pull request |
| /fix-pr | Address PR review comments |
| /do-issue | Fix GitHub issues |
| /update-docs | Update documentation |
| /update-readme | Modernize README |
| /update-tests | Maintain tests |
| /update-version | Bump versions |

Specification (spec-kit)

| Command | Purpose |
| --- | --- |
| /speckit-specify | Create specification |
| /speckit-plan | Generate plan |
| /speckit-tasks | Create tasks |
| /speckit-implement | Execute tasks |
| /speckit-analyze | Check consistency |
| /speckit-clarify | Ask clarifying questions |

Review (pensive)

| Command | Purpose |
| --- | --- |
| /full-review | Unified review |
| /architecture-review | Architecture check |
| /api-review | API surface review |
| /bug-review | Bug hunting |
| /test-review | Test quality |

Analysis (imbue)

| Command | Purpose |
| --- | --- |
| /catchup | Quick context recovery |
| /structured-review | Structured review with evidence |
| /feature-review | Feature prioritization |

Plugin Management (leyline)

| Command | Purpose |
| --- | --- |
| /reinstall-all-plugins | Refresh all plugins |
| /update-all-plugins | Update all plugins |

Environment Variables

Some plugins support configuration via environment variables:

Conservation

# Skip optimization guidance for fast processing
CONSERVATION_MODE=quick claude

# Full guidance with extended allowance
CONSERVATION_MODE=deep claude

Memory Palace

# Set embedding provider
MEMORY_PALACE_EMBEDDINGS_PROVIDER=hash  # or local

Tips

1. Start with Foundation

Install foundation plugins first:

/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market

Then add domain specialists as needed.

2. Use TodoWrite Integration

Most skills output TodoWrite items for tracking:

git-review:repo-confirmed
git-review:status-overview
pr-prep:quality-gates

Monitor these for workflow progress.

3. Chain Skills Intentionally

Don’t invoke all skills at once. Build understanding incrementally:

# First: understand state
Skill(sanctum:git-workspace-review)

# Then: perform action
Skill(sanctum:commit-messages)

4. Use Superpowers

If superpowers is installed, commands gain enhanced capabilities:

  • /create-skill uses brainstorming
  • /test-skill uses TDD methodology
  • /pr uses code review patterns

Next Steps

Common Workflows Guide

When and how to use commands, skills, and subagents for typical development tasks.

Quick Reference

| Task | Primary Tool | Plugin |
| --- | --- | --- |
| Initialize a project | /attune:arch-init | attune |
| Review a PR | /full-review | pensive |
| Fix PR feedback | /fix-pr | sanctum |
| Prepare a PR | /pr | sanctum |
| Catch up on changes | /catchup | imbue |
| Write specifications | /speckit-specify | spec-kit |
| Improve the system | /speckit-analyze | spec-kit |
| Debug an issue | Skill(superpowers:debugging) | superpowers |
| Manage knowledge | /palace | memory-palace |

Initializing a New Project

When: Starting a new project from scratch or setting up a new codebase.

Step 1: Architecture-Aware Initialization

Start with an architecture-aware initialization to select the right project structure based on team size and domain complexity. This process guides you through project type selection, online research into best practices, and template customization.

# Interactive architecture selection with research
/attune:arch-init --name my-project

Output: Complete project structure with ARCHITECTURE.md, ADR, and paradigm-specific directories.

Step 2: Standard Initialization

If the architecture is decided, use standard initialization to generate language-specific boilerplate including Makefiles, CI/CD pipelines, and pre-commit hooks.

# Quick initialization when you know the architecture
/attune:init --lang python --name my-project

Step 3: Establish Persistent State

Establish a persistent state to manage artifacts and constraints across sessions. This maintains non-negotiable principles and supports consistent progress tracking.

# (Once) Define non-negotiable principles for the project
/speckit-constitution

# (Each Claude session) Load speckit context + progress tracking
/speckit-startup

Optional enhancements:

  • Install spec-kit for spec-driven artifacts: /plugin install spec-kit@claude-night-market
  • Install superpowers for rigorous methodology loops:
/plugin marketplace add obra/superpowers
/plugin install superpowers@superpowers-marketplace

Alternative: Brainstorming Workflow

For complex projects requiring exploration, begin by brainstorming the problem space and creating a detailed specification before planning the architecture and tasks.

# 1. Brainstorm the problem space
/attune:brainstorm --domain "my problem area"

# 2. Create detailed specification
/attune:specify

# 3. Plan architecture and tasks
/attune:blueprint

# 4. Initialize with chosen architecture
/attune:arch-init --name my-project

# 5. Execute implementation
/attune:execute

What You Get

| Artifact | Description |
| --- | --- |
| pyproject.toml / Cargo.toml / package.json | Build configuration |
| Makefile | Development targets (test, lint, format) |
| .pre-commit-config.yaml | Code quality hooks |
| .github/workflows/ | CI/CD pipelines |
| ARCHITECTURE.md | Architecture overview |
| docs/adr/ | Architecture decision records |

Reviewing a Pull Request

When: Reviewing code changes in a PR or before merging.

Full Multi-Discipline Review

# Full review with skill selection
/full-review

This orchestrates multiple specialized reviews:

  • Architecture assessment
  • API surface evaluation
  • Bug hunting
  • Test quality analysis

Specific Review Types

# Architecture-focused review
/architecture-review

# API surface evaluation
/api-review

# Bug hunting
/bug-review

# Test quality assessment
/test-review

# Rust-specific review (for Rust projects)
/rust-review

Using Skills Directly

For more control, invoke skills:

# First: understand the workspace state
Skill(sanctum:git-workspace-review)

# Then: run specific review
Skill(pensive:architecture-review)
Skill(pensive:api-review)
Skill(pensive:bug-review)

External PR Review

# Review a GitHub PR by URL
/pr-review https://github.com/org/repo/pull/123

# Or just the PR number in current repo
/pr-review 123

Fixing PR Feedback

When: Addressing review comments on your PR.

Quick Fix

# Address PR review comments
/fix-pr

# Or with specific PR reference
/fix-pr 123

This:

  1. Reads PR review comments
  2. Identifies actionable feedback
  3. Applies fixes systematically
  4. Prepares follow-up commit

Manual Workflow

# 1. Review the feedback
Skill(sanctum:git-workspace-review)

# 2. Apply fixes
# (make your changes)

# 3. Prepare commit message
/commit-msg

# 4. Update PR
git push

Preparing a Pull Request

When: Code is complete and ready for review.

Pre-PR Checklist

Run these commands before creating a PR:

# 1. Update documentation
/sanctum:update-docs

# 2. Update README if needed
/sanctum:update-readme

# 3. Review and update tests
/sanctum:update-tests

# 4. Update Makefile demo targets (for plugins)
/abstract:make-dogfood

# 5. Final quality check
make lint && make test

Create the PR

# Full PR preparation
/pr

# This handles:
# - Branch status check
# - Commit message quality
# - Documentation updates
# - PR description generation

Using Skills for PR Prep

# Review workspace before PR
Skill(sanctum:git-workspace-review)

# Generate quality commit message
Skill(sanctum:commit-messages)

# Check PR readiness
Skill(sanctum:pr-preparation)

Catching Up on Changes

When: Returning to a project after time away, or joining an ongoing project.

Quick Catchup

# Standard catchup on recent changes
/catchup

# Git-specific catchup
/git-catchup

Detailed Understanding

# 1. Review workspace state
Skill(sanctum:git-workspace-review)

# 2. Analyze recent diffs
Skill(imbue:diff-analysis)

# 3. Understand branch context
Skill(sanctum:branch-comparison)

Session Recovery

# Resume a previous Claude session
claude --resume

# Or continue with context
claude --continue

Writing Specifications

When: Planning a feature before implementation.

Spec-Driven Development Workflow

# 1. Create specification from idea
/speckit-specify Add user authentication with OAuth2

# 2. Generate implementation plan
/speckit-plan

# 3. Create ordered tasks
/speckit-tasks

# 4. Execute tasks with tracking
/speckit-implement

Persistent Presence Loop (World Model + Agent Model)

Treat SDD artifacts as a self-modeling architecture where the repo state serves as the world model and the loaded skills as the agent model. Experiments are run with small diffs and verified through rigorous loops (tests, linters, repro scripts), while model updates refine both the code artifacts and the orchestration methodology to optimize future loops.

Curriculum generation via /speckit-tasks keeps actions grounded and dependency-ordered, while the skill library and iterative refinement ensure the plan adapts to reality. The cycle moves from planning to action to reflection via /speckit-plan, /speckit-implement, and /speckit-analyze.

Background reading:

  • MineDojo: https://minedojo.org/ (internet-scale knowledge + benchmarks)
  • Voyager: https://voyager.minedojo.org/ (arXiv: https://arxiv.org/abs/2305.16291) (automatic curriculum + skill library)
  • GTNH_Agent: https://github.com/sefiratech/GTNH_Agent (persistent, modular Minecraft automation)

Clarification and Analysis

# Ask clarifying questions about requirements
/speckit-clarify

# Analyze specification consistency
/speckit-analyze

Using Skills

# Invoke spec writing skill directly
Skill(spec-kit:spec-writing)

# Task planning skill
Skill(spec-kit:task-planning)

Meta-Development

When: Improving claude-night-market itself (skills, commands, templates, orchestration).

When improving the system itself, treat the repo as the world model and available tools as the agent model. Run experiments with minimal diffs behind verification, evaluate them with evidence-first methods like /speckit-analyze and Skill(superpowers:verification-before-completion), and update both the artifacts and the methodology so the next loop is cheaper.

Optional pattern: split roles (planner/critic/executor) for long-horizon work, similar to multi-role agent stacks used in open-ended Minecraft agents.

Useful tools:

# Use speckit to keep artifacts + principles explicit
/speckit-constitution
/speckit-analyze

# Use superpowers to enforce evidence
Skill(superpowers:systematic-debugging)
Skill(superpowers:verification-before-completion)

Debugging Issues

When: Investigating bugs or unexpected behavior.

With Superpowers Integration

# Systematic debugging methodology
Skill(superpowers:debugging)

# This provides:
# - Hypothesis formation
# - Evidence gathering
# - Root cause analysis
# - Fix validation

GitHub Issue Resolution

# Fix a GitHub issue
/do-issue 42

# Or with URL
/do-issue https://github.com/org/repo/issues/42

Analysis Tools

# Test analysis (parseltongue)
/analyze-tests

# Performance profiling
/run-profiler

# Context optimization
/optimize-context

Managing Knowledge

When: Capturing insights, decisions, or learnings.

Memory Palace

# Open knowledge management
/palace

# Access digital garden
/garden

Knowledge Capture

# Capture insight during work
Skill(memory-palace:knowledge-capture)

# Link related concepts
Skill(memory-palace:concept-linking)

Plugin Development

When: Creating or maintaining Night Market plugins.

Create a New Plugin

# Scaffold new plugin
make create-plugin NAME=my-plugin

# Or using attune for plugins
/attune:init --type plugin --name my-plugin

Validate Plugin Structure

# Check plugin structure
/abstract:validate-plugin

# Audit skill quality
/abstract:skill-audit

Update Plugin Documentation

# Update all documentation
/sanctum:update-docs

# Update Makefile demo targets
/abstract:make-dogfood

# Sync templates with reference projects
/attune:sync-templates

Testing

# Run plugin tests
make test

# Validate structure
make validate

# Full quality check
make lint && make test && make build

Context Management

When: Managing token usage or context window.

Monitor Usage

# Check context window usage
/context

# Analyze context optimization
/optimize-context

Reduce Context

# Clear context for fresh start
/clear

# Then catch up
/catchup

# Or scan for bloat
/bloat-scan

Optimization Skills

# Context optimization skill
Skill(conserve:context-optimization)

# Growth analysis
/analyze-growth

Subagent Delegation

When: Delegating specialized work to focused agents.

Available Subagents

| Subagent | Purpose | When to Use |
| --- | --- | --- |
| abstract:plugin-validator | Validate plugin structure | Before publishing plugins |
| abstract:skill-auditor | Audit skill quality | During skill development |
| pensive:code-reviewer | Focused code review | Reviewing specific files |
| attune:project-architect | Architecture design | Planning new features |
| attune:project-implementer | Task execution | Systematic implementation |

Example: Code Review Delegation

# Delegate to specialized reviewer
Agent(pensive:code-reviewer) Review src/auth/ for security issues

Example: Plugin Validation

# Delegate validation to subagent
Agent(abstract:plugin-validator) Check plugins/my-plugin

End-to-End Example: New Feature

Here’s a complete workflow for adding a new feature:

# 1. PLANNING PHASE
/speckit-specify Add caching layer for API responses
/speckit-plan
/speckit-tasks

# 2. IMPLEMENTATION PHASE
# Create branch
git checkout -b feature/add-caching

# Implement with Iron Law TDD
Skill(imbue:proof-of-work)  # Enforces: NO IMPLEMENTATION WITHOUT FAILING TEST FIRST

# Or with superpowers TDD
Skill(superpowers:tdd)

# Execute planned tasks
/speckit-implement

# 3. QUALITY PHASE
# Run reviews
/architecture-review
/test-review

# Fix any issues
# (make changes)

# 4. PR PREPARATION PHASE
/sanctum:update-docs
/sanctum:update-tests
make lint && make test

# 5. CREATE PR
/pr

Command vs Skill vs Agent

| Type | Syntax | When to Use |
| --- | --- | --- |
| Command | /command-name | Quick actions, one-off tasks |
| Skill | Skill(plugin:skill-name) | Methodologies, detailed workflows |
| Agent | Agent(plugin:agent-name) | Delegated work, specialized focus |

Examples

# Command: Quick action
/pr

# Skill: Detailed methodology
Skill(sanctum:pr-preparation)

# Agent: Delegated specialized work
Agent(pensive:code-reviewer) Review authentication module

Skill Invocation: Secondary Strategy

The Skill tool is a Claude Code feature that may not be available in all environments. When the Skill tool is unavailable:

Secondary Pattern:

# 1. If Skill tool fails or is unavailable, read the skill file directly:
Read plugins/{plugin}/skills/{skill-name}/SKILL.md

# 2. Follow the skill content as instructions
# The skill file contains the complete methodology to execute

Example:

# Instead of: Skill(sanctum:commit-messages)
# Secondary:  Read plugins/sanctum/skills/commit-messages/SKILL.md
#             Then follow the instructions in that file

Skill file locations:

  • Plugin skills: plugins/{plugin}/skills/{skill-name}/SKILL.md
  • User skills: ~/.claude/skills/{skill-name}/SKILL.md

This allows workflows to function across different environments.


Claude Code 2.1.0 Features

New Capabilities

| Feature | Description | Usage |
| --- | --- | --- |
| Skill Hot-Reload | Skills auto-reload without restart | Edit SKILL.md; immediately available |
| Plan Mode Shortcut | Enter plan mode directly | /plan |
| Forked Context | Run skills in isolated context | context: fork in frontmatter |
| Agent Field | Specify agent for skill execution | agent: agent-name in frontmatter |
| Frontmatter Hooks | Lifecycle hooks in skills/agents | hooks: section in frontmatter |
| Wildcard Permissions | Flexible Bash patterns | Bash(npm *), Bash(* install) |
| Skill Visibility | Control slash menu visibility | user-invocable: false |

Skill Development Workflow (Hot-Reload)

With Claude Code 2.1.0, skill development is faster:

# 1. Create/edit skill
vim ~/.claude/skills/my-skill/SKILL.md

# 2. Save changes (no restart needed!)

# 3. Skill is immediately available
Skill(my-skill)

# 4. Iterate rapidly

Using Forked Context

For isolated operations that shouldn’t pollute main context:

# In skill frontmatter
---
name: isolated-analysis
context: fork  # Runs in separate context
---

Use cases:

  • Heavy file analysis that would bloat context
  • Experimental operations that might fail
  • Parallel workflows

Frontmatter Hooks

Define hooks scoped to skill/agent/command lifecycle:

---
name: validated-workflow
hooks:
  PreToolUse:
    - matcher: "Bash"
      command: "./validate.sh"
      once: true  # Run only once per session
  PostToolUse:
    - matcher: "Write|Edit"
      command: "./format.sh"
  Stop:
    - command: "./cleanup.sh"
---

Permission Wildcards

New wildcard patterns for flexible permissions:

allowed-tools:
  - Bash(npm *)      # All npm commands
  - Bash(* install)  # Any install command
  - Bash(git * main) # Git with main branch

Note (2.1.20+): Bash(*) is now treated as equivalent to plain Bash. Use scoped wildcards like Bash(npm *) for targeted permissions, or plain Bash for unrestricted access.

Disabling Specific Agents

Control which agents can be invoked:

# Via CLI
claude --disallowedTools "Task(expensive-agent)"

# Via settings.json
{
  "permissions": {
    "deny": ["Task(expensive-agent)"]
  }
}

Subagent Resilience

Subagents are designed to continue after a permission denial by attempting alternative approaches rather than failing immediately. This behavior makes agent workflows more reliable in restrictive environments.

Agent-Aware Hooks (2.1.2+)

SessionStart hooks receive agent_type field when launched with --agent:

import json
import sys

input_data = json.loads(sys.stdin.read())
agent_type = input_data.get("agent_type", "")

if agent_type in ["code-reviewer", "quick-query"]:
    context = "Minimal context"  # lightweight agents skip heavy context
else:
    context = full_context  # full_context: the normal session context, built elsewhere

print(json.dumps({"hookSpecificOutput": {"additionalContext": context}}))

This reduces context overhead by 200-800 tokens for lightweight agents.


See Also

Technical Debt Migration Guide

Last Updated: 2025-12-06

Overview

Use this guide to migrate plugin code to shared constants and follow function extraction guidelines.

Quick Start

1. Update Your Plugin to Use Shared Constants

Replace scattered magic numbers with centralized constants:

# BEFORE
def check_file_size(content):
    if len(content) > 15000:  # Magic number!
        return "File too large"
    if len(content) > 5000:   # Another magic number!
        return "File is large"

# AFTER
from plugins.shared.constants import MAX_SKILL_FILE_SIZE, LARGE_SIZE_LIMIT

def check_file_size(content):
    if len(content) > MAX_SKILL_FILE_SIZE:
        return "File too large"
    if len(content) > LARGE_SIZE_LIMIT:
        return "File is large"

2. Apply Function Extraction Guidelines

Use the patterns from the guidelines to refactor complex functions:

# BEFORE - Complex function with multiple responsibilities
def analyze_and_optimize_skill(content, strategy):
    # Validation
    if not content:
        raise ValueError("Content cannot be empty")

    # Analysis
    tokens = estimate_tokens(content)
    complexity = calculate_complexity(content)

    # Optimization
    if strategy == "aggressive":
        # 20 lines of optimization logic
        pass
    elif strategy == "moderate":
        # 20 lines of optimization logic
        pass

    return optimized_content, tokens, complexity

# AFTER - Extracted and organized
def analyze_and_optimize_skill(content: str, strategy: str) -> OptimizationResult:
    """Analyze and optimize skill content."""
    _validate_content(content)

    analysis = _analyze_content(content)
    optimized = _optimize_content(content, strategy)

    return OptimizationResult(optimized, analysis)

def _validate_content(content: str) -> None:
    """Validate input content."""
    if not content:
        raise ValueError("Content cannot be empty")

def _analyze_content(content: str) -> ContentAnalysis:
    """Analyze content properties."""
    tokens = estimate_tokens(content)
    complexity = calculate_complexity(content)
    return ContentAnalysis(tokens, complexity)

def _optimize_content(content: str, strategy: str) -> str:
    """Optimize content using specified strategy."""
    optimizer = get_strategy_optimizer(strategy)
    return optimizer.optimize(content)

Detailed Migration Steps

1. Audit Plugin

Find all magic numbers and complex functions:

# Find magic numbers (numeric literals inside conditions)
grep -n -E "(if|elif|while).*[0-9]+" your_plugin/**/*.py

# Find long files (candidates for long functions; inspect matches manually)
find your_plugin -name "*.py" -exec wc -l {} + | awk '$1 > 30 {print}'

# Find single-line defs with many parameters (4+ commas ≈ 5+ parameters)
grep -n -E "def [A-Za-z_]+\(([^)]*,){4,}" your_plugin/**/*.py

2. Plan Migration

Create a migration plan for your plugin:

  1. Identify Constants

    • List all magic numbers
    • Categorize by purpose (timeouts, sizes, thresholds)
    • Check if they exist in shared constants
  2. Identify Functions to Refactor

    • Functions > 30 lines
    • Functions with > 4 parameters
    • Functions with multiple responsibilities
  3. Create Migration Tasks

    • Update constants first (lowest risk)
    • Refactor simple functions next
    • Tackle complex functions last

3. Replace Magic Numbers

File Size Constants

# Replace these patterns:
if len(content) > 15000:
if file_size > 100000:
if line_count > 200:

# With:
from plugins.shared.constants import (
    MAX_SKILL_FILE_SIZE,
    MAX_TOTAL_SKILL_SIZE,
    LARGE_FILE_LINES
)

Timeout Constants

# Replace these patterns:
timeout=10
timeout=300
time.sleep(30)

# With:
from plugins.shared.constants import (
    DEFAULT_SERVICE_CHECK_TIMEOUT,
    DEFAULT_EXECUTION_TIMEOUT,
    MEDIUM_TIMEOUT
)

Quality Thresholds

# Replace these patterns:
if quality_score > 70.0:
if quality_score > 80.0:
if quality_score > 90.0:

# With:
from plugins.shared.constants import (
    MINIMUM_QUALITY_THRESHOLD,
    HIGH_QUALITY_THRESHOLD,
    EXCELLENT_QUALITY_THRESHOLD
)

4. Refactor Complex Functions

Follow this iterative approach:

4.1 Write Tests First

# Test the current behavior
def test_function_to_refactor():
    result = your_complex_function(input_data)
    assert result.expected_field == expected_value
    # Add more assertions based on current behavior

4.2 Extract Small Helper Functions

# Start with small, obvious extractions
def _calculate_value(item):
    """Extract value calculation from complex function."""
    return item.base * item.multiplier + item.offset

def _validate_input(data):
    """Extract input validation."""
    if not data:
        raise ValueError("Data required")
    return True

4.3 Extract Strategy Classes

For functions with conditional logic:

# Before: Complex conditional function
def process_item(item, mode):
    if mode == "fast":
        # Fast processing logic
        pass
    elif mode == "thorough":
        # Thorough processing logic
        pass
    elif mode == "minimal":
        # Minimal processing logic
        pass

# After: Strategy pattern
class ItemProcessor(ABC):
    @abstractmethod
    def process(self, item):
        pass

class FastProcessor(ItemProcessor):
    def process(self, item):
        # Fast processing implementation
        pass

class ThoroughProcessor(ItemProcessor):
    def process(self, item):
        # Thorough processing implementation
        pass

# Registry
PROCESSORS = {
    "fast": FastProcessor(),
    "thorough": ThoroughProcessor(),
    "minimal": MinimalProcessor()
}

def process_item(item, mode):
    processor = PROCESSORS.get(mode)
    if not processor:
        raise ValueError(f"Unknown mode: {mode}")
    return processor.process(item)

5. Update Configuration

If your plugin has configuration files:

# config.yaml - Use shared defaults
plugin_name: your_plugin

# Import shared defaults and override only what's needed
shared_constants:
  import: file_limits, timeouts, quality

# Plugin-specific settings
specific_settings:
  custom_threshold: 42
  feature_enabled: true

Migration Checklist

Pre-Migration

  • Run existing tests to establish baseline
  • Create backup of current code
  • Document current behavior
  • Identify all dependencies

Constants Migration

  • List all magic numbers in your plugin
  • Map to appropriate shared constants
  • Update imports
  • Replace magic numbers
  • Run tests to verify no breaking changes

Function Refactoring

  • Identify functions > 30 lines
  • Write tests for each function
  • Extract small helper functions first
  • Apply strategy pattern where appropriate
  • Keep public APIs stable
  • Update documentation

Post-Migration

  • Run full test suite
  • Update documentation
  • Verify performance
  • Update CHANGELOG
  • Create migration notes for users

Common Migration Patterns

1. Gradual Migration

Don’t refactor everything at once. Use feature flags:

# Gradually migrate to new implementation
def legacy_function(data):
    if USE_NEW_IMPLEMENTATION:
        return new_refactored_function(data)
    else:
        return old_implementation(data)

# Set this in config when ready
USE_NEW_IMPLEMENTATION = os.getenv("USE_NEW_IMPLEMENTATION", "false").lower() == "true"

2. Adapter Pattern

Keep old API while using new implementation:

def old_api_function(param1, param2, param3):
    """Legacy API - delegates to new implementation."""
    config = LegacyConfig(param1, param2, param3)
    return new_refactored_function(config)

# New, cleaner API
def new_refactored_function(config: Config):
    """New, improved implementation."""
    pass

3. Parallel Implementation

Run both old and new implementations in parallel to verify:

def process_with_validation(data):
    """Run both implementations and compare."""
    old_result = old_implementation(data)
    new_result = new_implementation(data)

    if not results_equivalent(old_result, new_result):
        log_discrepancy(old_result, new_result)
        # Return old result for safety
        return old_result

    return new_result

Testing Your Migration

1. Property-Based Testing

Use hypothesis to test refactored functions:

from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sort_refactor(data):
    """Test that refactored sort produces same result."""
    old_result = old_sort_function(data.copy())
    new_result = new_sort_function(data.copy())
    assert old_result == new_result

2. Integration Tests

Verify the whole workflow still works:

def test_complete_workflow():
    """Test that refactoring didn't break the workflow."""
    input_data = create_test_data()

    # Run through entire process
    result = your_plugin_workflow(input_data)

    # Verify key properties
    assert result is not None
    assert result.quality_score >= 70
    assert len(result.processed_data) > 0

3. Performance Tests

Verify refactoring didn’t hurt performance:

import time

def test_performance():
    """Verify refactoring didn't degrade performance."""
    data = create_large_dataset()

    start = time.time()
    old_result = old_implementation(data)
    old_time = time.time() - start

    start = time.time()
    new_result = new_implementation(data)
    new_time = time.time() - start

    # New implementation shouldn't be more than 10% slower
    assert new_time < old_time * 1.1

Rollback Plan

If Migration Fails

  1. Immediate Rollback

    git revert <migration-commit>
    
  2. Partial Rollback

    • Keep constants migration
    • Revert function refactoring
    • Fix issues and retry
  3. Feature Flag Rollback

    # Disable new implementation
    os.environ["USE_NEW_IMPLEMENTATION"] = "false"
    

Documenting Issues

If you encounter problems:

  1. Document the specific issue
  2. Note the affected functionality
  3. Create a bug report with:
    • Migration step that failed
    • Error messages
    • Minimal reproduction case
    • Expected vs actual behavior

Getting Help

Resources

Support

  • Create an issue for migration problems
  • Join the #migration Slack channel
  • Review example migrations in other plugins

Contributing

  • Share your migration experience
  • Suggest improvements to guidelines
  • Add new shared constants as needed

Migration Examples

Example: Memory Palace Plugin

Challenges:

  • 15 magic numbers scattered across files
  • Functions averaging 45 lines
  • Complex conditional logic

Solution:

  • Replaced all magic numbers with shared constants
  • Refactored 8 functions using extraction patterns
  • Introduced strategy pattern for content processing

Results:

  • 40% reduction in code complexity
  • Improved test coverage from 60% to 85%
  • Easier to add new content types

Example: Parseltongue Plugin

Challenges:

  • Complex analysis functions with 8+ parameters
  • Duplicated logic across multiple analyzers
  • Hard to test individual components

Solution:

  • Extracted configuration objects for parameters
  • Created shared analysis utilities
  • Applied builder pattern for complex objects

Results:

  • Functions reduced to average 15 lines
  • Parameter count reduced to 3-4 per function
  • 100% test coverage for core logic

Conclusion

Migrating to shared constants and following function extraction guidelines improves code quality and maintainability.

Key Steps:

  • Migrate incrementally: Don’t try to do everything at once.
  • Test thoroughly: Verify behavior doesn’t change.
  • Document changes: Help others understand the migration.
  • Ask for help: Use the community’s experience.

Plugin Overview

The Claude Night Market organizes plugins into four layers, each building on the foundations below.

Architecture

graph TB
    subgraph Meta[Meta Layer]
        abstract[abstract<br/>Plugin infrastructure]
    end

    subgraph Foundation[Foundation Layer]
        imbue[imbue<br/>Intelligent workflows]
        sanctum[sanctum<br/>Git & workspace ops]
        leyline[leyline<br/>Pipeline building blocks]
    end

    subgraph Utility[Utility Layer]
        conserve[conserve<br/>Resource optimization]
        conjure[conjure<br/>External delegation]
    end

    subgraph Domain[Domain Specialists]
        archetypes[archetypes<br/>Architecture patterns]
        pensive[pensive<br/>Code review toolkit]
        parseltongue[parseltongue<br/>Python development]
        memory_palace[memory-palace<br/>Spatial memory]
        spec_kit[spec-kit<br/>Spec-driven dev]
        minister[minister<br/>Release management]
        attune[attune<br/>Full-cycle development]
        scribe[scribe<br/>Documentation review]
    end

    abstract --> leyline
    pensive --> imbue
    pensive --> sanctum
    sanctum --> imbue
    conjure --> leyline
    spec_kit --> imbue
    scribe --> imbue
    scribe --> conserve

    style Meta fill:#fff3e0,stroke:#e65100
    style Foundation fill:#e1f5fe,stroke:#01579b
    style Utility fill:#f3e5f5,stroke:#4a148c
    style Domain fill:#e8f5e8,stroke:#1b5e20

Layer Summary

| Layer | Purpose | Plugins |
| --- | --- | --- |
| Meta | Plugin infrastructure and evaluation | abstract |
| Foundation | Core workflow methodologies | imbue, sanctum, leyline |
| Utility | Resource optimization and delegation | conserve, conjure |
| Domain | Specialized task execution | archetypes, pensive, parseltongue, memory-palace, spec-kit, minister, attune, scribe |

Dependency Rules

  1. Downward Only: Plugins depend on lower layers, never upward
  2. Foundation First: Most domain plugins work better with foundation plugins installed
  3. Graceful Degradation: Plugins function standalone but gain capabilities with dependencies
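
The downward-only rule can be checked mechanically. A minimal sketch, with layer ranks and dependency edges taken from the architecture diagram above (the checker itself is hypothetical):

# A plugin may depend only on plugins in the same or a lower layer.
LAYER = {
    "imbue": 0, "sanctum": 0, "leyline": 0,    # Foundation
    "conserve": 1, "conjure": 1,               # Utility
    "pensive": 2, "spec_kit": 2, "scribe": 2,  # Domain
    "abstract": 3,                             # Meta
}

# Edges from the diagram: (dependent, dependency).
DEPENDS_ON = [
    ("abstract", "leyline"), ("pensive", "imbue"), ("pensive", "sanctum"),
    ("sanctum", "imbue"), ("conjure", "leyline"), ("spec_kit", "imbue"),
    ("scribe", "imbue"), ("scribe", "conserve"),
]

for dependent, dependency in DEPENDS_ON:
    assert LAYER[dependency] <= LAYER[dependent], f"{dependent} depends upward"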

Quick Installation

Minimal (Git Workflows)

/plugin install sanctum@claude-night-market

Standard (Development)

/plugin install sanctum@claude-night-market
/plugin install imbue@claude-night-market
/plugin install spec-kit@claude-night-market

Full (All Capabilities)

/plugin install abstract@claude-night-market
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market
/plugin install conserve@claude-night-market
/plugin install conjure@claude-night-market
/plugin install archetypes@claude-night-market
/plugin install pensive@claude-night-market
/plugin install parseltongue@claude-night-market
/plugin install memory-palace@claude-night-market
/plugin install spec-kit@claude-night-market
/plugin install minister@claude-night-market
/plugin install attune@claude-night-market
/plugin install scribe@claude-night-market

Browse by Layer

Browse by Plugin

| Plugin | Description |
| --- | --- |
| abstract | Meta-skills for plugin development |
| imbue | Analysis and evidence gathering |
| sanctum | Git and workspace operations |
| leyline | Infrastructure building blocks |
| conserve | Context and resource optimization |
| conjure | External LLM delegation |
| archetypes | Architecture paradigms |
| pensive | Code review toolkit |
| parseltongue | Python development |
| memory-palace | Knowledge organization |
| spec-kit | Specification-driven development |
| minister | Release management |
| attune | Full-cycle project development |
| scribe | Documentation review and AI slop detection |

Read all plugin pages to unlock: Plugin Explorer

Meta Layer

The meta layer provides infrastructure for building, evaluating, and maintaining plugins themselves.

Purpose

While other layers focus on user-facing workflows, the meta layer focuses on:

  • Plugin Development: Tools for creating new skills, commands, and hooks
  • Quality Assurance: Evaluation frameworks for plugin quality
  • Architecture Guidance: Patterns for modular, maintainable plugins

Plugins

| Plugin | Description |
| --- | --- |
| abstract | Meta-skills infrastructure for plugin development |

When to Use

Use meta layer plugins when:

  • Creating a new plugin for the marketplace
  • Evaluating existing skill quality
  • Refactoring large skills into modules
  • Validating plugin structure before publishing

Key Capabilities

Plugin Validation

/validate-plugin [path]

Checks plugin structure against official requirements.

Skill Creation

/create-skill

Scaffolds new skills using best practices and TDD methodology.

Quality Assessment

/skills-eval

Scores skill quality and suggests improvements.

Architecture Position

Meta Layer
    |
    v
Foundation Layer (imbue, sanctum, leyline)
    |
    v
Utility Layer (conserve, conjure)
    |
    v
Domain Specialists

The meta layer sits above all others, providing tools to build and maintain the entire ecosystem.

abstract

Meta-skills infrastructure for the plugin ecosystem - skill authoring, hook development, and quality evaluation.

Overview

The abstract plugin provides tools for building, evaluating, and maintaining Claude Code plugins. It’s the toolkit for plugin developers.

Installation

/plugin install abstract@claude-night-market

Skills

| Skill | Description | When to Use |
| --- | --- | --- |
| skill-authoring | TDD methodology with Iron Law enforcement | Creating new skills with quality standards |
| hook-authoring | Security-first hook development | Building safe, effective hooks |
| modular-skills | Modular design patterns | Breaking large skills into modules |
| skills-eval | Skill quality assessment | Auditing skills for token efficiency |
| hooks-eval | Hook security scanning | Verifying hook safety |
| escalation-governance | Model escalation decisions | Deciding when to escalate models |
| makefile-dogfooder | Makefile analysis | Ensuring Makefile completeness |
| methodology-curator | Expert framework curation | Grounding skills in proven methodologies |
| shared-patterns | Plugin development patterns | Reusable templates |
| subagent-testing | Subagent test patterns | Testing subagent interactions |

Commands

| Command | Description |
| --- | --- |
| /validate-plugin [path] | Check plugin structure against requirements |
| /create-skill | Scaffold new skill with best practices |
| /create-command | Scaffold new command |
| /create-hook | Scaffold hook with security-first design |
| /analyze-hook | Analyze hook for security and performance |
| /analyze-skill | Get modularization recommendations |
| /bulletproof-skill | Anti-rationalization workflow for hardening |
| /context-report | Context optimization report |
| /estimate-tokens | Estimate token usage for skills |
| /hooks-eval | Detailed hook evaluation |
| /make-dogfood | Analyze and enhance Makefiles |
| /skills-eval | Run skill quality assessment |
| /test-skill | Skill testing with TDD methodology |
| /validate-hook | Validate hook compliance |

Agents

| Agent | Description |
| --- | --- |
| meta-architect | Designs plugin ecosystem architectures |
| plugin-validator | Validates plugin structure |
| skill-auditor | Audits skills for quality and compliance |

Hooks

| Hook | Type | Description |
| --- | --- | --- |
| homeostatic_monitor.py | PostToolUse | Reads stability gap metrics, queues degrading skills for auto-improvement |
| pre_skill_execution.py | PreToolUse | Skill execution tracking |
| skill_execution_logger.py | PostToolUse | Skill metrics logging |
| post-evaluation.json | Config | Quality scoring and improvement tracking |
| pre-skill-load.json | Config | Pre-load validation for dependencies |

Self-Adapting System

A closed-loop system that monitors skill health and auto-triggers improvements:

  1. homeostatic_monitor.py checks stability gap after each Skill invocation
  2. Skills with gap > 0.3 are queued in improvement_queue.py
  3. After 3+ flags, the skill-improver agent runs automatically
  4. skill_versioning.py tracks changes via YAML frontmatter
  5. rollback_reviewer.py creates GitHub issues if regressions are detected
  6. experience_library.py stores successful trajectories for future context

Cross-plugin dependency: reads stability metrics from memory-palace’s .history.json.
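
A minimal sketch of the monitoring step, using the threshold and flag count from the list above (the file name comes from the dependency note; its schema and the helper are illustrative assumptions):

import json
from collections import Counter

STABILITY_GAP_THRESHOLD = 0.3  # step 2 above
FLAGS_BEFORE_IMPROVEMENT = 3   # step 3 above

flag_counts = Counter()

def should_auto_improve(skill_name: str, history_path: str) -> bool:
    """Flag a skill whose stability gap exceeds the threshold; trigger
    improvement once it has been flagged often enough. Schema assumed."""
    with open(history_path) as f:
        history = json.load(f)
    gap = history.get(skill_name, {}).get("stability_gap", 0.0)
    if gap > STABILITY_GAP_THRESHOLD:
        flag_counts[skill_name] += 1  # queued for improvement
    return flag_counts[skill_name] >= FLAGS_BEFORE_IMPROVEMENT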

Usage Examples

Create a New Skill

/create-skill

# Claude will:
# 1. Use brainstorming for idea refinement
# 2. Apply TDD methodology
# 3. Generate skill scaffold
# 4. Create tests

Evaluate Skill Quality

Skill(abstract:skills-eval)

# Scores skills on:
# - Token efficiency
# - Documentation quality
# - Trigger clarity
# - Modular structure

Validate Plugin Structure

/validate-plugin /path/to/my-plugin

# Checks:
# - plugin.json structure
# - Required files present
# - Skill format compliance
# - Command syntax

Best Practices

Skill Design

  1. Single Responsibility: Each skill does one thing well
  2. Clear Triggers: Include “Use when…” in descriptions
  3. Token Efficiency: Keep skills under 2000 tokens
  4. TodoWrite Integration: Output actionable items
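
The 2000-token budget can be sanity-checked while authoring. A minimal sketch using the rough characters-per-token approximation (a heuristic only; /estimate-tokens is the supported path):

CHARS_PER_TOKEN = 4   # rough heuristic; real tokenizers vary
TOKEN_BUDGET = 2000   # guideline from the list above

def estimate_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

def within_budget(skill_markdown: str) -> bool:
    return estimate_tokens(skill_markdown) <= TOKEN_BUDGET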

Hook Security

  1. No Secrets: Never log sensitive data
  2. Fail Safe: Default to allowing operations
  3. Minimal Scope: Request only needed permissions
  4. Audit Trail: Log decisions for review
  5. Agent-Aware (2.1.2+): SessionStart hooks receive agent_type to customize context

Superpowers Integration

When superpowers is installed:

| Command | Enhancement |
| --- | --- |
| /create-skill | Uses brainstorming for idea refinement |
| /create-command | Uses brainstorming for concept development |
| /create-hook | Uses brainstorming for security design |
| /test-skill | Uses test-driven-development for TDD cycles |

Related Plugins

  • leyline: Infrastructure patterns abstract builds on
  • imbue: Review patterns for skill evaluation

Foundation Layer

The foundation layer provides core workflow methodologies that other plugins build upon.

Purpose

Foundation plugins establish:

  • Analysis Patterns: How to approach investigation and review tasks
  • Workspace Operations: Git and file system interactions
  • Infrastructure Utilities: Reusable patterns for building plugins

Plugins

| Plugin | Description | Key Use Case |
| --- | --- | --- |
| imbue | Workflow methodologies | Analysis, evidence gathering |
| sanctum | Git operations | Commits, PRs, documentation |
| leyline | Building blocks | Error handling, authentication |

Dependency Flow

imbue (standalone)
sanctum --> imbue
leyline (standalone)

  • imbue: No dependencies, purely methodology
  • sanctum: Uses imbue for review patterns
  • leyline: No dependencies, infrastructure patterns

When to Use

imbue

Use when you need to:

  • Structure a detailed review
  • Analyze changes systematically
  • Capture evidence for decisions
  • Prevent overengineering (scope-guard)

sanctum

Use when you need to:

  • Understand repository state
  • Generate commit messages
  • Prepare pull requests
  • Update documentation

leyline

Use when you need to:

  • Implement error handling patterns
  • Add authentication flows
  • Build plugin infrastructure
  • Standardize testing approaches

Key Workflows

Pre-Commit Flow

Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)

Review Flow

Skill(imbue:review-core)
Skill(imbue:evidence-logging)
Skill(imbue:structured-output)

PR Preparation

Skill(sanctum:git-workspace-review)
Skill(sanctum:pr-prep)

Installation

# Minimal foundation
/plugin install imbue@claude-night-market

# Full foundation
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market

imbue

Workflow methodologies for analysis, evidence gathering, and structured output.

Overview

Imbue provides reusable patterns for approaching analysis tasks. It’s a methodology plugin - the patterns apply to various inputs (git diffs, specs, logs) and chain together for complex workflows.

Core Philosophy: “NO IMPLEMENTATION WITHOUT A FAILING TEST FIRST” - The Iron Law enforced through proof-of-work validation.

Installation

/plugin install imbue@claude-night-market

Principles

  • Generalizable: Patterns work across different input types
  • Composable: Skills chain together naturally
  • Evidence-based: Emphasizes capturing proof for reproducibility
  • TDD-First: Iron Law enforcement prevents cargo cult testing

Skills

Review Patterns

| Skill | Description | When to Use |
| --- | --- | --- |
| review-core | Scaffolding for detailed reviews | Starting architecture, security, or code quality reviews |
| evidence-logging | Evidence capture methodology | Creating audit trails during analysis |
| structured-output | Output formatting patterns | Preparing final reports |

Analysis Methods

| Skill | Description | When to Use |
| --- | --- | --- |
| diff-analysis | Semantic changeset analysis | Understanding impact of changes |
| catchup | Context recovery | Getting up to speed after time away |

Workflow Guards

| Skill | Description | When to Use |
| --- | --- | --- |
| scope-guard | Anti-overengineering | Evaluating if features should be built now |
| proof-of-work | Evidence-based validation | Enforcing Iron Law TDD discipline |
| rigorous-reasoning | Anti-sycophancy guardrails | Analyzing conflicts, evaluating contested claims |

Feature Planning

| Skill | Description | When to Use |
| --- | --- | --- |
| feature-review | Feature prioritization | Sprint planning, roadmap reviews |

Workflow Automation

| Skill | Description | When to Use |
| --- | --- | --- |
| workflow-monitor | Execution monitoring and issue creation | After workflow failures or inefficiencies |

Commands

| Command | Description |
| --- | --- |
| /catchup | Quick context recovery from recent changes |
| /structured-review | Start structured review workflow with evidence logging |
| /feature-review | Feature prioritization with RICE+WSJF scoring |

Agents

| Agent | Description |
| --- | --- |
| review-analyst | Autonomous structured reviews with evidence gathering |

Hooks

| Hook | Type | Description |
| --- | --- | --- |
| session-start.sh | SessionStart | Initializes scope-guard, Iron Law, and learning mode |
| user-prompt-submit.sh | UserPromptSubmit | Validates prompts against scope thresholds |
| tdd_bdd_gate.py | PreToolUse | Enforces Iron Law at write-time |
| pre-pr-scope-check.sh | Manual | Checks scope before PR creation |
| proof-enforcement.md | Design | Iron Law TDD compliance enforcement |

Usage Examples

Structured Review

Skill(imbue:review-core)

# Required TodoWrite items:
# 1. review-core:context-established
# 2. review-core:scope-inventoried
# 3. review-core:evidence-captured
# 4. review-core:deliverables-structured
# 5. review-core:contingencies-documented

Diff Analysis

Skill(imbue:diff-analysis)

# Answers: "What changed and why does it matter?"
# - Categorizes changes by function
# - Assesses risks
# - Summarizes implications

Quick Catchup

/catchup

# Summarizes:
# - Recent commits
# - Changed files
# - Key decisions
# - Action items

Feature Prioritization

/feature-review

# Uses hybrid RICE+WSJF scoring:
# - Reach, Impact, Confidence, Effort
# - Weighted Shortest Job First
# - ISO 25010 quality dimensions

Scope Guard

The scope-guard skill prevents overengineering via four components:

| Component | Purpose |
| --- | --- |
| decision-framework | Worthiness formula and scoring |
| anti-overengineering | Rules to prevent scope creep |
| branch-management | Threshold monitoring (lines, commits, days) |
| baseline-scenarios | Validated test scenarios |
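
The branch-management component's threshold monitoring can be pictured concretely. A hypothetical sketch; the actual limits live in the skill, and the numbers below are placeholders:

from dataclasses import dataclass

# Placeholder limits; scope-guard defines the real thresholds.
MAX_LINES_CHANGED = 500
MAX_COMMITS = 20
MAX_DAYS_OPEN = 7

@dataclass
class BranchStats:
    lines_changed: int
    commits: int
    days_open: int

def exceeded_thresholds(stats: BranchStats) -> list[str]:
    """Return which scope dimensions (lines, commits, days) are over limit."""
    flags = []
    if stats.lines_changed > MAX_LINES_CHANGED:
        flags.append("lines")
    if stats.commits > MAX_COMMITS:
        flags.append("commits")
    if stats.days_open > MAX_DAYS_OPEN:
        flags.append("days")
    return flags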

Iron Law TDD Enforcement

The proof-of-work skill enforces the Iron Law:

NO IMPLEMENTATION WITHOUT A FAILING TEST FIRST

This prevents “Cargo Cult TDD” where tests validate pre-conceived implementations.
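
Conceptually, the write-time gate (tdd_bdd_gate.py in the Hooks table above) blocks implementation edits until a failing test exists. A simplified sketch; the payload fields and RED-state tracking are illustrative assumptions, not the real hook:

import json
import sys

input_data = json.loads(sys.stdin.read())
tool = input_data.get("tool_name", "")
path = input_data.get("tool_input", {}).get("file_path", "")

def failing_test_recorded() -> bool:
    # Hypothetical: the real hook would consult persisted RED-phase state.
    return False

if tool in ("Write", "Edit") and not path.startswith("tests/"):
    if not failing_test_recorded():
        print("Iron Law: write a failing test first.", file=sys.stderr)
        sys.exit(2)  # exit code 2 blocks the tool call in Claude Code hooks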

Self-Check Protocol

| Thought Pattern | Violation | Action |
| --- | --- | --- |
| “Let me plan the implementation first” | Skipping RED | Write failing test FIRST |
| “I know what tests we need” | Pre-conceived implementation | Document failure, THEN design |
| “The design is straightforward” | Skipping uncertainty | Let design EMERGE from tests |

TodoWrite Items

proof:iron-law-red     - Failing test documented
proof:iron-law-green   - Minimal code to pass
proof:iron-law-refactor - Code improved, tests green
proof:iron-law-coverage - Coverage gates verified

See iron-law-enforcement.md module for full enforcement patterns.

Rigorous Reasoning

The rigorous-reasoning skill prevents sycophantic patterns through structured analysis:

| Component | Purpose |
| --- | --- |
| priority-signals | Override principles (no courtesy agreement, checklist over intuition) |
| conflict-analysis | Harm/rights checklist for interpersonal conflicts |
| debate-methodology | Truth claims and contested territory handling |
| red-flag monitoring | Detect sycophantic thought patterns |

Red Flag Self-Check

| Thought Pattern | Reality Check | Action |
| --- | --- | --- |
| “I agree that…” | Did you validate? | Apply harm/rights checklist |
| “You’re right that…” | Is this proven? | Check for evidence |
| “That’s a fair point” | Fair by what standard? | Specify the standard |

TodoWrite Integration

All skills output TodoWrite items for progress tracking:

review-core:context-established
review-core:scope-inventoried
diff-analysis:baseline-established
diff-analysis:changes-categorized
catchup:context-confirmed
catchup:delta-captured

Integration Pattern

Imbue is foundational - other plugins build on it:

# Sanctum uses imbue for review patterns
Skill(imbue:review-core)
Skill(sanctum:git-workspace-review)

# Pensive uses imbue for evidence gathering
Skill(imbue:evidence-logging)
Skill(pensive:architecture-review)

Superpowers Integration

| Skill | Enhancement |
|---|---|
| scope-guard | Uses brainstorming, writing-plans, execute-plan |
| /feature-review | Uses brainstorming for feature suggestions |

Related Plugins

  • sanctum: Uses imbue for review scaffolding
  • pensive: Uses imbue for evidence gathering
  • spec-kit: Uses imbue for analysis patterns

sanctum

Git and workspace operations for active development workflows.

Overview

Sanctum handles the practical side of development: commits, PRs, documentation updates, and version management. It’s the plugin you’ll use most during active coding.

Installation

/plugin install sanctum@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| git-workspace-review | Preflight repo state analysis | Before any git operation |
| file-analysis | Codebase structure mapping | Understanding project layout |
| commit-messages | Conventional commit generation | After staging changes |
| pr-prep | PR preparation with quality gates | Before creating PRs |
| pr-review | PR analysis and feedback | Reviewing others’ PRs |
| doc-consolidation | Merge ephemeral docs | Consolidating LLM-generated docs |
| doc-updates | Documentation maintenance | Syncing docs with code |
| test-updates | Test generation and enhancement | Maintaining test suites |
| update-readme | README modernization | Refreshing project entry points |
| version-updates | Version bumping | Managing semantic versions |
| workflow-improvement | Workflow retrospectives | Improving development processes |
| tutorial-updates | Tutorial maintenance | Keeping tutorials current |

Commands

| Command | Description |
|---|---|
| /git-catchup | Git repository catchup |
| /commit-msg | Draft conventional commit message |
| /pr | Prepare PR with quality gates |
| /pr-review | Enhanced PR review |
| /fix-pr | Address PR review comments |
| /do-issue | Fix GitHub issues systematically |
| /fix-workflow | Improve recent workflow |
| /merge-docs | Consolidate ephemeral docs |
| /update-docs | Update documentation |
| /update-plugins | Audit and sync plugin.json registrations |
| /update-readme | Modernize README |
| /update-tests | Maintain tests |
| /update-tutorial | Update tutorial content |
| /update-version | Bump versions |
| /update-dependencies | Update project dependencies |
| /create-tag | Create git tags for releases |
| /resolve-threads | Resolve PR review threads |

Agents

| Agent | Description |
|---|---|
| git-workspace-agent | Repository state analysis |
| commit-agent | Commit message generation |
| pr-agent | PR preparation specialist |
| workflow-recreate-agent | Workflow slice reconstruction |
| workflow-improvement-* | Workflow improvement pipeline |
| dependency-updater | Dependency version management |

Hooks

| Hook | Type | Description |
|---|---|---|
| post_implementation_policy.py | SessionStart | Requires docs/tests/readme updates |
| verify_workflow_complete.py | Stop | Verifies workflow completion |
| session_complete_notify.py | Stop | Toast notification when awaiting input |

Usage Examples

Pre-Commit Workflow

# Stage changes
git add -p

# Review workspace
Skill(sanctum:git-workspace-review)

# Generate commit message
Skill(sanctum:commit-messages)

# Apply
git commit -m "<generated message>"

PR Preparation

# Run quality checks first
make fmt && make lint && make test

# Prepare PR
/pr

# Creates:
# - Summary
# - Change list
# - Testing checklist
# - Quality gate results

Fix PR Review Comments

/fix-pr

# Claude will:
# 1. Read PR comments
# 2. Triage by priority
# 3. Implement fixes
# 4. Resolve threads on GitHub

Fix GitHub Issue

/do-issue 42

# Uses subagent-driven-development:
# 1. Analyze issue
# 2. Create plan
# 3. Implement fix
# 4. Test
# 5. Prepare PR

Skill Dependencies

Most sanctum skills depend on git-workspace-review:

git-workspace-review (foundation)
├── commit-messages
├── pr-prep
├── doc-updates
├── update-readme
└── version-updates

file-analysis (standalone)

Always run git-workspace-review first to establish context.

TodoWrite Integration

git-review:repo-confirmed
git-review:status-overview
git-review:diff-stat
git-review:diff-details
pr-prep:workspace-reviewed
pr-prep:quality-gates
pr-prep:changes-summarized
pr-prep:testing-documented
pr-prep:pr-drafted

Workflow Patterns

Pre-Commit

git add -p
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)

Pre-PR

make fmt && make lint && make test
Skill(sanctum:git-workspace-review)
Skill(sanctum:pr-prep)

Post-Review

/fix-pr
# Implements fixes, resolves threads

Release

Skill(sanctum:git-workspace-review)
Skill(sanctum:version-updates)
Skill(sanctum:doc-updates)
git commit && git tag

Superpowers Integration

| Command | Enhancement |
|---|---|
| /pr | Uses receiving-code-review for validation |
| /pr-review | Uses receiving-code-review for analysis |
| /fix-pr | Uses receiving-code-review for resolution |
| /do-issue | Uses multiple superpowers for full workflow |

Related Plugins

  • imbue: Provides the review scaffolding sanctum uses
  • pensive: Code review complements sanctum’s git operations

leyline

Infrastructure and pipeline building blocks for plugins.

Overview

Leyline provides reusable infrastructure patterns that other plugins build on. Think of it as a standard library for plugin development - error handling, authentication, storage, and testing patterns.

Installation

/plugin install leyline@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| quota-management | Rate limiting and quotas | Building services that consume APIs |
| usage-logging | Telemetry tracking | Logging tool usage for analytics |
| service-registry | Service discovery patterns | Managing external tool connections |
| damage-control | Agent-level error recovery for multi-agent coordination | Crash recovery, context overflow, merge conflicts |
| error-patterns | Standardized error handling | Implementing production-grade error recovery |
| authentication-patterns | Auth flow patterns | Handling API keys and OAuth |
| evaluation-framework | Decision thresholds | Building evaluation criteria |
| mecw-patterns | MECW implementation | Minimum Effective Context Window |
| progressive-loading | Dynamic content loading | Lazy loading strategies |
| risk-classification | Inline 4-tier risk classification for agent tasks | Risk-based task routing with war-room escalation |
| pytest-config | Pytest configuration | Standardized test configuration |
| storage-templates | Storage abstraction | File and database patterns |
| testing-quality-standards | Test quality guidelines | Ensuring high-quality tests |

Commands

| Command | Description |
|---|---|
| /reinstall-all-plugins | Uninstall and reinstall all plugins to refresh cache |
| /update-all-plugins | Update all installed plugins from marketplaces |

Usage Examples

Plugin Management

# Refresh all plugins (fixes version mismatches)
/reinstall-all-plugins

# Update to latest versions
/update-all-plugins

Using as Dependencies

Leyline skills are typically used as dependencies in other plugins:

# In your skill's SKILL.md frontmatter
dependencies:
  - leyline:error-patterns
  - leyline:quota-management

Error Handling Pattern

Skill(leyline:error-patterns)

# Provides:
# - Structured error types
# - Recovery strategies
# - Logging standards
# - User-friendly messages

Authentication Pattern

Skill(leyline:authentication-patterns)

# Covers:
# - API key management
# - OAuth flows
# - Token refresh
# - Secret storage

Testing Standards

Skill(leyline:testing-quality-standards)

# Enforces:
# - Test naming conventions
# - Coverage requirements
# - Mocking guidelines
# - Fixture patterns

Pattern Categories

Rate Limiting

# quota-management pattern
from leyline import QuotaManager

manager = QuotaManager(
    daily_limit=1000,
    hourly_limit=100,
    burst_limit=10
)

if manager.can_proceed():
    # Make API call
    manager.record_usage()

Telemetry

# usage-logging pattern
from leyline import UsageLogger

logger = UsageLogger(output="telemetry.csv")
logger.log_tool_use("WebFetch", tokens=500, latency_ms=1200)

Storage Abstraction

# storage-templates pattern
from leyline import Storage

storage = Storage.from_config()
storage.save("key", data)
data = storage.load("key")

MECW Patterns

The mecw-patterns skill implements Minimum Effective Context Window principles:

| Pattern | Description |
|---|---|
| Summarize Early | Compress context before it grows |
| Load on Demand | Fetch details only when needed |
| Evict Stale | Remove outdated information |
| Prioritize Recent | Weight recent context higher |
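
A minimal sketch of how these patterns might compose, assuming an invented ContextWindow helper (the names, structure, and budget are illustrative, not leyline's API):

# Hypothetical MECW sketch: keep a bounded working set of context entries.
from dataclasses import dataclass, field

@dataclass
class ContextEntry:
    key: str
    text: str
    age: int = 0  # turns since last use; higher means staler

@dataclass
class ContextWindow:
    budget_chars: int
    entries: list[ContextEntry] = field(default_factory=list)

    def add(self, entry: ContextEntry) -> None:
        self.entries.append(entry)
        # Evict Stale / Prioritize Recent: drop the stalest entries first.
        while sum(len(e.text) for e in self.entries) > self.budget_chars:
            self.entries.sort(key=lambda e: e.age)
            self.entries.pop()  # the stalest entry sorts last

window = ContextWindow(budget_chars=2_000)
window.add(ContextEntry("readme-summary", "Summarized README...", age=3))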

Integration

Leyline is used by:

  • abstract: Plugin validation uses error patterns
  • conjure: Delegation uses quota management
  • conserve: Context optimization uses MECW patterns

Best Practices

  1. Don’t Duplicate: Use leyline patterns instead of reimplementing
  2. Compose Patterns: Combine multiple patterns for complex needs
  3. Test with Standards: Use pytest-config for consistent testing
  4. Log Everything: Use usage-logging for debugging and analytics

Related Plugins

  • abstract: Uses leyline for plugin infrastructure
  • conjure: Uses leyline for quota and service management
  • conserve: Uses leyline for MECW implementation

Utility Layer

The utility layer provides resource optimization and external integration capabilities.

Purpose

Utility plugins handle:

  • Resource Management: Context window optimization, token conservation
  • External Delegation: Offloading tasks to external LLM services
  • Performance Monitoring: CPU/GPU and memory tracking

Plugins

| Plugin | Description | Key Use Case |
|---|---|---|
| conserve | Resource optimization | Context management |
| conjure | External delegation | Long-context tasks |
| hookify | Behavioral rules | Preventing unwanted actions |

When to Use

conserve

Use when you need to:

  • Monitor context window usage
  • Optimize token consumption
  • Handle large codebases efficiently
  • Track resource usage patterns

conjure

Use when you need to:

  • Process files too large for Claude’s context
  • Delegate bulk processing tasks
  • Use specialized external models
  • Manage API quotas across services

hookify

Use when you need to:

  • Prevent accidental destructive actions (force push, etc.)
  • Enforce coding standards via pattern matching
  • Create project-specific behavioral constraints
  • Add safety guardrails for automated workflows

Key Capabilities

Context Optimization

/optimize-context

Analyzes current context usage and suggests MECW (Minimum Effective Context Window) strategies.

Growth Analysis

/analyze-growth

Predicts context budget impact of skill growth patterns.

External Delegation

make delegate-auto PROMPT="Summarize" FILES="src/"

Auto-selects the best external service for a task.

Conserve Modes

The conserve plugin supports different modes via environment variables:

| Mode | Command | Behavior |
|---|---|---|
| Normal | claude | Full conservation guidance |
| Quick | CONSERVE_MODE=quick claude | Skip guidance for fast tasks |
| Deep | CONSERVE_MODE=deep claude | Extended resource allowance |

Key Thresholds

Context Usage

  • < 30%: LOW - Normal operation
  • 30-50%: MODERATE - Consider optimization
  • > 50%: CRITICAL - Optimize immediately

Token Quotas

  • 5-hour rolling cap
  • Weekly cap
  • Check with /status

Installation

# Resource optimization
/plugin install conserve@claude-night-market

# External delegation
/plugin install conjure@claude-night-market

Integration with Other Layers

Utility plugins enhance all other layers:

Domain Specialists
       |
       v
   Utility Layer (optimization, delegation)
       |
       v
 Foundation Layer

For example, conjure can delegate large file processing before sanctum analyzes the results.

conserve

Resource optimization and performance monitoring for context window management.

Overview

Conserve helps you work efficiently within Claude’s context limits. It automatically loads optimization guidance at session start and provides tools for monitoring and reducing context usage.

Installation

/plugin install conserve@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| context-optimization | MECW principles and 50% context rule | When context usage > 30% |
| token-conservation | Token usage strategies and quota tracking | Session start, before heavy loads |
| cpu-gpu-performance | Resource monitoring and selective testing | Before builds, tests, or training |
| mcp-code-execution | MCP patterns for data pipelines | Processing data outside context |
| optimizing-large-skills | Large skill optimization | Breaking down oversized skills |
| bloat-detector | Detect bloated documentation, dead code, dead wrappers | During documentation reviews, code cleanup |
| clear-context | Context window management strategies | When approaching context limits |

Commands

| Command | Description |
|---|---|
| /bloat-scan | Detect code bloat, dead code, and dead wrapper scripts |
| /unbloat | Remove detected bloat with progressive analysis |
| /optimize-context | Analyze and optimize context window usage |
| /analyze-growth | Predict context budget impact of skill growth |

Agents

| Agent | Description |
|---|---|
| context-optimizer | Autonomous context optimization and MECW compliance |

Hooks

| Hook | Type | Description |
|---|---|---|
| session-start.sh | SessionStart | Loads conservation guidance at startup |

Usage Examples

Context Optimization

/optimize-context

# Analyzes:
# - Current context usage
# - Token distribution
# - Compression opportunities
# - MECW compliance

Growth Analysis

/analyze-growth

# Predicts:
# - Skill growth patterns
# - Context budget impact
# - Optimization priorities

Manual Skill Invocation

Skill(conserve:context-optimization)

# Provides:
# - MECW principles
# - 50% context rule
# - Compression strategies
# - Eviction priorities

Bypass Modes

Control conservation behavior via environment variables:

| Mode | Command | Behavior |
|---|---|---|
| Normal | claude | Full conservation guidance |
| Quick | CONSERVATION_MODE=quick claude | Skip guidance for fast processing |
| Deep | CONSERVATION_MODE=deep claude | Extended resource allowance |

Examples

# Quick mode for simple tasks
CONSERVATION_MODE=quick claude

# Deep mode for complex analysis
CONSERVATION_MODE=deep claude

Key Thresholds

Context Usage

| Level | Usage | Action |
|---|---|---|
| LOW | < 30% | Normal operation |
| MODERATE | 30-50% | Consider optimization |
| CRITICAL | > 50% | Optimize immediately |
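
Applied in a monitoring script, the bands reduce to two comparisons; a minimal sketch (the function name is invented for illustration):

# Hypothetical helper mapping context usage to the documented bands.
def usage_level(percent_used: float) -> str:
    """Classify usage: LOW (< 30%), MODERATE (30-50%), CRITICAL (> 50%)."""
    if percent_used < 30:
        return "LOW"
    if percent_used <= 50:
        return "MODERATE"
    return "CRITICAL"

assert usage_level(12) == "LOW"
assert usage_level(42) == "MODERATE"
assert usage_level(63) == "CRITICAL"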

Token Quotas

  • 5-hour rolling cap: Prevents burst usage
  • Weekly cap: Ensures sustainable usage
  • Check status: Use /status to see current usage

MECW Principles

Minimum Effective Context Window strategies:

  1. Summarize Early: Compress large outputs before they accumulate
  2. Load on Demand: Fetch file contents only when needed
  3. Evict Stale: Remove information no longer relevant
  4. Prioritize Recent: Weight recent context higher than old

Optimization Strategies

For Large Files

# Don't load entire file
# Instead, use targeted reads
Read file.py --offset 100 --limit 50

For Search Results

# Limit search output
Grep --head_limit 20

For Git Operations

# Use stats instead of full diffs
git diff --stat
git log --oneline -10

CPU/GPU Performance

The cpu-gpu-performance skill monitors resource usage:

Skill(conserve:cpu-gpu-performance)

# Provides:
# - Baseline establishment
# - Resource monitoring
# - Selective test execution
# - Build optimization

MCP Code Execution

For processing data too large for context:

Skill(conserve:mcp-code-execution)

# Patterns for:
# - External data processing
# - Pipeline optimization
# - Result summarization

Superpowers Integration

| Command | Enhancement |
|---|---|
| /optimize-context | Uses condition-based-waiting for smart optimization |

Related Plugins

  • leyline: Provides MECW pattern implementations
  • abstract: Uses conserve for skill optimization
  • conjure: Delegates to external services when context is limited

conjure

Delegation to external LLM services for long-context or bulk tasks.

Overview

Conjure provides a framework for delegating tasks to external LLM services (Gemini, Qwen) when Claude’s context window is insufficient or when specialized models are better suited.

Installation

/plugin install conjure@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| delegation-core | Framework for delegation decisions | Assessing if tasks should be offloaded |
| gemini-delegation | Gemini CLI integration | Processing massive context windows |
| qwen-delegation | Qwen MCP integration | Tasks requiring specific privacy needs |

Commands (Makefile)

| Command | Description | Example |
|---|---|---|
| make delegate-auto | Auto-select best service | make delegate-auto PROMPT="Summarize" FILES="src/" |
| make quota-status | Show current quota usage | make quota-status |
| make usage-report | Summarize token usage and costs | make usage-report |

Hooks

| Hook | Type | Description |
|---|---|---|
| bridge.on_tool_start | PreToolUse | Suggests delegation when files exceed thresholds |
| bridge.after_tool_use | PostToolUse | Suggests delegation if output is truncated |

Usage Examples

Auto-Delegation

make delegate-auto PROMPT="Summarize all files" FILES="src/"

# Conjure will:
# 1. Assess file sizes
# 2. Check quota availability
# 3. Select optimal service
# 4. Execute delegation
# 5. Return results

Check Quota Status

make quota-status

# Output:
# Gemini: 450/1000 tokens used (5h rolling)
# Qwen: 200/500 tokens used (5h rolling)

Usage Report

make usage-report

# Output:
# This week:
#   Gemini: 2,500 tokens, $0.05
#   Qwen: 800 tokens, $0.02
# Total: 3,300 tokens, $0.07

Manual Service Selection

# Force Gemini for large context
Skill(conjure:gemini-delegation)

# Force Qwen for privacy-sensitive tasks
Skill(conjure:qwen-delegation)

Delegation Decision Framework

The delegation-core skill evaluates:

| Factor | Weight | Description |
|---|---|---|
| Context Size | High | Does input exceed Claude’s context? |
| Task Type | Medium | Is task better suited for another model? |
| Privacy Needs | High | Are there data residency requirements? |
| Quota Available | High | Do we have capacity on target service? |
| Cost | Low | Is delegation cost-effective? |
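
One way to picture the weighting is as a simple scored vote; the sketch below is hypothetical (weights, threshold, and function name are invented here; delegation-core defines the real logic):

# Hypothetical weighted delegation check; numbers are illustrative only.
WEIGHTS = {"context_size": 3, "task_type": 2, "privacy": 3, "quota": 3, "cost": 1}

def should_delegate(signals: dict[str, bool], threshold: int = 6) -> bool:
    """Sum the weights of the factors favoring delegation."""
    score = sum(WEIGHTS[name] for name, favors in signals.items() if favors)
    return score >= threshold

print(should_delegate({
    "context_size": True,   # input exceeds Claude's context
    "task_type": False,
    "privacy": False,
    "quota": True,          # target service has capacity
    "cost": True,           # delegation is cost-effective
}))  # True: 3 + 3 + 1 = 7 >= 6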

Service Comparison

| Service | Strengths | Best For |
|---|---|---|
| Gemini | Large context (1M+ tokens) | Bulk file processing, long documents |
| Qwen | Local/private inference | Sensitive data, offline work |

Hook Behavior

Pre-Tool Use Hook

When reading large files:

[Conjure Bridge] File exceeds context threshold
Suggested action: Delegate to Gemini
Estimated tokens: 125,000
Quota available: Yes

Post-Tool Use Hook

When output is truncated:

[Conjure Bridge] Output truncated at 100,000 chars
Suggested action: Re-run with delegation
Recommended service: Gemini

Configuration

Environment Variables

# Gemini API key
export GEMINI_API_KEY=your-key

# Qwen MCP endpoint
export QWEN_MCP_ENDPOINT=http://localhost:8080

Quota Configuration

Edit conjure/config/quotas.yaml:

gemini:
  hourly_limit: 1000
  daily_limit: 10000

qwen:
  hourly_limit: 500
  daily_limit: 5000

Integration Patterns

With Conserve

# Conserve detects high context usage
# Suggests delegation via conjure
Skill(conserve:context-optimization)
# -> Recommends: Skill(conjure:delegation-core)

With Sanctum

# Large repo analysis
Skill(sanctum:git-workspace-review)
# If repo too large:
# -> Suggests: make delegate-auto FILES="."

Dependencies

Conjure uses leyline for infrastructure:

conjure
    |
    v
leyline (quota-management, service-registry)

Best Practices

  1. Check Quota First: Run make quota-status before large delegations
  2. Use Auto Mode: Let conjure select the optimal service
  3. Monitor Costs: Review make usage-report weekly
  4. Cache Results: Store delegation results locally to avoid repeat calls

Related Plugins

  • leyline: Provides quota management and service registry
  • conserve: Detects when delegation is beneficial

hookify

Create custom behavioral rules through markdown configuration files.

Overview

Hookify provides a framework for defining behavioral rules that prevent unwanted actions through pattern matching. Rules are defined in markdown files and can be enabled, disabled, or customized per project.

Installation

/plugin install hookify@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| writing-rules | Guide for authoring behavioral rules | Creating new rules |
| rule-catalog | Pre-built behavioral rule templates | Installing common rules |

Commands

| Command | Description |
|---|---|
| /hookify | Create behavioral rules to prevent unwanted actions |
| /hookify:install | Install hookify rule from catalog |
| /hookify:list | List all hookify rules with status |
| /hookify:configure | Interactive rule enable/disable interface |
| /hookify:help | Display hookify help and documentation |

Usage Examples

Install a Rule

# Install from catalog
/hookify:install no-force-push

# List installed rules
/hookify:list --status

Create Custom Rule

# Create a new rule interactively
/hookify

# Configure existing rule
/hookify:configure no-force-push --disable

Rule Structure

Rules are markdown files with frontmatter:

---
name: no-force-push
trigger: PreToolUse
matcher: Bash
pattern: "git push.*--force"
action: block
message: "Force push blocked. Use --force-with-lease instead."
---

# No Force Push Rule

Prevents accidental force pushes that could overwrite remote history.
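
Conceptually, enforcement amounts to a regex check against the tool input before execution. A minimal, hypothetical sketch of that evaluation (not hookify's actual engine):

# Hypothetical sketch of rule evaluation; hookify's implementation differs.
import re

rule = {
    "pattern": r"git push.*--force",
    "action": "block",
    "message": "Force push blocked. Use --force-with-lease instead.",
}

def evaluate(command: str) -> str | None:
    """Return the block message when the command matches the rule's pattern."""
    if re.search(rule["pattern"], command):
        return rule["message"]
    return None  # allow the command

print(evaluate("git push origin main --force"))  # blocked
print(evaluate("git push origin main"))          # None: allowed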

Integration

Hookify integrates with:

  • abstract: Rule validation and testing
  • imbue: Scope guard integration
  • sanctum: Git workflow protection

Domain Specialists

Domain specialist plugins provide deep expertise in specific areas of software development.

Purpose

Domain plugins offer:

  • Deep Expertise: Specialized knowledge for specific domains
  • Workflow Automation: End-to-end processes for common tasks
  • Best Practices: Curated patterns and anti-patterns

Plugins

| Plugin | Domain | Key Use Case |
|---|---|---|
| archetypes | Architecture | Paradigm selection |
| pensive | Code Review | Multi-faceted reviews |
| parseltongue | Python | Modern Python development |
| memory-palace | Knowledge | Spatial memory organization |
| spec-kit | Specifications | Spec-driven development |
| minister | Releases | Initiative tracking |
| attune | Projects | Full-cycle project development |
| scry | Media | Documentation recordings |
| scribe | Documentation | AI slop detection and cleanup |

When to Use

archetypes

Use when you need to:

  • Choose an architecture for a new system
  • Evaluate trade-offs between patterns
  • Get implementation guidance for a paradigm

pensive

Use when you need to:

  • Conduct thorough code reviews
  • Audit security and architecture
  • Review APIs, tests, or Makefiles

parseltongue

Use when you need to:

  • Write modern Python (3.12+)
  • Implement async patterns
  • Package projects with uv
  • Profile and optimize performance

memory-palace

Use when you need to:

  • Organize complex knowledge
  • Build spatial memory structures
  • Maintain digital gardens
  • Cache research efficiently

spec-kit

Use when you need to:

  • Define features before implementation
  • Generate structured task lists
  • Maintain specification consistency
  • Track implementation progress

minister

Use when you need to:

  • Track GitHub initiatives
  • Monitor release readiness
  • Generate stakeholder reports

attune

Use when you need to:

  • Brainstorm project ideas
  • Create specifications from concepts
  • Plan architecture and tasks
  • Initialize projects with tooling
  • Execute systematic implementation

scry

Use when you need to:

  • Record terminal demos with VHS
  • Capture browser sessions with Playwright
  • Generate GIFs for documentation
  • Compose multi-source tutorials

scribe

Use when you need to:

  • Detect AI-generated content markers
  • Clean up documentation slop
  • Learn and apply writing styles
  • Verify documentation accuracy

Dependencies

Most domain plugins depend on foundation layers:

archetypes (standalone)
pensive --> imbue, sanctum
parseltongue (standalone)
memory-palace (standalone)
spec-kit --> imbue
minister (standalone)
attune --> spec-kit, imbue
scry (standalone)
scribe --> imbue, conserve

Example Workflows

Architecture Decision

Skill(archetypes:architecture-paradigms)
# Interactive paradigm selection
# Returns: Detailed implementation guide

Full Code Review

/full-review
# Runs multiple review types:
# - architecture-review
# - api-review
# - bug-review
# - test-review

Python Project Setup

Skill(parseltongue:python-packaging)
Skill(parseltongue:python-testing)

Feature Development

/speckit-specify Add user authentication
/speckit-plan
/speckit-tasks
/speckit-implement

Full Project Lifecycle

/attune:brainstorm
# Socratic questioning to explore project idea

/attune:specify
# Create specification from brainstorm

/attune:blueprint
# Design architecture and break down tasks

/attune:init
# Initialize project with tooling

/attune:execute
# Execute implementation with TDD

Media Recording

/record-terminal
# Creates VHS tape script and records terminal to GIF

/record-browser
# Records browser session with Playwright

Documentation Cleanup

/slop-scan docs/
# Scans for AI-generated content markers

/doc-polish README.md
# Interactive cleanup of AI slop

/doc-verify README.md
# Validates documentation claims

Installation

Install based on your needs:

# Architecture work
/plugin install archetypes@claude-night-market

# Code review
/plugin install pensive@claude-night-market

# Python development
/plugin install parseltongue@claude-night-market

# Knowledge management
/plugin install memory-palace@claude-night-market

# Specification-driven development
/plugin install spec-kit@claude-night-market

# Release management
/plugin install minister@claude-night-market

# Full-cycle project development
/plugin install attune@claude-night-market

# Media recording
/plugin install scry@claude-night-market

# Documentation review
/plugin install scribe@claude-night-market

Use all domain specialist plugins to unlock: Domain Master

archetypes

Architecture paradigm selection and implementation planning.

Overview

Archetypes helps you choose the right architecture for your system. It provides an interactive paradigm selector and detailed implementation guides for 13 architectural patterns.

Installation

/plugin install archetypes@claude-night-market

Skills

Orchestrator

| Skill | Description | When to Use |
|---|---|---|
| architecture-paradigms | Interactive paradigm selector | Choosing architecture for new systems |

Paradigm Guides

| Skill | Architecture | Best For |
|---|---|---|
| architecture-paradigm-layered | N-tier | Simple web apps, internal tools |
| architecture-paradigm-hexagonal | Ports & Adapters | Infrastructure independence |
| architecture-paradigm-microservices | Distributed services | Large-scale enterprise |
| architecture-paradigm-event-driven | Async communication | Real-time processing |
| architecture-paradigm-serverless | Function-as-a-Service | Event-driven with minimal infra |
| architecture-paradigm-pipeline | Pipes-and-filters | ETL, media processing |
| architecture-paradigm-cqrs-es | CQRS + Event Sourcing | Audit trails, event replay |
| architecture-paradigm-microkernel | Plugin-based | Minimal core with extensions |
| architecture-paradigm-modular-monolith | Internal boundaries | Module separation without distribution |
| architecture-paradigm-space-based | Data-grid | High-scale stateful workloads |
| architecture-paradigm-service-based | Coarse-grained SOA | Modular without microservices |
| architecture-paradigm-functional-core | Functional Core, Imperative Shell | Superior testability |
| architecture-paradigm-client-server | Client-server | Clear client/server responsibilities |

Usage Examples

Interactive Selection

Skill(archetypes:architecture-paradigms)

# Claude will:
# 1. Ask about your requirements
# 2. Evaluate trade-offs
# 3. Recommend paradigms
# 4. Provide implementation guidance

Direct Paradigm Access

# Get specific paradigm details
Skill(archetypes:architecture-paradigm-hexagonal)

# Returns:
# - Core concepts
# - When to use
# - Implementation patterns
# - Example code
# - Trade-offs

Paradigm Comparison

By Complexity

| Level | Paradigms |
|---|---|
| Low | Layered, Client-Server |
| Medium | Modular Monolith, Service-Based, Functional Core |
| High | Microservices, Event-Driven, CQRS-ES, Space-Based |

By Team Size

| Team | Recommended |
|---|---|
| 1-3 | Layered, Functional Core, Modular Monolith |
| 4-10 | Hexagonal, Service-Based, Pipeline |
| 10+ | Microservices, Event-Driven |

By Scalability Need

| Need | Paradigms |
|---|---|
| Single server | Layered, Modular Monolith |
| Horizontal | Microservices, Serverless |
| Extreme | Space-Based, Event-Driven |

Selection Criteria

The paradigm selector evaluates:

  1. Team size and structure
  2. Scalability requirements
  3. Deployment constraints
  4. Data consistency needs
  5. Development velocity priorities
  6. Operational maturity

Example Output

Hexagonal Architecture

## Hexagonal Architecture (Ports & Adapters)

### Core Concepts
- Domain logic at center
- Ports define interfaces
- Adapters implement ports
- Infrastructure is pluggable

### When to Use
- Need to swap databases/frameworks
- Test-driven development focus
- Long-lived applications
- Multiple integration points

### Structure
src/
├── domain/           # Pure business logic
│   ├── models/
│   └── services/
├── ports/            # Interface definitions
│   ├── inbound/
│   └── outbound/
└── adapters/         # Implementations
    ├── web/
    ├── persistence/
    └── external/

### Trade-offs
+ Easy testing via port mocking
+ Framework-agnostic domain
+ Clear dependency direction
- More initial structure
- Learning curve
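
To make the ports-and-adapters split concrete, here is a minimal Python sketch (all names are invented for illustration):

# Hypothetical ports-and-adapters sketch.
from typing import Protocol

class UserRepository(Protocol):      # outbound port: interface only
    def find_email(self, user_id: int) -> str: ...

class InMemoryUserRepository:        # adapter: one pluggable implementation
    def __init__(self) -> None:
        self._emails = {1: "ada@example.com"}

    def find_email(self, user_id: int) -> str:
        return self._emails[user_id]

def greeting(repo: UserRepository, user_id: int) -> str:
    """Domain logic depends only on the port, never on a concrete adapter."""
    return f"Hello, {repo.find_email(user_id)}"

print(greeting(InMemoryUserRepository(), 1))  # swap adapters freely in tests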

Best Practices

  1. Start Simple: Begin with layered, evolve as needed
  2. Match Team: Don’t use microservices with a small team
  3. Consider Ops: Complex architectures need operational maturity
  4. Plan Evolution: Design for change, not perfection

Decision Tree

Start
  |
  v
Simple CRUD? --> Yes --> Layered
  |
  No
  |
  v
Need testability? --> Yes --> Functional Core or Hexagonal
  |
  No
  |
  v
High scale? --> Yes --> Event-Driven or Space-Based
  |
  No
  |
  v
Multiple teams? --> Yes --> Microservices or Service-Based
  |
  No
  |
  v
Modular Monolith

Related Plugins

  • pensive: Architecture review complements paradigm selection
  • spec-kit: Use after paradigm selection for implementation planning

pensive

Code review and analysis toolkit with specialized review skills.

Overview

Pensive provides deep code review capabilities across multiple dimensions: architecture, APIs, bugs, tests, and more. It orchestrates reviews intelligently, selecting the right skills for each codebase.

Installation

/plugin install pensive@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| unified-review | Review orchestration | Starting reviews (Claude picks tools) |
| api-review | API surface evaluation | Reviewing OpenAPI specs, library exports |
| architecture-review | Architecture assessment | Checking ADR alignment, design principles |
| bug-review | Bug hunting | Systematic search for logic errors |
| rust-review | Rust-specific checking | Auditing unsafe code, borrow patterns |
| test-review | Test quality review | Ensuring tests verify behavior |
| makefile-review | Makefile best practices | Reviewing Makefile quality |
| math-review | Mathematical correctness | Reviewing mathematical logic |
| shell-review | Shell script auditing | Exit codes, portability, safety patterns |
| fpf-review | FPF architecture review | Functional/Practical/Foundation analysis |
| safety-critical-patterns | NASA Power of 10 rules | Robust, verifiable code with context-appropriate rigor |
| code-refinement | Code quality analysis | Duplication, efficiency, clean code violations |

Commands

| Command | Description |
|---|---|
| /full-review | Unified review with intelligent skill selection |
| /api-review | Run API surface review |
| /architecture-review | Run architecture assessment |
| /bug-review | Run bug hunting |
| /rust-review | Run Rust-specific review |
| /test-review | Run test quality review |
| /makefile-review | Run Makefile review |
| /math-review | Run mathematical review |
| /shell-review | Run shell script safety review |
| /fpf-review | Run FPF architecture review |
| /skill-review | Analyze skill runtime metrics and stability gaps (canonical) |
| /skill-history | View recent skill executions |

Note: For static skill quality analysis (frontmatter, structure), use abstract:skill-auditor instead.

Agents

| Agent | Description |
|---|---|
| code-reviewer | Expert code review for bugs, security, quality |
| architecture-reviewer | Principal-level architecture specialist |
| rust-auditor | Expert Rust security and safety auditor |

Usage Examples

Full Review

/full-review

# Claude will:
# 1. Analyze codebase structure
# 2. Select relevant review skills
# 3. Execute reviews in priority order
# 4. Synthesize findings
# 5. Provide actionable recommendations

Specific Reviews

# Architecture review
/architecture-review

# API review
/api-review

# Bug hunting
/bug-review

# Test quality
/test-review

Manual Skill Invocation

Skill(pensive:architecture-review)

# Checks:
# - ADR compliance
# - Dependency direction
# - Layer violations
# - Design pattern usage

Review Depth

Each review skill operates at multiple levels:

| Level | Description | Time |
|---|---|---|
| Quick | High-level scan | 1-2 min |
| Standard | Thorough review | 5-10 min |
| Deep | Exhaustive analysis | 15+ min |

Specify depth when invoking:

/architecture-review --depth deep

Review Categories

Architecture Review

  • ADR alignment
  • Dependency analysis
  • Layer boundary violations
  • Pattern consistency
  • Coupling metrics

API Review

  • Endpoint consistency
  • Error response patterns
  • Authentication/authorization
  • Versioning strategy
  • Documentation completeness

Bug Review

  • Logic errors
  • Edge cases
  • Race conditions
  • Resource leaks
  • Error handling gaps

Test Review

  • Coverage gaps
  • Test isolation
  • Assertion quality
  • Mocking patterns
  • Edge case coverage

Rust Review

  • Unsafe code audit
  • Borrow checker patterns
  • Memory safety
  • Concurrency safety
  • Idiomatic usage

Dependencies

Pensive builds on foundation plugins:

pensive
    |
    +--> imbue (review-core, evidence-logging)
    |
    +--> sanctum (git-workspace-review)

Workflow Integration

Pre-PR Review

# Before opening PR
Skill(sanctum:git-workspace-review)
/full-review

# Address findings
# Then create PR

Post-Merge Review

# After merge, deep review
/architecture-review --depth deep

Targeted Review

# Review specific area
/api-review src/api/

Superpowers Integration

| Command | Enhancement |
|---|---|
| /full-review | Uses systematic-debugging for four-phase analysis |
| /full-review | Uses verification-before-completion for evidence |

Output Format

Reviews produce structured output:

## Review Summary

### Critical Issues
1. [BUG] Race condition in UserService.update()
   - Location: src/services/user.ts:45
   - Impact: Data corruption under load
   - Recommendation: Add mutex lock

### Warnings
1. [ARCH] Layer violation detected
   - Controllers importing from repositories
   - Recommendation: Add service layer

### Suggestions
1. [TEST] Missing edge case coverage
   - UserService.delete() lacks null check test

Related Plugins

  • imbue: Provides review scaffolding
  • sanctum: Provides workspace context
  • archetypes: Paradigm context for architecture review

parseltongue

Modern Python development suite for testing, performance, async patterns, and packaging.

Overview

Parseltongue brings Python 3.12+ best practices to your workflow. It covers the full development lifecycle: testing with pytest, performance optimization, async patterns, and modern packaging with uv.

Installation

/plugin install parseltongue@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| python-testing | Pytest and TDD workflows | Writing and running tests |
| python-performance | Profiling and optimization | Debugging slow code |
| python-async | Async programming patterns | Implementing asyncio |
| python-packaging | Modern packaging with uv | Managing pyproject.toml |

Commands

| Command | Description |
|---|---|
| /analyze-tests | Report on test suite health |
| /run-profiler | Profile code execution |
| /check-async | Validate async patterns |

Agents

| Agent | Description |
|---|---|
| python-pro | Master Python 3.12+ with modern features |
| python-tester | Expert testing for pytest, TDD, mocking |
| python-optimizer | Expert performance optimization |

Usage Examples

Test Analysis

/analyze-tests

# Reports:
# - Coverage metrics
# - Test distribution
# - Slow tests
# - Missing coverage areas
# - Anti-patterns detected

Profiling

/run-profiler src/heavy_function.py

# Outputs:
# - CPU time breakdown
# - Memory usage
# - Hot paths
# - Optimization suggestions

Async Validation

/check-async src/async_module.py

# Checks:
# - Proper await usage
# - Event loop handling
# - Async context managers
# - Concurrency patterns

Skill Invocation

Skill(parseltongue:python-testing)

# Provides:
# - Pytest configuration patterns
# - TDD workflow guidance
# - Mocking strategies
# - Fixture patterns

Python 3.12+ Features

Parseltongue emphasizes modern Python:

Type Hints

# Modern syntax (3.10+)
def process(data: list[str] | None) -> dict[str, int]:
    ...

Pattern Matching

# Structural pattern matching (3.10+)
match response:
    case {"status": "ok", "data": data}:
        return data
    case {"status": "error", "message": msg}:
        raise ValueError(msg)

Exception Groups

# Exception groups (3.11+)
try:
    async with asyncio.TaskGroup() as tg:
        tg.create_task(task1())
        tg.create_task(task2())
except* ValueError as eg:
    for exc in eg.exceptions:
        handle(exc)

Testing Patterns

TDD Workflow

Skill(parseltongue:python-testing)

# RED-GREEN-REFACTOR:
# 1. Write failing test
# 2. Implement minimal code
# 3. Refactor with tests green

Fixture Patterns

# Recommended patterns
@pytest.fixture
def db_session(tmp_path):
    """Session-scoped database fixture."""
    db = Database(tmp_path / "test.db")
    yield db
    db.close()

@pytest.fixture
def user(db_session):
    """User fixture depending on db."""
    return db_session.create_user("test")

Mocking Strategies

# Strategic mocking
def test_api_call(mocker):
    mock_response = mocker.patch("requests.get")
    mock_response.return_value.json.return_value = {"status": "ok"}

    result = fetch_data()

    assert result["status"] == "ok"

Performance Optimization

Profiling Tools

# cProfile integration
python -m cProfile -s cumtime script.py

# Memory profiling
from memory_profiler import profile

@profile
def memory_heavy():
    ...

Optimization Patterns

  • Generators over lists: Save memory
  • Local variables: Faster lookup
  • Built-in functions: C-optimized
  • Lazy evaluation: Defer computation

Async Patterns

import asyncio
import aiohttp

async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    async with session.get(url) as response:
        return await response.text()

async def main(urls: list[str]) -> list[str]:
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url) for url in urls]
        return await asyncio.gather(*tasks)

if __name__ == "__main__":
    asyncio.run(main(["https://example.com"]))

Anti-Patterns to Avoid

  • Blocking calls in async functions
  • Creating event loops inside coroutines
  • Ignoring exceptions in fire-and-forget tasks

Packaging with uv

pyproject.toml

[project]
name = "my-package"
version = "1.0.0"
dependencies = ["requests>=2.28"]

[project.optional-dependencies]
dev = ["pytest", "ruff", "mypy"]

[tool.uv]
index-url = "https://pypi.org/simple"

Commands

# Install with uv
uv pip install -e ".[dev]"

# Lock dependencies
uv pip compile pyproject.toml -o requirements.lock

# Sync environment
uv pip sync requirements.lock

Superpowers Integration

| Skill | Enhancement |
|---|---|
| python-testing | Uses test-driven-development for TDD cycles |
| python-testing | Uses testing-anti-patterns for detection |

Related Plugins

  • leyline: Provides pytest-config patterns
  • sanctum: Test updates integrate with the test-updates skill

memory-palace

Knowledge organization using spatial memory techniques.

Overview

Memory Palace applies the ancient method of loci to digital knowledge management. It helps you build “palaces” - structured knowledge repositories that use spatial metaphors for organization and retrieval.

Installation

/plugin install memory-palace@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| memory-palace-architect | Building virtual palaces | Organizing complex concepts |
| knowledge-locator | Spatial search | Finding stored information |
| knowledge-intake | Intake and curation | Processing new information |
| digital-garden-cultivator | Digital garden maintenance | Long-term knowledge base care |
| session-palace-builder | Session-specific palaces | Temporary working knowledge |

Commands

| Command | Description |
|---|---|
| /palace | Manage memory palaces |
| /garden | Manage digital gardens |
| /navigate | Search and traverse palaces |

Agents

| Agent | Description |
|---|---|
| palace-architect | Designs memory palace architectures |
| knowledge-navigator | Searches and retrieves from palaces |
| knowledge-librarian | Evaluates and routes knowledge |
| garden-curator | Maintains digital gardens |

Hooks

| Hook | Type | Description |
|---|---|---|
| research_interceptor.py | PreToolUse | Checks local knowledge before web searches |
| url_detector.py | UserPromptSubmit | Detects URLs for intake |
| local_doc_processor.py | PostToolUse | Processes local docs after reads |
| web_content_processor.py | PostToolUse | Processes web content for storage |

Usage Examples

Create a Palace

/palace create "Python Async Patterns"

# Creates:
# - Palace structure
# - Entry rooms
# - Navigation paths

Add Knowledge

Skill(memory-palace:knowledge-intake)

# Processes:
# - New information
# - Categorization
# - Spatial placement
# - Cross-references

Navigate a Palace

/navigate “async context managers”

# Returns:
# - Matching rooms
# - Related concepts
# - Cross-references
# - Source citations

Maintain Garden

/garden cultivate

# Performs:
# - Pruning outdated content
# - Strengthening connections
# - Identifying gaps
# - Suggesting additions

Cache Modes

The research interceptor supports four modes:

| Mode | Behavior | Use Case |
|---|---|---|
| cache_only | Deny web when no cache match | Offline work, audits |
| cache_first | Check cache, fall back to web | Default research |
| augment | Blend cache with live results | When freshness matters |
| web_only | Bypass Memory Palace | Incident response |

Set mode in hooks/memory-palace-config.yaml:

research_mode: cache_first

Palace Architecture

Palaces use spatial metaphors:

Palace: "Python Async"
├── Entry Hall
│   └── Overview concepts
├── Library Wing
│   ├── asyncio basics
│   ├── coroutines
│   └── event loops
├── Practice Room
│   ├── code examples
│   └── exercises
└── Reference Archive
    ├── official docs
    └── external sources

Knowledge Intake Flow

New Information
      |
      v
[Novelty Check] --> Duplicate? --> Skip
      |
      No
      v
[Domain Alignment] --> Matches interests? --> Flag for intake
      |
      Yes
      v
[Palace Placement] --> Store in appropriate room
      |
      v
[Cross-Reference] --> Link to related concepts
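
The flow above can be read as a short routing function; this sketch is hypothetical (all names invented, and the real skill scores novelty rather than checking exact text):

# Hypothetical sketch mirroring the intake flow diagram.
def intake(item: dict, palace: dict[str, list[dict]], interests: set[str]) -> str:
    """Route one piece of knowledge through novelty, alignment, and placement."""
    seen = {entry["text"] for room in palace.values() for entry in room}
    if item["text"] in seen:
        return "skip: duplicate"                # novelty check
    if not interests.intersection(item["domains"]):
        return "queue: flag for manual intake"  # domain alignment
    room = item["domains"][0]                   # palace placement
    palace.setdefault(room, []).append(item)
    return f"stored in room '{room}'"

palace: dict[str, list[dict]] = {}
print(intake({"text": "asyncio uses an event loop", "domains": ["python"]},
             palace, interests={"python", "rust"}))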

Embedding Support

Optional semantic search via embeddings:

# Build embeddings
cd plugins/memory-palace
uv run python scripts/build_embeddings.py --provider local

# Toggle at runtime
export MEMORY_PALACE_EMBEDDINGS_PROVIDER=local

Telemetry

Track research decisions:

# data/telemetry/memory-palace.csv
timestamp,query,decision,novelty_score,domains,duplicates
2025-01-15,async patterns,cache_hit,0.2,python,entry-123

Curation Workflow

Regular maintenance keeps palaces useful:

  1. Review intake queue: data/intake_queue.jsonl
  2. Approve/reject items: Based on value and fit
  3. Update vitality scores: Mark evergreen vs. probationary
  4. Prune stale content: Archive outdated information
  5. Document in curation log: docs/curation-log.md

Digital Gardens

Unlike palaces (structured), gardens are organic:

/garden status

# Shows:
# - Growth rate
# - Connection density
# - Orphan nodes
# - Suggested links

Related Plugins

  • conserve: Memory Palace helps reduce redundant web fetches
  • imbue: Evidence logging integrates with knowledge intake

spec-kit

Specification-Driven Development (SDD) toolkit for structured feature development.

Overview

Spec-Kit enforces “define before implement” - you write specifications first, generate plans, create tasks, then execute. This reduces wasted effort and validates features match requirements.

Installation

/plugin install spec-kit@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| spec-writing | Specification authoring | Writing requirements from ideas |
| task-planning | Task generation | Breaking specs into tasks |
| speckit-orchestrator | Workflow coordination | Managing spec-to-code lifecycle |

Commands

| Command | Description |
|---|---|
| /speckit-specify | Create a new specification |
| /speckit-plan | Generate implementation plan |
| /speckit-tasks | Generate ordered tasks |
| /speckit-implement | Execute tasks |
| /speckit-analyze | Check artifact consistency |
| /speckit-checklist | Generate custom checklist |
| /speckit-clarify | Ask clarifying questions |
| /speckit-constitution | Create project constitution |
| /speckit-startup | Bootstrap workflow at session start |

Agents

| Agent | Description |
|---|---|
| spec-analyzer | Validates artifact consistency |
| task-generator | Creates implementation tasks |
| implementation-executor | Executes tasks and writes code |

Usage Examples

Full SDD Workflow

# 1. Create specification
/speckit-specify Add user authentication with OAuth2

# 2. Clarify requirements
/speckit-clarify

# 3. Generate plan
/speckit-plan

# 4. Create tasks
/speckit-tasks

# 5. Execute implementation
/speckit-implement

# 6. Verify consistency
/speckit-analyze

Quick Specification

/speckit-specify Add dark mode toggle

# Claude will:
# 1. Ask clarifying questions
# 2. Generate spec.md
# 3. Identify dependencies
# 4. Suggest next steps

Session Startup

/speckit-startup

# Loads:
# - Existing spec.md
# - Current plan.md
# - Outstanding tasks
# - Progress status
# - Constitution (principles/constraints)

Artifact Structure

Spec-Kit creates three main artifacts:

spec.md

# Feature: User Authentication

## Overview
OAuth2-based authentication for web application.

## Requirements
- [ ] Google OAuth integration
- [ ] Session management
- [ ] Token refresh

## Acceptance Criteria
1. Users can sign in with Google
2. Sessions persist for 7 days
3. Tokens refresh automatically

## Non-Functional Requirements
- Login latency < 2s
- 99.9% availability

plan.md

# Implementation Plan

## Phase 1: OAuth Setup
- Configure Google OAuth credentials
- Implement OAuth callback handler

## Phase 2: Session Management
- Design session schema
- Implement token storage

## Phase 3: Integration
- Connect to frontend
- Add logout functionality

tasks.md

# Tasks

## Phase 1 Tasks
- [ ] Create OAuth config module
- [ ] Implement /auth/login endpoint
- [ ] Implement /auth/callback endpoint

## Phase 2 Tasks
- [ ] Design session table schema
- [ ] Create session service
- [ ] Implement token refresh logic

Constitution

Project constitution defines principles:

/speckit-constitution

# Creates:
# - Coding standards
# - Architecture principles
# - Testing requirements
# - Documentation standards

Consistency Analysis

/speckit-analyze

# Checks:
# - spec.md requirements map to plan.md
# - plan.md phases map to tasks.md
# - No orphan tasks
# - No missing implementations

Checklist Generation

/speckit-checklist

# Generates custom checklist:
# - [ ] All acceptance criteria met
# - [ ] Tests written
# - [ ] Documentation updated
# - [ ] Security reviewed

Dependencies

Spec-Kit uses imbue for analysis:

spec-kit
    |
    v
imbue (diff-analysis, evidence-logging)

Superpowers Integration

| Command | Enhancement |
|---|---|
| /speckit-clarify | Uses brainstorming for questions |
| /speckit-plan | Uses writing-plans for structure |
| /speckit-tasks | Uses executing-plans, systematic-debugging |
| /speckit-implement | Uses executing-plans, systematic-debugging |
| /speckit-analyze | Uses systematic-debugging, verification-before-completion |
| /speckit-checklist | Uses verification-before-completion |

Best Practices

  1. Specify First: Never skip the specification phase
  2. Clarify Ambiguity: Use /speckit-clarify liberally
  3. Small Tasks: Break into 1-2 hour chunks
  4. Verify Often: Run /speckit-analyze after changes
  5. Update Artifacts: Keep spec/plan/tasks in sync

Workflow Tips

Starting New Feature

/speckit-specify [feature description]
/speckit-clarify
/speckit-plan

Resuming Work

/speckit-startup
# Review current state
/speckit-implement

Before PR

/speckit-analyze
/speckit-checklist

Related Plugins

  • imbue: Provides analysis patterns
  • sanctum: Integrates for PR preparation after implementation

minister

GitHub initiative tracking and release management.

Overview

Minister helps you track project initiatives, monitor release readiness, and generate stakeholder reports. It bridges the gap between development work and project management.

Installation

/plugin install minister@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| github-initiative-pulse | Initiative progress tracking | Weekly status reports |
| release-health-gates | Release readiness checks | Before releasing |

Scripts

| Script | Description |
|---|---|
| tracker.py | CLI for initiative database and reporting |

Usage Examples

Initiative Tracking

Skill(minister:github-initiative-pulse)

# Generates:
# - Issue completion rates
# - Milestone progress
# - Velocity trends
# - Risk flags

Release Readiness

Skill(minister:release-health-gates)

# Checks:
# - CI status
# - Documentation completeness
# - Breaking change inventory
# - Risk assessment

CLI Usage

# List initiatives
python tracker.py list

# Show initiative details
python tracker.py show auth-v2

# Generate weekly report
python tracker.py report --week

# Update status
python tracker.py update auth-v2 --status in-progress

Initiative Structure

Initiatives track work across issues and PRs:

initiative:
  id: auth-v2
  title: "Authentication v2"
  status: in-progress
  milestones:
    - name: "OAuth Setup"
      due: 2025-01-30
      issues: ["#42", "#43", "#44"]
    - name: "Session Management"
      due: 2025-02-15
      issues: ["#45", "#46"]
  metrics:
    velocity: 3.5 issues/week
    completion: 65%
    risk: low

Health Gates

Release health gates verify readiness:

| Gate | Checks |
|---|---|
| CI | All checks passing, no flaky tests |
| Docs | README updated, CHANGELOG complete |
| Breaking | Breaking changes documented |
| Security | No critical vulnerabilities |
| Coverage | Test coverage above threshold |

Gate Output

## Release Health: v2.0.0

### CI Status: PASS
- All 156 tests passing
- Build time: 3m 42s
- No flaky tests detected

### Documentation: PASS
- README updated
- CHANGELOG has v2.0.0 section
- API docs generated

### Breaking Changes: WARN
- 2 breaking changes identified
- Migration guide needed for UserService API

### Security: PASS
- No critical/high vulnerabilities
- Dependencies up to date

### Coverage: PASS
- 87% coverage (threshold: 80%)

## Recommendation: CONDITIONAL RELEASE
Address breaking change documentation before release.

Reporting

Weekly Report

python tracker.py report --week

# Outputs:
# - Initiatives summary
# - This week's completions
# - Next week's focus
# - Blockers and risks

Stakeholder Summary

python tracker.py report --stakeholder

# Generates executive summary:
# - High-level progress
# - Key achievements
# - Timeline updates
# - Resource needs

Integration with GitHub

Minister reads from GitHub:

# Sync initiative from GitHub milestone
python tracker.py sync --milestone "v2.0"

# Pull issue status
python tracker.py refresh auth-v2

Superpowers Integration

| Skill | Enhancement |
|---|---|
| issue-management | Uses systematic-debugging for investigation |

Configuration

tracker.yaml

github:
  repo: athola/my-project
  token_env: GITHUB_TOKEN

initiatives_dir: .minister/initiatives
reports_dir: .minister/reports

health_gates:
  coverage_threshold: 80
  max_critical_vulns: 0
  require_changelog: true

Workflow Examples

Sprint Planning

# Check initiative status
python tracker.py list

# Update priorities
python tracker.py update auth-v2 --priority high

# Generate planning report
python tracker.py report --planning

Release Preparation

# Run health gates
Skill(minister:release-health-gates)

# Address any failures
# Then re-run until all pass

# Tag release
git tag v2.0.0

Weekly Standup

# Generate pulse report
Skill(minister:github-initiative-pulse)

# Share with team
# Update tracker based on discussion

Related Plugins

  • sanctum: PR preparation integrates with release workflow
  • imbue: Feature review complements initiative tracking

attune

Full-cycle project development from ideation to implementation.

Overview

Attune integrates the brainstorm-plan-execute workflow from superpowers with spec-driven development from spec-kit to provide a complete project lifecycle.

Workflow

graph LR
    A[Brainstorm] --> B[War Room]
    B --> C[Specify]
    C --> D[Plan]
    D --> E[Initialize]
    E --> F[Execute]

    style A fill:#e1f5fe
    style B fill:#fff9c4
    style C fill:#f3e5f5
    style D fill:#fff3e0
    style E fill:#e8f5e8
    style F fill:#fce4ec

Commands

| Command | Phase | Description |
|---|---|---|
| /attune:brainstorm | 1. Ideation | Socratic questioning to explore problem space |
| /attune:war-room | 2. Deliberation | Multi-LLM expert deliberation with reversibility-based routing |
| /attune:specify | 3. Specification | Create detailed specs from war-room decision |
| /attune:blueprint | 4. Planning | Design architecture and break down tasks |
| /attune:init | 5. Initialization | Generate or update project structure with tooling |
| /attune:execute | 6. Implementation | Execute tasks with TDD discipline |
| /attune:upgrade-project | Maintenance | Add configs to existing projects |
| /attune:mission | Full Cycle | Run entire lifecycle as a single mission with state detection |
| /attune:validate | Quality | Validate project structure |

Supported Languages

  • Python: uv, pytest, ruff, mypy, pre-commit
  • Rust: cargo, clippy, rustfmt, CI workflows
  • TypeScript/React: npm/pnpm/yarn, vite, jest, eslint, prettier

What Gets Configured

  • ✅ Git initialization with detailed .gitignore
  • ✅ GitHub Actions workflows (test, lint, typecheck, publish)
  • ✅ Pre-commit hooks (formatting, linting, security)
  • ✅ Makefile with standard development targets
  • ✅ Dependency management (uv/cargo/package managers)
  • ✅ Project structure (src/, tests/, README.md)

Quick Start

New Python Project

# Interactive mode
/attune:init

# Non-interactive
/attune:init --lang python --name my-project --author "Your Name"

Full Cycle Workflow

# 1. Brainstorm the idea
/attune:brainstorm

# 2. War room deliberation (auto-routes by complexity)
/attune:war-room --from-brainstorm

# 3. Create specification
/attune:specify

# 4. Plan architecture
/attune:blueprint

# 5. Initialize project
/attune:init

# 6. Execute implementation
/attune:execute

Skills

| Skill | Purpose |
|---|---|
| project-brainstorming | Socratic ideation workflow |
| war-room | Multi-LLM expert council with Type 1/2 decision routing |
| war-room-checkpoint | Inline RS assessment for embedded escalation during workflow |
| project-specification | Spec creation from war-room decision |
| project-planning | Architecture and task breakdown |
| project-init | Interactive project initialization |
| project-execution | Systematic implementation |
| makefile-generation | Generate language-specific Makefiles |
| mission-orchestrator | Unified brainstorm-specify-plan-execute lifecycle orchestrator |
| workflow-setup | Configure CI/CD pipelines |
| precommit-setup | Set up code quality hooks |

Agents

| Agent | Role |
|---|---|
| project-architect | Guides full-cycle workflow (brainstorm → plan) |
| project-implementer | Executes implementation with TDD |

Integration

Attune combines capabilities from:

  • superpowers: Brainstorming, planning, execution workflows
  • spec-kit: Specification-driven development
  • abstract: Plugin and skill authoring for plugin projects

War Room Integration

The war room is a mandatory phase after brainstorming. It automatically routes to the appropriate deliberation intensity based on Reversibility Score (RS):

| Mode | RS Range | Duration | Description |
|---|---|---|---|
| Express | ≤ 0.40 | < 2 min | Quick decision by Chief Strategist |
| Lightweight | 0.41-0.60 | 5-10 min | 3-expert panel |
| Full Council | 0.61-0.80 | 15-30 min | 7-expert deliberation |
| Delphi | > 0.80 | 30-60 min | Iterative consensus for critical decisions |

The war-room-checkpoint skill can also trigger additional deliberation during planning or execution when high-stakes decisions arise.
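
The routing itself reduces to comparing RS against the band boundaries above; a minimal sketch (the function name is invented for illustration):

# Hypothetical router over the documented RS bands.
def war_room_mode(rs: float) -> str:
    """Map a Reversibility Score to a deliberation mode."""
    if rs <= 0.40:
        return "Express"        # quick decision by Chief Strategist
    if rs <= 0.60:
        return "Lightweight"    # 3-expert panel
    if rs <= 0.80:
        return "Full Council"   # 7-expert deliberation
    return "Delphi"             # iterative consensus

assert war_room_mode(0.35) == "Express"
assert war_room_mode(0.75) == "Full Council"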

Examples

Initialize Python CLI Project

/attune:init --lang python --type cli

Creates:

  • pyproject.toml with uv configuration
  • Makefile with test/lint/format targets
  • GitHub Actions workflows
  • Pre-commit hooks for ruff and mypy
  • Basic CLI structure

Upgrade Existing Project

# Add missing configs
/attune:upgrade-project

# Validate structure
/attune:validate

Configuration

Custom Templates

Place custom templates in:

  • ~/.claude/attune/templates/ (user-level)
  • .attune/templates/ (project-level)
  • $ATTUNE_TEMPLATES_PATH (environment variable)

Reference Projects

Templates sync from reference projects:

  • simple-resume (Python)
  • skrills (multi-language)
  • importobot (automation)

Initialize your first project with /attune:init to unlock: Project Architect

scribe

Documentation review, cleanup, and generation with AI slop detection.

Overview

Scribe helps maintain high-quality documentation by detecting AI-generated content patterns (“slop”), learning writing styles from exemplars, and generating or remediating documentation. It integrates with sanctum’s documentation workflows.

Installation

/plugin install scribe@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| slop-detector | Detect AI-generated content markers | Scanning docs for AI tells |
| style-learner | Extract writing style from exemplar text | Creating style profiles |
| doc-generator | Generate/remediate documentation | Writing or fixing docs |

Commands

| Command | Description |
|---|---|
| /slop-scan | Scan files for AI slop markers |
| /style-learn | Create style profile from examples |
| /doc-polish | Clean up AI-generated content |
| /doc-generate | Generate new documentation |
| /doc-verify | Validate documentation claims with proof-of-work |

Agents

| Agent | Description |
|---|---|
| doc-editor | Interactive documentation editing |
| slop-hunter | Comprehensive slop detection |
| doc-verifier | QA validation using proof-of-work methodology |

Usage Examples

Detect AI Slop

# Scan current directory
/slop-scan

# Scan specific file with fix suggestions
/slop-scan README.md --fix

Clean Up Content

# Interactive polish
/doc-polish docs/guide.md

# Polish all markdown files
/doc-polish **/*.md

Learn a Style

# Create style profile from examples
/style-learn good-examples/*.md --name house-style

# Generate with learned style
/doc-generate readme --style house-style

Verify Documentation

# Verify README claims and commands
/doc-verify README.md

# Verify with strict mode
/doc-verify docs/ --strict --report qa-report.md

AI Slop Detection

Scribe detects patterns that reveal AI-generated content:

Tier 1 Words (Highest Confidence)

Words that appear dramatically more often in AI text: delve, tapestry, realm, embark, beacon, multifaceted, nuanced, pivotal, meticulous, showcasing, leveraging, streamline, comprehensive.

Phrase Patterns

Formulaic constructions like “In today’s fast-paced world,” “cannot be overstated,” “navigate the complexities,” and “treasure trove of.”

Structural Markers

Overuse of em dashes, excessive bullet points, uniform sentence length, perfect grammar without contractions.
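
The word- and phrase-level checks are straightforward to sketch. A minimal example using the marker lists above (illustrative only; scribe's detector weighs many more signals):

import re

TIER1_WORDS = {"delve", "tapestry", "realm", "embark", "beacon", "multifaceted",
               "nuanced", "pivotal", "meticulous", "showcasing", "leveraging",
               "streamline", "comprehensive"}
PHRASES = ["in today's fast-paced world", "cannot be overstated",
           "navigate the complexities", "treasure trove of"]

def slop_markers(text: str) -> dict:
    """Count tier-1 word hits, formulaic phrases, and em dashes."""
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    return {
        "tier1_hits": sum(w in TIER1_WORDS for w in words),
        "phrase_hits": sum(p in lowered for p in PHRASES),
        "em_dashes": text.count("\u2014"),  # structural marker
    }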

Writing Principles

Scribe enforces these principles:

  1. Ground every claim: Use specifics, not adjectives
  2. Trim crutches: No formulaic openers or closers
  3. Show perspective: Include reasoning and trade-offs
  4. Vary structure: Mix sentence lengths, balance bullets with prose
  5. Use active voice: Direct statements over passive constructions

Vocabulary Substitutions

| Instead of | Use |
| --- | --- |
| leverage | use |
| utilize | use |
| comprehensive | thorough |
| robust | solid |
| facilitate | help |
| optimize | improve |
| delve | explore |
| embark | start |

Integration

Scribe integrates with sanctum documentation workflows:

| Sanctum Command | Scribe Integration |
| --- | --- |
| /pr-review | Runs slop-detector on changed .md files |
| /update-docs | Runs slop-detector on edited docs |
| /update-readme | Runs slop-detector on README |
| /prepare-pr | Verifies PR descriptions with slop-detector |

Dependencies

Scribe uses skills from other plugins:

  • imbue:proof-of-work: Evidence-based verification (used by doc-verifier)
  • conserve:bloat-detector: Token optimization
Clean up AI slop in 10 files to unlock: Documentation Purist

scry

Media generation for terminal recordings, browser recordings, GIF processing, and media composition.

Overview

Scry creates documentation assets through terminal recordings (VHS), browser automation recordings (Playwright), GIF processing, and multi-source media composition. Use it to build tutorials, demos, and README assets.

Installation

/plugin install scry@claude-night-market

Skills

| Skill | Description | When to Use |
| --- | --- | --- |
| vhs-recording | Terminal recordings using VHS tape scripts | CLI demos, tool tutorials |
| browser-recording | Browser recordings using Playwright | Web UI walkthroughs |
| gif-generation | GIF processing and optimization | README assets, docs |
| media-composition | Combine multiple media sources | Full tutorials |

Commands

| Command | Description |
| --- | --- |
| /record-terminal | Create terminal recording with VHS |
| /record-browser | Record browser session with Playwright |

Usage Examples

Terminal Recording

/record-terminal

# Or use the skill directly
Skill(scry:vhs-recording)

Creates a VHS tape script and records terminal output to GIF or video.

Browser Recording

/record-browser

# Or use the skill directly
Skill(scry:browser-recording)

Records browser sessions with Playwright for web UI documentation.

GIF Generation

Skill(scry:gif-generation)

# Optimizes recordings for documentation:
# - Resize for README display
# - Compress file size
# - Adjust frame rate

Media Composition

Skill(scry:media-composition)

# Combines assets:
# - Terminal + browser recordings
# - Multiple clips into tutorials
# - Add transitions and captions

VHS Tape Script Example

VHS uses tape scripts to define recordings:

# demo.tape
Output demo.gif

Set FontSize 16
Set Width 1200
Set Height 600

Type "echo 'Hello, World!'"
Sleep 500ms
Enter
Sleep 2s

Run with:

vhs demo.tape

Dependencies

VHS (Terminal Recording)

macOS:

brew install charmbracelet/tap/vhs
brew install ttyd ffmpeg

Linux (Debian/Ubuntu):

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg
echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list
sudo apt update && sudo apt install vhs
sudo apt install ffmpeg

Playwright (Browser Recording)

npm install -g playwright
npx playwright install

FFmpeg (Media Processing)

Required for GIF generation and media composition.

# macOS
brew install ffmpeg

# Linux
sudo apt install ffmpeg

Workflow Patterns

Tutorial Creation

  1. Record terminal demo with vhs-recording
  2. Record web UI walkthrough with browser-recording
  3. Combine with media-composition
  4. Optimize output with gif-generation

Quick Demo

/record-terminal
# Creates demo.gif ready for README

Documentation Assets

# Generate multiple GIFs for docs
Skill(scry:vhs-recording)
Skill(scry:gif-generation)
# Move outputs to docs/images/

Integration with sanctum

Scry integrates with sanctum for PR and documentation workflows:

# Generate demo for PR description
/record-terminal

# Include in PR body
/sanctum:pr
Related plugin integrations:

  • sanctum: PR preparation uses scry for demo assets
  • memory-palace: Store and organize media assets

Tutorials

Step-by-step guides for common workflows and advanced features.

Available Tutorials

| Tutorial | Description | Level |
| --- | --- | --- |
| Skills Showcase | Discover, validate, and use skills in Claude Code | Beginner |
| Cache Modes | Memory Palace cache mode configuration | Intermediate |
| Embedding Upgrade | Adding semantic search to Memory Palace | Advanced |
| Memory Palace Curation | Knowledge intake and curation workflow | Intermediate |
| Error Handling | Error handling patterns and recovery strategies | Intermediate |
| Cross-Plugin Collaboration | Using skills across multiple plugins | Intermediate |

Tutorial Structure

Each tutorial includes:

  • Prerequisites: What you need before starting
  • Objectives: What you’ll learn
  • Step-by-step instructions: Detailed walkthrough
  • Verification: How to confirm success
  • Troubleshooting: Common issues and solutions

Skill Levels

| Level | Description |
| --- | --- |
| Beginner | New to Claude Night Market |
| Intermediate | Familiar with basic plugin usage |
| Advanced | Comfortable with configuration and customization |

Suggested Learning Path

For New Users

  1. Complete Getting Started first
  2. Follow Skills Showcase to understand the skill system
  3. Read plugin documentation for plugins you’ve installed
  4. Return here for deeper dives

For Memory Palace Users

  1. Cache Modes - Understand interception behavior
  2. Memory Palace Curation - Manage knowledge intake
  3. Embedding Upgrade - Add semantic search

For Plugin Developers

  1. Skills Showcase - Understand skill architecture
  2. Cross-Plugin Collaboration - Learn skill dependencies
  3. Error Handling - Implement error handling

Achievement Progress

Complete all tutorials to unlock: Tutorial Master
| Tutorial | Status |
| --- | --- |
| Cache Modes | |
| Embedding Upgrade | |
| Memory Palace Curation | |

Skills Showcase - Claude Code Development Workflows

This tutorial demonstrates the foundational concept of skills in the claude-night-market ecosystem. Skills are the primary abstraction that transforms Claude Code from a general-purpose assistant into a specialized development partner.

Skills Showcase Demo

A detailed walkthrough of skill discovery, structure, validation, and composition patterns.


Overview

The claude-night-market contains 105+ skills across 14 plugins, each skill representing a reusable, composable unit of functionality. This tutorial explores:

  • Skill Discovery: How to find and catalog available skills
  • Skill Anatomy: Understanding the structure and metadata of skills
  • Skill Validation: Verifying that skills follow proper conventions
  • Skill Composition: How skills chain together into workflows

Part 1: Skill Discovery and Cataloging

Exploring Plugin Skills

Skills are organized within plugin directories under a skills/ subdirectory. Each skill is a directory containing:

  • SKILL.md - The skill definition with frontmatter and workflow instructions
  • modules/ (optional) - Modular components loaded progressively
  • scripts/ (optional) - Executable scripts for automation

To explore available skills in a plugin:

ls plugins/abstract/skills/

Output:

dogfood/  plugin-auditor/  plugin-validator/  skill-auditor/  skill-creator/

Each of these directories represents a meta-skill for plugin development.

Counting Total Skills

To get a project-wide count of all skills:

find plugins -name 'SKILL.md' -type f | wc -l

Output:

105

This count represents the total capability surface of the marketplace. Each skill is:

  • Self-contained: Can be invoked independently
  • Documented: Includes description, usage, and examples
  • Testable: Follows structured patterns for validation

Part 2: Skill Anatomy and Structure

Skill Definition Format

Skills follow a two-part structure:

  1. YAML Frontmatter - Metadata and configuration
  2. Markdown Body - Workflow instructions and context

Let’s examine a real skill:

head -30 plugins/abstract/skills/plugin-validator/SKILL.md

Sample Output:

---
name: plugin-validator
description: |
  Validate plugin structure, metadata, and skill definitions.
  Checks frontmatter, dependencies, and file organization.
category: validation
tags: [plugin, validation, quality]
tools: [Read, Glob, Bash]
complexity: medium
estimated_tokens: 800
dependencies:
  - abstract:shared
---

# Plugin Validator Skill

Validates that a plugin follows the claude-night-market conventions...

Frontmatter Fields

| Field | Purpose | Example |
| --- | --- | --- |
| name | Unique identifier | plugin-validator |
| description | What the skill does | Multi-line description |
| category | Skill category | validation, workflow, analysis |
| tags | Searchable keywords | [plugin, validation] |
| tools | Required Claude Code tools | [Read, Write, Bash] |
| complexity | Complexity level | low, medium, high |
| estimated_tokens | Approximate token usage | 800 |
| dependencies | Required skills | [abstract:shared] |

Progressive Loading

Some skills use progressive loading to reduce initial token cost:

progressive_loading: true
modules:
  - manifest-parsing
  - markdown-generation
  - tape-validation

Modules are loaded on-demand when specific functionality is needed.
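
Conceptually, a progressive loader keeps module bodies out of context until a workflow step first asks for them. A minimal sketch, assuming modules live as markdown files under the skill's modules/ directory:

from pathlib import Path

class SkillModules:
    """Lazy module loader: a body is read only on first access (sketch)."""

    def __init__(self, skill_dir, module_names):
        self.skill_dir = Path(skill_dir)
        self._names = set(module_names)
        self._cache = {}

    def get(self, name: str) -> str:
        if name not in self._names:
            raise KeyError(f"unknown module: {name}")
        if name not in self._cache:
            # only now does the module body enter the context
            path = self.skill_dir / "modules" / f"{name}.md"
            self._cache[name] = path.read_text()
        return self._cache[name]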


Part 3: Skill Validation

Why Validate Skills?

The abstract:plugin-validator skill verifies that skills follow project conventions. This validation checks for structural integrity by confirming required files exist, ensures that YAML frontmatter is well-formed, and resolves dependencies between skills. It also assesses documentation quality by checking for clear descriptions and examples.

Using the Validator

In Claude Code, invoke with:

Skill(abstract:plugin-validator, plugin_name='sanctum')

The validator performs these checks:

  1. Plugin structure: Confirms skills/, commands/, .claude-plugin/ exist
  2. Skill frontmatter: Validates YAML syntax and required fields
  3. Command definitions: Checks command markdown files are valid
  4. Dependencies: Verifies all referenced skills exist

Example Validation Output:

Plugin structure valid
19 skills found with valid frontmatter
12 commands defined correctly
All dependencies resolved
WARNING: skill-x missing 'estimated_tokens' field
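
The frontmatter check is easy to reproduce with PyYAML. A minimal sketch of the required-field validation (field list taken from the table above; the real validator does considerably more):

from pathlib import Path
import yaml

REQUIRED_FIELDS = ["name", "description", "category", "tags", "tools"]

def check_frontmatter(skill_md: Path) -> list:
    """Return a list of problems found in a SKILL.md frontmatter block."""
    text = skill_md.read_text()
    if not text.startswith("---"):
        return ["missing frontmatter block"]
    _, block, _ = text.split("---", 2)  # frontmatter sits between the first two --- lines
    meta = yaml.safe_load(block) or {}
    problems = [f"missing '{field}' field" for field in REQUIRED_FIELDS
                if field not in meta]
    if "estimated_tokens" not in meta:
        problems.append("WARNING: missing 'estimated_tokens' field")
    return problems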

Part 4: Skills in Real Workflows

Example: Git Workspace Review

The sanctum:git-workspace-review skill is commonly invoked at the start of development sessions:

Skill(sanctum:git-workspace-review)

What it does:

  1. Repository State: Runs git status to identify uncommitted changes
  2. Commit History: Runs git log to show recent commits and context
  3. File Analysis: Analyzes changed files to understand impact areas
  4. Session Context: Provides Claude Code with a full view of the current work

Value Proposition:

  • Context Recovery: Quickly understand what’s in progress
  • Change Impact: See which parts of the codebase are affected
  • Commit Quality: Understand recent work to maintain consistency

Example: PR Preparation Workflow

Complex workflows compose multiple skills sequentially:

PR Preparation Workflow:
  1. Skill(sanctum:git-workspace-review) - Understand changes
  2. Skill(imbue:scope-guard) - Check scope drift
  3. Skill(sanctum:commit-messages) - Generate commit message
  4. Skill(sanctum:pr-prep) - Prepare PR description

Benefits of Skill Composition

Composing skills into workflows provides several advantages. Each skill maintains a focus on a single responsibility, which increases reusability across different projects and tasks. This modular approach maintains a consistent standard for complex operations like PR preparation and integrates quality gates that automatically check for scope drift and code quality issues.


Part 5: Skills Enable Workflow Automation

The Skills Philosophy

Skills transform the assistant’s capabilities by encoding team best practices directly into the workflow. This automation removes the need to manually describe repetitive tasks such as code review steps or documentation updates. By following the same process every time, skills maintain consistency across the project and provide the assistant with the necessary context to understand specific project structures and conventions.

Skill Composition Patterns

Sequential Composition

Skills execute in order, each building on the previous:

Skill(A) → Skill(B) → Skill(C)

Conditional Composition

Skills invoke others based on context:

if scope_drift_detected:
    Skill(imbue:scope-guard)

Parallel Composition

Independent skills can run in parallel (conceptually):

Skill(pensive:api-review) + Skill(pensive:architecture-review)

Key Insights

Design Principles

  1. Single Responsibility: Each skill does one thing well
  2. Clear Dependencies: Skills declare what they need
  3. Progressive Disclosure: Complex skills load modules on-demand
  4. Self-Documentation: Skills explain their purpose and usage

Quality Metrics

  • 105 skills across 14 plugins
  • Structured workflows for git, review, specs, testing
  • Composable and reusable across projects
  • Self-documenting with clear dependencies
  • Validated structure supports overall quality

Workflow Value

  • Git Operations: 19 skills in sanctum for branch management, commits, PRs
  • Code Review: 12 skills in pensive for multi-discipline review
  • Specification: 8 skills in spec-kit for spec-driven development
  • Testing: 6 skills in parseltongue for Python test analysis
  • Meta-Development: 5 skills in abstract for plugin creation

Further Reading


Duration: ~90 seconds
Difficulty: Beginner
Prerequisites: Basic understanding of Claude Code
Tags: skills, workflows, claude-code, development, getting-started, architecture

Error Handling Tutorial

This tutorial provides practical guidance for implementing production-grade error handling in Claude Code skills and plugins. It covers real-world scenarios, code examples, and best practices.

Table of Contents

  1. Understanding Error Types
  2. Error Classification System
  3. Practical Error Handling Patterns
  4. Real-World Examples
  5. Debugging Techniques
  6. Testing Error Scenarios
  7. Monitoring and Observability
  8. Common Pitfalls and Solutions

Understanding Error Types

1. System Errors

These are errors caused by the underlying system environment:

  • Network failures
  • File system issues
  • Memory exhaustion
  • Database connection problems

2. Logic Errors

Errors in the program’s logic or flow:

  • Invalid input handling
  • Incorrect assumptions
  • Boundary condition failures
  • State inconsistencies

3. Integration Errors

Errors when interacting with external services:

  • API failures
  • Authentication issues
  • Rate limiting
  • Service unavailability

4. User Errors

Errors caused by user actions or input:

  • Invalid configuration
  • Incorrect usage patterns
  • Permission issues
  • Resource conflicts

Error Classification System

Based on the leyline:error-patterns standard:

Critical Errors (Halt Execution)

# E001-E009: Critical system failures
class CriticalError(Exception):
    """Error that requires immediate halt of execution"""
    pass

class AuthenticationError(CriticalError):
    """Authentication has permanently failed"""
    def __init__(self, service, message="Authentication failed"):
        self.service = service
        self.code = "E001"
        super().__init__(f"[{self.code}] {service}: {message}")

Recoverable Errors (Retry or Secondary Strategy)

# E010-E019: Recoverable errors
class RecoverableError(Exception):
    """Error that might be resolved with retry or secondary strategy"""
    pass

class NetworkTimeoutError(RecoverableError):
    """Network operation timed out"""
    def __init__(self, operation, timeout):
        self.operation = operation
        self.timeout = timeout
        self.code = "E010"
        super().__init__(f"[{self.code}] {operation} timed out after {timeout}s")

Warnings (Continue with Logging)

# E020-E029: Warning conditions
class WarningError(Exception):
    """Warning condition that should be logged but doesn't halt execution"""
    pass

class PerformanceWarning(WarningError):
    """Operation is slower than expected"""
    def __init__(self, operation, duration, threshold):
        self.operation = operation
        self.duration = duration
        self.threshold = threshold
        self.code = "E020"
        super().__init__(f"[{self.code}] {operation} took {duration:.2f}s (threshold: {threshold}s)")

Practical Error Handling Patterns

1. The Try-Except-Else-Finally Pattern

import logging

logger = logging.getLogger(__name__)

def robust_file_operation(filepath):
    """Pattern for file operations with detailed error handling"""
    try:
        # Try to open and process file
        with open(filepath, 'r') as f:
            data = f.read()

    except FileNotFoundError:
        logger.error(f"File not found: {filepath}")
        raise FileNotFoundError(f"E002 File not found: {filepath}")

    except PermissionError:
        logger.error(f"Permission denied: {filepath}")
        raise PermissionError(f"E006 Permission denied: {filepath}")

    except UnicodeDecodeError as e:
        logger.error(f"Encoding error in {filepath}: {e}")
        # Try alternative encoding
        try:
            with open(filepath, 'r', encoding='utf-8-sig') as f:
                data = f.read()
            logger.warning(f"Used alternative encoding for {filepath}")
            # Return here: the else block below only runs when no exception occurred
            return data
        except Exception:
            raise ValueError(f"E012 Cannot decode file: {filepath}")

    else:
        # File opened successfully
        logger.info(f"Successfully read {filepath}")
        return data

    finally:
        # Cleanup (if needed)
        pass

2. Retry with Exponential Backoff

import time
import random
import asyncio
from typing import Callable, Any

async def retry_with_backoff(
    operation: Callable,
    max_retries: int = 3,
    base_delay: float = 1.0,
    max_delay: float = 60.0,
    jitter: bool = True
) -> Any:
    """
    Execute operation with exponential backoff retry logic
    """
    last_exception = None

    for attempt in range(max_retries + 1):
        try:
            return await operation()

        except (ConnectionError, TimeoutError) as e:
            last_exception = e

            if attempt == max_retries:
                break

            # Calculate delay with exponential backoff
            delay = min(base_delay * (2 ** attempt), max_delay)

            # Add jitter to prevent thundering herd
            if jitter:
                delay *= (0.5 + random.random() * 0.5)

            logger.warning(
                f"Attempt {attempt + 1} failed, retrying in {delay:.2f}s: {e}"
            )
            await asyncio.sleep(delay)

        except Exception as e:
            # Don't retry non-transient errors
            logger.error(f"Non-retryable error: {e}")
            raise

    raise last_exception

3. Circuit Breaker Pattern

import time
from enum import Enum
from typing import Callable, Any

class CircuitState(Enum):
    CLOSED = "closed"
    OPEN = "open"
    HALF_OPEN = "half_open"

class CircuitBreaker:
    """Circuit breaker to prevent cascading failures"""

    def __init__(
        self,
        failure_threshold: int = 5,
        timeout: float = 60.0,
        expected_exception: type = Exception
    ):
        self.failure_threshold = failure_threshold
        self.timeout = timeout
        self.expected_exception = expected_exception

        self.failure_count = 0
        self.last_failure_time = None
        self.state = CircuitState.CLOSED

    def __call__(self, func: Callable) -> Callable:
        async def wrapper(*args, **kwargs):
            if self.state == CircuitState.OPEN:
                if time.time() - self.last_failure_time > self.timeout:
                    self.state = CircuitState.HALF_OPEN
                else:
                    raise Exception("E015 Circuit breaker is OPEN")

            try:
                result = await func(*args, **kwargs)

                if self.state == CircuitState.HALF_OPEN:
                    self.state = CircuitState.CLOSED
                    self.failure_count = 0

                return result

            except self.expected_exception as e:
                self.failure_count += 1
                self.last_failure_time = time.time()

                if self.failure_count >= self.failure_threshold:
                    self.state = CircuitState.OPEN

                raise

        return wrapper
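
A usage sketch, assuming an async operation you want to protect (the function is illustrative):

breaker = CircuitBreaker(failure_threshold=3, timeout=30.0,
                         expected_exception=ConnectionError)

@breaker
async def fetch_status(url: str):
    ...  # replace with a real network call that may raise ConnectionError

# After 3 consecutive ConnectionErrors the breaker opens; further calls fail
# fast with "E015 Circuit breaker is OPEN" until the 30s timeout elapses,
# then one trial call (HALF_OPEN) decides whether to close it again.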

4. Graceful Degradation Pattern

from typing import Optional, Dict, Any, Callable

class GracefulDegradation:
    """Implement graceful degradation when services fail"""

    def __init__(self):
        self.secondary_actions = {}

    def register_secondary(self, operation: str, secondary_func: Callable):
        """Register a secondary function for an operation"""
        self.secondary_actions[operation] = secondary_func

    async def execute(self, operation: str, primary_func: Callable, *args, **kwargs) -> Any:
        """
        Execute primary function with secondary logic if primary fails
        """
        try:
            return await primary_func(*args, **kwargs)

        except Exception as e:
            logger.error(f"Primary operation failed: {e}")

            if operation in self.secondary_actions:
                logger.info(f"Using secondary logic for {operation}")
                try:
                    return await self.secondary_actions[operation](*args, **kwargs)
                except Exception as secondary_error:
                    logger.error(f"Secondary logic also failed: {secondary_error}")
                    raise Exception(f"E016 Both primary and secondary failed for {operation}")
            else:
                raise

# Usage example
degradation = GracefulDegradation()

# Register secondary logic
degradation.register_secondary(
    "fetch_data",
    lambda: fetch_from_cache()  # Secondary: fetch from cache
)

# Execute with secondary logic
data = await degradation.execute(
    "fetch_data",
    fetch_from_api  # Primary function
)

Real-World Examples

Example 1: API Client with Relevant Error Handling

import aiohttp
import asyncio
from typing import Optional, Dict, Any
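
# RateLimitError, ServerError, and APIError are used below but never defined
# in this tutorial; minimal assumed definitions (code E011 is illustrative):
class RateLimitError(RecoverableError):
    def __init__(self, service, retry_after):
        self.retry_after = retry_after
        self.code = "E011"
        super().__init__(f"[{self.code}] {service}: rate limited, retry after {retry_after}s")

class ServerError(RecoverableError):
    """5xx response from the server."""
    pass

class APIError(Exception):
    """Catch-all for unexpected API failures."""
    pass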

class RobustAPIClient:
    """API client with relevant error handling"""

    def __init__(self, base_url: str, timeout: float = 30.0):
        self.base_url = base_url
        self.timeout = aiohttp.ClientTimeout(total=timeout)
        self.session = None

    async def __aenter__(self):
        self.session = aiohttp.ClientSession(
            timeout=self.timeout,
            connector=aiohttp.TCPConnector(limit=10)
        )
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        if self.session:
            await self.session.close()

    # Note: retry_with_backoff above is a coroutine helper, not a decorator;
    # to add retries, wrap calls: retry_with_backoff(lambda: client.request(...), max_retries=3)
    async def request(
        self,
        method: str,
        endpoint: str,
        **kwargs
    ) -> Dict[str, Any]:
        """Make HTTP request with detailed error handling"""

        url = f"{self.base_url}/{endpoint}"

        try:
            async with self.session.request(method, url, **kwargs) as response:
                # Handle HTTP status codes
                if response.status == 200:
                    return await response.json()
                elif response.status == 401:
                    raise AuthenticationError("API", "Invalid credentials")
                elif response.status == 403:
                    raise PermissionError("E006 Access forbidden")
                elif response.status == 429:
                    retry_after = int(response.headers.get('Retry-After', 60))
                    raise RateLimitError("API", retry_after)
                elif response.status >= 500:
                    raise ServerError(f"E017 Server error: {response.status}")
                else:
                    raise APIError(f"E018 Unexpected status: {response.status}")

        except asyncio.TimeoutError:
            raise NetworkTimeoutError(f"{method} {url}", self.timeout.total)

        except aiohttp.ClientError as e:
            raise ConnectionError(f"E019 Connection error: {e}")

        except (AuthenticationError, PermissionError, RateLimitError,
                ServerError, APIError):
            raise  # already classified above; don't re-wrap as APIError

        except Exception as e:
            raise APIError(f"E020 Unexpected error: {e}")

# Usage
async def fetch_user_data(user_id: int):
    try:
        async with RobustAPIClient("https://api.example.com") as client:
            return await client.request("GET", f"users/{user_id}")

    except AuthenticationError:
        logger.error("API authentication failed")
        return {"error": "authentication_required"}

    except RateLimitError as e:
        logger.warning(f"Rate limited, retry after {e.retry_after}s")
        return {"error": "rate_limited", "retry_after": e.retry_after}

    except NetworkTimeoutError:
        logger.error("Network timeout")
        return {"error": "timeout"}

    except Exception as e:
        logger.error(f"Failed to fetch user data: {e}")
        return {"error": "unknown"}

Example 2: Data Processing Pipeline

import asyncio
import logging
from typing import List, Any, Optional
from dataclasses import dataclass

logger = logging.getLogger(__name__)
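
# TransformationError and StorageError are raised below but never defined in
# this tutorial; minimal assumed definitions:
class TransformationError(Exception):
    """Raised when data transformation fails."""
    pass

class StorageError(Exception):
    """Raised when the primary storage backend fails."""
    pass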

@dataclass
class ProcessingResult:
    success: bool
    data: Optional[Any] = None
    error: Optional[str] = None
    warnings: List[str] = None

    def __post_init__(self):
        if self.warnings is None:
            self.warnings = []

class DataProcessor:
    """Production-grade data processing pipeline"""

    def __init__(self, max_workers: int = 4):
        self.max_workers = max_workers
        self.processed_count = 0
        self.error_count = 0

    async def process_batch(self, items: List[Any]) -> List[ProcessingResult]:
        """Process a batch of items with error isolation"""

        semaphore = asyncio.Semaphore(self.max_workers)

        async def process_with_isolation(item):
            async with semaphore:
                return await self.process_item(item)

        # Process all items concurrently
        tasks = [process_with_isolation(item) for item in items]
        results = await asyncio.gather(*tasks, return_exceptions=True)

        # Convert exceptions to error results
        processed_results = []
        for i, result in enumerate(results):
            if isinstance(result, Exception):
                processed_results.append(
                    ProcessingResult(
                        success=False,
                        error=f"E021 Processing failed: {str(result)}"
                    )
                )
                self.error_count += 1
            else:
                processed_results.append(result)
                if result.success:
                    self.processed_count += 1
                else:
                    self.error_count += 1

        return processed_results

    async def process_item(self, item: Any) -> ProcessingResult:
        """Process single item with detailed error handling"""

        warnings = []

        try:
            # Validate input
            if not self.validate_input(item):
                return ProcessingResult(
                    success=False,
                    error="E022 Invalid input format"
                )

            # Transform data
            try:
                transformed = await self.transform_data(item)
            except TransformationError as e:
                return ProcessingResult(
                    success=False,
                    error=f"E023 Transformation failed: {e}"
                )

            # Validate transformation
            validation_warnings = self.validate_output(transformed)
            warnings.extend(validation_warnings)

            # Store result
            try:
                await self.store_result(transformed)
            except StorageError as e:
                # Try alternative storage
                try:
                    await self.store_alternatively(transformed)
                    warnings.append("W001 Used alternative storage")
                except Exception:
                    return ProcessingResult(
                        success=False,
                        error=f"E024 Storage failed: {e}"
                    )

            return ProcessingResult(
                success=True,
                data=transformed,
                warnings=warnings
            )

        except Exception as e:
            logger.error(f"Unexpected error processing item: {e}")
            return ProcessingResult(
                success=False,
                error=f"E025 Unexpected error: {e}"
            )

    def validate_input(self, item: Any) -> bool:
        """Validate input data"""
        # Implementation depends on your data structure
        return item is not None

    async def transform_data(self, item: Any) -> Any:
        """Transform data with error handling"""
        # Your transformation logic here
        return item

    def validate_output(self, data: Any) -> List[str]:
        """Validate output and return warnings"""
        warnings = []
        # Your validation logic here
        return warnings

    async def store_result(self, data: Any) -> None:
        """Store result"""
        # Your storage logic here
        pass

    async def store_alternatively(self, data: Any) -> None:
        """Alternative storage method"""
        # Fallback storage logic here
        pass

Debugging Techniques

1. Structured Logging

import json
import logging
import traceback
from datetime import datetime
from typing import Dict, Any

class StructuredLogger:
    """Logger for structured error reporting"""

    def __init__(self, name: str):
        self.logger = logging.getLogger(name)

    def log_error(
        self,
        error: Exception,
        context: Dict[str, Any] = None,
        user_id: str = None,
        request_id: str = None
    ):
        """Log error with structured context"""

        error_data = {
            "timestamp": datetime.utcnow().isoformat(),
            "error_type": type(error).__name__,
            "error_message": str(error),
            "error_code": getattr(error, 'code', 'UNKNOWN'),
            "context": context or {},
            "user_id": user_id,
            "request_id": request_id,
            "traceback": traceback.format_exc()
        }

        self.logger.error(json.dumps(error_data))

    def log_warning(
        self,
        message: str,
        context: Dict[str, Any] = None,
        warning_code: str = "W000"
    ):
        """Log warning with context"""

        warning_data = {
            "timestamp": datetime.utcnow().isoformat(),
            "message": message,
            "warning_code": warning_code,
            "context": context or {}
        }

        self.logger.warning(json.dumps(warning_data))
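
A usage sketch (the operation and context keys are illustrative):

slog = StructuredLogger(__name__)

try:
    risky_operation()  # hypothetical function
except Exception as exc:
    slog.log_error(exc, context={"operation": "risky_operation"},
                   request_id="req-123")
    raise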

2. Debug Decorator

import asyncio
import functools
import json
import logging
import time
import traceback
from typing import Callable, Any

logger = logging.getLogger(__name__)

def debug_errors(
    log_args: bool = True,
    log_result: bool = True,
    log_traceback: bool = True
):
    """Decorator for debugging function errors"""

    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        async def async_wrapper(*args, **kwargs):
            start_time = time.time()

            try:
                if log_args:
                    logger.debug(f"Calling {func.__name__} with args={args}, kwargs={kwargs}")

                result = await func(*args, **kwargs)

                if log_result:
                    logger.debug(f"{func.__name__} returned: {type(result)}")

                return result

            except Exception as e:
                execution_time = time.time() - start_time

                error_info = {
                    "function": func.__name__,
                    "execution_time": execution_time,
                    "error": str(e),
                    "error_type": type(e).__name__
                }

                if log_args:
                    error_info["args"] = args
                    error_info["kwargs"] = kwargs

                if log_traceback:
                    error_info["traceback"] = traceback.format_exc()

                logger.error(f"Error in {func.__name__}: {json.dumps(error_info)}")
                raise

        @functools.wraps(func)
        def sync_wrapper(*args, **kwargs):
            start_time = time.time()

            try:
                if log_args:
                    logger.debug(f"Calling {func.__name__} with args={args}, kwargs={kwargs}")

                result = func(*args, **kwargs)

                if log_result:
                    logger.debug(f"{func.__name__} returned: {type(result)}")

                return result

            except Exception as e:
                execution_time = time.time() - start_time

                error_info = {
                    "function": func.__name__,
                    "execution_time": execution_time,
                    "error": str(e),
                    "error_type": type(e).__name__
                }

                if log_args:
                    error_info["args"] = args
                    error_info["kwargs"] = kwargs

                if log_traceback:
                    error_info["traceback"] = traceback.format_exc()

                logger.error(f"Error in {func.__name__}: {json.dumps(error_info)}")
                raise

        if asyncio.iscoroutinefunction(func):
            return async_wrapper
        else:
            return sync_wrapper

    return decorator

# Usage
@debug_errors()
async def problematic_function(data):
    # This function will have detailed error logging
    return await process_data(data)

Testing Error Scenarios

1. Error Injection Testing

import asyncio

import pytest
from unittest.mock import patch, AsyncMock

class ErrorInjector:
    """Inject errors for testing purposes"""

    def __init__(self):
        self.errors = {}

    def inject_error(self, function_name: str, error: Exception):
        """Inject error for specific function"""
        self.errors[function_name] = error

    def should_error(self, function_name: str) -> bool:
        """Check if function should error"""
        return function_name in self.errors

    def get_error(self, function_name: str) -> Exception:
        """Get injected error"""
        return self.errors[function_name]

# Test example
@pytest.mark.asyncio
async def test_api_client_with_errors():
    injector = ErrorInjector()

    # Test network timeout
    injector.inject_error("request", asyncio.TimeoutError())

    with patch('aiohttp.ClientSession.request') as mock_request:
        mock_request.side_effect = injector.get_error("request")

        async with RobustAPIClient("https://api.example.com") as client:
            with pytest.raises(NetworkTimeoutError):
                await client.request("GET", "test")

    # Test server error
    injector.errors = {}
    mock_response = AsyncMock()
    mock_response.status = 500

    with patch('aiohttp.ClientSession.request') as mock_request:
        mock_request.return_value.__aenter__.return_value = mock_response

        async with RobustAPIClient("https://api.example.com") as client:
            with pytest.raises(ServerError):
                await client.request("GET", "test")

2. Property-Based Testing

import pytest
from hypothesis import given, strategies as st

@given(st.lists(st.integers(), min_size=1, max_size=100))
def test_sort_with_error_handling(numbers):
    """Test sorting function with various inputs"""

    try:
        result = robust_sort(numbers)
        assert result == sorted(numbers)

    except ValueError as e:
        # Should handle invalid inputs gracefully
        assert "invalid" in str(e).lower()

    except Exception as e:
        # No other exceptions should occur
        pytest.fail(f"Unexpected exception: {e}")

Monitoring and Observability

1. Error Metrics Collection

from collections import defaultdict, deque
import time
from typing import Dict, List, Any

class ErrorMetrics:
    """Collect and analyze error metrics"""

    def __init__(self, window_size: int = 3600):  # 1 hour window
        self.window_size = window_size
        self.error_counts = defaultdict(int)
        self.error_history = deque()
        self.recent_errors = deque(maxlen=100)

    def record_error(
        self,
        error_code: str,
        error_type: str,
        context: Dict[str, Any] = None
    ):
        """Record an error occurrence"""

        timestamp = time.time()

        # Update counts
        self.error_counts[error_code] += 1
        self.error_counts[f"{error_type}_{error_code}"] += 1

        # Add to history
        error_record = {
            "timestamp": timestamp,
            "error_code": error_code,
            "error_type": error_type,
            "context": context or {}
        }

        self.error_history.append(error_record)
        self.recent_errors.append(error_record)

        # Clean old records
        cutoff = timestamp - self.window_size
        while self.error_history and self.error_history[0]["timestamp"] < cutoff:
            self.error_history.popleft()

    def get_error_rate(self, duration: float = 300) -> float:
        """Get error rate in the last duration (seconds)"""

        cutoff = time.time() - duration
        recent_errors = [
            e for e in self.error_history
            if e["timestamp"] > cutoff
        ]

        return len(recent_errors) / duration

    def get_top_errors(self, limit: int = 10) -> List[tuple]:
        """Get most frequent errors"""

        return sorted(
            self.error_counts.items(),
            key=lambda x: x[1],
            reverse=True
        )[:limit]

    def check_error_spike(self, threshold: float = 2.0, window: int = 300) -> bool:
        """Check if error rate has spiked"""

        current_rate = self.get_error_rate(window)
        # get_error_rate already divides by the duration, so the longer
        # window's rate serves as the baseline without further division
        baseline_rate = self.get_error_rate(window * 2)

        return current_rate > baseline_rate * threshold

2. Health Check System

import asyncio
import time
from typing import Dict, List, Callable, Any
from dataclasses import dataclass
from enum import Enum

class HealthStatus(Enum):
    HEALTHY = "healthy"
    DEGRADED = "degraded"
    UNHEALTHY = "unhealthy"

@dataclass
class HealthCheck:
    name: str
    check_func: Callable
    timeout: float = 5.0
    critical: bool = True

class HealthMonitor:
    """Monitor system health"""

    def __init__(self):
        self.checks: Dict[str, HealthCheck] = {}
        self.metrics = ErrorMetrics()

    def register_check(self, health_check: HealthCheck):
        """Register a health check"""
        self.checks[health_check.name] = health_check

    async def run_check(self, check_name: str) -> Dict[str, Any]:
        """Run a specific health check"""

        if check_name not in self.checks:
            return {
                "status": HealthStatus.UNHEALTHY,
                "error": f"E026 Unknown health check: {check_name}"
            }

        check = self.checks[check_name]

        try:
            # asyncio.timeout requires Python 3.11+; wait_for also covers 3.9
            result = await asyncio.wait_for(check.check_func(), timeout=check.timeout)

            return {
                "status": HealthStatus.HEALTHY,
                "result": result,
                "timestamp": time.time()
            }

        except asyncio.TimeoutError:
            error_code = "E027"
            self.metrics.record_error(error_code, "timeout", {"check": check_name})

            return {
                "status": HealthStatus.UNHEALTHY if check.critical else HealthStatus.DEGRADED,
                "error": f"[{error_code}] Health check timed out",
                "timestamp": time.time()
            }

        except Exception as e:
            error_code = "E028"
            self.metrics.record_error(error_code, "health_check", {
                "check": check_name,
                "error": str(e)
            })

            return {
                "status": HealthStatus.UNHEALTHY if check.critical else HealthStatus.DEGRADED,
                "error": f"[{error_code}] Health check failed: {e}",
                "timestamp": time.time()
            }

    async def run_all_checks(self) -> Dict[str, Any]:
        """Run all health checks"""

        results = {}
        overall_status = HealthStatus.HEALTHY

        for check_name in self.checks:
            result = await self.run_check(check_name)
            results[check_name] = result

            # Update overall status
            if result["status"] == HealthStatus.UNHEALTHY:
                overall_status = HealthStatus.UNHEALTHY
            elif result["status"] == HealthStatus.DEGRADED and overall_status == HealthStatus.HEALTHY:
                overall_status = HealthStatus.DEGRADED

        return {
            "overall_status": overall_status,
            "checks": results,
            "timestamp": time.time(),
            "error_rate": self.metrics.get_error_rate()
        }
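
A usage sketch wiring the two together, with a hypothetical database probe as the checked operation:

async def check_database():
    # replace with a real connectivity probe; raise on failure
    return "ok"

monitor = HealthMonitor()
monitor.register_check(HealthCheck(name="database",
                                   check_func=check_database,
                                   timeout=2.0,
                                   critical=True))

report = asyncio.run(monitor.run_all_checks())
print(report["overall_status"], report["error_rate"])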

Common Pitfalls and Solutions

| Pitfall | Problem | Solution |
| --- | --- | --- |
| Swallowing Exceptions | except: pass hides failures | Log and re-raise: logger.error(e); raise |
| Overly Broad Catching | except Exception catches everything | Catch specific types, re-raise unexpected |
| Missing Context | raise ValueError("Invalid") | Include field/value: f"E022 Invalid {field}: {value}" |
| Resource Leaks | Files/connections left open on error | Use with statements or try/finally |
| Inconsistent Handling | Mix of return None and raise | Define base exception, use consistent pattern |
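
The first two pitfalls are worth seeing side by side. A minimal sketch, with parse_config as a hypothetical call:

# Pitfall: swallow everything
try:
    parse_config(path)
except Exception:
    pass  # the failure is now invisible

# Fix: catch the specific error, log with context, let the rest propagate
try:
    parse_config(path)
except FileNotFoundError:
    logger.error(f"E002 Config not found: {path}")
    raise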

Summary

Effective error handling classifies errors consistently and manages them through meaningful messages and recovery options. Include error scenarios in tests and monitor error patterns in production.

Cross-Plugin Collaboration Guide

This guide demonstrates how plugins in the Claude Night Market ecosystem work together through shared superpowers to create workflows that combine multiple domain specializations.

Overview

The Night Market plugin ecosystem is designed for collaboration. Each plugin specializes in a domain and exposes capabilities through skills:

  • Abstract - Meta-infrastructure for skills, validation, and quality
  • Sanctum - Git workflows, PR generation, and documentation
  • Scry - Media generation (VHS terminal recordings, Playwright browser recordings)
  • Conservation - Context optimization and resource management

Common Collaboration Patterns

Pattern 1: Quality Assurance Chain

Abstract (TDD/Bulletproofing) -> Sanctum (Quality Gates) -> Production

Abstract enforces TDD and skill structure, Sanctum validates before integration.

Pattern 2: Resource Optimization Loop

Conservation (Monitor) -> Abstract (Refactor) -> Conservation (Validate)

Conservation identifies issues, Abstract provides patterns to fix them.

Pattern 3: Automated Workflow Enhancement

Sanctum (Detect Need) -> Conservation (Optimize) -> Sanctum (Execute)

Sanctum recognizes resource constraints, Conservation optimizes, Sanctum proceeds.

Pattern 4: Media-Enhanced Documentation

Sanctum (Detect Tutorial Need) -> Scry (Generate Media) -> Sanctum (Update Docs)

Sanctum identifies documentation gaps, Scry generates GIFs, Sanctum integrates them.

Pattern 5: Cross-Plugin Dependencies

# Skills can depend on other plugins' capabilities
dependencies:
  plugins:
    - conservation:context-optimization
    - sanctum:git-workspace-review
    - abstract:modular-skills
    - scry:vhs-recording
    - scry:gif-generation

Two-Plugin Collaborations

Abstract + Sanctum: Skill-Driven PR Workflow

Use Case: Creating and integrating new skills with automated PR generation

Workflow Steps

Step 1: Create and Validate the Skill (Abstract)

/abstract:create-skill "my-awesome-skill"
# Creates: skill directory, implementation, tests, documentation

Step 2: Test and Bulletproof (Abstract)

/abstract:test-skill my-awesome-skill
/abstract:bulletproof-skill my-awesome-skill
# Validates: best practices, TDD methodology, edge case resistance

Step 3: Estimate Token Usage (Abstract)

/abstract:estimate-tokens my-awesome-skill
# Output: skill tokens, dependencies, total impact

Step 4: Generate PR (Sanctum)

/sanctum:pr
# Automatically: reviews workspace, runs quality gates, generates PR description

Generated PR Example

## Summary
- Add new `my-awesome-skill` skill for processing workflow data
- Implements TDD methodology with 95%+ test coverage
- Validated through Abstract's skill evaluation framework

## Testing
- Skill validation passed: `/abstract:validate-skill my-awesome-skill`
- TDD workflow passed: `/abstract:test-skill my-awesome-skill`
- Bulletproofing completed: `/abstract:bulletproof-skill my-awesome-skill`
- Project linting passed, all tests passing

Benefits

  • Quality Assurance: Abstract validates skills are well-structured and tested
  • Security: Bulletproofing prevents edge cases and bypass attempts
  • Automation: Sanctum handles mechanical PR creation
  • Consistency: Standardized PR format with all necessary information

Conservation + Abstract: Optimizing Meta-Skills

Use Case: Reducing context usage of complex evaluation skills without losing functionality

Initial Problem

# Skill consuming too much context
name: detailed-skill-eval
token_budget: 2500  # Too high!
estimated_tokens: 2300
progressive_loading: false  # Loading everything at once

Optimization Workflow

Step 1: Analyze Context Usage (Conservation)

/conservation:analyze-growth
# Output: Current context usage: 45% (CRITICAL)
# Top consumer: detailed-skill-eval: 2300 tokens

Step 2: Estimate Token Impact (Abstract)

/abstract:estimate-tokens detailed-skill-eval
# Breakdown: Core logic 900, Examples 600, Validation 400, Error handling 300

Step 3: Optimize Context Structure (Conservation)

/conservation:optimize-context
# Suggestions: Enable progressive loading, split modules, lazy load examples

Step 4: Refactor with Abstract’s Patterns

/abstract:analyze-skill detailed-skill-eval
# Provides: Modular decomposition strategy, shared pattern extraction

Results Comparison

| Metric | Before | After | Improvement |
| --- | --- | --- | --- |
| Token Usage | 2300 | 750 | 67% reduction |
| Load Time | 2.3s | 0.8s | 65% faster |
| Memory Usage | 45% | 15% | Within MECW limits |
| Test Coverage | 95% | 95% | Maintained |

Optimized Skill Structure

name: skill-eval-hub
token_budget: 800  # 65% reduction!
progressive_loading: true
modules:
  - core-eval
  - validation-rules
  - example-library
  - error-handlers
shared_patterns:
  - token-efficient-validation
  - lazy-loading-examples

Conservation + Sanctum: Optimized Git Workflows

Use Case: Managing multiple large feature branches efficiently

Challenge

# Multiple active branches need processing
- feature/auth-refactor (2,340 files changed)
- feature/performance-boost (1,890 files changed)
- feature/ui-redesign (3,210 files changed)
# Traditional approach would exceed context limits

Workflow Steps

Step 1: Analyze Resource Requirements (Conservation)

/conservation:optimize-context
# Output: Context Status: CRITICAL (68% usage)
# Available for git operations: 32%
# Recommended: Process branches sequentially with optimization

Step 2: Process Branch with Optimization (Sanctum + Conservation)

/git-catchup feature/auth-refactor --context-optimized
# Optimization applied:
# 1. Use summary mode for large diffs
# 2. Progressive loading of file details
# 3. Focus on critical changes only

Step 3: Generate Optimized PR (Sanctum)

/sanctum:pr --optimize-context
# Applies: Compressed summaries, token-efficient descriptions, progressive loading

Performance Comparison

Without Conservation:

Total context used: 124%
Result: Context overflow, incomplete processing
Success rate: 33% (1/3 branches)

With Conservation:

Total context used: 38%
Result: All branches processed successfully
Success rate: 100% (3/3 branches)

Advanced Features

Adaptive Detail Loading

Initial PR: 200 tokens (summary only)
/sanctum:show-details src/auth/           # +150 tokens
/sanctum:show-details src/auth/token.js   # +50 tokens
Total: 400 tokens (vs 2,000 without optimization)

Cross-Branch Pattern Recognition

# Conservation identifies patterns across branches
# Common changes consolidated into single documentation item
# Estimated savings: 800 tokens

Sanctum + Scry: Tutorial Generation Pipeline

Use Case: Creating and updating documentation tutorials with animated GIFs

Challenge

# Documentation needs visual demos
- Installation tutorials need terminal recordings
- Web UI guides need browser screen captures
- Combined workflows need multi-source compositions
# Manual process is time-consuming and inconsistent

Workflow Steps

Step 1: Identify Tutorial Needs (Sanctum)

/sanctum:update-tutorial --list
# Output:
# Available tutorials:
#   quickstart     assets/tapes/quickstart.tape
#   mcp            assets/tapes/mcp.manifest.yaml (terminal + browser)
#   skill-debug    assets/tapes/skill-debug.tape

Step 2: Generate Terminal Recordings (Scry)

# Sanctum's tutorial-updates skill orchestrates scry:vhs-recording
Skill(scry:vhs-recording) assets/tapes/quickstart.tape
# VHS processes tape file, generates optimized GIF
# Output: assets/gifs/quickstart.gif (1.2MB)

Step 3: Generate Browser Recordings (Scry)

# For web UI tutorials, Playwright captures video
Skill(scry:browser-recording) specs/dashboard.spec.ts
# Output: test-results/dashboard/video.webm

# Convert to optimized GIF
Skill(scry:gif-generation) --input video.webm --output dashboard.gif
# Output: assets/gifs/dashboard.gif (980KB)

Step 4: Compose Multi-Source Tutorials (Scry)

# For combined terminal + browser tutorials
Skill(scry:media-composition)
# Reads manifest, combines components
# Output: assets/gifs/mcp-combined.gif

Step 5: Generate Documentation (Sanctum)

/sanctum:update-tutorial quickstart mcp
# Sanctum generates dual-tone markdown:
# - docs/tutorials/quickstart.md (project docs, concise)
# - book/src/tutorials/quickstart.md (technical book, detailed)
# - Updates README.md demo section with GIF embeds

Manifest-Driven Composition

# assets/tapes/mcp.manifest.yaml
name: mcp
title: "MCP Server Integration"
components:
  - type: tape
    source: mcp.tape
    output: assets/gifs/mcp-terminal.gif
  - type: playwright
    source: browser/mcp-browser.spec.ts
    output: assets/gifs/mcp-browser.gif
    requires:
      - "npm run dev"  # Start server before recording
combine:
  output: assets/gifs/mcp-combined.gif
  layout: vertical
  options:
    padding: 10
    background: "#1a1a2e"

Results Comparison

| Metric | Manual | Automated |
| --- | --- | --- |
| Time per tutorial | 30-60 min | 2-5 min |
| Consistency | Variable | 100% consistent |
| GIF optimization | Often skipped | Always optimized |
| Documentation sync | Often outdated | Always current |

Benefits

  • Automation: End-to-end tutorial generation from tape files
  • Consistency: All GIFs use same quality settings and themes
  • Dual-Tone Output: Both project docs and technical book content
  • Manifest-Driven: Declarative composition for complex tutorials

Three-Way Ecosystem: Complete Development Lifecycle

Use Case: End-to-end enterprise plugin development with full optimization

Phase 1: Planning and Analysis

# 1. Analyze current resource state (Conservation)
/conservation:analyze-growth
# Output: Context at 25%, optimal for new development

# 2. Plan skill architecture (Abstract)
/abstract:analyze-skill ecosystem-orchestrator
# Output: Recommends modular architecture with 5 interconnected skills

# 3. Check git workspace (Sanctum)
/git-catchup
# Output: Clean workspace, ready for new feature branch

Phase 2: Skill Creation with Built-in Optimization

# Create orchestrator with Conservation awareness
/abstract:create-skill ecosystem-orchestrator

# Abstract creates with Conservation-suggested limits:
{
  "name": "ecosystem-orchestrator",
  "token_budget": 800,
  "progressive_loading": true,
  "modules": [
    "skill-discovery",
    "dependency-resolution",
    "execution-planning",
    "resource-monitoring"
  ]
}

Phase 3: Development and Testing

# TDD development cycle for all skills (Abstract)
for skill in ecosystem-orchestrator skill-discovery dependency-resolution; do
  /abstract:test-skill $skill
  /abstract:bulletproof-skill $skill
done

# Conservation monitors and optimizes during development
/conservation:optimize-context
# Output: Applied shared patterns, saved 1,200 tokens total

# Validate entire plugin structure (Abstract)
/abstract:validate-plugin

Phase 4: Integration and Performance Tuning

# Estimate total impact (Abstract + Conservation)
/abstract:estimate-tokens ecosystem-orchestrator --include-dependencies
# Output: Core 750 tokens, Dependencies 1,100, Total 1,850 (within limits)

# Performance analysis (Conservation)
/conservation:analyze-growth
# Output: Growth pattern optimal, MECW compliant

Phase 5: Documentation and PR Generation

# Generate detailed PR (Sanctum)
/sanctum:pr --include-performance-report

# Sanctum automatically includes:
# - Change summary
# - Test results from Abstract's testing
# - Performance metrics from Conservation
# - Context optimization details

Real-World Impact

| Metric | Before Integration | After Integration |
| --- | --- | --- |
| Development time | 2-3 weeks | 3-5 days |
| Quality issues | Frequent, discovered late | Caught early |
| Resource problems | Context overflow common | Eliminated |
| Documentation | Manual, often incomplete | Automatic, detailed |

Measurable Improvements:

  • Development speed: 70% faster
  • Bug reduction: 85% fewer production issues
  • Resource efficiency: 42% less token usage
  • Documentation quality: 100% compliance with standards

Integration Techniques

Shared State Management

# Conservation sets context budget
export CONTEXT_BUDGET=0.4

# Abstract respects budget in skill creation
/abstract:create-skill my-skill --context-limit $CONTEXT_BUDGET

# Sanctum generates PRs within budget
/sanctum:pr --respect-context-limit

Progressive Loading Framework

# Conservation provides framework
def progressive_load(module, priority):
    if context_available():
        load_module(module, priority)
    else:
        queue_for_later(module)

# Abstract implements for skills
# Sanctum implements for git operations

Quality Gates Integration

quality_gates:
  - abstract:validate-skill
  - abstract:test-skill
  - sanctum:lint-check
  - sanctum:security-scan

Measuring Collaboration Success

Development Metrics

  • Speed: Time from idea to production
  • Quality: Bug rates and test coverage
  • Consistency: Code style and pattern adherence
  • Documentation: Completeness and accuracy

Resource Metrics

  • Context Usage: Token consumption optimization
  • Performance: Response times and throughput
  • Scalability: Concurrent operation capacity
  • Efficiency: Resource utilization percentage

Collaboration Metrics

  • Interoperability: How well plugins work together
  • Integration: Clean handoffs between plugins
  • Flexibility: Ability to adapt to different scenarios
  • Maintainability: Long-term sustainability

Commands Reference

Abstract Commands

  • /abstract:create-skill - Create new skill with proper structure
  • /abstract:test-skill - Run TDD validation workflow
  • /abstract:bulletproof-skill - Harden skill against edge cases
  • /abstract:estimate-tokens - Calculate context impact
  • /abstract:analyze-skill - Get optimization recommendations
  • /abstract:validate-plugin - Validate plugin quality after optimization

Sanctum Commands

  • /git-catchup - Efficient git branch analysis
  • /sanctum:pr - Generate detailed PR description
  • /sanctum:show-details <path> - Progressive detail loading
  • /sanctum:update-tutorial - Generate tutorials with media (uses Scry)

Conservation Commands

  • /conservation:analyze-growth - Monitor resource usage trends
  • /conservation:optimize-context - Apply MECW optimization principles

Scry Commands

  • /scry:record-terminal - Record terminal sessions using VHS tape files
  • /scry:record-browser - Record browser sessions using Playwright specs

Key Takeaways

  1. Synergy Over Silos: Plugins working together create more value than separate usage
  2. Complementary Strengths: Each plugin specializes in a domain, combined they cover the development lifecycle
  3. Adaptive Workflows: Collaboration enables workflows that adapt to constraints
  4. Quality at Scale: Maintain high quality even with complex, multi-plugin workflows
  5. Resource Efficiency: Optimize for both development speed and operational cost

The Claude Night Market ecosystem is designed for collaboration. Composing specialized plugin capabilities yields workflows that are both efficient and maintainable.

See Also

Memory Palace Cache Modes

Learn how to configure Memory Palace’s research interceptor for different use cases.

Prerequisites

  • Memory Palace plugin installed
  • Familiarity with Memory Palace concepts

Objectives

By the end of this tutorial, you’ll:

  • Understand the four cache modes
  • Configure modes for different scenarios
  • Debug interceptor decisions
  • Monitor cache performance

Mode Overview

The research interceptor supports four modes:

| Mode | Behavior | Use Case |
|---|---|---|
| cache_only | Block web when no confident match | Offline work, policy audits |
| cache_first | Check cache, fall back to web | Default research (recommended) |
| augment | Blend cache with live results | When freshness matters |
| web_only | Bypass Memory Palace entirely | Incident response, debugging |

Step 1: Check Current Mode

View your current configuration:

cat plugins/memory-palace/hooks/memory-palace-config.yaml

Look for the research_mode setting:

research_mode: cache_first

Step 2: Understanding the Decision Matrix

The interceptor evaluates queries using:

Freshness Detection

Queries containing temporal keywords trigger augmentation:

  • latest, 2025, today, this week
  • Even with strong cache hits

Match Strength

| Score | Classification | Action |
|---|---|---|
| > 0.8 | Strong match | Use cache |
| 0.4-0.8 | Partial match | Mode-dependent |
| < 0.4 | Weak/no match | Fall back to web |

Autonomy Overrides

When autonomy level >= 2, partial matches auto-approve without flagging the intake queue.
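
Taken together, the matrix reduces to a few conditionals. The sketch below is illustrative only; the function and argument names are hypothetical, not the interceptor's actual API:

def route(mode, match_score, has_temporal_keywords, autonomy_level):
    # web_only bypasses Memory Palace entirely
    if mode == "web_only":
        return "web"
    # temporal keywords trigger augmentation even on strong hits
    if has_temporal_keywords and mode != "cache_only":
        return "augment"
    if match_score > 0.8:
        return "cache"                      # strong match: use cache
    if match_score >= 0.4:                  # partial match: mode-dependent
        if autonomy_level >= 2:
            return "cache"                  # auto-approved, no intake flag
        return "cache" if mode == "cache_only" else "augment"
    # weak or no match
    return "blocked" if mode == "cache_only" else "web"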

Step 3: Changing Modes

Edit the configuration file:

# hooks/memory-palace-config.yaml

# For offline work
research_mode: cache_only

# For normal research (default)
research_mode: cache_first

# For real-time topics
research_mode: augment

# To bypass completely
research_mode: web_only

Restart Claude Code for changes to take effect.

Step 4: Monitoring Decisions

The interceptor logs decisions to telemetry:

cat plugins/memory-palace/data/telemetry/memory-palace.csv

Fields include:

  • decision: cache_hit, cache_miss, augmented, blocked
  • novelty_score: 0-1 score for new information
  • intake_delta_reasoning: Why intake was triggered/skipped
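
To see how decisions trend over time, you can tally the decision column. A small sketch, assuming the CSV carries a header row:

import csv
from collections import Counter

path = "plugins/memory-palace/data/telemetry/memory-palace.csv"
with open(path, newline="") as f:
    # Count how often each decision (cache_hit, augmented, ...) occurs.
    counts = Counter(row["decision"] for row in csv.DictReader(f))
print(counts)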

Troubleshooting

Hook never fires

Check: Is cache_intercept enabled?

feature_flags:
  cache_intercept: true

Check: Is mode not web_only?

Legitimate query blocked in cache_only

Solution: Add missing entry to corpus

# Inspect keyword index
cat plugins/memory-palace/data/indexes/keyword-index.yaml

# Rebuild indexes
uv run python plugins/memory-palace/scripts/build_indexes.py

Too many augmentation messages

Solution: Adjust thresholds

# Raise intake threshold
intake_threshold: 0.6

# Or increase autonomy
autonomy_level: 2

Intake queue spam

Solution: Review duplicates

Check intakeFlagPayload.duplicate_entry_ids in telemetry and tidy corpus entries.

Operational Checklist

After configuring modes:

  1. Update docs/curation-log.md to document your mode choice
  2. Keep data/indexes/vitality-scores.yaml fresh
  3. When changing defaults, gate the change behind a feature flag
  4. Run interceptor tests:
    pytest tests/hooks/test_research_interceptor.py
    

Verification

Confirm your configuration works:

# Make a test query
# In Claude, ask about a topic in your corpus

# Check telemetry for decision
tail -1 plugins/memory-palace/data/telemetry/memory-palace.csv

Expected output shows cache_hit for known topics.

Next Steps

Achievement Unlocked: Cache Commander

Embedding Upgrade Guide

Add semantic search capabilities to Memory Palace for improved knowledge retrieval.

Prerequisites

  • Memory Palace plugin installed
  • Python environment with uv
  • (Optional) sentence-transformers for high-quality embeddings

Objectives

By the end of this tutorial, you’ll:

  • Build embedding indexes for your corpus
  • Toggle between embedding providers
  • Benchmark retrieval quality
  • Configure production settings

Step 1: Choose an Embedding Provider

Memory Palace supports multiple providers:

| Provider | Quality | Dependencies | Use Case |
|---|---|---|---|
| hash | Basic | None | CI, constrained environments |
| local | High | sentence-transformers | Production, quality focus |

Step 2: Build Provider Slices

Navigate to the plugin directory:

cd plugins/memory-palace

Build Hash Embeddings (No Dependencies)

uv run python scripts/build_embeddings.py --provider hash

This creates deterministic 16-dimensional vectors using hashing.
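
For intuition, a deterministic hash embedding can be produced along these lines. This is an illustrative sketch, not the script's actual implementation:

import hashlib

def hash_embedding(text: str, dim: int = 16) -> list[float]:
    # Hash the text once, then map successive digest bytes into [-1, 1].
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [digest[i % len(digest)] / 127.5 - 1.0 for i in range(dim)]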

Build Local Embeddings (High Quality)

First, install sentence-transformers:

uv pip install sentence-transformers

Then build:

uv run python scripts/build_embeddings.py --provider local

This creates 384-dimensional vectors using a local transformer model.

Step 3: Verify the Build

Check the generated index:

cat data/indexes/embeddings.yaml

Expected structure:

providers:
  hash:
    embeddings: {...}
    vector_dimension: 16
  local:
    embeddings: {...}
    vector_dimension: 384
metadata:
  default_provider: hash

Both providers are stored, so you can switch without rebuilding.

Step 4: Toggle at Runtime

Set the provider via environment variable:

# Use hash embeddings
export MEMORY_PALACE_EMBEDDINGS_PROVIDER=hash

# Use local embeddings
export MEMORY_PALACE_EMBEDDINGS_PROVIDER=local

The hooks automatically use the environment variable.

Step 5: Benchmark Quality

Run retrieval benchmarks:

uv run python scripts/build_embeddings.py \
  --provider local \
  --benchmark fixtures/semantic_queries.json \
  --benchmark-top-k 3 \
  --benchmark-only

The benchmark file should contain test queries:

{
  "queries": [
    {
      "query": "async context managers in Python",
      "expected": ["async-patterns.md", "context-managers.md"]
    }
  ]
}
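
One way to score such a benchmark is top-k recall: the fraction of queries whose expected documents appear in the top k results. A hedged sketch, assuming search returns ranked entry names (the script's real scoring may differ):

import json

def recall_at_k(lookup, benchmark_path: str, k: int = 3) -> float:
    with open(benchmark_path) as f:
        queries = json.load(f)["queries"]
    hits = 0
    for item in queries:
        top = lookup.search(item["query"], mode="embeddings")[:k]
        # Count a hit if any expected document is retrieved.
        hits += any(doc in top for doc in item["expected"])
    return hits / len(queries)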

Step 6: Using in Code

The CacheLookup class handles provider selection:

from memory_palace.cache_lookup import CacheLookup

lookup = CacheLookup(
    corpus_dir="plugins/memory-palace/docs/knowledge-corpus",
    index_dir="plugins/memory-palace/data/indexes",
    embedding_provider="env",  # Reads from environment
)

# Semantic search
results = lookup.search("gradient descent", mode="embeddings")

Fallback Strategy

If sentence-transformers is missing:

  1. System automatically falls back to hash provider
  2. CI environments always have a working provider
  3. Set MEMORY_PALACE_EMBEDDINGS_PROVIDER=hash to guarantee fallback
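
The resolution logic amounts to an environment read plus an import check. A minimal sketch of the fallback above (the function name is illustrative):

import os

def resolve_provider() -> str:
    provider = os.environ.get("MEMORY_PALACE_EMBEDDINGS_PROVIDER", "hash")
    if provider == "local":
        try:
            import sentence_transformers  # noqa: F401
        except ImportError:
            return "hash"  # dependency missing: fall back to hash
    return provider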

Adding Custom Providers

Extend with additional providers:

# In your custom builder
def build_custom_embeddings(corpus):
    # Call external API
    embeddings = external_api.embed(corpus)
    return embeddings

# Builder preserves existing providers
uv run python scripts/build_embeddings.py \
  --provider custom \
  --custom-builder my_builder.py

Performance Considerations

| Provider | Memory | Latency | Accuracy |
|---|---|---|---|
| hash | ~1MB | <10ms | ~60% |
| local | ~500MB | ~100ms | ~90% |

For production:

  • Use local for quality-critical retrieval
  • Use hash for quick lookups and CI

Troubleshooting

sentence-transformers installation fails

# Try with specific version
uv pip install sentence-transformers==2.2.2

Embeddings not updating

# Force rebuild
rm data/indexes/embeddings.yaml
uv run python scripts/build_embeddings.py --provider local

Provider not found

Verify that the environment variable is set correctly:

echo $MEMORY_PALACE_EMBEDDINGS_PROVIDER

Verification

Confirm semantic search works:

# In Python
from memory_palace.cache_lookup import CacheLookup

lookup = CacheLookup(
    corpus_dir="docs/knowledge-corpus",
    index_dir="data/indexes",
    embedding_provider="local"
)

results = lookup.search("your test query", mode="embeddings")
print(results)

Next Steps

Achievement Unlocked: Semantic Scholar

Memory Palace Curation Workflow

Learn how the research interceptor collaborates with knowledge-intake for effective curation.

Prerequisites

  • Memory Palace plugin installed
  • Understanding of cache modes (see Cache Modes)

Objectives

By the end of this tutorial, you’ll:

  • Understand the intake flag payload
  • Process the intake queue
  • Use dual-output workflows
  • Maintain curation quality

The Intake Flow

When a research query runs, Memory Palace evaluates whether new information should be captured:

Query Execution
      |
      v
[Hook Evaluation]
      |
      +-- Build IntakeFlagPayload
      |   - should_flag_for_intake
      |   - novelty_score
      |   - domain_alignment
      |   - duplicate_entry_ids
      |
      v
[Decision]
      |
      +-- High novelty + domain match --> Flag for intake
      |
      +-- Low novelty or duplicate --> Skip intake
      |
      v
[Output]
      - Telemetry row
      - Queue entry (if flagged)

Step 1: Understanding IntakeFlagPayload

The IntakeFlagPayload dataclass tracks the intake signals. The primary three are:

| Field | Description |
|---|---|
| should_flag_for_intake | Should this query be queued? |
| novelty_score | Heuristic for new information (0-1) |
| domain_alignment | Matches against interests config |

# memory_palace/curation/models.py
@dataclass
class IntakeFlagPayload:
    should_flag_for_intake: bool
    novelty_score: float
    domain_alignment: List[str]
    duplicate_entry_ids: List[str]
    intake_delta_reasoning: str
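
As a hedged example of how these fields combine (the 0.6 threshold is illustrative; see intake_threshold in the config):

payload = IntakeFlagPayload(
    should_flag_for_intake=False,
    novelty_score=0.75,
    domain_alignment=["python", "async"],
    duplicate_entry_ids=[],
    intake_delta_reasoning="high novelty in aligned domain",
)
# Flag when novelty clears the threshold, a domain matches,
# and no duplicates were found.
payload.should_flag_for_intake = (
    payload.novelty_score >= 0.6
    and bool(payload.domain_alignment)
    and not payload.duplicate_entry_ids
)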

Step 2: Monitoring the Hook

The hook outputs intake context to the runtime transcript:

[Memory Palace Intake]
Novelty: 0.75
Domains: python, async
Duplicates: []
Flag: True
Reasoning: High novelty content in aligned domain

Step 3: Processing the Intake Queue

When should_flag_for_intake=True, the hook writes to:

data/intake_queue.jsonl

Process the queue:

# View pending items
cat data/intake_queue.jsonl | jq .

# Process with CLI
uv run python skills/knowledge-intake/scripts/intake_cli.py \
  --queue data/intake_queue.jsonl \
  --review

Step 4: Intake Decision Options

For each queued item:

| Action | When to Use |
|---|---|
| Accept | High value, unique information |
| Merge | Similar to existing entry |
| Reject | Low value or duplicate |
| Defer | Need more context |

# Accept item
uv run python intake_cli.py --item abc123 --accept

# Merge with existing
uv run python intake_cli.py --item abc123 --merge entry-456

# Reject
uv run python intake_cli.py --item abc123 --reject

Step 5: Using Dual-Output Mode

Generate both palace entry and developer documentation:

uv run python intake_cli.py \
  --candidate /tmp/candidate.json \
  --dual-output \
  --prompt-pack marginal-value-dual \
  --auto-accept

This creates:

  1. Palace entry: Stored in corpus
  2. Developer doc: Added to docs/
  3. Prompt artifact: Saved to docs/prompts/<pack>.md

Step 6: Telemetry Review

Check telemetry for intake patterns:

# View recent decisions
tail -20 data/telemetry/memory-palace.csv

Columns include:

  • novelty_score
  • aligned_domains
  • intake_delta_reasoning
  • duplicate_entry_ids

Curation Best Practices

Regular Review Cadence

  1. Daily: Check intake queue
  2. Weekly: Review telemetry patterns
  3. Monthly: KonMari session (prune low-value entries)

Document Decisions

Update docs/curation-log.md after each session:

## 2025-01-15 Curation Session

### Promoted
- async-patterns.md: High usage, evergreen

### Merged
- context-managers.md + async-context.md: Redundant

### Archived
- python-3.8-features.md: Outdated

Maintain Vitality Scores

Keep data/indexes/vitality-scores.yaml current:

entries:
  async-patterns:
    vitality: evergreen
    last_accessed: 2025-01-15
  python-3.8-features:
    vitality: probationary
    last_accessed: 2024-06-01

Troubleshooting

Too many intake flags

Solution: Raise intake threshold

# In config
intake_threshold: 0.7  # Higher = fewer flags

Missing domain alignment

Solution: Update domains of interest

# hooks/shared/config.py
domains_of_interest:
  - python
  - async
  - testing
  - architecture

Duplicate detection failing

Solution: Rebuild indexes

uv run python scripts/build_indexes.py

Verification

Confirm the workflow:

  1. Make a research query
  2. Check telemetry for decision
  3. View intake queue if flagged
  4. Process queue item
  5. Verify corpus update

# After processing
ls docs/knowledge-corpus/ | grep new-entry

Next Steps

Achievement Unlocked: Knowledge Curator

Capabilities Reference

Quick lookup table of all skills, commands, agents, and hooks in the Claude Night Market.

For full flag documentation and workflow examples: See Capabilities Reference Details.

Quick Reference Index

All Skills (Alphabetical)

| Skill | Plugin | Description |
|---|---|---|
| api-review | pensive | API surface evaluation |
| architecture-paradigm-client-server | archetypes | Client-server communication |
| architecture-paradigm-cqrs-es | archetypes | CQRS and Event Sourcing |
| architecture-paradigm-event-driven | archetypes | Asynchronous communication |
| architecture-paradigm-functional-core | archetypes | Functional Core, Imperative Shell |
| architecture-paradigm-hexagonal | archetypes | Ports & Adapters architecture |
| architecture-paradigm-layered | archetypes | Traditional N-tier architecture |
| architecture-paradigm-microkernel | archetypes | Plugin-based extensibility |
| architecture-paradigm-microservices | archetypes | Independent distributed services |
| architecture-paradigm-modular-monolith | archetypes | Single deployment with internal boundaries |
| architecture-paradigm-pipeline | archetypes | Pipes-and-filters model |
| architecture-paradigm-serverless | archetypes | Function-as-a-Service |
| architecture-paradigm-service-based | archetypes | Coarse-grained SOA |
| architecture-paradigm-space-based | archetypes | Data-grid architecture |
| architecture-paradigms | archetypes | Orchestrator for paradigm selection |
| agent-teams | conjure | Coordinate Claude Code Agent Teams through filesystem-based protocol |
| architecture-aware-init | attune | Architecture-aware project initialization with research |
| architecture-review | pensive | Architecture assessment |
| authentication-patterns | leyline | Auth flow patterns |
| bloat-detector | conserve | Detection algorithms for dead code, God classes, documentation duplication |
| browser-recording | scry | Playwright browser recordings |
| bug-review | pensive | Bug hunting |
| catchup | imbue | Context recovery |
| clear-context | conserve | Auto-clear workflow with session state persistence |
| code-quality-principles | conserve | Core principles for AI-assisted code quality |
| commit-messages | sanctum | Conventional commits |
| context-optimization | conserve | MECW principles and 50% context rule |
| cpu-gpu-performance | conserve | Resource monitoring and selective testing |
| decisive-action | conserve | Decisive action patterns for efficient workflows |
| delegation-core | conjure | Framework for delegation decisions |
| diff-analysis | imbue | Semantic changeset analysis |
| digital-garden-cultivator | memory-palace | Digital garden maintenance |
| doc-consolidation | sanctum | Document merging |
| doc-generator | scribe | Generate and remediate documentation |
| doc-updates | sanctum | Documentation maintenance |
| error-patterns | leyline | Standardized error handling |
| escalation-governance | abstract | Model escalation decisions |
| evaluation-framework | leyline | Decision thresholds |
| evidence-logging | imbue | Capture methodology |
| feature-review | imbue | Feature prioritization and gap analysis |
| file-analysis | sanctum | File structure analysis |
| do-issue | sanctum | GitHub issue resolution workflow |
| fpf-review | pensive | FPF architecture review (Functional/Practical/Foundation) |
| gemini-delegation | conjure | Gemini CLI integration |
| gif-generation | scry | GIF processing and optimization |
| git-platform | leyline | Cross-platform git forge detection and command mapping |
| git-workspace-review | sanctum | Repo state analysis |
| github-initiative-pulse | minister | Initiative progress tracking |
| hook-authoring | abstract | Security-first hook development |
| hooks-eval | abstract | Hook security scanning |
| knowledge-intake | memory-palace | Intake and curation |
| knowledge-locator | memory-palace | Spatial search |
| makefile-dogfooder | abstract | Makefile analysis and enhancement |
| makefile-generation | attune | Generate language-specific Makefiles |
| makefile-review | pensive | Makefile best practices |
| math-review | pensive | Mathematical correctness |
| mcp-code-execution | conserve | MCP patterns for data pipelines |
| methodology-curator | abstract | Surface expert frameworks for skill development |
| media-composition | scry | Multi-source media stitching |
| mission-orchestrator | attune | Unified lifecycle orchestrator for project development |
| mecw-patterns | leyline | MECW implementation |
| memory-palace-architect | memory-palace | Building virtual palaces |
| modular-skills | abstract | Modular design patterns |
| optimizing-large-skills | conserve | Large skill optimization |
| performance-optimization | abstract | Progressive loading, token budgeting, and context-aware content delivery |
| code-refinement | pensive | Duplication, algorithms, and clean code analysis |
| damage-control | leyline | Agent-level error recovery for multi-agent coordination |
| pr-prep | sanctum | PR preparation |
| pr-review | sanctum | PR review workflows |
| precommit-setup | attune | Set up pre-commit hooks |
| progressive-loading | leyline | Dynamic content loading |
| project-brainstorming | attune | Socratic ideation workflow |
| project-execution | attune | Systematic implementation |
| project-init | attune | Interactive project initialization |
| project-planning | attune | Architecture and task breakdown |
| project-specification | attune | Spec creation from brainstorm |
| proof-of-work | imbue | Evidence-based work validation |
| python-async | parseltongue | Async patterns |
| python-packaging | parseltongue | Packaging with uv |
| python-performance | parseltongue | Profiling and optimization |
| python-testing | parseltongue | Pytest/TDD workflows |
| pytest-config | leyline | Pytest configuration patterns |
| qwen-delegation | conjure | Qwen MCP integration |
| quota-management | leyline | Rate limiting and quotas |
| release-health-gates | minister | Release readiness checks |
| review-chamber | memory-palace | PR review knowledge capture and retrieval |
| response-compression | conserve | Response compression patterns |
| review-core | imbue | Scaffolding for detailed reviews |
| risk-classification | leyline | Inline 4-tier risk classification for agent tasks |
| rigorous-reasoning | imbue | Anti-sycophancy guardrails |
| rule-catalog | hookify | Pre-built behavioral rule templates |
| rust-review | pensive | Rust-specific checking |
| safety-critical-patterns | pensive | NASA Power of 10 rules for robust code |
| scope-guard | imbue | Anti-overengineering |
| service-registry | leyline | Service discovery patterns |
| session-management | sanctum | Session naming, checkpointing, and resume strategies |
| session-palace-builder | memory-palace | Session-specific palaces |
| shared-patterns | abstract | Reusable plugin development patterns |
| shell-review | pensive | Shell script auditing for safety and portability |
| slop-detector | scribe | Detect AI-generated content markers |
| smart-sourcing | conserve | Balance accuracy with token efficiency |
| skill-authoring | abstract | TDD methodology for skill creation |
| skills-eval | abstract | Skill quality assessment |
| spec-writing | spec-kit | Specification authoring |
| speckit-orchestrator | spec-kit | Workflow coordination |
| storage-templates | leyline | Storage abstraction patterns |
| style-learner | scribe | Extract writing style from exemplar text |
| structured-output | imbue | Formatting patterns |
| task-planning | spec-kit | Task generation |
| test-review | pensive | Test quality review |
| subagent-testing | abstract | Testing patterns for subagent interactions |
| test-updates | sanctum | Test maintenance |
| testing-quality-standards | leyline | Test quality guidelines |
| token-conservation | conserve | Token usage strategies |
| tutorial-updates | sanctum | Tutorial maintenance and updates |
| unified-review | pensive | Review orchestration |
| update-readme | sanctum | README modernization |
| usage-logging | leyline | Telemetry tracking |
| version-updates | sanctum | Version bumping |
| vhs-recording | scry | Terminal recordings with VHS |
| war-room | attune | Multi-LLM expert council with Type 1/2 reversibility routing |
| war-room-checkpoint | attune | Inline reversibility assessment for embedded escalation |
| workflow-improvement | sanctum | Workflow retrospectives |
| workflow-monitor | imbue | Workflow execution monitoring and issue creation |
| workflow-setup | attune | Configure CI/CD pipelines |
| writing-rules | hookify | Guide for authoring behavioral rules |

All Commands (Alphabetical)

| Command | Plugin | Description |
|---|---|---|
| /ai-hygiene-audit | conserve | Audit codebase for AI-generated code quality issues (vibe coding, Tab bloat, slop) |
| /aggregate-logs | abstract | Generate LEARNINGS.md from skill execution logs |
| /analyze-growth | conserve | Analyze skill growth patterns |
| /analyze-hook | abstract | Analyze hook for security/performance |
| /bloat-scan | conserve | Progressive bloat detection (3-tier scan) |
| /analyze-skill | abstract | Skill complexity analysis |
| /analyze-tests | parseltongue | Test suite health report |
| /api-review | pensive | API surface review |
| /attune:brainstorm | attune | Brainstorm project ideas using Socratic questioning |
| /attune:execute | attune | Execute implementation tasks systematically |
| /attune:init | attune | Initialize new project with development infrastructure |
| /attune:mission | attune | Run full project lifecycle as a single mission with state detection and recovery |
| /attune:blueprint | attune | Plan architecture and break down tasks |
| /attune:specify | attune | Create detailed specifications from brainstorm |
| /attune:upgrade-project | attune | Add or update configurations in existing project |
| /attune:validate | attune | Validate project structure against best practices |
| /attune:war-room | attune | Multi-LLM expert deliberation with reversibility-based routing |
| /architecture-review | pensive | Architecture assessment |
| /bug-review | pensive | Bug hunting review |
| /bulletproof-skill | abstract | Anti-rationalization workflow |
| /catchup | imbue | Quick context recovery |
| /check-async | parseltongue | Async pattern validation |
| /close-issue | minister | Analyze if GitHub issues can be closed based on commits |
| /commit-msg | sanctum | Generate commit message |
| /context-report | abstract | Context optimization report |
| /create-tag | sanctum | Create git tags for releases |
| /create-command | abstract | Scaffold new command |
| /create-hook | abstract | Scaffold new hook |
| /create-issue | minister | Create GitHub issue with labels and references |
| /create-skill | abstract | Scaffold new skill |
| /doc-generate | scribe | Generate new documentation |
| /doc-polish | scribe | Clean up AI-generated content |
| /doc-verify | scribe | Validate documentation claims with proof-of-work |
| /estimate-tokens | abstract | Token usage estimation |
| /evaluate-skill | abstract | Evaluate skill execution quality |
| /feature-review | imbue | Feature prioritization |
| /do-issue | sanctum | Fix GitHub issues |
| /fix-pr | sanctum | Address PR review comments |
| /fix-workflow | sanctum | Workflow retrospective with automatic improvement context gathering |
| /full-review | pensive | Unified code review |
| /garden | memory-palace | Manage digital gardens |
| /git-catchup | sanctum | Git repository catchup |
| /hookify | hookify | Create behavioral rules to prevent unwanted actions |
| /hookify:configure | hookify | Interactive rule enable/disable interface |
| /hookify:help | hookify | Display hookify help and documentation |
| /hookify:install | hookify | Install hookify rule from catalog |
| /hookify:list | hookify | List all hookify rules with status |
| /hooks-eval | abstract | Hook evaluation |
| /improve-skills | abstract | Auto-improve skills from observability data |
| /make-dogfood | abstract | Makefile enhancement |
| /makefile-review | pensive | Makefile review |
| /math-review | pensive | Mathematical review |
| /merge-docs | sanctum | Consolidate ephemeral docs |
| /navigate | memory-palace | Search palaces |
| /optimize-context | conserve | Context optimization |
| /palace | memory-palace | Manage palaces |
| /plugin-review | abstract | Comprehensive plugin architecture review |
| /pr | sanctum | Prepare pull request |
| /prepare-pr | sanctum | Complete PR preparation with updates and validation |
| /pr-review | sanctum | Enhanced PR review |
| /record-browser | scry | Record browser session |
| /record-terminal | scry | Create terminal recording |
| /reinstall-all-plugins | leyline | Refresh all plugins |
| /resolve-threads | sanctum | Resolve PR review threads |
| /review-room | memory-palace | Manage PR review knowledge in palaces |
| /run-profiler | parseltongue | Profile code execution |
| /rust-review | pensive | Rust-specific review |
| /shell-review | pensive | Shell script safety and portability review |
| /skill-history | pensive | View recent skill executions with context |
| /skill-logs | memory-palace | View and manage skill execution memories |
| /skill-review | pensive | Analyze skill metrics and stability gaps |
| /slop-scan | scribe | Scan files for AI slop markers |
| /skills-eval | abstract | Skill quality assessment |
| /speckit-analyze | spec-kit | Check artifact consistency |
| /speckit-checklist | spec-kit | Generate checklist |
| /speckit-clarify | spec-kit | Clarifying questions |
| /speckit-constitution | spec-kit | Project constitution |
| /speckit-implement | spec-kit | Execute tasks |
| /speckit-plan | spec-kit | Generate plan |
| /speckit-specify | spec-kit | Create specification |
| /speckit-startup | spec-kit | Bootstrap workflow |
| /speckit-tasks | spec-kit | Generate tasks |
| /structured-review | imbue | Structured review workflow |
| /style-learn | scribe | Create style profile from examples |
| /test-review | pensive | Test quality review |
| /test-skill | abstract | Skill testing workflow |
| /unbloat | conserve | Safe bloat remediation with interactive approval |
| /update-all-plugins | leyline | Update all plugins |
| /update-dependencies | sanctum | Update project dependencies |
| /update-docs | sanctum | Update documentation |
| /update-labels | minister | Reorganize GitHub issue labels with professional taxonomy |
| /update-plugins | sanctum | Audit plugin registrations with automatic performance analysis and improvement recommendations |
| /update-readme | sanctum | Modernize README |
| /update-tests | sanctum | Maintain tests |
| /update-tutorial | sanctum | Update tutorial content |
| /update-version | sanctum | Bump versions |
| /validate-hook | abstract | Validate hook compliance |
| /validate-plugin | abstract | Check plugin structure |

All Agents (Alphabetical)

| Agent | Plugin | Description |
|---|---|---|
| ai-hygiene-auditor | conserve | Audit codebases for AI-generation warning signs |
| architecture-reviewer | pensive | Principal-level architecture review |
| bloat-auditor | conserve | Orchestrates bloat detection scans |
| code-reviewer | pensive | Expert code review |
| commit-agent | sanctum | Commit message generator |
| context-optimizer | conserve | Context optimization |
| continuation-agent | conserve | Continue work from session state checkpoint |
| doc-editor | scribe | Interactive documentation editing |
| doc-verifier | scribe | QA validation using proof-of-work methodology |
| dependency-updater | sanctum | Dependency version management |
| garden-curator | memory-palace | Digital garden maintenance |
| git-workspace-agent | sanctum | Repository state analyzer |
| implementation-executor | spec-kit | Task executor |
| knowledge-librarian | memory-palace | Knowledge routing |
| knowledge-navigator | memory-palace | Palace search |
| media-recorder | scry | Autonomous media generation for demos and GIFs |
| meta-architect | abstract | Plugin ecosystem design |
| palace-architect | memory-palace | Palace design |
| plugin-validator | abstract | Plugin validation |
| pr-agent | sanctum | PR preparation |
| project-architect | attune | Guides full-cycle workflow (brainstorm → plan) |
| project-implementer | attune | Executes implementation with TDD |
| python-linter | parseltongue | Strict ruff linting without bypasses |
| python-optimizer | parseltongue | Performance optimization |
| python-pro | parseltongue | Python 3.12+ expertise |
| python-tester | parseltongue | Testing expertise |
| review-analyst | imbue | Structured reviews |
| rust-auditor | pensive | Rust security audit |
| skill-auditor | abstract | Skill quality audit |
| skill-evaluator | abstract | Skill execution evaluator |
| skill-improver | abstract | Implements skill improvements from observability |
| slop-hunter | scribe | Comprehensive AI slop detection |
| spec-analyzer | spec-kit | Spec consistency |
| task-generator | spec-kit | Task creation |
| unbloat-remediator | conserve | Executes safe bloat remediation |
| workflow-improvement-* | sanctum | Workflow improvement pipeline |
| workflow-recreate-agent | sanctum | Workflow reconstruction |

All Hooks (Alphabetical)

| Hook | Plugin | Type | Description |
|---|---|---|---|
| bridge.after_tool_use | conjure | PostToolUse | Suggests delegation for large output |
| bridge.on_tool_start | conjure | PreToolUse | Suggests delegation for large input |
| context_warning.py | conserve | PreToolUse | Context utilization monitoring |
| detect-git-platform.sh | leyline | SessionStart | Detect git forge platform from remote URL |
| local_doc_processor.py | memory-palace | PostToolUse | Processes local docs |
| permission_request.py | conserve | PermissionRequest | Permission automation |
| post-evaluation.json | abstract | Config | Quality scoring config |
| post_implementation_policy.py | sanctum | SessionStart | Requires docs/tests updates |
| pre-skill-load.json | abstract | Config | Pre-load validation |
| homeostatic_monitor.py | abstract | PostToolUse | Stability gap monitoring, queues degrading skills for improvement |
| pre_skill_execution.py | abstract | PreToolUse | Skill execution tracking |
| research_interceptor.py | memory-palace | PreToolUse | Cache lookup before web |
| security_pattern_check.py | sanctum | PreToolUse | Security anti-pattern detection |
| session_complete_notify.py | sanctum | Stop | Cross-platform toast notifications |
| session-start.sh | conserve/imbue | SessionStart | Session initialization |
| skill_execution_logger.py | abstract | PostToolUse | Skill metrics logging |
| skill_tracker_pre.py | memory-palace | PreToolUse | Skill execution start tracking |
| skill_tracker_post.py | memory-palace | PostToolUse | Skill execution completion |
| tdd_bdd_gate.py | imbue | PreToolUse | Iron Law enforcement at write-time |
| url_detector.py | memory-palace | UserPromptSubmit | URL detection |
| user-prompt-submit.sh | imbue | UserPromptSubmit | Scope validation |
| verify_workflow_complete.py | sanctum | Stop | End-of-session workflow verification |
| web_content_processor.py | memory-palace | PostToolUse | Web content processing |

Command Reference — Core Plugins

Flag and option documentation for core plugin commands (abstract, attune, conserve, imbue, sanctum).

Extended plugins: Memory Palace, Pensive, Parseltongue, Spec-Kit, Scribe, Scry, Hookify, Leyline

See also: Capabilities Reference | Skills | Agents | Hooks | Workflows


Command Syntax

/<plugin>:<command-name> [--flags] [positional-args]

Common Flag Patterns:

| Flag Pattern | Description | Example |
|---|---|---|
| --verbose | Enable detailed output | /bloat-scan --verbose |
| --dry-run | Preview without executing | /unbloat --dry-run |
| --force | Skip confirmation prompts | /attune:init --force |
| --report FILE | Output to file | /bloat-scan --report audit.md |
| --level N | Set intensity/depth | /bloat-scan --level 3 |
| --skip-X | Skip specific phase | /prepare-pr --skip-updates |

Abstract Plugin

/abstract:validate-plugin

Validate plugin structure against ecosystem conventions.

# Usage
/abstract:validate-plugin [plugin-name] [--strict] [--fix]

# Options
--strict       Fail on warnings (not just errors)
--fix          Auto-fix correctable issues
--report FILE  Output validation report

# Examples
/abstract:validate-plugin sanctum
/abstract:validate-plugin --strict conserve
/abstract:validate-plugin memory-palace --fix

/abstract:create-skill

Scaffold a new skill with proper frontmatter and structure.

# Usage
/abstract:create-skill <plugin>:<skill-name> [--template basic|modular] [--category]

# Options
--template     Skill template type (basic or modular with modules/)
--category     Skill category for classification
--interactive  Guided creation flow

# Examples
/abstract:create-skill pensive:shell-review --template modular
/abstract:create-skill imbue:new-methodology --category workflow-methodology

/abstract:create-command

Scaffold a new command with hooks and documentation.

# Usage
/abstract:create-command <plugin>:<command-name> [--hooks] [--extends]

# Options
--hooks        Include lifecycle hook templates
--extends      Base command or skill to extend
--aliases      Comma-separated command aliases

# Examples
/abstract:create-command sanctum:new-workflow --hooks
/abstract:create-command conserve:deep-clean --extends "conserve:bloat-scan"

/abstract:create-hook

Scaffold a new hook with security-first patterns.

# Usage
/abstract:create-hook <plugin>:<hook-name> [--type] [--lang]

# Options
--type     Hook event type (PreToolUse|PostToolUse|SessionStart|Stop|UserPromptSubmit)
--lang     Implementation language (bash|python)
--matcher  Tool matcher pattern

# Examples
/abstract:create-hook memory-palace:cache-check --type PreToolUse --lang python
/abstract:create-hook sanctum:commit-validator --type PreToolUse --matcher "Bash"

/abstract:analyze-skill

Analyze skill complexity and optimization opportunities.

# Usage
/abstract:analyze-skill <plugin>:<skill-name> [--metrics] [--suggest]

# Options
--metrics    Show detailed token/complexity metrics
--suggest    Generate optimization suggestions
--compare    Compare against skill baselines

# Examples
/abstract:analyze-skill imbue:proof-of-work --metrics
/abstract:analyze-skill sanctum:pr-prep --suggest

/abstract:make-dogfood

Update Makefile demonstration targets to reflect current features.

# Usage
/abstract:make-dogfood [--check] [--update]

# Options
--check     Verify Makefile is current (exit 1 if stale)
--update    Apply updates to Makefile
--dry-run   Show what would change

# Examples
/abstract:make-dogfood --check
/abstract:make-dogfood --update

/abstract:skills-eval

Evaluate skill quality across the ecosystem.

# Usage
/abstract:skills-eval [--plugin PLUGIN] [--threshold SCORE]

# Options
--plugin     Limit to specific plugin
--threshold  Minimum quality score (default: 70)
--output     Output format (table|json|markdown)

# Examples
/abstract:skills-eval --plugin sanctum
/abstract:skills-eval --threshold 80 --output markdown

/abstract:hooks-eval

Evaluate hook security and performance.

# Usage
/abstract:hooks-eval [--plugin PLUGIN] [--security]

# Options
--plugin    Limit to specific plugin
--security  Focus on security patterns
--perf      Focus on performance impact

# Examples
/abstract:hooks-eval --security
/abstract:hooks-eval --plugin memory-palace --perf

/abstract:evaluate-skill

Evaluate skill execution quality.

# Usage
/abstract:evaluate-skill <plugin>:<skill-name> [--metrics] [--suggestions]

# Options
--metrics      Show detailed execution metrics
--suggestions  Generate improvement suggestions
--compare      Compare against baseline metrics

# Examples
/abstract:evaluate-skill imbue:proof-of-work --metrics
/abstract:evaluate-skill sanctum:pr-prep --suggestions

Attune Plugin

/attune:init

Initialize project with complete development infrastructure.

# Usage
/attune:init [--lang LANGUAGE] [--name NAME] [--author AUTHOR]

# Options
--lang LANGUAGE         Project language: python|rust|typescript|go
--name NAME             Project name (default: directory name)
--author AUTHOR         Author name
--email EMAIL           Author email
--python-version VER    Python version (default: 3.10)
--description TEXT      Project description
--path PATH             Project path (default: .)
--force                 Overwrite existing files without prompting
--no-git                Skip git initialization

# Examples
/attune:init --lang python --name my-cli
/attune:init --lang rust --author "Your Name" --force

/attune:brainstorm

Brainstorm project ideas using Socratic questioning.

# Usage
/attune:brainstorm [TOPIC] [--output FILE]

# Options
--output FILE    Save brainstorm results to file
--rounds N       Number of question rounds (default: 5)
--focus AREA     Focus area: features|architecture|ux|technical

# Examples
/attune:brainstorm "CLI tool for data processing"
/attune:brainstorm --focus architecture --rounds 3

/attune:blueprint

Plan architecture and break down tasks.

# Usage
/attune:blueprint [--from BRAINSTORM] [--output FILE]

# Options
--from FILE      Use brainstorm results as input
--output FILE    Save plan to file
--depth LEVEL    Planning depth: high|detailed|exhaustive
--include        Include specific aspects: tests|ci|docs

# Examples
/attune:blueprint --from brainstorm.md --depth detailed
/attune:blueprint --include tests,ci

/attune:specify

Create detailed specifications from brainstorm or plan.

# Usage
/attune:specify [--from FILE] [--type TYPE]

# Options
--from FILE    Input file (brainstorm or plan)
--type TYPE    Spec type: technical|functional|api|data-model
--output DIR   Output directory for specs

# Examples
/attune:specify --from plan.md --type technical
/attune:specify --type api --output .specify/

/attune:execute

Execute implementation tasks systematically.

# Usage
/attune:execute [--plan FILE] [--phase PHASE] [--task ID]

# Options
--plan FILE     Task plan file (default: .specify/tasks.md)
--phase PHASE   Execute specific phase: setup|tests|core|integration|polish
--task ID       Execute specific task by ID
--parallel      Enable parallel execution where marked [P]
--continue      Resume from last checkpoint

# Examples
/attune:execute --plan tasks.md --phase setup
/attune:execute --task T1.2 --parallel

/attune:validate

Validate project structure against best practices.

# Usage
/attune:validate [--strict] [--fix]

# Options
--strict    Fail on warnings
--fix       Auto-fix correctable issues
--config    Path to custom validation config

# Examples
/attune:validate --strict
/attune:validate --fix

/attune:upgrade-project

Add or update configurations in existing project.

# Usage
/attune:upgrade-project [--component COMPONENT] [--force]

# Options
--component    Specific component: makefile|precommit|workflows|gitignore
--force        Overwrite existing without prompting
--diff         Show diff before applying

# Examples
/attune:upgrade-project --component makefile
/attune:upgrade-project --component workflows --force

Conserve Plugin

/conserve:bloat-scan

Progressive bloat detection for dead code and duplication.

# Usage
/bloat-scan [--level 1|2|3] [--focus TYPE] [--report FILE] [--dry-run]

# Options
--level 1|2|3      Scan tier: 1=quick, 2=targeted, 3=deep audit
--focus TYPE       Focus area: code|docs|deps|all (default: all)
--report FILE      Save report to file
--dry-run          Preview findings without taking action
--exclude PATTERN  Additional exclude patterns

# Scan Tiers
# Tier 1 (2-5 min): Large files, stale files, commented code, old TODOs
# Tier 2 (10-20 min): Dead code, duplicate patterns, import bloat
# Tier 3 (30-60 min): All above + cyclomatic complexity, dependency graphs

# Examples
/bloat-scan                           # Quick Tier 1 scan
/bloat-scan --level 2 --focus code    # Targeted code analysis
/bloat-scan --level 3 --report Q1-audit.md  # Deep audit with report

/conserve:unbloat

Safe bloat remediation with interactive approval.

# Usage
/unbloat [--approve LEVEL] [--dry-run] [--backup]

# Options
--approve LEVEL    Auto-approve level: high|medium|low|all
--dry-run          Show what would be removed
--backup           Create backup branch before changes
--interactive      Prompt for each item (default)

# Examples
/unbloat --dry-run                    # Preview all removals
/unbloat --approve high --backup      # Auto-approve high priority, backup first
/unbloat --interactive                # Approve each item manually

/conserve:optimize-context

Optimize context window usage.

# Usage
/optimize-context [--target PERCENT] [--scope PATH]

# Options
--target PERCENT   Target context utilization (default: 50%)
--scope PATH       Limit to specific directory
--suggest          Only show suggestions, don't apply
--aggressive       Apply all optimizations

# Examples
/optimize-context --target 40%
/optimize-context --scope plugins/sanctum/ --suggest

/conserve:analyze-growth

Analyze skill growth patterns.

# Usage
/analyze-growth [--plugin PLUGIN] [--days N] [--trend]

# Options
--plugin PLUGIN    Limit to specific plugin
--days N           Analysis period (default: 30)
--trend            Show growth trend predictions
--alert            Alert if growth exceeds threshold

# Examples
/analyze-growth --plugin conserve --days 60
/analyze-growth --trend --alert

Imbue Plugin

/imbue:catchup

Quick context recovery after session restart.

# Usage
/catchup [--depth LEVEL] [--focus AREA]

# Options
--depth LEVEL    Recovery depth: shallow|standard|deep (default: standard)
--focus AREA     Focus on: git|docs|issues|all
--since DATE     Catch up from specific date

# Examples
/catchup                           # Standard recovery
/catchup --depth deep              # Full context recovery
/catchup --focus git --since "3 days ago"

/imbue:feature-review

Feature prioritization and gap analysis.

# Usage
/feature-review [--scope BRANCH] [--against BASELINE]

# Options
--scope BRANCH     Review specific branch
--against BASELINE Compare against baseline (main|tag|commit)
--gaps             Focus on gap analysis
--priorities       Generate priority rankings

# Examples
/feature-review --scope feature/new-api
/feature-review --gaps --against main

/imbue:structured-review

Structured review workflow with methodology options.

# Usage
/structured-review PATH [--methodology METHOD]

# Options
--methodology METHOD    Review methodology: evidence-based|checklist|formal
--todos                 Generate TodoWrite items
--summary              Include executive summary

# Examples
/structured-review plugins/sanctum/ --methodology evidence-based
/structured-review . --todos --summary

Sanctum Plugin

/sanctum:prepare-pr (alias: /pr)

Complete PR preparation workflow.

# Usage
/prepare-pr [--no-code-review] [--reviewer-scope SCOPE] [--skip-updates] [FILE]
/pr [options...]  # Alias

# Options
--no-code-review           Skip automated code review (faster)
--reviewer-scope SCOPE     Review strictness: strict|standard|lenient
--skip-updates             Skip documentation/test updates (Phase 0)
FILE                       Output file for PR description (default: pr_description.md)

# Reviewer Scope Levels
# strict   - All suggestions must be addressed
# standard - Critical issues must be fixed, suggestions are recommendations
# lenient  - Focus on blocking issues only

# Examples
/prepare-pr                                    # Full workflow
/pr                                            # Alias for full workflow
/prepare-pr --skip-updates                     # Skip Phase 0 updates
/prepare-pr --no-code-review                   # Skip code review
/prepare-pr --reviewer-scope strict            # Strict review for critical changes
/prepare-pr --skip-updates --no-code-review    # Fastest (legacy behavior)

/sanctum:commit-msg

Generate commit message.

# Usage
/commit-msg [--type TYPE] [--scope SCOPE]

# Options
--type TYPE      Force commit type: feat|fix|docs|refactor|test|chore
--scope SCOPE    Force commit scope
--breaking       Include breaking change footer
--issue N        Reference issue number

# Examples
/commit-msg
/commit-msg --type feat --scope api
/commit-msg --breaking --issue 42

/sanctum:do-issue

Fix GitHub issues.

# Usage
/do-issue ISSUE_NUMBER [--branch NAME]

# Options
--branch NAME    Branch name (default: issue-N)
--auto-merge     Attempt auto-merge after PR
--draft          Create draft PR

# Examples
/do-issue 42
/do-issue 123 --branch fix/auth-bug
/do-issue 99 --draft

/sanctum:fix-pr

Address PR review comments.

# Usage
/fix-pr [PR_NUMBER] [--auto-resolve]

# Options
PR_NUMBER        PR number (default: current branch's PR)
--auto-resolve   Auto-resolve addressed comments
--batch          Address all comments in batch
--interactive    Address one comment at a time

# Examples
/fix-pr 42
/fix-pr --auto-resolve
/fix-pr 42 --batch

/sanctum:fix-workflow

Workflow retrospective with automatic improvement context.

# Usage
/fix-workflow [WORKFLOW_NAME] [--context]

# Options
WORKFLOW_NAME    Specific workflow to analyze
--context        Gather improvement context automatically
--lessons        Generate lessons learned
--improvements   Suggest workflow improvements

# Examples
/fix-workflow pr-review --context
/fix-workflow --lessons --improvements

/sanctum:pr-review

Enhanced PR review.

# Usage
/pr-review [PR_NUMBER] [--thorough]

# Options
PR_NUMBER    PR to review (default: current)
--thorough   Deep review with all checks
--quick      Fast review of critical issues only
--security   Security-focused review

# Examples
/pr-review 42
/pr-review --thorough
/pr-review --quick --security

/sanctum:update-docs

Update project documentation.

# Usage
/update-docs [--scope SCOPE] [--check]

# Options
--scope SCOPE    Scope: all|api|readme|guides
--check          Check only, don't modify
--sync           Sync with code changes

# Examples
/update-docs
/update-docs --scope api
/update-docs --check

/sanctum:update-readme

Modernize README.

# Usage
/update-readme [--badges] [--toc]

# Options
--badges    Update/add badges
--toc       Update table of contents
--examples  Update code examples
--full      Full README refresh

# Examples
/update-readme
/update-readme --badges --toc
/update-readme --full

/sanctum:update-tests

Maintain tests.

# Usage
/update-tests [PATH] [--coverage]

# Options
PATH            Test path to update
--coverage      Ensure coverage targets
--missing       Add missing tests
--modernize     Update to modern patterns

# Examples
/update-tests tests/
/update-tests --missing --coverage

/sanctum:update-version

Bump versions.

# Usage
/update-version [VERSION] [--type TYPE]

# Options
VERSION        Explicit version (e.g., 1.2.3)
--type TYPE    Bump type: major|minor|patch|prerelease
--tag          Create git tag
--push         Push tag to remote

# Examples
/update-version 2.0.0
/update-version --type minor --tag
/update-version --type patch --tag --push

/sanctum:update-dependencies

Update project dependencies.

# Usage
/update-dependencies [--type TYPE] [--dry-run]

# Options
--type TYPE    Dependency type: all|prod|dev|security
--dry-run      Preview updates without applying
--major        Include major version updates
--security     Security updates only

# Examples
/update-dependencies
/update-dependencies --dry-run
/update-dependencies --type security
/update-dependencies --major

/sanctum:git-catchup

Git repository catchup.

# Usage
/git-catchup [--since DATE] [--author AUTHOR]

# Options
--since DATE      Start date for catchup
--author AUTHOR   Filter by author
--branch BRANCH   Specific branch
--format FORMAT   Output format: summary|detailed|log

# Examples
/git-catchup --since "1 week ago"
/git-catchup --author "user@example.com"

/sanctum:create-tag

Create git tags for releases.

# Usage
/create-tag VERSION [--message MSG] [--sign]

# Options
VERSION        Tag version (e.g., v1.0.0)
--message MSG  Tag message
--sign         Create signed tag
--push         Push tag to remote

# Examples
/create-tag v1.0.0
/create-tag v1.0.0 --message "Release 1.0.0" --sign --push

Extended plugins: Memory Palace, Pensive, Parseltongue, Spec-Kit, Scribe, Scry, Hookify, Leyline

See also: Skills | Agents | Hooks | Workflows

Command Reference — Extended Plugins

Flag and option documentation for extended plugin commands (memory-palace, parseltongue, pensive, spec-kit, scribe, scry, hookify, leyline).

Core plugins: Abstract, Attune, Conserve, Imbue, Sanctum

See also: Capabilities Reference | Skills | Agents | Hooks | Workflows


Memory Palace Plugin

/memory-palace:garden

Manage digital gardens.

# Usage
/garden [ACTION] [--path PATH]

# Actions
tend           Review and update garden entries
prune          Remove stale/low-value entries
cultivate      Add new entries from queue
status         Show garden health metrics

# Options
--path PATH    Garden path (default: docs/knowledge-corpus/)
--dry-run      Preview changes
--score N      Minimum score threshold for cultivation

# Examples
/garden tend                    # Review garden entries
/garden prune --dry-run         # Preview what would be removed
/garden cultivate --score 70    # Add high-quality entries
/garden status                  # Show health metrics

/memory-palace:navigate

Search across knowledge palaces.

# Usage
/navigate QUERY [--scope SCOPE] [--type TYPE]

# Options
--scope SCOPE    Search scope: local|corpus|all
--type TYPE      Content type: docs|code|web|all
--limit N        Maximum results (default: 10)
--relevance N    Minimum relevance score

# Examples
/navigate "authentication patterns" --scope corpus
/navigate "pytest fixtures" --type docs --limit 5

/memory-palace:palace

Manage knowledge palaces.

# Usage
/palace [ACTION] [PALACE_NAME]

# Actions
create NAME    Create new palace
list           List all palaces
status NAME    Show palace status
archive NAME   Archive palace

# Options
--template TEMPLATE    Palace template: session|project|topic
--from FILE           Initialize from existing content

# Examples
/palace create project-x --template project
/palace list
/palace status project-x
/palace archive old-project

/memory-palace:review-room

Review items in the knowledge queue.

# Usage
/review-room [--status STATUS] [--source SOURCE]

# Options
--status STATUS    Filter by status: pending|approved|rejected
--source SOURCE    Filter by source: webfetch|websearch|manual
--batch N          Review N items at once
--auto-score       Auto-generate scores

# Examples
/review-room --status pending --batch 10
/review-room --source webfetch --auto-score

Parseltongue Plugin

/parseltongue:analyze-tests

Test suite health report.

# Usage
/analyze-tests [PATH] [--coverage] [--flaky]

# Options
--coverage    Include coverage analysis
--flaky       Detect potentially flaky tests
--slow N      Flag tests slower than N seconds
--missing     Find untested code

# Examples
/analyze-tests tests/ --coverage
/analyze-tests --flaky --slow 5
/analyze-tests src/api/ --missing

/parseltongue:run-profiler

Profile code execution.

# Usage
/run-profiler [COMMAND] [--type TYPE]

# Options
--type TYPE    Profiler type: cpu|memory|line|call
--output FILE  Output file for profile data
--flame        Generate flame graph
--top N        Show top N hotspots

# Examples
/run-profiler "python main.py" --type cpu
/run-profiler "pytest tests/" --type memory --flame
/run-profiler --type line --top 20

/parseltongue:check-async

Async pattern validation.

# Usage
/check-async [PATH] [--strict]

# Options
--strict      Strict async compliance
--suggest     Suggest async improvements
--blocking    Find blocking calls in async code

# Examples
/check-async src/ --strict
/check-async --blocking --suggest

Pensive Plugin

/pensive:full-review

Unified code review.

# Usage
/full-review [PATH] [--scope SCOPE] [--output FILE]

# Options
--scope SCOPE    Review scope: changed|staged|all
--output FILE    Save review to file
--severity MIN   Minimum severity: critical|high|medium|low
--categories     Include categories: bugs|security|style|perf

# Examples
/full-review src/ --scope staged
/full-review --scope changed --severity high
/full-review . --output review.md --categories bugs,security

/pensive:code-review

Expert code review.

# Usage
/code-review [FILES...] [--focus FOCUS]

# Options
--focus FOCUS    Focus area: bugs|api|tests|security|style
--evidence       Include evidence logging
--lsp            Enable LSP-enhanced review (requires ENABLE_LSP_TOOL=1)

# Examples
/code-review src/api.py --focus bugs
/code-review --focus security --evidence
ENABLE_LSP_TOOL=1 /code-review src/ --lsp

/pensive:architecture-review

Architecture assessment.

# Usage
/architecture-review [PATH] [--depth DEPTH]

# Options
--depth DEPTH    Analysis depth: surface|standard|deep
--patterns       Identify architecture patterns
--anti-patterns  Flag anti-patterns
--suggestions    Generate improvement suggestions

# Examples
/architecture-review src/ --depth deep
/architecture-review --patterns --anti-patterns

/pensive:rust-review

Rust-specific review.

# Usage
/rust-review [PATH] [--safety]

# Options
--safety     Focus on unsafe code analysis
--lifetimes  Analyze lifetime patterns
--memory     Memory safety review
--perf       Performance-focused review

# Examples
/rust-review src/lib.rs --safety
/rust-review --lifetimes --memory

/pensive:test-review

Test quality review.

# Usage
/test-review [PATH] [--coverage]

# Options
--coverage     Include coverage analysis
--patterns     Review test patterns (AAA, BDD)
--flaky        Detect flaky test patterns
--gaps         Find testing gaps

# Examples
/test-review tests/ --coverage
/test-review --patterns --gaps

/pensive:shell-review

Shell script safety and portability review.

# Usage
/shell-review [FILES...] [--strict]

# Options
--strict       Strict POSIX compliance
--security     Security-focused review
--portability  Check cross-shell compatibility

# Examples
/shell-review scripts/*.sh --strict
/shell-review --security install.sh

/pensive:skill-review

Analyze skill runtime metrics and stability. This is the canonical command for skill performance analysis (execution counts, success rates, stability gaps).

For static quality analysis (frontmatter, structure), use abstract:skill-auditor.

# Usage
/skill-review [--plugin PLUGIN] [--recommendations]

# Options
--plugin PLUGIN      Limit to specific plugin
--all-plugins        Aggregate metrics across all plugins
--unstable-only      Only show skills with stability_gap > 0.3
--skill NAME         Deep-dive specific skill
--recommendations    Generate improvement recommendations

# Examples
/skill-review --plugin sanctum
/skill-review --unstable-only
/skill-review --skill imbue:proof-of-work
/skill-review --all-plugins --recommendations

Spec-Kit Plugin

/speckit-startup

Bootstrap specification workflow.

# Usage
/speckit-startup [--dir DIR]

# Options
--dir DIR    Specification directory (default: .specify/)
--template   Use template structure
--minimal    Minimal specification setup

# Examples
/speckit-startup
/speckit-startup --dir specs/
/speckit-startup --minimal

/speckit-clarify

Generate clarifying questions.

# Usage
/speckit-clarify [TOPIC] [--rounds N]

# Options
TOPIC        Topic to clarify
--rounds N   Number of question rounds
--depth      Deep clarification
--technical  Technical focus

# Examples
/speckit-clarify "user authentication"
/speckit-clarify --rounds 3 --technical

/speckit-specify

Create specification.

# Usage
/speckit-specify [--from FILE] [--output DIR]

# Options
--from FILE    Input source (brainstorm, requirements)
--output DIR   Output directory
--type TYPE    Spec type: full|api|data|ui

# Examples
/speckit-specify --from requirements.md
/speckit-specify --type api --output .specify/

/speckit-plan

Generate implementation plan.

# Usage
/speckit-plan [--from SPEC] [--phases]

# Options
--from SPEC    Source specification
--phases       Include phase breakdown
--estimates    Include time estimates
--dependencies Show task dependencies

# Examples
/speckit-plan --from .specify/spec.md
/speckit-plan --phases --estimates

/speckit-tasks

Generate task breakdown.

# Usage
/speckit-tasks [--from PLAN] [--parallel]

# Options
--from PLAN      Source plan
--parallel       Mark parallelizable tasks
--granularity    Task granularity: coarse|medium|fine
--assignable     Make tasks assignable

# Examples
/speckit-tasks --from .specify/plan.md
/speckit-tasks --parallel --granularity fine

/speckit-implement

Execute implementation plan.

# Usage
/speckit-implement [--phase PHASE] [--task ID] [--continue]

# Options
--phase PHASE   Execute specific phase
--task ID       Execute specific task
--continue      Resume from checkpoint
--parallel      Enable parallel execution

# Examples
/speckit-implement --phase setup
/speckit-implement --task T1.2
/speckit-implement --continue

/speckit-checklist

Generate implementation checklist.

# Usage
/speckit-checklist [--type TYPE] [--output FILE]

# Options
--type TYPE    Checklist type: ux|test|security|deployment
--output FILE  Output file
--interactive  Interactive completion mode

# Examples
/speckit-checklist --type security
/speckit-checklist --type ux --output checklists/ux.md

/speckit-analyze

Check artifact consistency.

# Usage
/speckit-analyze [--strict] [--fix]

# Options
--strict    Strict consistency checking
--fix       Auto-fix inconsistencies
--report    Generate consistency report

# Examples
/speckit-analyze
/speckit-analyze --strict --report

Scribe Plugin

/slop-scan

Scan files for AI-generated content markers.

# Usage
/slop-scan [PATH] [--fix] [--report FILE]

# Options
PATH          File or directory to scan (default: current directory)
--fix         Show fix suggestions
--report FILE Output to report file

# Examples
/slop-scan
/slop-scan docs/
/slop-scan README.md --fix
/slop-scan **/*.md --report slop-report.md

/style-learn

Create style profile from examples.

# Usage
/style-learn [FILES] --name NAME

# Options
FILES         Example files to learn from
--name NAME   Profile name
--merge       Merge with existing profile

# Examples
/style-learn good-examples/*.md --name house-style
/style-learn docs/api.md --name api-docs --merge

/doc-polish

Clean up AI-generated content.

# Usage
/doc-polish [FILES] [--style NAME] [--dry-run]

# Options
FILES         Files to polish
--style NAME  Apply learned style
--dry-run     Preview changes without writing

# Examples
/doc-polish README.md
/doc-polish docs/*.md --style house-style
/doc-polish **/*.md --dry-run

/doc-generate

Generate new documentation.

# Usage
/doc-generate TYPE [--style NAME] [--output FILE]

# Options
TYPE          Document type: readme|api|changelog|usage
--style NAME  Apply learned style
--output FILE Output file path

# Examples
/doc-generate readme
/doc-generate api --style api-docs
/doc-generate changelog --output CHANGELOG.md

/doc-verify

Validate documentation claims with proof-of-work.

# Usage
/doc-verify [FILES] [--strict] [--report FILE]

# Options
FILES         Files to verify
--strict      Treat warnings as errors
--report FILE Output QA report

# Examples
/doc-verify README.md
/doc-verify docs/ --strict
/doc-verify **/*.md --report qa-report.md

Scry Plugin

/scry:record-terminal

Create terminal recording.

# Usage
/record-terminal [COMMAND] [--output FILE] [--format FORMAT]

# Options
COMMAND         Command to record
--output FILE   Output file (default: recording.gif)
--format FORMAT Output format: gif|svg|mp4|tape
--width N       Terminal width
--height N      Terminal height
--speed N       Playback speed multiplier

# Examples
/record-terminal "make test" --output demo.gif
/record-terminal --format svg --width 80 --height 24

/scry:record-browser

Record browser session.

# Usage
/record-browser [URL] [--output FILE] [--actions FILE]

# Options
URL             Starting URL
--output FILE   Output file
--actions FILE  Playwright actions script
--headless      Run headless
--viewport WxH  Viewport size

# Examples
/record-browser "http://localhost:3000" --output demo.mp4
/record-browser --actions test-flow.js --headless

Hookify Plugin

/hookify:install

Install hooks.

# Usage
/hookify:install [HOOK_NAME] [--plugin PLUGIN]

# Options
HOOK_NAME       Specific hook to install
--plugin PLUGIN Install hooks from plugin
--all           Install all available hooks
--dry-run       Preview installation

# Examples
/hookify:install memory-palace-web-processor
/hookify:install --plugin conserve
/hookify:install --all --dry-run

/hookify:configure

Configure hook settings.

# Usage
/hookify:configure [HOOK_NAME] [--enable|--disable] [--set KEY=VALUE]

# Options
HOOK_NAME         Hook to configure
--enable          Enable hook
--disable         Disable hook
--set KEY=VALUE   Set configuration value
--reset           Reset to defaults

# Examples
/hookify:configure memory-palace --set research_mode=cache_first
/hookify:configure context-warning --disable

/hookify:list

List installed hooks.

# Usage
/hookify:list [--plugin PLUGIN] [--status]

# Options
--plugin PLUGIN  Filter by plugin
--status         Show enabled/disabled status
--verbose        Show full configuration

# Examples
/hookify:list
/hookify:list --plugin memory-palace --status

Leyline Plugin

/leyline:reinstall-all-plugins

Refresh all plugins.

# Usage
/reinstall-all-plugins [--force] [--clean]

# Options
--force    Force reinstall even if up-to-date
--clean    Clean install (remove then reinstall)
--verify   Verify installation after reinstall

# Examples
/reinstall-all-plugins
/reinstall-all-plugins --clean --verify

/leyline:update-all-plugins

Update all plugins.

# Usage
/update-all-plugins [--check] [--exclude PLUGINS]

# Options
--check           Check for updates only
--exclude PLUGINS Comma-separated plugins to skip
--major           Include major version updates

# Examples
/update-all-plugins
/update-all-plugins --check
/update-all-plugins --exclude "experimental,beta"

Core plugins: Abstract, Attune, Conserve, Imbue, Sanctum

See also: Skills | Agents | Hooks | Workflows

Superpowers Integration

How Claude Night Market plugins integrate with the superpowers skill library.

Overview

Many Night Market capabilities reach their full potential when used alongside superpowers. All plugins work standalone, but superpowers provides the foundational methodology skills that enhance their workflows.

Installation

# Add the superpowers marketplace
/plugin marketplace add obra/superpowers

# Install the superpowers plugin
/plugin install superpowers@superpowers-marketplace

Dependency Matrix

| Plugin | Component | Type | Superpowers Dependency | Enhancement |
|--------|-----------|------|------------------------|-------------|
| abstract | /create-skill | Command | brainstorming | Socratic questioning |
| abstract | /create-command | Command | brainstorming | Concept development |
| abstract | /create-hook | Command | brainstorming | Security design |
| abstract | /test-skill | Command | test-driven-development | TDD methodology |
| sanctum | /pr | Command | receiving-code-review | PR validation |
| sanctum | /pr-review | Command | receiving-code-review | PR analysis |
| sanctum | /fix-pr | Command | receiving-code-review | Comment resolution |
| sanctum | /do-issue | Command | Multiple | Full workflow |
| spec-kit | /speckit-clarify | Command | brainstorming | Clarification |
| spec-kit | /speckit-plan | Command | writing-plans | Planning |
| spec-kit | /speckit-tasks | Command | executing-plans, systematic-debugging | Task breakdown |
| spec-kit | /speckit-implement | Command | executing-plans, systematic-debugging | Execution |
| spec-kit | /speckit-analyze | Command | systematic-debugging, verification-before-completion | Consistency |
| spec-kit | /speckit-checklist | Command | verification-before-completion | Validation |
| pensive | /full-review | Command | systematic-debugging, verification-before-completion | Debugging + evidence |
| parseltongue | python-testing | Skill | test-driven-development, testing-anti-patterns | TDD + anti-patterns |
| imbue | scope-guard | Skill | brainstorming, writing-plans, executing-plans | Anti-overengineering |
| imbue | /feature-review | Command | brainstorming | Feature prioritization |
| conservation | /optimize-context | Command | condition-based-waiting | Smart waiting |
| minister | issue-management | Skill | systematic-debugging | Bug investigation |

Superpowers Skills Referenced

| Skill | Purpose | Used By |
|-------|---------|---------|
| brainstorming | Socratic questioning for idea refinement | abstract, spec-kit, imbue |
| test-driven-development | RED-GREEN-REFACTOR TDD cycle | abstract, sanctum, parseltongue |
| receiving-code-review | Technical rigor for evaluating suggestions | sanctum |
| requesting-code-review | Quality gates for code submission | sanctum |
| writing-plans | Structured implementation planning | spec-kit, imbue |
| executing-plans | Task execution with checkpoints | spec-kit |
| systematic-debugging | Four-phase debugging framework | spec-kit, pensive, minister |
| verification-before-completion | Evidence-based review standards | spec-kit, pensive, imbue |
| testing-anti-patterns | Common testing mistake prevention | parseltongue |
| condition-based-waiting | Smart polling/waiting strategies | conservation |
| subagent-driven-development | Autonomous subagent orchestration | sanctum |
| finishing-a-development-branch | Branch cleanup and finalization | sanctum |

Graceful Degradation

All Night Market plugins work without superpowers:

Without Superpowers

  • Commands: Execute core functionality
  • Skills: Provide standalone guidance
  • Agents: Function with reduced automation

With Superpowers

  • Commands: Enhanced methodology phases
  • Skills: Integrated methodology patterns
  • Agents: Full automation depth

Example: /do-issue Workflow

Without Superpowers

1. Parse issue
2. Analyze codebase
3. Implement fix
4. Create PR

With Superpowers

1. Parse issue
2. [subagent-driven-development] Plan subagent tasks
3. [writing-plans] Create structured plan
4. [test-driven-development] Write failing test
5. Implement fix
6. [requesting-code-review] Self-review
7. [finishing-a-development-branch] Cleanup
8. Create PR

For the full Night Market experience:

# 1. Add marketplaces
/plugin marketplace add obra/superpowers
/plugin marketplace add athola/claude-night-market

# 2. Install superpowers (foundational)
/plugin install superpowers@superpowers-marketplace

# 3. Install Night Market plugins
/plugin install sanctum@claude-night-market
/plugin install spec-kit@claude-night-market
/plugin install pensive@claude-night-market

Checking Integration

Verify superpowers is available:

/plugin list
# Should show superpowers@superpowers-marketplace

Commands will automatically detect and use superpowers when available.

Function Extraction Guidelines

Last Updated: 2025-12-06

Overview

This document provides standards and guidelines for function extraction and refactoring in the Claude Night Market plugin ecosystem. Following these guidelines produces maintainable, testable, and readable code.

Principles

1. Single Responsibility Principle (SRP)

A function should have only one reason to change.

2. Keep Functions Small

  • Ideal: 10-20 lines of code
  • Acceptable: 20-30 lines with clear logic
  • Maximum: 50 lines with strong justification
  • Never exceed 100 lines without splitting

3. Limited Parameters

  • Ideal: 0-3 parameters
  • Acceptable: 4-5 parameters with clear types
  • Consider object parameter if 6+ parameters

4. Clear Naming

  • Functions should be verbs that describe their action
  • Use consistent naming conventions across the codebase
  • Avoid abbreviations unless widely understood
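
For example (all names here are hypothetical, chosen only to illustrate the convention):

# GOOD - verb phrases that state what the function does
def fetch_user_profile(user_id):
    ...

def validate_email_address(address):
    ...

# BAD - vague nouns and opaque abbreviations
def data(x):
    ...

def proc_usr(u):
    ...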

When to Extract Functions

Immediate Extraction Required

  1. Function exceeds 30 lines

    # BAD - Too long
    def process_large_content(content):
        lines = content.split('\n')
        filtered_lines = []
        for line in lines:
            if line.strip():
                if not line.startswith('#'):
                    if len(line) < 100:
                        filtered_lines.append(line.strip())
        # ... 20 more lines
    
  2. Function has multiple responsibilities

    # BAD - Multiple responsibilities
    def analyze_and_optimize(content):
        # Analysis part
        complexity = calculate_complexity(content)
        quality = assess_quality(content)
    
        # Optimization part
        optimized = remove_redundancy(content)
        optimized = shorten_sentences(optimized)
        return optimized, complexity, quality
    
  3. Nested function depth exceeds 3 levels

    # BAD - Too nested
    def process_data(data):
        if data:
            for item in data:
                if item.valid:
                    for subitem in item.children:
                        if subitem.active:
                            # Deep nesting - extract this
                            process_subitem(subitem)
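
A guard clause plus one extracted helper is usually enough to flatten the nested example above; a minimal sketch (the helper name is illustrative):

# BETTER - guard clause and extracted helper keep nesting shallow
def process_data(data):
    if not data:
        return
    for item in data:
        if item.valid:
            _process_active_children(item)

def _process_active_children(item):
    """Process each active child of a valid item."""
    for subitem in item.children:
        if subitem.active:
            process_subitem(subitem)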
    

Consider Extraction

  1. Function has 4+ parameters

    # CONSIDER - Many parameters
    def create_report(title, content, author, date, format, include_header, include_footer):
        pass
    
    # BETTER - Use configuration object
    @dataclass
    class ReportConfig:
        title: str
        content: str
        author: str
        date: datetime
        format: str = "pdf"
        include_header: bool = True
        include_footer: bool = True
    
    def create_report(config: ReportConfig):
        pass
    
  2. Complex conditional logic

    # CONSIDER - Complex conditions
    def calculate_rate(user, product, time, location, special_offer):
        if user.premium and product.category in ["electronics", "books"]:
            if time.hour < 12 and location.country == "US":
                if special_offer and not user.used_recently:
                    return 0.9
        # ... more conditions
    
    # BETTER - Extract condition checks
    def _is_eligible_for_discount(user, product, time, location, special_offer):
        return (user.premium and
                product.category in ["electronics", "books"] and
                time.hour < 12 and
                location.country == "US" and
                special_offer and
                not user.used_recently)
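
With the predicate extracted, the caller reads as policy rather than mechanics. A sketch of the call site; the 1.0 base rate and the single branch are assumptions, since the original elides the remaining conditions:

def calculate_rate(user, product, time, location, special_offer):
    if _is_eligible_for_discount(user, product, time, location, special_offer):
        return 0.9
    return 1.0  # assumed base rate; other branches elided in the original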
    

Extraction Patterns

1. Extract Method Pattern

Before:

def generate_report(data):
    # Validate data
    if not data:
        raise ValueError("Data cannot be empty")
    if not all(isinstance(item, dict) for item in data):
        raise TypeError("All items must be dictionaries")

    # Process data
    processed = []
    for item in data:
        processed_item = {
            'id': item.get('id'),
            'name': item.get('name', '').title(),
            'value': float(item.get('value', 0))
        }
        processed.append(processed_item)

    # Calculate totals
    total = sum(item['value'] for item in processed)
    average = total / len(processed) if processed else 0

    return {
        'items': processed,
        'summary': {
            'total': total,
            'average': average,
            'count': len(processed)
        }
    }

After:

def generate_report(data):
    """Generate a report from data items."""
    _validate_data(data)
    processed_items = _process_data_items(data)
    summary = _calculate_summary(processed_items)

    return {
        'items': processed_items,
        'summary': summary
    }

def _validate_data(data):
    """Validate input data."""
    if not data:
        raise ValueError("Data cannot be empty")
    if not all(isinstance(item, dict) for item in data):
        raise TypeError("All items must be dictionaries")

def _process_data_items(data):
    """Process individual data items."""
    return [
        {
            'id': item.get('id'),
            'name': item.get('name', '').title(),
            'value': float(item.get('value', 0))
        }
        for item in data
    ]

def _calculate_summary(items):
    """Calculate summary statistics."""
    total = sum(item['value'] for item in items)
    return {
        'total': total,
        'average': total / len(items) if items else 0,
        'count': len(items)
    }

2. Strategy Pattern for Complex Logic

Before:

def optimize_content(content, strategy_type):
    if strategy_type == "aggressive":
        # Remove all emphasis
        lines = content.split('\n')
        cleaned = []
        for line in lines:
            if not line.strip().startswith('**'):
                cleaned.append(line)
        return '\n'.join(cleaned)
    elif strategy_type == "moderate":
        # Shorten code blocks
        # ... 20 lines of logic
    elif strategy_type == "gentle":
        # Only remove images
        # ... 20 lines of logic

After:

from abc import ABC, abstractmethod

class OptimizationStrategy(ABC):
    """Base class for content optimization strategies."""

    @abstractmethod
    def optimize(self, content: str) -> str:
        """Optimize content according to strategy."""
        pass

class AggressiveOptimizationStrategy(OptimizationStrategy):
    """Aggressive content optimization."""

    def optimize(self, content: str) -> str:
        lines = content.split('\n')
        cleaned = [
            line for line in lines
            if not line.strip().startswith('**')
        ]
        return '\n'.join(cleaned)

class ModerateOptimizationStrategy(OptimizationStrategy):
    """Moderate content optimization."""

    def optimize(self, content: str) -> str:
        # Implementation for moderate optimization
        pass

class GentleOptimizationStrategy(OptimizationStrategy):
    """Gentle content optimization."""

    def optimize(self, content: str) -> str:
        # Implementation for gentle optimization
        pass

# Strategy registry
OPTIMIZATION_STRATEGIES = {
    "aggressive": AggressiveOptimizationStrategy(),
    "moderate": ModerateOptimizationStrategy(),
    "gentle": GentleOptimizationStrategy()
}

def optimize_content(content: str, strategy_type: str) -> str:
    """Optimize content using specified strategy."""
    if strategy_type not in OPTIMIZATION_STRATEGIES:
        raise ValueError(f"Unknown strategy: {strategy_type}")

    strategy = OPTIMIZATION_STRATEGIES[strategy_type]
    return strategy.optimize(content)
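
Call sites stay a single function call, and adding a new strategy means registering one class instead of extending a conditional chain:

content = "**Bold heading**\nNormal text"
optimize_content(content, "aggressive")   # -> "Normal text"
optimize_content(content, "nonexistent")  # raises ValueError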

3. Builder Pattern for Complex Construction

Before:

def create_complex_object(name, type, config, options, metadata):
    obj = ComplexObject()
    obj.name = name
    obj.type = type

    # Complex configuration
    if config.get('enabled', True):
        obj.enabled = True
        obj.timeout = config.get('timeout', 30)
        obj.retries = config.get('retries', 3)

    # Options processing
    for key, value in options.items():
        if key.startswith('custom_'):
            obj.custom_fields[key[7:]] = value
        else:
            setattr(obj, key, value)

    # Metadata handling
    obj.created_at = metadata.get('created_at', datetime.now())
    obj.created_by = metadata.get('created_by', 'system')

    return obj

After:

from datetime import datetime
from typing import Any, Dict

class ComplexObjectBuilder:
    """Builder for ComplexObject instances."""

    def __init__(self):
        self._object = ComplexObject()

    def with_name(self, name: str) -> 'ComplexObjectBuilder':
        self._object.name = name
        return self

    def with_type(self, obj_type: str) -> 'ComplexObjectBuilder':
        self._object.type = obj_type
        return self

    def with_config(self, config: Dict[str, Any]) -> 'ComplexObjectBuilder':
        self._object.enabled = config.get('enabled', True)
        self._object.timeout = config.get('timeout', 30)
        self._object.retries = config.get('retries', 3)
        return self

    def with_options(self, options: Dict[str, Any]) -> 'ComplexObjectBuilder':
        for key, value in options.items():
            if key.startswith('custom_'):
                self._object.custom_fields[key[7:]] = value
            else:
                setattr(self._object, key, value)
        return self

    def with_metadata(self, metadata: Dict[str, Any]) -> 'ComplexObjectBuilder':
        self._object.created_at = metadata.get('created_at', datetime.now())
        self._object.created_by = metadata.get('created_by', 'system')
        return self

    def build(self) -> ComplexObject:
        return self._object

# Usage
def create_complex_object(name, type, config, options, metadata):
    return (ComplexObjectBuilder()
            .with_name(name)
            .with_type(type)
            .with_config(config)
            .with_options(options)
            .with_metadata(metadata)
            .build())

Testing Extracted Functions

1. Unit Test Each Extracted Function

import pytest

# Test for _validate_data
def test_validate_data_valid():
    data = [{'id': 1, 'name': 'test'}]
    # Should not raise
    _validate_data(data)

def test_validate_data_empty():
    with pytest.raises(ValueError, match="Data cannot be empty"):
        _validate_data([])

def test_validate_data_invalid_type():
    with pytest.raises(TypeError, match="All items must be dictionaries"):
        _validate_data([{'id': 1}, "invalid"])

2. Test Strategy Implementations

def test_aggressive_optimization():
    content = "**Bold text**\nNormal text\n**More bold**"
    strategy = AggressiveOptimizationStrategy()
    result = strategy.optimize(content)
    assert "Normal text" in result
    assert "**" not in result

3. Integration Tests

def test_generate_report_integration():
    data = [
        {'id': 1, 'name': 'test item', 'value': 100},
        {'id': 2, 'name': 'another item', 'value': 200}
    ]
    report = generate_report(data)

    assert report['summary']['total'] == 300
    assert report['summary']['average'] == 150
    assert len(report['items']) == 2

Code Review Checklist

When reviewing code for function extraction:

Function Size

  • Function is under 30 lines
  • If over 30 lines, there’s a clear justification
  • No function exceeds 100 lines

Responsibilities

  • Function has a single, clear purpose
  • Function name describes its purpose accurately
  • Function doesn’t mix abstraction levels

Parameters

  • Function has 0-5 parameters
  • Parameters are well-typed
  • Related parameters are grouped into objects

Complexity

  • Cyclomatic complexity is under 10
  • Nesting depth is under 4 levels
  • No deeply nested ternary operators

Testability

  • Function can be tested independently
  • Function has no hidden dependencies
  • Side effects are clearly documented

Documentation

  • Function has a clear docstring
  • Parameters are documented
  • Return value is documented
  • Exceptions are documented

Refactoring Workflow

1. Identify Refactoring Candidates

# Find the longest files (a rough proxy for long functions)
find . -name "*.py" -exec wc -l {} \; | sort -n | tail -20

# Find complex functions (manual code review)
# Look for functions with:
# - Multiple return statements
# - Deep nesting
# - Many parameters
# - Mixed responsibilities

2. Create Tests First

# Write characterization tests that pin down current behavior,
# so refactoring can be verified not to change it
def test_existing_behavior():
    report = generate_report([{'id': 1, 'name': 'test', 'value': 100}])
    assert report['summary']['total'] == 100

3. Extract Incrementally

  1. Extract small, private helper functions
  2. Run tests after each extraction
  3. Gradually extract larger functions
  4. Keep the public API stable

4. Optimize Imports and Dependencies

  • Remove unused imports
  • Group related imports
  • Consider circular dependency issues

5. Update Documentation

  • Update function docstrings
  • Update API documentation
  • Add examples for complex functions

Tools and Automation

1. Complexity Analysis

# Using radon (complexity analyzer)
pip install radon
radon cc your_file.py -a

# Using flake8 (complexity checks come from the bundled mccabe plugin)
pip install flake8
flake8 --max-complexity 10 your_file.py
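
radon also exposes a Python API, which is useful for CI gates; a minimal sketch using radon.complexity.cc_visit, flagging anything above the checklist's threshold of 10:

from radon.complexity import cc_visit

with open("your_file.py") as f:
    source = f.read()

# cc_visit parses the source and returns one block per function,
# method, or class, each with a cyclomatic complexity score
for block in cc_visit(source):
    if block.complexity > 10:
        print(f"{block.name} (line {block.lineno}): complexity {block.complexity}")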

2. Automated Refactoring Tools

# Using rope (refactoring library)
pip install rope
# Note: rope ships as a library, not a standalone CLI; drive it through
# editor integrations or programmatically (see the rope sketch below)

# Using black for formatting (maintains consistency)
pip install black
black your_file.py
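
For programmatic use, an extract-method refactoring with rope looks roughly like the following. This is a sketch based on rope's documented API; the file name, offsets, and new function name are placeholders:

from rope.base.project import Project
from rope.refactor.extract import ExtractMethod

project = Project(".")
resource = project.get_resource("your_file.py")

# Hypothetical character offsets delimiting the code region to extract
start_offset, end_offset = 120, 240

extractor = ExtractMethod(project, resource, start_offset, end_offset)
changes = extractor.get_changes("extracted_helper")  # name for the new function
project.do(changes)
project.close()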

3. Pre-commit Hooks

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 4.0.1
    hooks:
      - id: flake8
        args: [--max-complexity=10, --max-line-length=100]

  - repo: https://github.com/psf/black
    rev: 22.3.0
    hooks:
      - id: black
        language_version: python3
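
With this config committed, install the hooks once and they run automatically on each commit; a one-off run over the whole repository is also possible:

pip install pre-commit
pre-commit install            # registers the git hook for this repo
pre-commit run --all-files    # run all configured hooks once, repo-wide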

Examples from the Codebase

Before: GrowthController.generate_control_strategies()

The original function was 60+ lines and handled multiple responsibilities.

After Refactoring:

def generate_control_strategies(self, growth_rate: float) -> StrategyPlan:
    """Generate detailed control strategies for growth management."""
    strategies = self._select_control_strategies(growth_rate)
    monitoring = self._define_monitoring_needs(strategies)
    implementation = self._plan_implementation(strategies, monitoring)

    return StrategyPlan(strategies, monitoring, implementation)

def _select_control_strategies(self, growth_rate: float) -> List[Strategy]:
    """Select appropriate control strategies based on growth rate."""
    # Extracted strategy selection logic

def _define_monitoring_needs(self, strategies: List[Strategy]) -> MonitoringPlan:
    """Define monitoring requirements for selected strategies."""
    # Extracted monitoring logic

def _plan_implementation(self, strategies: List[Strategy],
                        monitoring: MonitoringPlan) -> ImplementationPlan:
    """Plan implementation steps for strategies and monitoring."""
    # Extracted implementation planning

This refactoring:

  • Reduced main function to 5 lines
  • Created three focused helper functions
  • Made each function independently testable
  • Improved readability and maintainability

Conclusion

Following these function extraction guidelines will:

  1. Improve Maintainability: Smaller, focused functions are easier to understand and modify
  2. Enhance Testability: Each function can be tested in isolation
  3. Increase Reusability: Extracted functions can be reused in different contexts
  4. Reduce Bugs: Simpler functions have fewer edge cases and are easier to verify
  5. Improve Code Review: Smaller functions are easier to review and understand

Remember: The goal is not just to make functions smaller, but to make the code more readable, maintainable, and testable.

Achievement System

Track your learning progress through the Claude Night Market documentation.

How It Works

As you explore the documentation, complete tutorials, and try plugins, you earn achievements. Progress is saved in your browser’s local storage.

Your Progress

0 / 15 achievements unlocked

Available Achievements

Getting Started

| Achievement | Description | Status |
|-------------|-------------|--------|
| Marketplace Pioneer | Add the Night Market marketplace | |
| Skill Apprentice | Use your first skill | |
| PR Pioneer | Prepare your first pull request | |

Documentation Explorer

| Achievement | Description | Status |
|-------------|-------------|--------|
| Plugin Explorer | Read all plugin documentation pages | |
| Domain Master | Use all domain specialist plugins | |

Tutorial Completion

| Achievement | Description | Status |
|-------------|-------------|--------|
| Cache Commander | Complete the Cache Modes tutorial | |
| Semantic Scholar | Complete the Embedding Upgrade tutorial | |
| Knowledge Curator | Complete the Curation tutorial | |
| Tutorial Master | Complete all tutorials | |

Plugin Mastery

| Achievement | Description | Status |
|-------------|-------------|--------|
| Foundation Builder | Install all foundation layer plugins | |
| Utility Expert | Install all utility layer plugins | |
| Full Stack | Install all plugins | |

Advanced

| Achievement | Description | Status |
|-------------|-------------|--------|
| Spec Master | Complete a full spec-kit workflow | |
| Review Expert | Complete a full pensive review | |
| Palace Architect | Build your first memory palace | |

Reset Progress

Resetting clears all saved achievement progress from your browser's local storage. Warning: This cannot be undone.

Achievement Tiers

| Tier | Achievements | Badge |
|------|--------------|-------|
| Bronze | 1-5 | Night Market Visitor |
| Silver | 6-10 | Night Market Regular |
| Gold | 11-14 | Night Market Expert |
| Platinum | 15 | Night Market Master |