Claude Night Market
Claude Night Market contains 16 plugins for Claude Code that automate git operations, code review, and specification-driven development. Each plugin operates independently, allowing you to install only the components required for your specific workflow.
Architecture
The ecosystem uses a layered architecture to manage dependencies and token usage.
- Domain Specialists: Plugins like pensive (code review) and minister (issue tracking) provide high-level task automation.
- Utility Layer: Provides resource management services, such as token conservation in conserve.
- Foundation Layer: Implements core mechanics used across the ecosystem, including permission handling in sanctum.
- Meta Layer: abstract provides tools for cross-plugin validation and enforcement of project standards.
Design Philosophy
The project prioritizes token efficiency through shallow dependency chains. Progressive loading ensures that plugin logic enters the system prompt only when a specific feature is active. We enforce a “specification-first” workflow, requiring a written design phase before code generation begins.
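Progressive loading can be sketched as lazy evaluation: keep cheap metadata resident and read the full skill body only when the skill is invoked. A minimal illustration (an assumption about the idea, not Claude Code's actual loader):

```python
from pathlib import Path

class LazySkill:
    """Illustrative sketch: metadata is always available, the body loads on demand."""

    def __init__(self, path):
        self.path = Path(path)
        self.name = self.path.parent.name  # cheap metadata, always resident
        self._body = None

    def body(self):
        # The full instructions enter memory only on first use.
        if self._body is None:
            self._body = self.path.read_text()
        return self._body
```

The point is that the expensive part (the skill body) never costs tokens until a feature actually needs it.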
Claude Code Integration
Plugins require Claude Code 2.1.0 or later to use features like:
- Hot-reloading: Skills update immediately upon file modification.
- Context Forking: Risky operations run in isolated context windows.
- Lifecycle Hooks: Frontmatter hooks execute logic at specific execution points.
- Wildcard Permissions: Pre-approved tool access reduces manual confirmation prompts.
Integration with Superpowers
These plugins integrate with the superpowers marketplace. While Night Market handles high-level process and workflow orchestration, superpowers provides the underlying methodology for TDD, debugging, and execution analysis.
Quick Start
# 1. Add the marketplace
/plugin marketplace add athola/claude-night-market
# 2. Install a plugin
/plugin install sanctum@claude-night-market
# 3. Use a command
/pr
# 4. Invoke a skill
Skill(sanctum:git-workspace-review)
Getting Started
This section will guide you through setting up Claude Night Market and using your first plugins.
Overview
This section covers:
- Installing the marketplace and plugins
- Invoking skills, commands, and agents
- Plugin dependency structure
Prerequisites
- Claude Code installed and configured.
- A terminal.
- Git (for version control workflows).
Quick Overview
The Claude Night Market provides three types of capabilities:
| Type | Description | How to Use |
|---|---|---|
| Skills | Reusable methodology guides | Skill(plugin:skill-name) |
| Commands | Quick actions with slash syntax | /command-name |
| Agents | Autonomous task executors | Referenced in skill workflows |
Sections
- Installation: Add the marketplace and install plugins
- Your First Plugin: Hands-on tutorial with sanctum
- Quick Start Guide: Common workflows and patterns
Achievement: Getting Started
Complete the installation steps to unlock the Marketplace Pioneer badge.
Installation
This guide walks you through adding the Claude Night Market to your Claude Code setup.
Prerequisites
- Claude Code 2.1.16+ (2.1.32+ for agent teams features)
- Python 3.9+: required for hook execution. macOS ships Python 3.9.6 as the system interpreter; hooks run under this rather than virtual environments. Plugin packages may target higher versions (3.10+, 3.12+) via uv.
Step 1: Add the Marketplace
Open Claude Code and run:
/plugin marketplace add athola/claude-night-market
This registers the marketplace, making all plugins available for installation.
Step 2: Browse Available Plugins
View the marketplace contents:
/plugin marketplace list
You’ll see plugins organized by layer:
| Layer | Plugins | Purpose |
|---|---|---|
| Meta | abstract | Plugin infrastructure |
| Foundation | imbue, sanctum, leyline | Core workflows |
| Utility | conserve, conjure | Resource optimization |
| Domain | archetypes, pensive, parseltongue, memory-palace, spec-kit, minister, attune | Specialized tasks |
Step 3: Install Individual Plugins
Install plugins based on your needs:
# Git and workspace operations
/plugin install sanctum@claude-night-market
# Specification-driven development
/plugin install spec-kit@claude-night-market
# Code review toolkit
/plugin install pensive@claude-night-market
# Python development
/plugin install parseltongue@claude-night-market
Step 4: Verify Installation
Check that plugins loaded correctly:
/plugin list
Installed plugins appear with their available skills and commands.
Optional: Install Superpowers
For enhanced methodology integration:
# Add superpowers marketplace
/plugin marketplace add obra/superpowers
# Install superpowers
/plugin install superpowers@superpowers-marketplace
Superpowers provides TDD, debugging, and review patterns that enhance Night Market plugins.
Recommended Plugin Sets
Minimal Setup
For basic git workflows:
/plugin install sanctum@claude-night-market
Development Setup
For active feature development:
/plugin install sanctum@claude-night-market
/plugin install imbue@claude-night-market
/plugin install spec-kit@claude-night-market
Full Setup
For detailed workflow coverage:
/plugin install abstract@claude-night-market
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market
/plugin install conserve@claude-night-market
/plugin install pensive@claude-night-market
/plugin install spec-kit@claude-night-market
Troubleshooting
Plugin not loading?
- Verify marketplace was added: /plugin marketplace list
- Check for typos in plugin name
- Restart Claude Code session
Conflicts between plugins?
Plugins are composable. If you experience issues:
- Check the plugin’s README for dependency requirements
- Validate foundation plugins (imbue, leyline) are installed if using domain plugins
Next Steps
Continue to Your First Plugin for a hands-on tutorial.
Your First Plugin: sanctum
This hands-on tutorial walks you through using the sanctum plugin for git and workspace operations.
What You’ll Build
By the end of this tutorial, you’ll:
- Review your git workspace state
- Generate a conventional commit message
- Prepare a pull request description
Prerequisites
- sanctum plugin installed: /plugin install sanctum@claude-night-market
- A git repository with some uncommitted changes
Part 1: Workspace Review
Before any git operation, understand your current state.
Invoke the Skill
Skill(sanctum:git-workspace-review)
This skill runs a preflight checklist:
- Current branch and remote tracking
- Staged vs unstaged changes
- Recent commit history
- Untracked files
What to Expect
Claude will analyze your repository and report:
Repository: my-project
Branch: feature/add-login
Tracking: origin/feature/add-login (up to date)
Staged Changes:
M src/auth/login.ts
A src/auth/types.ts
Unstaged Changes:
M README.md
Untracked:
src/auth/tests/login.test.ts
Part 2: Commit Message Generation
Now generate a conventional commit message for your staged changes.
Using the Command
/commit-msg
Or invoke the skills directly:
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)
Understanding the Output
Claude analyzes staged changes and generates:
feat(auth): add login form with validation
- Implement LoginForm component with email/password fields
- Add form validation using zod schema
- Create auth types for login request/response
Closes #42
The commit follows Conventional Commits format:
- Type: feat, fix, docs, style, refactor, test, chore
- Scope: Optional context (auth, api, ui)
- Description: Imperative mood, present tense
- Body: Bullet points explaining what changed
- Footer: Issue references
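The header format above can be checked mechanically. A minimal sketch (the regex and helper function are illustrative, not part of sanctum):

```python
import re

# Matches the Conventional Commits header described above:
# type(scope): description, where the scope is optional.
HEADER = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)"
    r"(\([a-z0-9-]+\))?: .+"
)

def valid_header(line):
    """Return True if the first commit line follows the convention."""
    return bool(HEADER.match(line))
```

For example, `valid_header("feat(auth): add login form with validation")` is `True`, while a free-form message like `"Added login form"` is rejected.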
Part 3: PR Preparation
Finally, prepare a pull request description.
Using the Command
/pr
This runs the full PR preparation workflow:
- Workspace review
- Quality gates check
- Change summarization
- PR description generation
Quality Gates
Before generating the PR, Claude checks:
Quality Gates:
[x] Code compiles
[x] Tests pass
[x] Linting clean
[x] No console.log statements
[x] Documentation updated
Generated PR Description
## Summary
Add user authentication with login form validation.
## Changes
- **New Feature**: Login form component with email/password validation
- **Types**: Auth request/response type definitions
- **Tests**: Unit tests for login validation logic
## Testing
- [x] Manual testing of form submission
- [x] Unit tests pass (15 new tests)
- [x] Integration tests pass
## Screenshots
[Add screenshots if UI changes]
## Checklist
- [x] Tests added
- [x] Documentation updated
- [x] No breaking changes
Workflow Chaining
These skills work together. The recommended flow:
git-workspace-review (foundation)
├── commit-messages (depends on workspace state)
├── pr-prep (depends on workspace state)
├── doc-updates (depends on workspace state)
└── version-updates (depends on workspace state)
Always run git-workspace-review first to establish context.
Common Patterns
Pre-Commit Workflow
# Stage your changes
git add -p
# Review and commit
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)
# Apply the message
git commit -m "<generated message>"
Pre-PR Workflow
# Run quality checks
make fmt && make lint && make test
# Prepare PR
/pr
# Create on GitHub
gh pr create --title "<title>" --body "<generated body>"
Next Steps
- Read the Quick Start Guide for more workflow patterns
- Explore other plugins in the Plugin Overview
- Check the Capabilities Reference for all available skills
Achievements Earned
- Skill Apprentice: Used your first skill
- PR Pioneer: Prepared your first PR
Quick Start Guide
Common workflows and patterns for Claude Night Market plugins.
Workflow Recipes
Feature Development
Start features with a specification:
# (Optional) Resume persistent speckit context for this repo/session
/speckit-startup
# Create specification from idea
/speckit-specify Add user authentication with OAuth2
# Generate implementation plan
/speckit-plan
# Create ordered tasks
/speckit-tasks
# Execute tasks
/speckit-implement
# Verify artifacts stay consistent
/speckit-analyze
Code Review
Run a detailed code review:
# Full review with intelligent skill selection
/full-review
# Or specific review types
/architecture-review # Architecture assessment
/api-review # API surface evaluation
/bug-review # Bug hunting
/test-review # Test quality
/rust-review # Rust-specific (if applicable)
Context Recovery
Get up to speed on changes:
# Quick catchup on recent changes
/catchup
# Or with sanctum's git-specific variant
/git-catchup
Context Optimization
Monitor and optimize context usage:
# Analyze context window usage
/optimize-context
# Check skill growth patterns
/analyze-growth
Skill Invocation Patterns
Basic Skill Usage
# Standard format
Skill(plugin:skill-name)
# Examples
Skill(sanctum:git-workspace-review)
Skill(imbue:diff-analysis)
Skill(conserve:context-optimization)
Skill Chaining
Some skills depend on others:
# Pensive depends on imbue and sanctum
Skill(sanctum:git-workspace-review)
Skill(imbue:review-core)
Skill(pensive:architecture-review)
Skill with Dependencies
Check a plugin’s README for dependency chains:
spec-kit depends on imbue
pensive depends on imbue + sanctum
sanctum depends on imbue (for some skills)
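Dependency chains like these imply an installation order. A small sketch using Python's standard-library graphlib (the dependency map is illustrative, built from the chains listed above):

```python
from graphlib import TopologicalSorter

# Illustrative dependency map: plugin -> set of plugins it depends on.
deps = {
    "spec-kit": {"imbue"},
    "pensive": {"imbue", "sanctum"},
    "sanctum": {"imbue"},
    "imbue": set(),
}

# static_order() yields dependencies before dependents,
# so foundation plugins come first.
order = list(TopologicalSorter(deps).static_order())
```

Installing in this order guarantees every plugin's dependencies are already present.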
Command Quick Reference
Git Operations (sanctum)
| Command | Purpose |
|---|---|
| /commit-msg | Generate commit message |
| /pr | Prepare pull request |
| /fix-pr | Address PR review comments |
| /do-issue | Fix GitHub issues |
| /update-docs | Update documentation |
| /update-readme | Modernize README |
| /update-tests | Maintain tests |
| /update-version | Bump versions |
Specification (spec-kit)
| Command | Purpose |
|---|---|
| /speckit-specify | Create specification |
| /speckit-plan | Generate plan |
| /speckit-tasks | Create tasks |
| /speckit-implement | Execute tasks |
| /speckit-analyze | Check consistency |
| /speckit-clarify | Ask clarifying questions |
Review (pensive)
| Command | Purpose |
|---|---|
| /full-review | Unified review |
| /architecture-review | Architecture check |
| /api-review | API surface review |
| /bug-review | Bug hunting |
| /test-review | Test quality |
Analysis (imbue)
| Command | Purpose |
|---|---|
| /catchup | Quick context recovery |
| /structured-review | Structured review with evidence |
| /feature-review | Feature prioritization |
Plugin Management (leyline)
| Command | Purpose |
|---|---|
| /reinstall-all-plugins | Refresh all plugins |
| /update-all-plugins | Update all plugins |
Environment Variables
Some plugins support configuration via environment variables:
Conservation
# Skip optimization guidance for fast processing
CONSERVATION_MODE=quick claude
# Full guidance with extended allowance
CONSERVATION_MODE=deep claude
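A plugin might read this variable as follows (hypothetical helper; the actual conserve plugin code may differ):

```python
import os

def conservation_mode(default="standard"):
    """Read CONSERVATION_MODE from the environment, falling back to a default.

    "quick" and "deep" are the documented values; "standard" is an
    assumed fallback for this sketch.
    """
    mode = os.environ.get("CONSERVATION_MODE", default)
    if mode not in {"quick", "deep", "standard"}:
        raise ValueError(f"Unknown CONSERVATION_MODE: {mode}")
    return mode
```

Unset variables fall back to the default, so the plugin behaves sensibly when launched without configuration.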
Memory Palace
# Set embedding provider
MEMORY_PALACE_EMBEDDINGS_PROVIDER=hash # or local
Tips
1. Start with Foundation
Install foundation plugins first:
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
Then add domain specialists as needed.
2. Use TodoWrite Integration
Most skills output TodoWrite items for tracking:
git-review:repo-confirmed
git-review:status-overview
pr-prep:quality-gates
Monitor these for workflow progress.
3. Chain Skills Intentionally
Don’t invoke all skills at once. Build understanding incrementally:
# First: understand state
Skill(sanctum:git-workspace-review)
# Then: perform action
Skill(sanctum:commit-messages)
4. Use Superpowers
If superpowers is installed, commands gain enhanced capabilities:
- /create-skill uses brainstorming
- /test-skill uses TDD methodology
- /pr uses code review patterns
Next Steps
- Explore individual plugins in the Plugins section
- Reference all capabilities in Capabilities Reference
Common Workflows Guide
When and how to use commands, skills, and subagents for typical development tasks.
Quick Reference
| Task | Primary Tool | Plugin |
|---|---|---|
| Initialize a project | /attune:arch-init | attune |
| Review a PR | /full-review | pensive |
| Fix PR feedback | /fix-pr | sanctum |
| Prepare a PR | /pr | sanctum |
| Catch up on changes | /catchup | imbue |
| Write specifications | /speckit-specify | spec-kit |
| Improve system | /speckit-analyze | spec-kit |
| Debug an issue | Skill(superpowers:debugging) | superpowers |
| Manage knowledge | /palace | memory-palace |
Initializing a New Project
When: Starting a new project from scratch or setting up a new codebase.
Step 1: Architecture-Aware Initialization
Start with an architecture-aware initialization to select the right project structure based on team size and domain complexity. This process guides you through project type selection, online research into best practices, and template customization.
# Interactive architecture selection with research
/attune:arch-init --name my-project
Output: Complete project structure with ARCHITECTURE.md, ADR, and paradigm-specific directories.
Step 2: Standard Initialization
If the architecture is decided, use standard initialization to generate language-specific boilerplate including Makefiles, CI/CD pipelines, and pre-commit hooks.
# Quick initialization when you know the architecture
/attune:init --lang python --name my-project
Step 3: Establish Persistent State
Establish a persistent state to manage artifacts and constraints across sessions. This maintains non-negotiable principles and supports consistent progress tracking.
# (Once) Define non-negotiable principles for the project
/speckit-constitution
# (Each Claude session) Load speckit context + progress tracking
/speckit-startup
Optional enhancements:
- Install spec-kit for spec-driven artifacts: /plugin install spec-kit@claude-night-market
- Install superpowers for rigorous methodology loops:
/plugin marketplace add obra/superpowers
/plugin install superpowers@superpowers-marketplace
Alternative: Brainstorming Workflow
For complex projects requiring exploration, begin by brainstorming the problem space and creating a detailed specification before planning the architecture and tasks.
# 1. Brainstorm the problem space
/attune:brainstorm --domain "my problem area"
# 2. Create detailed specification
/attune:specify
# 3. Plan architecture and tasks
/attune:blueprint
# 4. Initialize with chosen architecture
/attune:arch-init --name my-project
# 5. Execute implementation
/attune:execute
What You Get
| Artifact | Description |
|---|---|
| pyproject.toml / Cargo.toml / package.json | Build configuration |
| Makefile | Development targets (test, lint, format) |
| .pre-commit-config.yaml | Code quality hooks |
| .github/workflows/ | CI/CD pipelines |
| ARCHITECTURE.md | Architecture overview |
| docs/adr/ | Architecture decision records |
Reviewing a Pull Request
When: Reviewing code changes in a PR or before merging.
Full Multi-Discipline Review
# Full review with skill selection
/full-review
This orchestrates multiple specialized reviews:
- Architecture assessment
- API surface evaluation
- Bug hunting
- Test quality analysis
Specific Review Types
# Architecture-focused review
/architecture-review
# API surface evaluation
/api-review
# Bug hunting
/bug-review
# Test quality assessment
/test-review
# Rust-specific review (for Rust projects)
/rust-review
Using Skills Directly
For more control, invoke skills:
# First: understand the workspace state
Skill(sanctum:git-workspace-review)
# Then: run specific review
Skill(pensive:architecture-review)
Skill(pensive:api-review)
Skill(pensive:bug-review)
External PR Review
# Review a GitHub PR by URL
/pr-review https://github.com/org/repo/pull/123
# Or just the PR number in current repo
/pr-review 123
Fixing PR Feedback
When: Addressing review comments on your PR.
Quick Fix
# Address PR review comments
/fix-pr
# Or with specific PR reference
/fix-pr 123
This:
- Reads PR review comments
- Identifies actionable feedback
- Applies fixes systematically
- Prepares follow-up commit
Manual Workflow
# 1. Review the feedback
Skill(sanctum:git-workspace-review)
# 2. Apply fixes
# (make your changes)
# 3. Prepare commit message
/commit-msg
# 4. Update PR
git push
Preparing a Pull Request
When: Code is complete and ready for review.
Pre-PR Checklist
Run these commands before creating a PR:
# 1. Update documentation
/sanctum:update-docs
# 2. Update README if needed
/sanctum:update-readme
# 3. Review and update tests
/sanctum:update-tests
# 4. Update Makefile demo targets (for plugins)
/abstract:make-dogfood
# 5. Final quality check
make lint && make test
Create the PR
# Full PR preparation
/pr
# This handles:
# - Branch status check
# - Commit message quality
# - Documentation updates
# - PR description generation
Using Skills for PR Prep
# Review workspace before PR
Skill(sanctum:git-workspace-review)
# Generate quality commit message
Skill(sanctum:commit-messages)
# Check PR readiness
Skill(sanctum:pr-preparation)
Catching Up on Changes
When: Returning to a project after time away, or joining an ongoing project.
Quick Catchup
# Standard catchup on recent changes
/catchup
# Git-specific catchup
/git-catchup
Detailed Understanding
# 1. Review workspace state
Skill(sanctum:git-workspace-review)
# 2. Analyze recent diffs
Skill(imbue:diff-analysis)
# 3. Understand branch context
Skill(sanctum:branch-comparison)
Session Recovery
# Resume a previous Claude session
claude --resume
# Or continue with context
claude --continue
Writing Specifications
When: Planning a feature before implementation.
Spec-Driven Development Workflow
# 1. Create specification from idea
/speckit-specify Add user authentication with OAuth2
# 2. Generate implementation plan
/speckit-plan
# 3. Create ordered tasks
/speckit-tasks
# 4. Execute tasks with tracking
/speckit-implement
Persistent Presence Loop (World Model + Agent Model)
Treat SDD artifacts as a self-modeling architecture where the repo state serves as the world model and the loaded skills as the agent model. Experiments are run with small diffs and verified through rigorous loops (tests, linters, repro scripts), while model updates refine both the code artifacts and the orchestration methodology to optimize future loops.
Curriculum generation via /speckit-tasks keeps actions grounded and dependency-ordered, while the skill library and iterative refinement ensure the plan adapts to reality. The cycle moves from planning to action to reflection via /speckit-plan, /speckit-implement, and /speckit-analyze.
Background reading:
- MineDojo: https://minedojo.org/ (internet-scale knowledge + benchmarks)
- Voyager: https://voyager.minedojo.org/ (arXiv: https://arxiv.org/abs/2305.16291) (automatic curriculum + skill library)
- GTNH_Agent: https://github.com/sefiratech/GTNH_Agent (persistent, modular Minecraft automation)
Clarification and Analysis
# Ask clarifying questions about requirements
/speckit-clarify
# Analyze specification consistency
/speckit-analyze
Using Skills
# Invoke spec writing skill directly
Skill(spec-kit:spec-writing)
# Task planning skill
Skill(spec-kit:task-planning)
Meta-Development
When: Improving claude-night-market itself (skills, commands, templates, orchestration).
When improving the system itself, treat the repo as the world model and available tools as the agent model. Run experiments with minimal diffs behind verification, evaluate them with evidence-first methods like /speckit-analyze and Skill(superpowers:verification-before-completion), and update both the artifacts and the methodology so the next loop is cheaper.
Optional pattern: split roles (planner/critic/executor) for long-horizon work, similar to multi-role agent stacks used in open-ended Minecraft agents.
Useful tools:
# Use speckit to keep artifacts + principles explicit
/speckit-constitution
/speckit-analyze
# Use superpowers to enforce evidence
Skill(superpowers:systematic-debugging)
Skill(superpowers:verification-before-completion)
Debugging Issues
When: Investigating bugs or unexpected behavior.
With Superpowers Integration
# Systematic debugging methodology
Skill(superpowers:debugging)
# This provides:
# - Hypothesis formation
# - Evidence gathering
# - Root cause analysis
# - Fix validation
GitHub Issue Resolution
# Fix a GitHub issue
/do-issue 42
# Or with URL
/do-issue https://github.com/org/repo/issues/42
Analysis Tools
# Test analysis (parseltongue)
/analyze-tests
# Performance profiling
/run-profiler
# Context optimization
/optimize-context
Managing Knowledge
When: Capturing insights, decisions, or learnings.
Memory Palace
# Open knowledge management
/palace
# Access digital garden
/garden
Knowledge Capture
# Capture insight during work
Skill(memory-palace:knowledge-capture)
# Link related concepts
Skill(memory-palace:concept-linking)
Plugin Development
When: Creating or maintaining Night Market plugins.
Create a New Plugin
# Scaffold new plugin
make create-plugin NAME=my-plugin
# Or using attune for plugins
/attune:init --type plugin --name my-plugin
Validate Plugin Structure
# Check plugin structure
/abstract:validate-plugin
# Audit skill quality
/abstract:skill-audit
Update Plugin Documentation
# Update all documentation
/sanctum:update-docs
# Update Makefile demo targets
/abstract:make-dogfood
# Sync templates with reference projects
/attune:sync-templates
Testing
# Run plugin tests
make test
# Validate structure
make validate
# Full quality check
make lint && make test && make build
Context Management
When: Managing token usage or context window.
Monitor Usage
# Check context window usage
/context
# Analyze context optimization
/optimize-context
Reduce Context
# Clear context for fresh start
/clear
# Then catch up
/catchup
# Or scan for bloat
/bloat-scan
Optimization Skills
# Context optimization skill
Skill(conserve:context-optimization)
# Growth analysis
/analyze-growth
Subagent Delegation
When: Delegating specialized work to focused agents.
Available Subagents
| Subagent | Purpose | When to Use |
|---|---|---|
| abstract:plugin-validator | Validate plugin structure | Before publishing plugins |
| abstract:skill-auditor | Audit skill quality | During skill development |
| pensive:code-reviewer | Focused code review | Reviewing specific files |
| attune:project-architect | Architecture design | Planning new features |
| attune:project-implementer | Task execution | Systematic implementation |
Example: Code Review Delegation
# Delegate to specialized reviewer
Agent(pensive:code-reviewer) Review src/auth/ for security issues
Example: Plugin Validation
# Delegate validation to subagent
Agent(abstract:plugin-validator) Check plugins/my-plugin
End-to-End Example: New Feature
Here’s a complete workflow for adding a new feature:
# 1. PLANNING PHASE
/speckit-specify Add caching layer for API responses
/speckit-plan
/speckit-tasks
# 2. IMPLEMENTATION PHASE
# Create branch
git checkout -b feature/add-caching
# Implement with Iron Law TDD
Skill(imbue:proof-of-work) # Enforces: NO IMPLEMENTATION WITHOUT FAILING TEST FIRST
# Or with superpowers TDD
Skill(superpowers:tdd)
# Execute planned tasks
/speckit-implement
# 3. QUALITY PHASE
# Run reviews
/architecture-review
/test-review
# Fix any issues
# (make changes)
# 4. PR PREPARATION PHASE
/sanctum:update-docs
/sanctum:update-tests
make lint && make test
# 5. CREATE PR
/pr
Command vs Skill vs Agent
| Type | Syntax | When to Use |
|---|---|---|
| Command | /command-name | Quick actions, one-off tasks |
| Skill | Skill(plugin:skill-name) | Methodologies, detailed workflows |
| Agent | Agent(plugin:agent-name) | Delegated work, specialized focus |
Examples
# Command: Quick action
/pr
# Skill: Detailed methodology
Skill(sanctum:pr-preparation)
# Agent: Delegated specialized work
Agent(pensive:code-reviewer) Review authentication module
Skill Invocation: Secondary Strategy
The Skill tool is a Claude Code feature that may not be available in all environments. When the Skill tool is unavailable:
Secondary Pattern:
# 1. If Skill tool fails or is unavailable, read the skill file directly:
Read plugins/{plugin}/skills/{skill-name}/SKILL.md
# 2. Follow the skill content as instructions
# The skill file contains the complete methodology to execute
Example:
# Instead of: Skill(sanctum:commit-messages)
# Secondary: Read plugins/sanctum/skills/commit-messages/SKILL.md
# Then follow the instructions in that file
Skill file locations:
- Plugin skills: plugins/{plugin}/skills/{skill-name}/SKILL.md
- User skills: ~/.claude/skills/{skill-name}/SKILL.md
This allows workflows to function across different environments.
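The lookup order above can be expressed as a small resolver (hypothetical helper following the documented paths, not an official API):

```python
from pathlib import Path

def resolve_skill(plugin, skill, repo_root="."):
    """Return the first existing SKILL.md for a skill, or None.

    Checks the plugin-local path first, then the user-level path,
    mirroring the documented lookup order.
    """
    candidates = [
        Path(repo_root) / "plugins" / plugin / "skills" / skill / "SKILL.md",
        Path.home() / ".claude" / "skills" / skill / "SKILL.md",
    ]
    for path in candidates:
        if path.exists():
            return path
    return None
```

If the resolver returns a path, reading that file and following its instructions substitutes for the Skill tool.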
Claude Code 2.1.0 Features
New Capabilities
| Feature | Description | Usage |
|---|---|---|
| Skill Hot-Reload | Skills auto-reload without restart | Edit SKILL.md, immediately available |
| Plan Mode Shortcut | Enter plan mode directly | /plan |
| Forked Context | Run skills in isolated context | context: fork in frontmatter |
| Agent Field | Specify agent for skill execution | agent: agent-name in frontmatter |
| Frontmatter Hooks | Lifecycle hooks in skills/agents | hooks: section in frontmatter |
| Wildcard Permissions | Flexible Bash patterns | Bash(npm *), Bash(* install) |
| Skill Visibility | Control slash menu visibility | user-invocable: false |
Skill Development Workflow (Hot-Reload)
With Claude Code 2.1.0, skill development is faster:
# 1. Create/edit skill
vim ~/.claude/skills/my-skill/SKILL.md
# 2. Save changes (no restart needed!)
# 3. Skill is immediately available
Skill(my-skill)
# 4. Iterate rapidly
Using Forked Context
For isolated operations that shouldn’t pollute main context:
# In skill frontmatter
---
name: isolated-analysis
context: fork # Runs in separate context
---
Use cases:
- Heavy file analysis that would bloat context
- Experimental operations that might fail
- Parallel workflows
Frontmatter Hooks
Define hooks scoped to skill/agent/command lifecycle:
---
name: validated-workflow
hooks:
  PreToolUse:
    - matcher: "Bash"
      command: "./validate.sh"
      once: true  # Run only once per session
  PostToolUse:
    - matcher: "Write|Edit"
      command: "./format.sh"
  Stop:
    - command: "./cleanup.sh"
---
Permission Wildcards
New wildcard patterns for flexible permissions:
allowed-tools:
  - Bash(npm *)       # All npm commands
  - Bash(* install)   # Any install command
  - Bash(git * main)  # Git with main branch
Note (2.1.20+): Bash(*) is now treated as equivalent to plain Bash. Use scoped wildcards like Bash(npm *) for targeted permissions, or plain Bash for unrestricted access.
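Wildcard patterns behave roughly like shell globs. An illustrative approximation using Python's fnmatch (Claude Code's actual matcher may differ in details):

```python
from fnmatch import fnmatch

def bash_allowed(command, patterns):
    """Return True if the command matches any allowed glob pattern.

    Approximates wildcard permission matching with shell-style globs;
    this is a sketch, not the real permission engine.
    """
    return any(fnmatch(command, p) for p in patterns)
```

With patterns `["npm *", "git * main"]`, a command like `npm install` is allowed while `rm -rf /` is not.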
Disabling Specific Agents
Control which agents can be invoked:
# Via CLI
claude --disallowedTools "Task(expensive-agent)"
# Via settings.json
{
  "permissions": {
    "deny": ["Task(expensive-agent)"]
  }
}
Subagent Resilience
Subagents are designed to continue operating after a permission denial by attempting alternative approaches instead of failing immediately. This behavior makes agent workflows more reliable in restrictive environments.
Agent-Aware Hooks (2.1.2+)
SessionStart hooks receive agent_type field when launched with --agent:
import json, sys

input_data = json.loads(sys.stdin.read())
agent_type = input_data.get("agent_type", "")

if agent_type in ["code-reviewer", "quick-query"]:
    context = "Minimal context"  # Skip heavy context
else:
    context = full_context

print(json.dumps({"hookSpecificOutput": {"additionalContext": context}}))
This reduces context overhead by 200-800 tokens for lightweight agents.
See Also
- Quick Start Guide - Condensed recipes
- Capabilities Reference - All commands and skills
- Plugin Catalog - Detailed plugin documentation
Technical Debt Migration Guide
Last Updated: 2025-12-06
Overview
Use this guide to migrate plugin code to shared constants and follow function extraction guidelines.
Quick Start
1. Update Your Plugin to Use Shared Constants
Replace scattered magic numbers with centralized constants:
# BEFORE
def check_file_size(content):
    if len(content) > 15000:  # Magic number!
        return "File too large"
    if len(content) > 5000:  # Another magic number!
        return "File is large"

# AFTER
from plugins.shared.constants import MAX_SKILL_FILE_SIZE, LARGE_SIZE_LIMIT

def check_file_size(content):
    if len(content) > MAX_SKILL_FILE_SIZE:
        return "File too large"
    if len(content) > LARGE_SIZE_LIMIT:
        return "File is large"
2. Apply Function Extraction Guidelines
Use the patterns from the guidelines to refactor complex functions:
# BEFORE - Complex function with multiple responsibilities
def analyze_and_optimize_skill(content, strategy):
    # Validation
    if not content:
        raise ValueError("Content cannot be empty")
    # Analysis
    tokens = estimate_tokens(content)
    complexity = calculate_complexity(content)
    # Optimization
    if strategy == "aggressive":
        # 20 lines of optimization logic
        pass
    elif strategy == "moderate":
        # 20 lines of optimization logic
        pass
    return optimized_content, tokens, complexity
# AFTER - Extracted and organized
def analyze_and_optimize_skill(content: str, strategy: str) -> OptimizationResult:
    """Analyze and optimize skill content."""
    _validate_content(content)
    analysis = _analyze_content(content)
    optimized = _optimize_content(content, strategy)
    return OptimizationResult(optimized, analysis)

def _validate_content(content: str) -> None:
    """Validate input content."""
    if not content:
        raise ValueError("Content cannot be empty")

def _analyze_content(content: str) -> ContentAnalysis:
    """Analyze content properties."""
    tokens = estimate_tokens(content)
    complexity = calculate_complexity(content)
    return ContentAnalysis(tokens, complexity)

def _optimize_content(content: str, strategy: str) -> str:
    """Optimize content using specified strategy."""
    optimizer = get_strategy_optimizer(strategy)
    return optimizer.optimize(content)
Detailed Migration Steps
1. Audit Plugin
Find all magic numbers and complex functions:
# Find magic numbers (search for numeric literals in conditions)
grep -n -E "(if|when|while).*[0-9]+" your_plugin/**/*.py
# Find long functions
find your_plugin -name "*.py" -exec wc -l {} + | awk '$1 > 30 {print}'
# Find functions with many parameters
grep -n "def .*\(.*," your_plugin/**/*.py | grep -oE "\([^)]*\)" | grep -o "," | wc -l
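For Python plugins, an AST walk is a more precise alternative to the grep patterns above, since it ignores comments and strings (illustrative helper, not part of the shared tooling):

```python
import ast

def find_magic_numbers(source):
    """Report numeric literals on the right-hand side of comparisons.

    Returns (line_number, value) pairs for constants like `> 15000`
    that are candidates for extraction into shared constants.
    """
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare):
            for comp in node.comparators:
                if isinstance(comp, ast.Constant) and isinstance(comp.value, (int, float)):
                    hits.append((node.lineno, comp.value))
    return hits
```

Running it over the BEFORE example earlier in this guide would flag the 15000 and 5000 thresholds with their line numbers.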
2. Plan Migration
Create a migration plan for your plugin:
1. Identify Constants
   - List all magic numbers
   - Categorize by purpose (timeouts, sizes, thresholds)
   - Check if they exist in shared constants
2. Identify Functions to Refactor
   - Functions > 30 lines
   - Functions with > 4 parameters
   - Functions with multiple responsibilities
3. Create Migration Tasks
   - Update constants first (lowest risk)
   - Refactor simple functions next
   - Tackle complex functions last
3. Replace Magic Numbers
File Size Constants
```python
# Replace these patterns:
if len(content) > 15000: ...
if file_size > 100000: ...
if line_count > 200: ...

# With:
from plugins.shared.constants import (
    MAX_SKILL_FILE_SIZE,
    MAX_TOTAL_SKILL_SIZE,
    LARGE_FILE_LINES,
)
```
Timeout Constants
```python
# Replace these patterns:
timeout=10
timeout=300
time.sleep(30)

# With:
from plugins.shared.constants import (
    DEFAULT_SERVICE_CHECK_TIMEOUT,
    DEFAULT_EXECUTION_TIMEOUT,
    MEDIUM_TIMEOUT,
)
```
Quality Thresholds
```python
# Replace these patterns:
if quality_score > 70.0: ...
if quality_score > 80.0: ...
if quality_score > 90.0: ...

# With:
from plugins.shared.constants import (
    MINIMUM_QUALITY_THRESHOLD,
    HIGH_QUALITY_THRESHOLD,
    EXCELLENT_QUALITY_THRESHOLD,
)
```
4. Refactor Complex Functions
Follow this iterative approach:
4.1 Write Tests First
```python
# Test the current behavior
def test_function_to_refactor():
    result = your_complex_function(input_data)
    assert result.expected_field == expected_value
    # Add more assertions based on current behavior
```
4.2 Extract Small Helper Functions
```python
# Start with small, obvious extractions
def _calculate_value(item):
    """Extract value calculation from complex function."""
    return item.base * item.multiplier + item.offset

def _validate_input(data):
    """Extract input validation."""
    if not data:
        raise ValueError("Data required")
    return True
```
4.3 Extract Strategy Classes
For functions with conditional logic:
```python
# Before: Complex conditional function
def process_item(item, mode):
    if mode == "fast":
        # Fast processing logic
        pass
    elif mode == "thorough":
        # Thorough processing logic
        pass
    elif mode == "minimal":
        # Minimal processing logic
        pass
```

```python
# After: Strategy pattern
from abc import ABC, abstractmethod

class ItemProcessor(ABC):
    @abstractmethod
    def process(self, item):
        pass

class FastProcessor(ItemProcessor):
    def process(self, item):
        # Fast processing implementation
        pass

class ThoroughProcessor(ItemProcessor):
    def process(self, item):
        # Thorough processing implementation
        pass

class MinimalProcessor(ItemProcessor):
    def process(self, item):
        # Minimal processing implementation
        pass

# Registry
PROCESSORS = {
    "fast": FastProcessor(),
    "thorough": ThoroughProcessor(),
    "minimal": MinimalProcessor(),
}

def process_item(item, mode):
    processor = PROCESSORS.get(mode)
    if not processor:
        raise ValueError(f"Unknown mode: {mode}")
    return processor.process(item)
```
5. Update Configuration
If your plugin has configuration files:
```yaml
# config.yaml - Use shared defaults
plugin_name: your_plugin

# Import shared defaults and override only what's needed
shared_constants:
  import: file_limits, timeouts, quality

# Plugin-specific settings
specific_settings:
  custom_threshold: 42
  feature_enabled: true
```
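How a loader resolves such a file is plugin-specific; one plausible sketch (the key names and defaults here are assumptions, not a documented API) is to start from the shared defaults and let plugin settings win:

```python
# Hypothetical resolution: shared defaults overlaid with plugin-specific
# settings. Real key names depend on the plugin's config loader.
SHARED_DEFAULTS = {"custom_threshold": 10, "feature_enabled": False, "timeout": 300}

def resolve_config(plugin_settings: dict) -> dict:
    config = dict(SHARED_DEFAULTS)   # start from shared defaults
    config.update(plugin_settings)   # plugin-specific values take precedence
    return config

config = resolve_config({"custom_threshold": 42, "feature_enabled": True})
```

Anything not overridden (here, `timeout`) keeps its shared default, so plugins only declare what differs.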
Migration Checklist
Pre-Migration
- Run existing tests to establish baseline
- Create backup of current code
- Document current behavior
- Identify all dependencies
Constants Migration
- List all magic numbers in your plugin
- Map to appropriate shared constants
- Update imports
- Replace magic numbers
- Run tests to verify no breaking changes
Function Refactoring
- Identify functions > 30 lines
- Write tests for each function
- Extract small helper functions first
- Apply strategy pattern where appropriate
- Keep public APIs stable
- Update documentation
Post-Migration
- Run full test suite
- Update documentation
- Verify performance
- Update CHANGELOG
- Create migration notes for users
Common Migration Patterns
1. Gradual Migration
Don’t refactor everything at once. Use feature flags:
```python
import os

# Set this in config when ready
USE_NEW_IMPLEMENTATION = os.getenv("USE_NEW_IMPLEMENTATION", "false").lower() == "true"

# Gradually migrate to the new implementation
def legacy_function(data):
    if USE_NEW_IMPLEMENTATION:
        return new_refactored_function(data)
    return old_implementation(data)
```
2. Adapter Pattern
Keep old API while using new implementation:
```python
def old_api_function(param1, param2, param3):
    """Legacy API - delegates to new implementation."""
    config = LegacyConfig(param1, param2, param3)
    return new_refactored_function(config)

# New, cleaner API
def new_refactored_function(config: Config):
    """New, improved implementation."""
    pass
```
3. Parallel Implementation
Run both old and new implementations in parallel to verify:
```python
def process_with_validation(data):
    """Run both implementations and compare."""
    old_result = old_implementation(data)
    new_result = new_implementation(data)
    if not results_equivalent(old_result, new_result):
        log_discrepancy(old_result, new_result)
        # Return old result for safety
        return old_result
    return new_result
```
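The snippet above assumes `results_equivalent` and `log_discrepancy` helpers exist; a minimal sketch of what they might look like (the names and equality semantics are illustrative, not part of any plugin API):

```python
import logging

logger = logging.getLogger(__name__)

def results_equivalent(old_result, new_result) -> bool:
    # Plain equality is the simplest check; relax it (float tolerance,
    # order-insensitive comparison) to match what "equivalent" means
    # for your data.
    return old_result == new_result

def log_discrepancy(old_result, new_result) -> None:
    # Record the mismatch so it can be triaged before cutover.
    logger.warning("implementation mismatch: old=%r new=%r", old_result, new_result)
```

Running in this shadow mode for a while before switching gives you production-shaped evidence that the refactor preserves behavior.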
Testing Your Migration
1. Property-Based Testing
Use hypothesis to test refactored functions:
```python
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sort_refactor(data):
    """Test that refactored sort produces same result."""
    old_result = old_sort_function(data.copy())
    new_result = new_sort_function(data.copy())
    assert old_result == new_result
```
2. Integration Tests
Verify the whole workflow still works:
```python
def test_complete_workflow():
    """Test that refactoring didn't break the workflow."""
    input_data = create_test_data()

    # Run through entire process
    result = your_plugin_workflow(input_data)

    # Verify key properties
    assert result is not None
    assert result.quality_score >= 70
    assert len(result.processed_data) > 0
```
3. Performance Tests
Verify refactoring didn’t hurt performance:
```python
import time

def test_performance():
    """Verify refactoring didn't degrade performance."""
    data = create_large_dataset()

    start = time.perf_counter()
    old_result = old_implementation(data)
    old_time = time.perf_counter() - start

    start = time.perf_counter()
    new_result = new_implementation(data)
    new_time = time.perf_counter() - start

    # New implementation shouldn't be more than 10% slower
    assert new_time < old_time * 1.1
```
Rollback Plan
If Migration Fails
1. Immediate Rollback

   ```bash
   git revert <migration-commit>
   ```

2. Partial Rollback
   - Keep constants migration
   - Revert function refactoring
   - Fix issues and retry

3. Feature Flag Rollback

   ```python
   # Disable new implementation
   os.environ["USE_NEW_IMPLEMENTATION"] = "false"
   ```
Documenting Issues
If you encounter problems:
- Document the specific issue
- Note the affected functionality
- Create a bug report with:
- Migration step that failed
- Error messages
- Minimal reproduction case
- Expected vs actual behavior
Getting Help
Support
- Create an issue for migration problems
- Join the #migration Slack channel
- Review example migrations in other plugins
Contributing
- Share your migration experience
- Suggest improvements to guidelines
- Add new shared constants as needed
Migration Examples
Example: Memory Palace Plugin
Challenges:
- 15 magic numbers scattered across files
- Functions averaging 45 lines
- Complex conditional logic
Solution:
- Replaced all magic numbers with shared constants
- Refactored 8 functions using extraction patterns
- Introduced strategy pattern for content processing
Results:
- 40% reduction in code complexity
- Improved test coverage from 60% to 85%
- Easier to add new content types
Example: Parseltongue Plugin
Challenges:
- Complex analysis functions with 8+ parameters
- Duplicated logic across multiple analyzers
- Hard to test individual components
Solution:
- Extracted configuration objects for parameters
- Created shared analysis utilities
- Applied builder pattern for complex objects
Results:
- Functions reduced to average 15 lines
- Parameter count reduced to 3-4 per function
- 100% test coverage for core logic
Conclusion
Migrating to shared constants and following function extraction guidelines improves code quality and maintainability.
Key Steps:
- Migrate incrementally: Don’t try to do everything at once.
- Test thoroughly: Verify behavior doesn’t change.
- Document changes: Help others understand the migration.
- Ask for help: Use the community’s experience.
Plugin Overview
The Claude Night Market organizes plugins into four layers, each building on the foundations below.
Architecture
```mermaid
graph TB
    subgraph Meta[Meta Layer]
        abstract[abstract<br/>Plugin infrastructure]
    end
    subgraph Foundation[Foundation Layer]
        imbue[imbue<br/>Intelligent workflows]
        sanctum[sanctum<br/>Git & workspace ops]
        leyline[leyline<br/>Pipeline building blocks]
    end
    subgraph Utility[Utility Layer]
        conserve[conserve<br/>Resource optimization]
        conjure[conjure<br/>External delegation]
    end
    subgraph Domain[Domain Specialists]
        archetypes[archetypes<br/>Architecture patterns]
        pensive[pensive<br/>Code review toolkit]
        parseltongue[parseltongue<br/>Python development]
        memory_palace[memory-palace<br/>Spatial memory]
        spec_kit[spec-kit<br/>Spec-driven dev]
        minister[minister<br/>Release management]
        attune[attune<br/>Full-cycle development]
        scribe[scribe<br/>Documentation review]
    end

    abstract --> leyline
    pensive --> imbue
    pensive --> sanctum
    sanctum --> imbue
    conjure --> leyline
    spec_kit --> imbue
    scribe --> imbue
    scribe --> conserve

    style Meta fill:#fff3e0,stroke:#e65100
    style Foundation fill:#e1f5fe,stroke:#01579b
    style Utility fill:#f3e5f5,stroke:#4a148c
    style Domain fill:#e8f5e8,stroke:#1b5e20
```
Layer Summary
| Layer | Purpose | Plugins |
|---|---|---|
| Meta | Plugin infrastructure and evaluation | abstract |
| Foundation | Core workflow methodologies | imbue, sanctum, leyline |
| Utility | Resource optimization and delegation | conserve, conjure |
| Domain | Specialized task execution | archetypes, pensive, parseltongue, memory-palace, spec-kit, minister, attune, scribe |
Dependency Rules
- Downward Only: Plugins depend on lower layers, never upward
- Foundation First: Most domain plugins work better with foundation plugins installed
- Graceful Degradation: Plugins function standalone but gain capabilities with dependencies
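Graceful degradation typically means probing for an optional dependency and falling back when it is absent. A generic Python sketch of the pattern (the module name `json` stands in for an optional foundation plugin; `review_workspace` and its return values are hypothetical):

```python
import importlib.util

def has_plugin(module_name: str) -> bool:
    # Detect an optional dependency without importing it.
    return importlib.util.find_spec(module_name) is not None

def review_workspace() -> str:
    # Hypothetical: use richer review patterns when the optional
    # foundation module is available, otherwise fall back.
    if has_plugin("json"):  # placeholder for a foundation dependency
        return "enhanced review"
    return "basic review"
```

The core workflow always succeeds; the dependency only unlocks the richer path.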
Quick Installation
Minimal (Git Workflows)
/plugin install sanctum@claude-night-market
Standard (Development)
/plugin install sanctum@claude-night-market
/plugin install imbue@claude-night-market
/plugin install spec-kit@claude-night-market
Full (All Capabilities)
/plugin install abstract@claude-night-market
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market
/plugin install conserve@claude-night-market
/plugin install conjure@claude-night-market
/plugin install archetypes@claude-night-market
/plugin install pensive@claude-night-market
/plugin install parseltongue@claude-night-market
/plugin install memory-palace@claude-night-market
/plugin install spec-kit@claude-night-market
/plugin install minister@claude-night-market
/plugin install attune@claude-night-market
/plugin install scribe@claude-night-market
Browse by Layer
- Meta Layer - Plugin infrastructure
- Foundation Layer - Core workflows
- Utility Layer - Resource optimization
- Domain Specialists - Specialized tasks
Browse by Plugin
| Plugin | Description |
|---|---|
| abstract | Meta-skills for plugin development |
| imbue | Analysis and evidence gathering |
| sanctum | Git and workspace operations |
| leyline | Infrastructure building blocks |
| conserve | Context and resource optimization |
| conjure | External LLM delegation |
| archetypes | Architecture paradigms |
| pensive | Code review toolkit |
| parseltongue | Python development |
| memory-palace | Knowledge organization |
| spec-kit | Specification-driven development |
| minister | Release management |
| attune | Full-cycle project development |
| scribe | Documentation review and AI slop detection |
Meta Layer
The meta layer provides infrastructure for building, evaluating, and maintaining plugins themselves.
Purpose
While other layers focus on user-facing workflows, the meta layer focuses on:
- Plugin Development: Tools for creating new skills, commands, and hooks
- Quality Assurance: Evaluation frameworks for plugin quality
- Architecture Guidance: Patterns for modular, maintainable plugins
Plugins
| Plugin | Description |
|---|---|
| abstract | Meta-skills infrastructure for plugin development |
When to Use
Use meta layer plugins when:
- Creating a new plugin for the marketplace
- Evaluating existing skill quality
- Refactoring large skills into modules
- Validating plugin structure before publishing
Key Capabilities
Plugin Validation
/validate-plugin [path]
Checks plugin structure against official requirements.
Skill Creation
/create-skill
Scaffolds new skills using best practices and TDD methodology.
Quality Assessment
/skills-eval
Scores skill quality and suggests improvements.
Architecture Position
Meta Layer
|
v
Foundation Layer (imbue, sanctum, leyline)
|
v
Utility Layer (conserve, conjure)
|
v
Domain Specialists
The meta layer sits above all others, providing tools to build and maintain the entire ecosystem.
abstract
Meta-skills infrastructure for the plugin ecosystem - skill authoring, hook development, and quality evaluation.
Overview
The abstract plugin provides tools for building, evaluating, and maintaining Claude Code plugins. It’s the toolkit for plugin developers.
Installation
/plugin install abstract@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
skill-authoring | TDD methodology with Iron Law enforcement | Creating new skills with quality standards |
hook-authoring | Security-first hook development | Building safe, effective hooks |
modular-skills | Modular design patterns | Breaking large skills into modules |
skills-eval | Skill quality assessment | Auditing skills for token efficiency |
hooks-eval | Hook security scanning | Verifying hook safety |
escalation-governance | Model escalation decisions | Deciding when to escalate models |
makefile-dogfooder | Makefile analysis | Ensuring Makefile completeness |
methodology-curator | Expert framework curation | Grounding skills in proven methodologies |
shared-patterns | Plugin development patterns | Reusable templates |
subagent-testing | Subagent test patterns | Testing subagent interactions |
Commands
| Command | Description |
|---|---|
/validate-plugin [path] | Check plugin structure against requirements |
/create-skill | Scaffold new skill with best practices |
/create-command | Scaffold new command |
/create-hook | Scaffold hook with security-first design |
/analyze-hook | Analyze hook for security and performance |
/analyze-skill | Get modularization recommendations |
/bulletproof-skill | Anti-rationalization workflow for hardening |
/context-report | Context optimization report |
/estimate-tokens | Estimate token usage for skills |
/hooks-eval | Detailed hook evaluation |
/make-dogfood | Analyze and enhance Makefiles |
/skills-eval | Run skill quality assessment |
/test-skill | Skill testing with TDD methodology |
/validate-hook | Validate hook compliance |
Agents
| Agent | Description |
|---|---|
meta-architect | Designs plugin ecosystem architectures |
plugin-validator | Validates plugin structure |
skill-auditor | Audits skills for quality and compliance |
Hooks
| Hook | Type | Description |
|---|---|---|
homeostatic_monitor.py | PostToolUse | Reads stability gap metrics, queues degrading skills for auto-improvement |
pre_skill_execution.py | PreToolUse | Skill execution tracking |
skill_execution_logger.py | PostToolUse | Skill metrics logging |
post-evaluation.json | Config | Quality scoring and improvement tracking |
pre-skill-load.json | Config | Pre-load validation for dependencies |
Self-Adapting System
A closed-loop system that monitors skill health and auto-triggers improvements:
- homeostatic_monitor.py checks the stability gap after each Skill invocation
- Skills with a gap > 0.3 are queued in improvement_queue.py
- After 3+ flags, the skill-improver agent runs automatically
- skill_versioning.py tracks changes via YAML frontmatter
- rollback_reviewer.py creates GitHub issues if regressions are detected
- experience_library.py stores successful trajectories for future context
Cross-plugin dependency: reads stability metrics from memory-palace’s .history.json.
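The queueing logic above can be sketched as follows. The 0.3 gap threshold and 3-flag trigger come from the description; the metric format, file layout, and function names are assumptions for illustration:

```python
import json

STABILITY_GAP_THRESHOLD = 0.3  # gaps above this queue the skill (per the description)
FLAGS_BEFORE_IMPROVEMENT = 3   # auto-run the improver after this many flags

def skills_to_queue(history: dict) -> list:
    # `history` loosely mirrors what memory-palace's .history.json might hold:
    # {"skill-name": {"stability_gap": 0.4, "flags": 2}, ...}
    return [
        name for name, metrics in history.items()
        if metrics.get("stability_gap", 0.0) > STABILITY_GAP_THRESHOLD
    ]

def should_auto_improve(history: dict, skill: str) -> bool:
    return history.get(skill, {}).get("flags", 0) >= FLAGS_BEFORE_IMPROVEMENT

history = json.loads('{"diff-analysis": {"stability_gap": 0.4, "flags": 3}}')
```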
Usage Examples
Create a New Skill
/create-skill
# Claude will:
# 1. Use brainstorming for idea refinement
# 2. Apply TDD methodology
# 3. Generate skill scaffold
# 4. Create tests
Evaluate Skill Quality
Skill(abstract:skills-eval)
# Scores skills on:
# - Token efficiency
# - Documentation quality
# - Trigger clarity
# - Modular structure
Validate Plugin Structure
/validate-plugin /path/to/my-plugin
# Checks:
# - plugin.json structure
# - Required files present
# - Skill format compliance
# - Command syntax
Best Practices
Skill Design
- Single Responsibility: Each skill does one thing well
- Clear Triggers: Include “Use when…” in descriptions
- Token Efficiency: Keep skills under 2000 tokens
- TodoWrite Integration: Output actionable items
Hook Security
- No Secrets: Never log sensitive data
- Fail Safe: Default to allowing operations
- Minimal Scope: Request only needed permissions
- Audit Trail: Log decisions for review
- Agent-Aware (2.1.2+): SessionStart hooks receive agent_type to customize context
Superpowers Integration
When superpowers is installed:
| Command | Enhancement |
|---|---|
/create-skill | Uses brainstorming for idea refinement |
/create-command | Uses brainstorming for concept development |
/create-hook | Uses brainstorming for security design |
/test-skill | Uses test-driven-development for TDD cycles |
Related Plugins
- leyline: Infrastructure patterns abstract builds on
- imbue: Review patterns for skill evaluation
Foundation Layer
The foundation layer provides core workflow methodologies that other plugins build upon.
Purpose
Foundation plugins establish:
- Analysis Patterns: How to approach investigation and review tasks
- Workspace Operations: Git and file system interactions
- Infrastructure Utilities: Reusable patterns for building plugins
Plugins
| Plugin | Description | Key Use Case |
|---|---|---|
| imbue | Workflow methodologies | Analysis, evidence gathering |
| sanctum | Git operations | Commits, PRs, documentation |
| leyline | Building blocks | Error handling, authentication |
Dependency Flow
imbue (standalone)
|
sanctum --> imbue
|
leyline (standalone)
- imbue: No dependencies, purely methodology
- sanctum: Uses imbue for review patterns
- leyline: No dependencies, infrastructure patterns
When to Use
imbue
Use when you need to:
- Structure a detailed review
- Analyze changes systematically
- Capture evidence for decisions
- Prevent overengineering (scope-guard)
sanctum
Use when you need to:
- Understand repository state
- Generate commit messages
- Prepare pull requests
- Update documentation
leyline
Use when you need to:
- Implement error handling patterns
- Add authentication flows
- Build plugin infrastructure
- Standardize testing approaches
Key Workflows
Pre-Commit Flow
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)
Review Flow
Skill(imbue:review-core)
Skill(imbue:evidence-logging)
Skill(imbue:structured-output)
PR Preparation
Skill(sanctum:git-workspace-review)
Skill(sanctum:pr-prep)
Installation
# Minimal foundation
/plugin install imbue@claude-night-market
# Full foundation
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market
imbue
Workflow methodologies for analysis, evidence gathering, and structured output.
Overview
Imbue provides reusable patterns for approaching analysis tasks. It’s a methodology plugin - the patterns apply to various inputs (git diffs, specs, logs) and chain together for complex workflows.
Core Philosophy: “NO IMPLEMENTATION WITHOUT A FAILING TEST FIRST” - The Iron Law enforced through proof-of-work validation.
Installation
/plugin install imbue@claude-night-market
Principles
- Generalizable: Patterns work across different input types
- Composable: Skills chain together naturally
- Evidence-based: Emphasizes capturing proof for reproducibility
- TDD-First: Iron Law enforcement prevents cargo cult testing
Skills
Review Patterns
| Skill | Description | When to Use |
|---|---|---|
review-core | Scaffolding for detailed reviews | Starting architecture, security, or code quality reviews |
evidence-logging | Evidence capture methodology | Creating audit trails during analysis |
structured-output | Output formatting patterns | Preparing final reports |
Analysis Methods
| Skill | Description | When to Use |
|---|---|---|
diff-analysis | Semantic changeset analysis | Understanding impact of changes |
catchup | Context recovery | Getting up to speed after time away |
Workflow Guards
| Skill | Description | When to Use |
|---|---|---|
scope-guard | Anti-overengineering | Evaluating if features should be built now |
proof-of-work | Evidence-based validation | Enforcing Iron Law TDD discipline |
rigorous-reasoning | Anti-sycophancy guardrails | Analyzing conflicts, evaluating contested claims |
Feature Planning
| Skill | Description | When to Use |
|---|---|---|
feature-review | Feature prioritization | Sprint planning, roadmap reviews |
Workflow Automation
| Skill | Description | When to Use |
|---|---|---|
workflow-monitor | Execution monitoring and issue creation | After workflow failures or inefficiencies |
Commands
| Command | Description |
|---|---|
/catchup | Quick context recovery from recent changes |
/structured-review | Start structured review workflow with evidence logging |
/feature-review | Feature prioritization with RICE+WSJF scoring |
Agents
| Agent | Description |
|---|---|
review-analyst | Autonomous structured reviews with evidence gathering |
Hooks
| Hook | Type | Description |
|---|---|---|
session-start.sh | SessionStart | Initializes scope-guard, Iron Law, and learning mode |
user-prompt-submit.sh | UserPromptSubmit | Validates prompts against scope thresholds |
tdd_bdd_gate.py | PreToolUse | Enforces Iron Law at write-time |
pre-pr-scope-check.sh | Manual | Checks scope before PR creation |
proof-enforcement.md | Design | Iron Law TDD compliance enforcement |
Usage Examples
Structured Review
Skill(imbue:review-core)
# Required TodoWrite items:
# 1. review-core:context-established
# 2. review-core:scope-inventoried
# 3. review-core:evidence-captured
# 4. review-core:deliverables-structured
# 5. review-core:contingencies-documented
Diff Analysis
Skill(imbue:diff-analysis)
# Answers: "What changed and why does it matter?"
# - Categorizes changes by function
# - Assesses risks
# - Summarizes implications
Quick Catchup
/catchup
# Summarizes:
# - Recent commits
# - Changed files
# - Key decisions
# - Action items
Feature Prioritization
/feature-review
# Uses hybrid RICE+WSJF scoring:
# - Reach, Impact, Confidence, Effort
# - Weighted Shortest Job First
# - ISO 25010 quality dimensions
Scope Guard
The scope-guard skill prevents overengineering via four components:
| Component | Purpose |
|---|---|
decision-framework | Worthiness formula and scoring |
anti-overengineering | Rules to prevent scope creep |
branch-management | Threshold monitoring (lines, commits, days) |
baseline-scenarios | Validated test scenarios |
Iron Law TDD Enforcement
The proof-of-work skill enforces the Iron Law:
NO IMPLEMENTATION WITHOUT A FAILING TEST FIRST
This prevents “Cargo Cult TDD” where tests validate pre-conceived implementations.
Self-Check Protocol
| Thought Pattern | Violation | Action |
|---|---|---|
| “Let me plan the implementation first” | Skipping RED | Write failing test FIRST |
| “I know what tests we need” | Pre-conceived impl | Document failure, THEN design |
| “The design is straightforward” | Skipping uncertainty | Let design EMERGE from tests |
TodoWrite Items
proof:iron-law-red - Failing test documented
proof:iron-law-green - Minimal code to pass
proof:iron-law-refactor - Code improved, tests green
proof:iron-law-coverage - Coverage gates verified
See iron-law-enforcement.md module for full enforcement patterns.
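As a concrete (hypothetical) example of the RED/GREEN cycle the Iron Law demands, for an imaginary `slugify` helper:

```python
# RED: the failing test is written and run BEFORE any implementation exists.
def test_slugify_lowercases_and_dashes():
    assert slugify("Hello World") == "hello-world"

# GREEN: only after documenting the failure, write the minimal code to pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# REFACTOR would follow here, with the test kept green throughout.
```

The point is ordering: the test defines the behavior first, so the implementation cannot be written to justify itself.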
Rigorous Reasoning
The rigorous-reasoning skill prevents sycophantic patterns through structured analysis:
| Component | Purpose |
|---|---|
priority-signals | Override principles (no courtesy agreement, checklist over intuition) |
conflict-analysis | Harm/rights checklist for interpersonal conflicts |
debate-methodology | Truth claims and contested territory handling |
red-flag monitoring | Detect sycophantic thought patterns |
Red Flag Self-Check
| Thought Pattern | Reality Check | Action |
|---|---|---|
| “I agree that…” | Did you validate? | Apply harm/rights checklist |
| “You’re right that…” | Is this proven? | Check for evidence |
| “That’s a fair point” | Fair by what standard? | Specify the standard |
TodoWrite Integration
All skills output TodoWrite items for progress tracking:
review-core:context-established
review-core:scope-inventoried
diff-analysis:baseline-established
diff-analysis:changes-categorized
catchup:context-confirmed
catchup:delta-captured
Integration Pattern
Imbue is foundational - other plugins build on it:
# Sanctum uses imbue for review patterns
Skill(imbue:review-core)
Skill(sanctum:git-workspace-review)
# Pensive uses imbue for evidence gathering
Skill(imbue:evidence-logging)
Skill(pensive:architecture-review)
Superpowers Integration
| Skill | Enhancement |
|---|---|
scope-guard | Uses brainstorming, writing-plans, execute-plan |
/feature-review | Uses brainstorming for feature suggestions |
Related Plugins
- sanctum: Uses imbue for review scaffolding
- pensive: Uses imbue for evidence gathering
- spec-kit: Uses imbue for analysis patterns
sanctum
Git and workspace operations for active development workflows.
Overview
Sanctum handles the practical side of development: commits, PRs, documentation updates, and version management. It’s the plugin you’ll use most during active coding.
Installation
/plugin install sanctum@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
git-workspace-review | Preflight repo state analysis | Before any git operation |
file-analysis | Codebase structure mapping | Understanding project layout |
commit-messages | Conventional commit generation | After staging changes |
pr-prep | PR preparation with quality gates | Before creating PRs |
pr-review | PR analysis and feedback | Reviewing others’ PRs |
doc-consolidation | Merge ephemeral docs | Consolidating LLM-generated docs |
doc-updates | Documentation maintenance | Syncing docs with code |
test-updates | Test generation and enhancement | Maintaining test suites |
update-readme | README modernization | Refreshing project entry points |
version-updates | Version bumping | Managing semantic versions |
workflow-improvement | Workflow retrospectives | Improving development processes |
tutorial-updates | Tutorial maintenance | Keeping tutorials current |
Commands
| Command | Description |
|---|---|
/git-catchup | Git repository catchup |
/commit-msg | Draft conventional commit message |
/pr | Prepare PR with quality gates |
/pr-review | Enhanced PR review |
/fix-pr | Address PR review comments |
/do-issue | Fix GitHub issues systematically |
/fix-workflow | Improve recent workflow |
/merge-docs | Consolidate ephemeral docs |
/update-docs | Update documentation |
/update-plugins | Audit and sync plugin.json registrations |
/update-readme | Modernize README |
/update-tests | Maintain tests |
/update-tutorial | Update tutorial content |
/update-version | Bump versions |
/update-dependencies | Update project dependencies |
/create-tag | Create git tags for releases |
/resolve-threads | Resolve PR review threads |
Agents
| Agent | Description |
|---|---|
git-workspace-agent | Repository state analysis |
commit-agent | Commit message generation |
pr-agent | PR preparation specialist |
workflow-recreate-agent | Workflow slice reconstruction |
workflow-improvement-* | Workflow improvement pipeline |
dependency-updater | Dependency version management |
Hooks
| Hook | Type | Description |
|---|---|---|
post_implementation_policy.py | SessionStart | Requires docs/tests/readme updates |
verify_workflow_complete.py | Stop | Verifies workflow completion |
session_complete_notify.py | Stop | Toast notification when awaiting input |
Usage Examples
Pre-Commit Workflow
# Stage changes
git add -p
# Review workspace
Skill(sanctum:git-workspace-review)
# Generate commit message
Skill(sanctum:commit-messages)
# Apply
git commit -m "<generated message>"
PR Preparation
# Run quality checks first
make fmt && make lint && make test
# Prepare PR
/pr
# Creates:
# - Summary
# - Change list
# - Testing checklist
# - Quality gate results
Fix PR Review Comments
/fix-pr
# Claude will:
# 1. Read PR comments
# 2. Triage by priority
# 3. Implement fixes
# 4. Resolve threads on GitHub
Fix GitHub Issue
/do-issue 42
# Uses subagent-driven-development:
# 1. Analyze issue
# 2. Create plan
# 3. Implement fix
# 4. Test
# 5. Prepare PR
Skill Dependencies
Most sanctum skills depend on git-workspace-review:
git-workspace-review (foundation)
├── commit-messages
├── pr-prep
├── doc-updates
├── update-readme
└── version-updates
file-analysis (standalone)
Always run git-workspace-review first to establish context.
TodoWrite Integration
git-review:repo-confirmed
git-review:status-overview
git-review:diff-stat
git-review:diff-details
pr-prep:workspace-reviewed
pr-prep:quality-gates
pr-prep:changes-summarized
pr-prep:testing-documented
pr-prep:pr-drafted
Workflow Patterns
Pre-Commit
git add -p
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)
Pre-PR
make fmt && make lint && make test
Skill(sanctum:git-workspace-review)
Skill(sanctum:pr-prep)
Post-Review
/fix-pr
# Implements fixes, resolves threads
Release
Skill(sanctum:git-workspace-review)
Skill(sanctum:version-updates)
Skill(sanctum:doc-updates)
git commit && git tag
Superpowers Integration
| Command | Enhancement |
|---|---|
/pr | Uses receiving-code-review for validation |
/pr-review | Uses receiving-code-review for analysis |
/fix-pr | Uses receiving-code-review for resolution |
/do-issue | Uses multiple superpowers for full workflow |
Related Plugins
- imbue: Provides review scaffolding sanctum uses
- pensive: Code review complements sanctum’s git operations
leyline
Infrastructure and pipeline building blocks for plugins.
Overview
Leyline provides reusable infrastructure patterns that other plugins build on. Think of it as a standard library for plugin development - error handling, authentication, storage, and testing patterns.
Installation
/plugin install leyline@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
quota-management | Rate limiting and quotas | Building services that consume APIs |
usage-logging | Telemetry tracking | Logging tool usage for analytics |
service-registry | Service discovery patterns | Managing external tool connections |
damage-control | Agent-level error recovery for multi-agent coordination | Crash recovery, context overflow, merge conflicts |
error-patterns | Standardized error handling | Implementing production-grade error recovery |
authentication-patterns | Auth flow patterns | Handling API keys and OAuth |
evaluation-framework | Decision thresholds | Building evaluation criteria |
mecw-patterns | MECW implementation | Applying Minimum Effective Context Window principles |
progressive-loading | Dynamic content loading | Lazy loading strategies |
risk-classification | Inline 4-tier risk classification for agent tasks | Risk-based task routing with war-room escalation |
pytest-config | Pytest configuration | Standardized test configuration |
storage-templates | Storage abstraction | File and database patterns |
testing-quality-standards | Test quality guidelines | Ensuring high-quality tests |
Commands
| Command | Description |
|---|---|
/reinstall-all-plugins | Uninstall and reinstall all plugins to refresh cache |
/update-all-plugins | Update all installed plugins from marketplaces |
Usage Examples
Plugin Management
# Refresh all plugins (fixes version mismatches)
/reinstall-all-plugins
# Update to latest versions
/update-all-plugins
Using as Dependencies
Leyline skills are typically used as dependencies in other plugins:
```yaml
# In your skill's SKILL.md frontmatter
dependencies:
  - leyline:error-patterns
  - leyline:quota-management
```
Error Handling Pattern
Skill(leyline:error-patterns)
# Provides:
# - Structured error types
# - Recovery strategies
# - Logging standards
# - User-friendly messages
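The bullet points above map naturally onto a small error hierarchy. A minimal sketch of the structured-error pattern, assuming hypothetical names (`ToolError`, `recover`) rather than leyline's actual exports:

```python
from dataclasses import dataclass


@dataclass
class ToolError(Exception):
    """Structured error: machine-readable code, detail, and a user-facing message."""
    code: str
    detail: str
    recoverable: bool = False
    user_message: str = "Something went wrong. Please retry."


def recover(err: ToolError) -> str:
    # Recovery strategy: retry recoverable errors, surface a friendly message otherwise.
    if err.recoverable:
        return f"retrying after {err.code}"
    return err.user_message


print(recover(ToolError(code="RATE_LIMIT", detail="HTTP 429", recoverable=True)))
# prints: retrying after RATE_LIMIT
```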
Authentication Pattern
Skill(leyline:authentication-patterns)
# Covers:
# - API key management
# - OAuth flows
# - Token refresh
# - Secret storage
Testing Standards
Skill(leyline:testing-quality-standards)
# Enforces:
# - Test naming conventions
# - Coverage requirements
# - Mocking guidelines
# - Fixture patterns
Pattern Categories
Rate Limiting
# quota-management pattern
from leyline import QuotaManager
manager = QuotaManager(
daily_limit=1000,
hourly_limit=100,
burst_limit=10
)
if manager.can_proceed():
# Make API call
manager.record_usage()
Telemetry
# usage-logging pattern
from leyline import UsageLogger
logger = UsageLogger(output="telemetry.csv")
logger.log_tool_use("WebFetch", tokens=500, latency_ms=1200)
Storage Abstraction
# storage-templates pattern
from leyline import Storage
storage = Storage.from_config()
storage.save("key", data)
data = storage.load("key")
MECW Patterns
The mecw-patterns skill implements Minimum Effective Context Window principles:
| Pattern | Description |
|---|---|
| Summarize Early | Compress context before it grows |
| Load on Demand | Fetch details only when needed |
| Evict Stale | Remove outdated information |
| Prioritize Recent | Weight recent context higher |
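The "Evict Stale" and "Load on Demand" rows can be sketched as a token-budgeted buffer. `ContextBuffer` below is illustrative, not part of leyline's API:

```python
from collections import deque


class ContextBuffer:
    """Toy MECW buffer: evict the oldest entries once a token budget is exceeded."""

    def __init__(self, budget: int):
        self.budget = budget
        self.entries: deque[tuple[str, int]] = deque()  # (text, token_cost)

    def add(self, text: str, tokens: int) -> None:
        self.entries.append((text, tokens))
        # Evict Stale: drop the oldest entries until the budget fits again.
        while sum(cost for _, cost in self.entries) > self.budget:
            self.entries.popleft()

    def used(self) -> int:
        return sum(cost for _, cost in self.entries)


buf = ContextBuffer(budget=100)
buf.add("file summary", 60)
buf.add("recent diff", 50)  # total hits 110, so "file summary" is evicted
print(buf.used())  # prints: 50
```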
Integration
Leyline is used by:
- abstract: Plugin validation uses error patterns
- conjure: Delegation uses quota management
- conserve: Context optimization uses MECW patterns
Best Practices
- Don’t Duplicate: Use leyline patterns instead of reimplementing
- Compose Patterns: Combine multiple patterns for complex needs
- Test with Standards: Use pytest-config for consistent testing
- Log Everything: Use usage-logging for debugging and analytics
Related Plugins
- abstract: Uses leyline for plugin infrastructure
- conjure: Uses leyline for quota and service management
- conserve: Uses leyline for MECW implementation
Utility Layer
The utility layer provides resource optimization and external integration capabilities.
Purpose
Utility plugins handle:
- Resource Management: Context window optimization, token conservation
- External Delegation: Offloading tasks to external LLM services
- Performance Monitoring: CPU/GPU and memory tracking
Plugins
| Plugin | Description | Key Use Case |
|---|---|---|
| conserve | Resource optimization | Context management |
| conjure | External delegation | Long-context tasks |
| hookify | Behavioral rules | Preventing unwanted actions |
When to Use
conserve
Use when you need to:
- Monitor context window usage
- Optimize token consumption
- Handle large codebases efficiently
- Track resource usage patterns
conjure
Use when you need to:
- Process files too large for Claude’s context
- Delegate bulk processing tasks
- Use specialized external models
- Manage API quotas across services
hookify
Use when you need to:
- Prevent accidental destructive actions (force push, etc.)
- Enforce coding standards via pattern matching
- Create project-specific behavioral constraints
- Add safety guardrails for automated workflows
Key Capabilities
Context Optimization
/optimize-context
Analyzes current context usage and suggests MECW (Minimum Effective Context Window) strategies.
Growth Analysis
/analyze-growth
Predicts context budget impact of skill growth patterns.
External Delegation
make delegate-auto PROMPT="Summarize" FILES="src/"
Auto-selects the best external service for a task.
Conserve Modes
The conserve plugin supports different modes via environment variables:
| Mode | Command | Behavior |
|---|---|---|
| Normal | claude | Full conservation guidance |
| Quick | CONSERVE_MODE=quick claude | Skip guidance for fast tasks |
| Deep | CONSERVE_MODE=deep claude | Extended resource allowance |
Key Thresholds
Context Usage
- < 30%: LOW - Normal operation
- 30-50%: MODERATE - Consider optimization
- > 50%: CRITICAL - Optimize immediately
Token Quotas
- 5-hour rolling cap
- Weekly cap
- Check current usage with /status
Installation
# Resource optimization
/plugin install conserve@claude-night-market
# External delegation
/plugin install conjure@claude-night-market
Integration with Other Layers
Utility plugins enhance all other layers:
Domain Specialists
|
v
Utility Layer (optimization, delegation)
|
v
Foundation Layer
For example, conjure can delegate large file processing before sanctum analyzes the results.
conserve
Resource optimization and performance monitoring for context window management.
Overview
Conserve helps you work efficiently within Claude’s context limits. It automatically loads optimization guidance at session start and provides tools for monitoring and reducing context usage.
Installation
/plugin install conserve@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| context-optimization | MECW principles and 50% context rule | When context usage > 30% |
| token-conservation | Token usage strategies and quota tracking | Session start, before heavy loads |
| cpu-gpu-performance | Resource monitoring and selective testing | Before builds, tests, or training |
| mcp-code-execution | MCP patterns for data pipelines | Processing data outside context |
| optimizing-large-skills | Large skill optimization | Breaking down oversized skills |
| bloat-detector | Detect bloated documentation, dead code, dead wrappers | During documentation reviews, code cleanup |
| clear-context | Context window management strategies | When approaching context limits |
Commands
| Command | Description |
|---|---|
/bloat-scan | Detect code bloat, dead code, and dead wrapper scripts |
/unbloat | Remove detected bloat with progressive analysis |
/optimize-context | Analyze and optimize context window usage |
/analyze-growth | Predict context budget impact of skill growth |
Agents
| Agent | Description |
|---|---|
| context-optimizer | Autonomous context optimization and MECW compliance |
Hooks
| Hook | Type | Description |
|---|---|---|
| session-start.sh | SessionStart | Loads conservation guidance at startup |
Usage Examples
Context Optimization
/optimize-context
# Analyzes:
# - Current context usage
# - Token distribution
# - Compression opportunities
# - MECW compliance
Growth Analysis
/analyze-growth
# Predicts:
# - Skill growth patterns
# - Context budget impact
# - Optimization priorities
Manual Skill Invocation
Skill(conserve:context-optimization)
# Provides:
# - MECW principles
# - 50% context rule
# - Compression strategies
# - Eviction priorities
Bypass Modes
Control conservation behavior via environment variables:
| Mode | Command | Behavior |
|---|---|---|
| Normal | claude | Full conservation guidance |
| Quick | CONSERVE_MODE=quick claude | Skip guidance for fast processing |
| Deep | CONSERVE_MODE=deep claude | Extended resource allowance |
Examples
# Quick mode for simple tasks
CONSERVE_MODE=quick claude
# Deep mode for complex analysis
CONSERVE_MODE=deep claude
Key Thresholds
Context Usage
| Level | Usage | Action |
|---|---|---|
| LOW | < 30% | Normal operation |
| MODERATE | 30-50% | Consider optimization |
| CRITICAL | > 50% | Optimize immediately |
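The table reduces to a simple threshold function. A sketch (the function name is illustrative, not part of the plugin):

```python
def classify_context_usage(pct: float) -> str:
    """Map a context-usage percentage to conserve's threshold levels."""
    if pct < 30:
        return "LOW"
    if pct <= 50:
        return "MODERATE"
    return "CRITICAL"


print(classify_context_usage(42))  # prints: MODERATE
```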
Token Quotas
- 5-hour rolling cap: Prevents burst usage
- Weekly cap: Enforces sustainable usage
- Check status: Use /status to see current usage
MECW Principles
Minimum Effective Context Window strategies:
- Summarize Early: Compress large outputs before they accumulate
- Load on Demand: Fetch file contents only when needed
- Evict Stale: Remove information no longer relevant
- Prioritize Recent: Weight recent context higher than old
Optimization Strategies
For Large Files
# Don't load entire file
# Instead, use targeted reads
Read file.py --offset 100 --limit 50
For Search Results
# Limit search output
Grep --head_limit 20
For Git Operations
# Use stats instead of full diffs
git diff --stat
git log --oneline -10
CPU/GPU Performance
The cpu-gpu-performance skill monitors resource usage:
Skill(conserve:cpu-gpu-performance)
# Provides:
# - Baseline establishment
# - Resource monitoring
# - Selective test execution
# - Build optimization
MCP Code Execution
For processing data too large for context:
Skill(conserve:mcp-code-execution)
# Patterns for:
# - External data processing
# - Pipeline optimization
# - Result summarization
Superpowers Integration
| Command | Enhancement |
|---|---|
| /optimize-context | Uses condition-based-waiting for smart optimization |
Related Plugins
- leyline: Provides MECW pattern implementations
- abstract: Uses conserve for skill optimization
- conjure: Delegates to external services when context limited
conjure
Delegation to external LLM services for long-context or bulk tasks.
Overview
Conjure provides a framework for delegating tasks to external LLM services (Gemini, Qwen) when Claude’s context window is insufficient or when specialized models are better suited.
Installation
/plugin install conjure@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| delegation-core | Framework for delegation decisions | Assessing if tasks should be offloaded |
| gemini-delegation | Gemini CLI integration | Processing massive context windows |
| qwen-delegation | Qwen MCP integration | Tasks requiring specific privacy needs |
Commands (Makefile)
| Command | Description | Example |
|---|---|---|
| make delegate-auto | Auto-select best service | make delegate-auto PROMPT="Summarize" FILES="src/" |
| make quota-status | Show current quota usage | make quota-status |
| make usage-report | Summarize token usage and costs | make usage-report |
Hooks
| Hook | Type | Description |
|---|---|---|
| bridge.on_tool_start | PreToolUse | Suggests delegation when files exceed thresholds |
| bridge.after_tool_use | PostToolUse | Suggests delegation if output is truncated |
Usage Examples
Auto-Delegation
make delegate-auto PROMPT="Summarize all files" FILES="src/"
# Conjure will:
# 1. Assess file sizes
# 2. Check quota availability
# 3. Select optimal service
# 4. Execute delegation
# 5. Return results
Check Quota Status
make quota-status
# Output:
# Gemini: 450/1000 tokens used (5h rolling)
# Qwen: 200/500 tokens used (5h rolling)
Usage Report
make usage-report
# Output:
# This week:
# Gemini: 2,500 tokens, $0.05
# Qwen: 800 tokens, $0.02
# Total: 3,300 tokens, $0.07
Manual Service Selection
# Force Gemini for large context
Skill(conjure:gemini-delegation)
# Force Qwen for privacy-sensitive tasks
Skill(conjure:qwen-delegation)
Delegation Decision Framework
The delegation-core skill evaluates:
| Factor | Weight | Description |
|---|---|---|
| Context Size | High | Does input exceed Claude’s context? |
| Task Type | Medium | Is task better suited for another model? |
| Privacy Needs | High | Are there data residency requirements? |
| Quota Available | High | Do we have capacity on target service? |
| Cost | Low | Is delegation cost-effective? |
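One way to read the table is as a weighted vote. The numeric weights and the threshold of 6 below are illustrative, not conjure's actual tuning:

```python
# Weighted-vote sketch of the delegation factors; weights are assumptions.
WEIGHTS = {"context_size": 3, "task_type": 2, "privacy": 3, "quota": 3, "cost": 1}


def delegation_score(factors: dict[str, bool]) -> int:
    """Sum the weights of every factor that currently favors delegation."""
    return sum(WEIGHTS[name] for name, favors in factors.items() if favors)


factors = {"context_size": True, "task_type": False,
           "privacy": False, "quota": True, "cost": True}
print(delegation_score(factors) >= 6)  # True: enough high-weight factors line up
```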
Service Comparison
| Service | Strengths | Best For |
|---|---|---|
| Gemini | Large context (1M+ tokens) | Bulk file processing, long documents |
| Qwen | Local/private inference | Sensitive data, offline work |
Hook Behavior
Pre-Tool Use Hook
When reading large files:
[Conjure Bridge] File exceeds context threshold
Suggested action: Delegate to Gemini
Estimated tokens: 125,000
Quota available: Yes
Post-Tool Use Hook
When output is truncated:
[Conjure Bridge] Output truncated at 100,000 chars
Suggested action: Re-run with delegation
Recommended service: Gemini
Configuration
Environment Variables
# Gemini API key
export GEMINI_API_KEY=your-key
# Qwen MCP endpoint
export QWEN_MCP_ENDPOINT=http://localhost:8080
Quota Configuration
Edit conjure/config/quotas.yaml:
gemini:
hourly_limit: 1000
daily_limit: 10000
qwen:
hourly_limit: 500
daily_limit: 5000
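Once loaded, the quota config is just a nested mapping to check usage against. Here it is mirrored as a plain dict, with an illustrative `within_quota` helper:

```python
# The YAML above mirrored as a dict; within_quota is a sketch, not conjure's API.
QUOTAS = {
    "gemini": {"hourly_limit": 1000, "daily_limit": 10000},
    "qwen": {"hourly_limit": 500, "daily_limit": 5000},
}


def within_quota(service: str, used_this_hour: int, used_today: int) -> bool:
    limits = QUOTAS[service]
    return (used_this_hour < limits["hourly_limit"]
            and used_today < limits["daily_limit"])


print(within_quota("gemini", used_this_hour=450, used_today=2500))  # prints: True
```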
Integration Patterns
With Conservation
# Conserve detects high context usage
# and suggests delegation via conjure
Skill(conserve:context-optimization)
# -> Recommends: Skill(conjure:delegation-core)
With Sanctum
# Large repo analysis
Skill(sanctum:git-workspace-review)
# If repo too large:
# -> Suggests: make delegate-auto FILES="."
Dependencies
Conjure uses leyline for infrastructure:
conjure
|
v
leyline (quota-management, service-registry)
Best Practices
- Check Quota First: Run make quota-status before large delegations
- Use Auto Mode: Let conjure select the optimal service
- Monitor Costs: Review make usage-report weekly
- Cache Results: Store delegation results locally to avoid repeat calls
Related Plugins
- leyline: Provides quota management and service registry
- conserve: Detects when delegation is beneficial
hookify
Create custom behavioral rules through markdown configuration files.
Overview
Hookify provides a framework for defining behavioral rules that prevent unwanted actions through pattern matching. Rules are defined in markdown files and can be enabled, disabled, or customized per project.
Installation
/plugin install hookify@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| writing-rules | Guide for authoring behavioral rules | Creating new rules |
| rule-catalog | Pre-built behavioral rule templates | Installing common rules |
Commands
| Command | Description |
|---|---|
| /hookify | Create behavioral rules to prevent unwanted actions |
| /hookify:install | Install hookify rule from catalog |
| /hookify:list | List all hookify rules with status |
| /hookify:configure | Interactive rule enable/disable interface |
| /hookify:help | Display hookify help and documentation |
Usage Examples
Install a Rule
# Install from catalog
/hookify:install no-force-push
# List installed rules
/hookify:list --status
Create Custom Rule
# Create a new rule interactively
/hookify
# Configure existing rule
/hookify:configure no-force-push --disable
Rule Structure
Rules are markdown files with frontmatter:
---
name: no-force-push
trigger: PreToolUse
matcher: Bash
pattern: "git push.*--force"
action: block
message: "Force push blocked. Use --force-with-lease instead."
---
# No Force Push Rule
Prevents accidental force pushes that could overwrite remote history.
Integration
Hookify integrates with:
- abstract: Rule validation and testing
- imbue: Scope guard integration
- sanctum: Git workflow protection
Domain Specialists
Domain specialist plugins provide deep expertise in specific areas of software development.
Purpose
Domain plugins offer:
- Deep Expertise: Specialized knowledge for specific domains
- Workflow Automation: End-to-end processes for common tasks
- Best Practices: Curated patterns and anti-patterns
Plugins
| Plugin | Domain | Key Use Case |
|---|---|---|
| archetypes | Architecture | Paradigm selection |
| pensive | Code Review | Multi-faceted reviews |
| parseltongue | Python | Modern Python development |
| memory-palace | Knowledge | Spatial memory organization |
| spec-kit | Specifications | Spec-driven development |
| minister | Releases | Initiative tracking |
| attune | Projects | Full-cycle project development |
| scry | Media | Documentation recordings |
| scribe | Documentation | AI slop detection and cleanup |
When to Use
archetypes
Use when you need to:
- Choose an architecture for a new system
- Evaluate trade-offs between patterns
- Get implementation guidance for a paradigm
pensive
Use when you need to:
- Conduct thorough code reviews
- Audit security and architecture
- Review APIs, tests, or Makefiles
parseltongue
Use when you need to:
- Write modern Python (3.12+)
- Implement async patterns
- Package projects with uv
- Profile and optimize performance
memory-palace
Use when you need to:
- Organize complex knowledge
- Build spatial memory structures
- Maintain digital gardens
- Cache research efficiently
spec-kit
Use when you need to:
- Define features before implementation
- Generate structured task lists
- Maintain specification consistency
- Track implementation progress
minister
Use when you need to:
- Track GitHub initiatives
- Monitor release readiness
- Generate stakeholder reports
attune
Use when you need to:
- Brainstorm project ideas
- Create specifications from concepts
- Plan architecture and tasks
- Initialize projects with tooling
- Execute systematic implementation
scry
Use when you need to:
- Record terminal demos with VHS
- Capture browser sessions with Playwright
- Generate GIFs for documentation
- Compose multi-source tutorials
scribe
Use when you need to:
- Detect AI-generated content markers
- Clean up documentation slop
- Learn and apply writing styles
- Verify documentation accuracy
Dependencies
Most domain plugins depend on foundation layers:
archetypes (standalone)
pensive --> imbue, sanctum
parseltongue (standalone)
memory-palace (standalone)
spec-kit --> imbue
minister (standalone)
attune --> spec-kit, imbue
scry (standalone)
scribe --> imbue, conserve
Example Workflows
Architecture Decision
Skill(archetypes:architecture-paradigms)
# Interactive paradigm selection
# Returns: Detailed implementation guide
Full Code Review
/full-review
# Runs multiple review types:
# - architecture-review
# - api-review
# - bug-review
# - test-review
Python Project Setup
Skill(parseltongue:python-packaging)
Skill(parseltongue:python-testing)
Feature Development
/speckit-specify Add user authentication
/speckit-plan
/speckit-tasks
/speckit-implement
Full Project Lifecycle
/attune:brainstorm
# Socratic questioning to explore project idea
/attune:specify
# Create specification from brainstorm
/attune:blueprint
# Design architecture and break down tasks
/attune:init
# Initialize project with tooling
/attune:execute
# Execute implementation with TDD
Media Recording
/record-terminal
# Creates VHS tape script and records terminal to GIF
/record-browser
# Records browser session with Playwright
Documentation Cleanup
/slop-scan docs/
# Scans for AI-generated content markers
/doc-polish README.md
# Interactive cleanup of AI slop
/doc-verify README.md
# Validates documentation claims
Installation
Install based on your needs:
# Architecture work
/plugin install archetypes@claude-night-market
# Code review
/plugin install pensive@claude-night-market
# Python development
/plugin install parseltongue@claude-night-market
# Knowledge management
/plugin install memory-palace@claude-night-market
# Specification-driven development
/plugin install spec-kit@claude-night-market
# Release management
/plugin install minister@claude-night-market
# Full-cycle project development
/plugin install attune@claude-night-market
# Media recording
/plugin install scry@claude-night-market
# Documentation review
/plugin install scribe@claude-night-market
archetypes
Architecture paradigm selection and implementation planning.
Overview
Archetypes helps you choose the right architecture for your system. It provides an interactive paradigm selector and detailed implementation guides for 13 architectural patterns.
Installation
/plugin install archetypes@claude-night-market
Skills
Orchestrator
| Skill | Description | When to Use |
|---|---|---|
| architecture-paradigms | Interactive paradigm selector | Choosing architecture for new systems |
Paradigm Guides
| Skill | Architecture | Best For |
|---|---|---|
| architecture-paradigm-layered | N-tier | Simple web apps, internal tools |
| architecture-paradigm-hexagonal | Ports & Adapters | Infrastructure independence |
| architecture-paradigm-microservices | Distributed services | Large-scale enterprise |
| architecture-paradigm-event-driven | Async communication | Real-time processing |
| architecture-paradigm-serverless | Function-as-a-Service | Event-driven with minimal infra |
| architecture-paradigm-pipeline | Pipes-and-filters | ETL, media processing |
| architecture-paradigm-cqrs-es | CQRS + Event Sourcing | Audit trails, event replay |
| architecture-paradigm-microkernel | Plugin-based | Minimal core with extensions |
| architecture-paradigm-modular-monolith | Internal boundaries | Module separation without distribution |
| architecture-paradigm-space-based | Data-grid | High-scale stateful workloads |
| architecture-paradigm-service-based | Coarse-grained SOA | Modular without microservices |
| architecture-paradigm-functional-core | Functional Core, Imperative Shell | Superior testability |
| architecture-paradigm-client-server | Client-server | Clear client/server responsibilities |
Usage Examples
Interactive Selection
Skill(archetypes:architecture-paradigms)
# Claude will:
# 1. Ask about your requirements
# 2. Evaluate trade-offs
# 3. Recommend paradigms
# 4. Provide implementation guidance
Direct Paradigm Access
# Get specific paradigm details
Skill(archetypes:architecture-paradigm-hexagonal)
# Returns:
# - Core concepts
# - When to use
# - Implementation patterns
# - Example code
# - Trade-offs
Paradigm Comparison
By Complexity
| Level | Paradigms |
|---|---|
| Low | Layered, Client-Server |
| Medium | Modular Monolith, Service-Based, Functional Core |
| High | Microservices, Event-Driven, CQRS-ES, Space-Based |
By Team Size
| Team | Recommended |
|---|---|
| 1-3 | Layered, Functional Core, Modular Monolith |
| 4-10 | Hexagonal, Service-Based, Pipeline |
| 10+ | Microservices, Event-Driven |
By Scalability Need
| Need | Paradigms |
|---|---|
| Single server | Layered, Modular Monolith |
| Horizontal | Microservices, Serverless |
| Extreme | Space-Based, Event-Driven |
Selection Criteria
The paradigm selector evaluates:
- Team size and structure
- Scalability requirements
- Deployment constraints
- Data consistency needs
- Development velocity priorities
- Operational maturity
Example Output
Hexagonal Architecture
## Hexagonal Architecture (Ports & Adapters)
### Core Concepts
- Domain logic at center
- Ports define interfaces
- Adapters implement ports
- Infrastructure is pluggable
### When to Use
- Need to swap databases/frameworks
- Test-driven development focus
- Long-lived applications
- Multiple integration points
### Structure
src/
├── domain/ # Pure business logic
│ ├── models/
│ └── services/
├── ports/ # Interface definitions
│ ├── inbound/
│ └── outbound/
└── adapters/ # Implementations
├── web/
├── persistence/
└── external/
### Trade-offs
+ Easy testing via port mocking
+ Framework-agnostic domain
+ Clear dependency direction
- More initial structure
- Learning curve
Best Practices
- Start Simple: Begin with layered, evolve as needed
- Match Team: Don’t use microservices with a small team
- Consider Ops: Complex architectures need operational maturity
- Plan Evolution: Design for change, not perfection
Decision Tree
Start
|
v
Simple CRUD? --> Yes --> Layered
|
No
|
v
Need testability? --> Yes --> Functional Core or Hexagonal
|
No
|
v
High scale? --> Yes --> Event-Driven or Space-Based
|
No
|
v
Multiple teams? --> Yes --> Microservices or Service-Based
|
No
|
v
Modular Monolith
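The tree above can be encoded directly; `choose_paradigm` is a sketch that preserves the question order (the first "yes" wins):

```python
def choose_paradigm(simple_crud: bool, needs_testability: bool,
                    high_scale: bool, multiple_teams: bool) -> str:
    """Walk the decision tree top to bottom; the first 'yes' wins."""
    if simple_crud:
        return "Layered"
    if needs_testability:
        return "Functional Core or Hexagonal"
    if high_scale:
        return "Event-Driven or Space-Based"
    if multiple_teams:
        return "Microservices or Service-Based"
    return "Modular Monolith"


print(choose_paradigm(simple_crud=False, needs_testability=True,
                      high_scale=False, multiple_teams=False))
# prints: Functional Core or Hexagonal
```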
Related Plugins
- pensive: Architecture review complements paradigm selection
- spec-kit: Use after paradigm selection for implementation planning
pensive
Code review and analysis toolkit with specialized review skills.
Overview
Pensive provides deep code review capabilities across multiple dimensions: architecture, APIs, bugs, tests, and more. It orchestrates reviews intelligently, selecting the right skills for each codebase.
Installation
/plugin install pensive@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| unified-review | Review orchestration | Starting reviews (Claude picks tools) |
| api-review | API surface evaluation | Reviewing OpenAPI specs, library exports |
| architecture-review | Architecture assessment | Checking ADR alignment, design principles |
| bug-review | Bug hunting | Systematic search for logic errors |
| rust-review | Rust-specific checking | Auditing unsafe code, borrow patterns |
| test-review | Test quality review | Ensuring tests verify behavior |
| makefile-review | Makefile best practices | Reviewing Makefile quality |
| math-review | Mathematical correctness | Reviewing mathematical logic |
| shell-review | Shell script auditing | Exit codes, portability, safety patterns |
| fpf-review | FPF architecture review | Functional/Practical/Foundation analysis |
| safety-critical-patterns | NASA Power of 10 rules | Robust, verifiable code with context-appropriate rigor |
| code-refinement | Code quality analysis | Duplication, efficiency, clean code violations |
Commands
| Command | Description |
|---|---|
| /full-review | Unified review with intelligent skill selection |
| /api-review | Run API surface review |
| /architecture-review | Run architecture assessment |
| /bug-review | Run bug hunting |
| /rust-review | Run Rust-specific review |
| /test-review | Run test quality review |
| /makefile-review | Run Makefile review |
| /math-review | Run mathematical review |
| /shell-review | Run shell script safety review |
| /fpf-review | Run FPF architecture review |
| /skill-review | Analyze skill runtime metrics and stability gaps (canonical) |
| /skill-history | View recent skill executions |
Note: For static skill quality analysis (frontmatter, structure), use abstract:skill-auditor instead.
Agents
| Agent | Description |
|---|---|
| code-reviewer | Expert code review for bugs, security, quality |
| architecture-reviewer | Principal-level architecture specialist |
| rust-auditor | Expert Rust security and safety auditor |
Usage Examples
Full Review
/full-review
# Claude will:
# 1. Analyze codebase structure
# 2. Select relevant review skills
# 3. Execute reviews in priority order
# 4. Synthesize findings
# 5. Provide actionable recommendations
Specific Reviews
# Architecture review
/architecture-review
# API review
/api-review
# Bug hunting
/bug-review
# Test quality
/test-review
Manual Skill Invocation
Skill(pensive:architecture-review)
# Checks:
# - ADR compliance
# - Dependency direction
# - Layer violations
# - Design pattern usage
Review Depth
Each review skill operates at multiple levels:
| Level | Description | Time |
|---|---|---|
| Quick | High-level scan | 1-2 min |
| Standard | Thorough review | 5-10 min |
| Deep | Exhaustive analysis | 15+ min |
Specify depth when invoking:
/architecture-review --depth deep
Review Categories
Architecture Review
- ADR alignment
- Dependency analysis
- Layer boundary violations
- Pattern consistency
- Coupling metrics
API Review
- Endpoint consistency
- Error response patterns
- Authentication/authorization
- Versioning strategy
- Documentation completeness
Bug Review
- Logic errors
- Edge cases
- Race conditions
- Resource leaks
- Error handling gaps
Test Review
- Coverage gaps
- Test isolation
- Assertion quality
- Mocking patterns
- Edge case coverage
Rust Review
- Unsafe code audit
- Borrow checker patterns
- Memory safety
- Concurrency safety
- Idiomatic usage
Dependencies
Pensive builds on foundation plugins:
pensive
|
+--> imbue (review-core, evidence-logging)
|
+--> sanctum (git-workspace-review)
Workflow Integration
Pre-PR Review
# Before opening PR
Skill(sanctum:git-workspace-review)
/full-review
# Address findings
# Then create PR
Post-Merge Review
# After merge, deep review
/architecture-review --depth deep
Targeted Review
# Review specific area
/api-review src/api/
Superpowers Integration
| Command | Enhancement |
|---|---|
| /full-review | Uses systematic-debugging for four-phase analysis |
| /full-review | Uses verification-before-completion for evidence |
Output Format
Reviews produce structured output:
## Review Summary
### Critical Issues
1. [BUG] Race condition in UserService.update()
- Location: src/services/user.ts:45
- Impact: Data corruption under load
- Recommendation: Add mutex lock
### Warnings
1. [ARCH] Layer violation detected
- Controllers importing from repositories
- Recommendation: Add service layer
### Suggestions
1. [TEST] Missing edge case coverage
- UserService.delete() lacks null check test
Related Plugins
- imbue: Provides review scaffolding
- sanctum: Provides workspace context
- archetypes: Paradigm context for architecture review
parseltongue
Modern Python development suite for testing, performance, async patterns, and packaging.
Overview
Parseltongue brings Python 3.12+ best practices to your workflow. It covers the full development lifecycle: testing with pytest, performance optimization, async patterns, and modern packaging with uv.
Installation
/plugin install parseltongue@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| python-testing | Pytest and TDD workflows | Writing and running tests |
| python-performance | Profiling and optimization | Debugging slow code |
| python-async | Async programming patterns | Implementing asyncio |
| python-packaging | Modern packaging with uv | Managing pyproject.toml |
Commands
| Command | Description |
|---|---|
| /analyze-tests | Report on test suite health |
| /run-profiler | Profile code execution |
| /check-async | Validate async patterns |
Agents
| Agent | Description |
|---|---|
| python-pro | Master Python 3.12+ with modern features |
| python-tester | Expert testing for pytest, TDD, mocking |
| python-optimizer | Expert performance optimization |
Usage Examples
Test Analysis
/analyze-tests
# Reports:
# - Coverage metrics
# - Test distribution
# - Slow tests
# - Missing coverage areas
# - Anti-patterns detected
Profiling
/run-profiler src/heavy_function.py
# Outputs:
# - CPU time breakdown
# - Memory usage
# - Hot paths
# - Optimization suggestions
Async Validation
/check-async src/async_module.py
# Checks:
# - Proper await usage
# - Event loop handling
# - Async context managers
# - Concurrency patterns
Skill Invocation
Skill(parseltongue:python-testing)
# Provides:
# - Pytest configuration patterns
# - TDD workflow guidance
# - Mocking strategies
# - Fixture patterns
Python 3.12+ Features
Parseltongue emphasizes modern Python:
Type Hints
# Modern syntax (3.10+)
def process(data: list[str] | None) -> dict[str, int]:
...
Pattern Matching
# Structural pattern matching (3.10+)
match response:
case {"status": "ok", "data": data}:
return data
case {"status": "error", "message": msg}:
raise ValueError(msg)
Exception Groups
# Exception groups (3.11+)
try:
async with asyncio.TaskGroup() as tg:
tg.create_task(task1())
tg.create_task(task2())
except* ValueError as eg:
for exc in eg.exceptions:
handle(exc)
Testing Patterns
TDD Workflow
Skill(parseltongue:python-testing)
# RED-GREEN-REFACTOR:
# 1. Write failing test
# 2. Implement minimal code
# 3. Refactor with tests green
Fixture Patterns
# Recommended patterns
@pytest.fixture
def db_session(tmp_path):
    """Database fixture backed by a temporary path."""
db = Database(tmp_path / "test.db")
yield db
db.close()
@pytest.fixture
def user(db_session):
"""User fixture depending on db."""
return db_session.create_user("test")
Mocking Strategies
# Strategic mocking
def test_api_call(mocker):
mock_response = mocker.patch("requests.get")
mock_response.return_value.json.return_value = {"status": "ok"}
result = fetch_data()
assert result["status"] == "ok"
Performance Optimization
Profiling Tools
# cProfile integration
python -m cProfile -s cumtime script.py
# Memory profiling
from memory_profiler import profile
@profile
def memory_heavy():
...
Optimization Patterns
- Generators over lists: Save memory
- Local variables: Faster lookup
- Built-in functions: C-optimized
- Lazy evaluation: Defer computation
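The first bullet is easy to demonstrate: a list comprehension materializes all 100,000 squares at once, while a generator expression keeps only one element in flight:

```python
import sys

# List comprehension materializes all 100,000 squares at once.
squares_list = [n * n for n in range(100_000)]
# Generator expression computes them lazily, one at a time.
squares_gen = (n * n for n in range(100_000))

print(sys.getsizeof(squares_list) > 100 * sys.getsizeof(squares_gen))  # True
print(sum(squares_gen) == sum(squares_list))  # True: same values, far less memory
```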
Async Patterns
Recommended Structure
async def main():
async with aiohttp.ClientSession() as session:
tasks = [fetch(session, url) for url in urls]
results = await asyncio.gather(*tasks)
return results
if __name__ == "__main__":
asyncio.run(main())
Anti-Patterns to Avoid
- Blocking calls in async functions
- Creating event loops inside coroutines
- Ignoring exceptions in fire-and-forget tasks
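The first anti-pattern is worth seeing side by side: `time.sleep` stalls the whole event loop, while `asyncio.sleep` yields control so sibling tasks overlap:

```python
import asyncio
import time


async def bad_sleep():
    time.sleep(0.1)  # anti-pattern: blocks the entire event loop for 0.1s


async def good_sleep():
    await asyncio.sleep(0.1)  # yields control so other tasks can run


async def main():
    start = time.perf_counter()
    # Three concurrent good sleeps finish in ~0.1s, not 0.3s.
    await asyncio.gather(good_sleep(), good_sleep(), good_sleep())
    return time.perf_counter() - start


elapsed = asyncio.run(main())
print(elapsed < 0.3)  # True: the sleeps overlapped
```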
Packaging with uv
pyproject.toml
[project]
name = "my-package"
version = "1.0.0"
dependencies = ["requests>=2.28"]
[project.optional-dependencies]
dev = ["pytest", "ruff", "mypy"]
[tool.uv]
index-url = "https://pypi.org/simple"
Commands
# Install with uv
uv pip install -e ".[dev]"
# Lock dependencies
uv pip compile pyproject.toml -o requirements.lock
# Sync environment
uv pip sync requirements.lock
Superpowers Integration
| Skill | Enhancement |
|---|---|
| python-testing | Uses test-driven-development for TDD cycles |
| python-testing | Uses testing-anti-patterns for detection |
Related Plugins
- leyline: Provides pytest-config patterns
- sanctum: Test updates integrate with test-updates skill
memory-palace
Knowledge organization using spatial memory techniques.
Overview
Memory Palace applies the ancient method of loci to digital knowledge management. It helps you build “palaces” - structured knowledge repositories that use spatial metaphors for organization and retrieval.
Installation
/plugin install memory-palace@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| memory-palace-architect | Building virtual palaces | Organizing complex concepts |
| knowledge-locator | Spatial search | Finding stored information |
| knowledge-intake | Intake and curation | Processing new information |
| digital-garden-cultivator | Digital garden maintenance | Long-term knowledge base care |
| session-palace-builder | Session-specific palaces | Temporary working knowledge |
Commands
| Command | Description |
|---|---|
| /palace | Manage memory palaces |
| /garden | Manage digital gardens |
| /navigate | Search and traverse palaces |
Agents
| Agent | Description |
|---|---|
| palace-architect | Designs memory palace architectures |
| knowledge-navigator | Searches and retrieves from palaces |
| knowledge-librarian | Evaluates and routes knowledge |
| garden-curator | Maintains digital gardens |
Hooks
| Hook | Type | Description |
|---|---|---|
| research_interceptor.py | PreToolUse | Checks local knowledge before web searches |
| url_detector.py | UserPromptSubmit | Detects URLs for intake |
| local_doc_processor.py | PostToolUse | Processes local docs after reads |
| web_content_processor.py | PostToolUse | Processes web content for storage |
Usage Examples
Create a Palace
/palace create "Python Async Patterns"
# Creates:
# - Palace structure
# - Entry rooms
# - Navigation paths
Add Knowledge
Skill(memory-palace:knowledge-intake)
# Processes:
# - New information
# - Categorization
# - Spatial placement
# - Cross-references
Navigate Knowledge
/navigate "async context managers"
# Returns:
# - Matching rooms
# - Related concepts
# - Cross-references
# - Source citations
Maintain Garden
/garden cultivate
# Performs:
# - Pruning outdated content
# - Strengthening connections
# - Identifying gaps
# - Suggesting additions
Cache Modes
The research interceptor supports four modes:
| Mode | Behavior | Use Case |
|---|---|---|
| cache_only | Deny web when no cache match | Offline work, audits |
| cache_first | Check cache, fall back to web | Default research |
| augment | Blend cache with live results | When freshness matters |
| web_only | Bypass Memory Palace | Incident response |
Set the mode in `hooks/memory-palace-config.yaml`:
research_mode: cache_first
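The four modes reduce to a small dispatch over the mode and whether the cache matched. This is a hypothetical sketch of the routing logic; the interceptor's actual function names and return values may differ:

```python
# Hypothetical routing sketch; not the plugin's actual API.
def route_query(mode: str, cache_hit: bool) -> str:
    if mode == "cache_only":
        return "cache" if cache_hit else "deny"   # Offline work, audits
    if mode == "cache_first":
        return "cache" if cache_hit else "web"    # Default research
    if mode == "augment":
        return "cache+web" if cache_hit else "web"  # Freshness matters
    if mode == "web_only":
        return "web"                              # Bypass Memory Palace
    raise ValueError(f"unknown mode: {mode}")
```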
Palace Architecture
Palaces use spatial metaphors:
Palace: "Python Async"
├── Entry Hall
│ └── Overview concepts
├── Library Wing
│ ├── asyncio basics
│ ├── coroutines
│ └── event loops
├── Practice Room
│ ├── code examples
│ └── exercises
└── Reference Archive
├── official docs
└── external sources
Knowledge Intake Flow
New Information
|
v
[Novelty Check] --> Duplicate? --> Skip
|
No
v
[Domain Alignment] --> Matches interests? --> Flag for intake
|
Yes
v
[Palace Placement] --> Store in appropriate room
|
v
[Cross-Reference] --> Link to related concepts
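The flow above might be sketched in Python as follows; the novelty and alignment checks here are simplified stand-ins for the plugin's real scoring:

```python
def intake(entry: dict, known_ids: set, interests: set) -> str:
    """Route one piece of new information through the intake flow."""
    if entry["id"] in known_ids:               # Novelty check
        return "skip"                          # Duplicate -> skip
    if not set(entry["domains"]) & interests:  # Domain alignment
        return "flag"                          # Flag for manual intake
    known_ids.add(entry["id"])                 # Palace placement
    return "stored"                            # Cross-referencing would follow
```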
Embedding Support
Optional semantic search via embeddings:
# Build embeddings
cd plugins/memory-palace
uv run python scripts/build_embeddings.py --provider local
# Toggle at runtime
export MEMORY_PALACE_EMBEDDINGS_PROVIDER=local
Telemetry
Track research decisions:
# data/telemetry/memory-palace.csv
timestamp,query,decision,novelty_score,domains,duplicates
2025-01-15,async patterns,cache_hit,0.2,python,entry-123
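Since the telemetry file is plain CSV, it can be inspected with the standard library. This sketch parses the sample row shown above:

```python
import csv
import io

# The telemetry sample from above, as written to the CSV file.
sample = """timestamp,query,decision,novelty_score,domains,duplicates
2025-01-15,async patterns,cache_hit,0.2,python,entry-123
"""

rows = list(csv.DictReader(io.StringIO(sample)))
cache_hits = [r for r in rows if r["decision"] == "cache_hit"]
```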
Curation Workflow
Regular maintenance keeps palaces useful:
- Review intake queue: `data/intake_queue.jsonl`
- Approve/reject items: Based on value and fit
- Update vitality scores: Mark evergreen vs. probationary
- Prune stale content: Archive outdated information
- Document in curation log: `docs/curation-log.md`
Digital Gardens
Unlike palaces (structured), gardens are organic:
/garden status
# Shows:
# - Growth rate
# - Connection density
# - Orphan nodes
# - Suggested links
Related Plugins
- conserve: Memory Palace reduces redundant web fetches
- imbue: Evidence logging integrates with knowledge intake
spec-kit
Specification-Driven Development (SDD) toolkit for structured feature development.
Overview
Spec-Kit enforces “define before implement” - you write specifications first, generate plans, create tasks, then execute. This reduces wasted effort and validates features match requirements.
Installation
/plugin install spec-kit@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| spec-writing | Specification authoring | Writing requirements from ideas |
| task-planning | Task generation | Breaking specs into tasks |
| speckit-orchestrator | Workflow coordination | Managing spec-to-code lifecycle |
Commands
| Command | Description |
|---|---|
| /speckit-specify | Create a new specification |
| /speckit-plan | Generate implementation plan |
| /speckit-tasks | Generate ordered tasks |
| /speckit-implement | Execute tasks |
| /speckit-analyze | Check artifact consistency |
| /speckit-checklist | Generate custom checklist |
| /speckit-clarify | Ask clarifying questions |
| /speckit-constitution | Create project constitution |
| /speckit-startup | Bootstrap workflow at session start |
Agents
| Agent | Description |
|---|---|
| spec-analyzer | Validates artifact consistency |
| task-generator | Creates implementation tasks |
| implementation-executor | Executes tasks and writes code |
Usage Examples
Full SDD Workflow
# 1. Create specification
/speckit-specify Add user authentication with OAuth2
# 2. Clarify requirements
/speckit-clarify
# 3. Generate plan
/speckit-plan
# 4. Create tasks
/speckit-tasks
# 5. Execute implementation
/speckit-implement
# 6. Verify consistency
/speckit-analyze
Quick Specification
/speckit-specify Add dark mode toggle
# Claude will:
# 1. Ask clarifying questions
# 2. Generate spec.md
# 3. Identify dependencies
# 4. Suggest next steps
Session Startup
/speckit-startup
# Loads:
# - Existing spec.md
# - Current plan.md
# - Outstanding tasks
# - Progress status
# - Constitution (principles/constraints)
Artifact Structure
Spec-Kit creates three main artifacts:
spec.md
# Feature: User Authentication
## Overview
OAuth2-based authentication for web application.
## Requirements
- [ ] Google OAuth integration
- [ ] Session management
- [ ] Token refresh
## Acceptance Criteria
1. Users can sign in with Google
2. Sessions persist for 7 days
3. Tokens refresh automatically
## Non-Functional Requirements
- Login latency < 2s
- 99.9% availability
plan.md
# Implementation Plan
## Phase 1: OAuth Setup
- Configure Google OAuth credentials
- Implement OAuth callback handler
## Phase 2: Session Management
- Design session schema
- Implement token storage
## Phase 3: Integration
- Connect to frontend
- Add logout functionality
tasks.md
# Tasks
## Phase 1 Tasks
- [ ] Create OAuth config module
- [ ] Implement /auth/login endpoint
- [ ] Implement /auth/callback endpoint
## Phase 2 Tasks
- [ ] Design session table schema
- [ ] Create session service
- [ ] Implement token refresh logic
Constitution
Project constitution defines principles:
/speckit-constitution
# Creates:
# - Coding standards
# - Architecture principles
# - Testing requirements
# - Documentation standards
Consistency Analysis
/speckit-analyze
# Checks:
# - spec.md requirements map to plan.md
# - plan.md phases map to tasks.md
# - No orphan tasks
# - No missing implementations
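One of these checks, orphan-task detection, can be sketched as a set lookup. Artifact parsing is omitted; the task and phase names are taken from the examples above, and the orphan entry is hypothetical:

```python
def find_orphans(tasks: dict, phases: set) -> list:
    """Return tasks that reference a phase plan.md does not define."""
    return [task for task, phase in tasks.items() if phase not in phases]

phases = {"Phase 1", "Phase 2"}
tasks = {
    "Create OAuth config module": "Phase 1",
    "Design session table schema": "Phase 2",
    "Add telemetry dashboard": "Phase 9",  # hypothetical orphan
}
orphans = find_orphans(tasks, phases)
```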
Checklist Generation
/speckit-checklist
# Generates custom checklist:
# - [ ] All acceptance criteria met
# - [ ] Tests written
# - [ ] Documentation updated
# - [ ] Security reviewed
Dependencies
Spec-Kit uses imbue for analysis:
spec-kit
|
v
imbue (diff-analysis, evidence-logging)
Superpowers Integration
| Command | Enhancement |
|---|---|
| /speckit-clarify | Uses brainstorming for questions |
| /speckit-plan | Uses writing-plans for structure |
| /speckit-tasks | Uses executing-plans, systematic-debugging |
| /speckit-implement | Uses executing-plans, systematic-debugging |
| /speckit-analyze | Uses systematic-debugging, verification-before-completion |
| /speckit-checklist | Uses verification-before-completion |
Best Practices
- Specify First: Never skip the specification phase
- Clarify Ambiguity: Use `/speckit-clarify` liberally
- Small Tasks: Break into 1-2 hour chunks
- Verify Often: Run `/speckit-analyze` after changes
- Update Artifacts: Keep spec/plan/tasks in sync
Workflow Tips
Starting New Feature
/speckit-specify [feature description]
/speckit-clarify
/speckit-plan
Resuming Work
/speckit-startup
# Review current state
/speckit-implement
Before PR
/speckit-analyze
/speckit-checklist
Related Plugins
- imbue: Provides analysis patterns
- sanctum: Integrates for PR preparation after implementation
minister
GitHub initiative tracking and release management.
Overview
Minister helps you track project initiatives, monitor release readiness, and generate stakeholder reports. It bridges the gap between development work and project management.
Installation
/plugin install minister@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| github-initiative-pulse | Initiative progress tracking | Weekly status reports |
| release-health-gates | Release readiness checks | Before releasing |
Scripts
| Script | Description |
|---|---|
| tracker.py | CLI for initiative database and reporting |
Usage Examples
Initiative Tracking
Skill(minister:github-initiative-pulse)
# Generates:
# - Issue completion rates
# - Milestone progress
# - Velocity trends
# - Risk flags
Release Readiness
Skill(minister:release-health-gates)
# Checks:
# - CI status
# - Documentation completeness
# - Breaking change inventory
# - Risk assessment
CLI Usage
# List initiatives
python tracker.py list
# Show initiative details
python tracker.py show auth-v2
# Generate weekly report
python tracker.py report --week
# Update status
python tracker.py update auth-v2 --status in-progress
Initiative Structure
Initiatives track work across issues and PRs:
initiative:
id: auth-v2
title: "Authentication v2"
status: in-progress
milestones:
- name: "OAuth Setup"
due: 2025-01-30
issues: ["#42", "#43", "#44"]
- name: "Session Management"
due: 2025-02-15
issues: ["#45", "#46"]
metrics:
velocity: 3.5 issues/week
completion: 65%
risk: low
Health Gates
Release health gates verify readiness:
| Gate | Checks |
|---|---|
| CI | All checks passing, no flaky tests |
| Docs | README updated, CHANGELOG complete |
| Breaking | Breaking changes documented |
| Security | No critical vulnerabilities |
| Coverage | Test coverage above threshold |
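Gate evaluation might look like the following sketch, using the coverage and vulnerability thresholds from the tracker configuration; the gate names and metric fields are illustrative, not minister's actual schema:

```python
def evaluate_gates(metrics: dict, coverage_threshold: int = 80,
                   max_critical_vulns: int = 0) -> dict:
    """Map raw release metrics to PASS/WARN/FAIL gate statuses."""
    return {
        "Coverage": "PASS" if metrics["coverage"] >= coverage_threshold else "FAIL",
        "Security": "PASS" if metrics["critical_vulns"] <= max_critical_vulns else "FAIL",
        "Breaking": "WARN" if metrics["breaking_changes"] else "PASS",
    }

# Mirrors the sample gate output: 87% coverage, 0 vulns, 2 breaking changes.
gates = evaluate_gates({"coverage": 87, "critical_vulns": 0, "breaking_changes": 2})
```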
Gate Output
## Release Health: v2.0.0
### CI Status: PASS
- All 156 tests passing
- Build time: 3m 42s
- No flaky tests detected
### Documentation: PASS
- README updated
- CHANGELOG has v2.0.0 section
- API docs generated
### Breaking Changes: WARN
- 2 breaking changes identified
- Migration guide needed for UserService API
### Security: PASS
- No critical/high vulnerabilities
- Dependencies up to date
### Coverage: PASS
- 87% coverage (threshold: 80%)
## Recommendation: CONDITIONAL RELEASE
Address breaking change documentation before release.
Reporting
Weekly Report
python tracker.py report --week
# Outputs:
# - Initiatives summary
# - This week's completions
# - Next week's focus
# - Blockers and risks
Stakeholder Summary
python tracker.py report --stakeholder
# Generates executive summary:
# - High-level progress
# - Key achievements
# - Timeline updates
# - Resource needs
Integration with GitHub
Minister reads from GitHub:
# Sync initiative from GitHub milestone
python tracker.py sync --milestone "v2.0"
# Pull issue status
python tracker.py refresh auth-v2
Superpowers Integration
| Skill | Enhancement |
|---|---|
| issue-management | Uses systematic-debugging for investigation |
Configuration
tracker.yaml
github:
repo: athola/my-project
token_env: GITHUB_TOKEN
initiatives_dir: .minister/initiatives
reports_dir: .minister/reports
health_gates:
coverage_threshold: 80
max_critical_vulns: 0
require_changelog: true
Workflow Examples
Sprint Planning
# Check initiative status
python tracker.py list
# Update priorities
python tracker.py update auth-v2 --priority high
# Generate planning report
python tracker.py report --planning
Release Preparation
# Run health gates
Skill(minister:release-health-gates)
# Address any failures
# Then re-run until all pass
# Tag release
git tag v2.0.0
Weekly Standup
# Generate pulse report
Skill(minister:github-initiative-pulse)
# Share with team
# Update tracker based on discussion
Related Plugins
- sanctum: PR preparation integrates with release workflow
- imbue: Feature review complements initiative tracking
Attune
Full-cycle project development from ideation to implementation.
Overview
Attune integrates the brainstorm-plan-execute workflow from superpowers with spec-driven development from spec-kit to provide a complete project lifecycle.
Workflow
graph LR
A[Brainstorm] --> B[War Room]
B --> C[Specify]
C --> D[Plan]
D --> E[Initialize]
E --> F[Execute]
style A fill:#e1f5fe
style B fill:#fff9c4
style C fill:#f3e5f5
style D fill:#fff3e0
style E fill:#e8f5e8
style F fill:#fce4ec
Commands
| Command | Phase | Description |
|---|---|---|
| /attune:brainstorm | 1. Ideation | Socratic questioning to explore problem space |
| /attune:war-room | 2. Deliberation | Multi-LLM expert deliberation with reversibility-based routing |
| /attune:specify | 3. Specification | Create detailed specs from war-room decision |
| /attune:blueprint | 4. Planning | Design architecture and break down tasks |
| /attune:init | 5. Initialization | Generate or update project structure with tooling |
| /attune:execute | 6. Implementation | Execute tasks with TDD discipline |
| /attune:upgrade-project | Maintenance | Add configs to existing projects |
| /attune:mission | Full Cycle | Run entire lifecycle as a single mission with state detection |
| /attune:validate | Quality | Validate project structure |
Supported Languages
- Python: uv, pytest, ruff, mypy, pre-commit
- Rust: cargo, clippy, rustfmt, CI workflows
- TypeScript/React: npm/pnpm/yarn, vite, jest, eslint, prettier
What Gets Configured
- ✅ Git initialization with detailed .gitignore
- ✅ GitHub Actions workflows (test, lint, typecheck, publish)
- ✅ Pre-commit hooks (formatting, linting, security)
- ✅ Makefile with standard development targets
- ✅ Dependency management (uv/cargo/package managers)
- ✅ Project structure (src/, tests/, README.md)
Quick Start
New Python Project
# Interactive mode
/attune:init
# Non-interactive
/attune:init --lang python --name my-project --author "Your Name"
Full Cycle Workflow
# 1. Brainstorm the idea
/attune:brainstorm
# 2. War room deliberation (auto-routes by complexity)
/attune:war-room --from-brainstorm
# 3. Create specification
/attune:specify
# 4. Plan architecture
/attune:blueprint
# 5. Initialize project
/attune:init
# 6. Execute implementation
/attune:execute
Skills
| Skill | Purpose |
|---|---|
| project-brainstorming | Socratic ideation workflow |
| war-room | Multi-LLM expert council with Type 1/2 decision routing |
| war-room-checkpoint | Inline RS assessment for embedded escalation during workflow |
| project-specification | Spec creation from war-room decision |
| project-planning | Architecture and task breakdown |
| project-init | Interactive project initialization |
| project-execution | Systematic implementation |
| makefile-generation | Generate language-specific Makefiles |
| mission-orchestrator | Unified brainstorm-specify-plan-execute lifecycle orchestrator |
| workflow-setup | Configure CI/CD pipelines |
| precommit-setup | Set up code quality hooks |
Agents
| Agent | Role |
|---|---|
| project-architect | Guides full-cycle workflow (brainstorm → plan) |
| project-implementer | Executes implementation with TDD |
Integration
Attune combines capabilities from:
- superpowers: Brainstorming, planning, execution workflows
- spec-kit: Specification-driven development
- abstract: Plugin and skill authoring for plugin projects
War Room Integration
The war room is a mandatory phase after brainstorming. It automatically routes to the appropriate deliberation intensity based on Reversibility Score (RS):
| Mode | RS Range | Duration | Description |
|---|---|---|---|
| Express | ≤ 0.40 | <2 min | Quick decision by Chief Strategist |
| Lightweight | 0.41-0.60 | 5-10 min | 3-expert panel |
| Full Council | 0.61-0.80 | 15-30 min | 7-expert deliberation |
| Delphi | > 0.80 | 30-60 min | Iterative consensus for critical decisions |
The war-room-checkpoint skill can also trigger additional deliberation during planning or execution when high-stakes decisions arise.
Examples
Initialize Python CLI Project
/attune:init --lang python --type cli
Creates:
- `pyproject.toml` with uv configuration
- `Makefile` with test/lint/format targets
- GitHub Actions workflows
- Pre-commit hooks for ruff and mypy
- Basic CLI structure
Upgrade Existing Project
# Add missing configs
/attune:upgrade-project
# Validate structure
/attune:validate
Configuration
Custom Templates
Place custom templates in:
- `~/.claude/attune/templates/` (user-level)
- `.attune/templates/` (project-level)
- `$ATTUNE_TEMPLATES_PATH` (environment variable)
Reference Projects
Templates sync from reference projects:
- `simple-resume` (Python)
- `skrills` (multi-language)
- `importobot` (automation)
scribe
Documentation review, cleanup, and generation with AI slop detection.
Overview
Scribe helps maintain high-quality documentation by detecting AI-generated content patterns (“slop”), learning writing styles from exemplars, and generating or remediating documentation. It integrates with sanctum’s documentation workflows.
Installation
/plugin install scribe@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| slop-detector | Detect AI-generated content markers | Scanning docs for AI tells |
| style-learner | Extract writing style from exemplar text | Creating style profiles |
| doc-generator | Generate/remediate documentation | Writing or fixing docs |
Commands
| Command | Description |
|---|---|
| /slop-scan | Scan files for AI slop markers |
| /style-learn | Create style profile from examples |
| /doc-polish | Clean up AI-generated content |
| /doc-generate | Generate new documentation |
| /doc-verify | Validate documentation claims with proof-of-work |
Agents
| Agent | Description |
|---|---|
| doc-editor | Interactive documentation editing |
| slop-hunter | Comprehensive slop detection |
| doc-verifier | QA validation using proof-of-work methodology |
Usage Examples
Detect AI Slop
# Scan current directory
/slop-scan
# Scan specific file with fix suggestions
/slop-scan README.md --fix
Clean Up Content
# Interactive polish
/doc-polish docs/guide.md
# Polish all markdown files
/doc-polish **/*.md
Learn a Style
# Create style profile from examples
/style-learn good-examples/*.md --name house-style
# Generate with learned style
/doc-generate readme --style house-style
Verify Documentation
# Verify README claims and commands
/doc-verify README.md
# Verify with strict mode
/doc-verify docs/ --strict --report qa-report.md
AI Slop Detection
Scribe detects patterns that reveal AI-generated content:
Tier 1 Words (Highest Confidence)
Words that appear dramatically more often in AI text: delve, tapestry, realm, embark, beacon, multifaceted, nuanced, pivotal, meticulous, showcasing, leveraging, streamline, comprehensive.
Phrase Patterns
Formulaic constructions like “In today’s fast-paced world,” “cannot be overstated,” “navigate the complexities,” and “treasure trove of.”
Structural Markers
Overuse of em dashes, excessive bullet points, uniform sentence length, perfect grammar without contractions.
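A toy version of tier-1 word detection: count occurrences of a few marker words from the list above. The real slop-detector also weighs phrase patterns and structural markers, so this sketch is illustrative only:

```python
import re

# Subset of the tier-1 marker words listed above.
TIER1 = {"delve", "tapestry", "realm", "pivotal"}

def slop_hits(text: str) -> dict:
    """Count occurrences of tier-1 marker words in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    return {w: words.count(w) for w in TIER1 if w in words}

hits = slop_hits("Let's delve into the rich tapestry of this pivotal realm.")
```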
Writing Principles
Scribe enforces these principles:
- Ground every claim: Use specifics, not adjectives
- Trim crutches: No formulaic openers or closers
- Show perspective: Include reasoning and trade-offs
- Vary structure: Mix sentence lengths, balance bullets with prose
- Use active voice: Direct statements over passive constructions
Vocabulary Substitutions
| Instead of | Use |
|---|---|
| leverage | use |
| utilize | use |
| comprehensive | thorough |
| robust | solid |
| facilitate | help |
| optimize | improve |
| delve | explore |
| embark | start |
Integration
Scribe integrates with sanctum documentation workflows:
| Sanctum Command | Scribe Integration |
|---|---|
| /pr-review | Runs slop-detector on changed .md files |
| /update-docs | Runs slop-detector on edited docs |
| /update-readme | Runs slop-detector on README |
| /prepare-pr | Verifies PR descriptions with slop-detector |
Dependencies
Scribe uses skills from other plugins:
- `imbue:proof-of-work`: Evidence-based verification (used by `doc-verifier`)
- `conserve:bloat-detector`: Token optimization
scry
Media generation for terminal recordings, browser recordings, GIF processing, and media composition.
Overview
Scry creates documentation assets through terminal recordings (VHS), browser automation recordings (Playwright), GIF processing, and multi-source media composition. Use it to build tutorials, demos, and README assets.
Installation
/plugin install scry@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| vhs-recording | Terminal recordings using VHS tape scripts | CLI demos, tool tutorials |
| browser-recording | Browser recordings using Playwright | Web UI walkthroughs |
| gif-generation | GIF processing and optimization | README assets, docs |
| media-composition | Combine multiple media sources | Full tutorials |
Commands
| Command | Description |
|---|---|
| /record-terminal | Create terminal recording with VHS |
| /record-browser | Record browser session with Playwright |
Usage Examples
Terminal Recording
/record-terminal
# Or use the skill directly
Skill(scry:vhs-recording)
Creates a VHS tape script and records terminal output to GIF or video.
Browser Recording
/record-browser
# Or use the skill directly
Skill(scry:browser-recording)
Records browser sessions with Playwright for web UI documentation.
GIF Generation
Skill(scry:gif-generation)
# Optimizes recordings for documentation:
# - Resize for README display
# - Compress file size
# - Adjust frame rate
Media Composition
Skill(scry:media-composition)
# Combines assets:
# - Terminal + browser recordings
# - Multiple clips into tutorials
# - Add transitions and captions
VHS Tape Script Example
VHS uses tape scripts to define recordings:
# demo.tape
Output demo.gif
Set FontSize 16
Set Width 1200
Set Height 600
Type "echo 'Hello, World!'"
Sleep 500ms
Enter
Sleep 2s
Run with:
vhs demo.tape
Dependencies
VHS (Terminal Recording)
macOS:
brew install charmbracelet/tap/vhs
brew install ttyd ffmpeg
Linux (Debian/Ubuntu):
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg
echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list
sudo apt update && sudo apt install vhs
sudo apt install ffmpeg
Playwright (Browser Recording)
npm install -g playwright
npx playwright install
FFmpeg (Media Processing)
Required for GIF generation and media composition.
# macOS
brew install ffmpeg
# Linux
sudo apt install ffmpeg
Workflow Patterns
Tutorial Creation
- Record terminal demo with `vhs-recording`
- Record web UI walkthrough with `browser-recording`
- Combine with `media-composition`
- Optimize output with `gif-generation`
Quick Demo
/record-terminal
# Creates demo.gif ready for README
Documentation Assets
# Generate multiple GIFs for docs
Skill(scry:vhs-recording)
Skill(scry:gif-generation)
# Move outputs to docs/images/
Integration with sanctum
Scry integrates with sanctum for PR and documentation workflows:
# Generate demo for PR description
/record-terminal
# Include in PR body
/sanctum:pr
Related Plugins
- sanctum: PR preparation uses scry for demo assets
- memory-palace: Store and organize media assets
Tutorials
Step-by-step guides for common workflows and advanced features.
Available Tutorials
| Tutorial | Description | Level |
|---|---|---|
| Skills Showcase | Discover, validate, and use skills in Claude Code | Beginner |
| Cache Modes | Memory Palace cache mode configuration | Intermediate |
| Embedding Upgrade | Adding semantic search to Memory Palace | Advanced |
| Memory Palace Curation | Knowledge intake and curation workflow | Intermediate |
| Error Handling | Error handling patterns and recovery strategies | Intermediate |
| Cross-Plugin Collaboration | Using skills across multiple plugins | Intermediate |
Tutorial Structure
Each tutorial includes:
- Prerequisites: What you need before starting
- Objectives: What you’ll learn
- Step-by-step instructions: Detailed walkthrough
- Verification: How to confirm success
- Troubleshooting: Common issues and solutions
Skill Levels
| Level | Description |
|---|---|
| Beginner | New to Claude Night Market |
| Intermediate | Familiar with basic plugin usage |
| Advanced | Comfortable with configuration and customization |
Suggested Learning Path
For New Users
- Complete Getting Started first
- Follow Skills Showcase to understand the skill system
- Read plugin documentation for plugins you’ve installed
- Return here for deeper dives
For Memory Palace Users
- Cache Modes - Understand interception behavior
- Memory Palace Curation - Manage knowledge intake
- Embedding Upgrade - Add semantic search
For Plugin Developers
- Skills Showcase - Understand skill architecture
- Cross-Plugin Collaboration - Learn skill dependencies
- Error Handling - Implement error handling
Achievement Progress
| Tutorial | Status |
|---|---|
| Cache Modes | |
| Embedding Upgrade | |
| Memory Palace Curation | |
Skills Showcase - Claude Code Development Workflows
This tutorial demonstrates the foundational concept of skills in the claude-night-market ecosystem. Skills are the primary abstraction that transforms Claude Code from a general-purpose assistant into a specialized development partner.

A detailed walkthrough of skill discovery, structure, validation, and composition patterns.
Overview
The claude-night-market contains 105+ skills across 14 plugins, each skill representing a reusable, composable unit of functionality. This tutorial explores:
- Skill Discovery: How to find and catalog available skills
- Skill Anatomy: Understanding the structure and metadata of skills
- Skill Validation: Verifying that skills follow proper conventions
- Skill Composition: How skills chain together into workflows
Part 1: Skill Discovery and Cataloging
Exploring Plugin Skills
Skills are organized within plugin directories under a skills/ subdirectory. Each skill is a directory containing:
- `SKILL.md` - The skill definition with frontmatter and workflow instructions
- `modules/` (optional) - Modular components loaded progressively
- `scripts/` (optional) - Executable scripts for automation
To explore available skills in a plugin:
ls plugins/abstract/skills/
Output:
dogfood/ plugin-auditor/ plugin-validator/ skill-auditor/ skill-creator/
Each of these directories represents a meta-skill for plugin development.
Counting Total Skills
To get a project-wide count of all skills:
find plugins -name 'SKILL.md' -type f | wc -l
Output:
105
This count represents the total capability surface of the marketplace. Each skill is:
- Self-contained: Can be invoked independently
- Documented: Includes description, usage, and examples
- Testable: Follows structured patterns for validation
Part 2: Skill Anatomy and Structure
Skill Definition Format
Skills follow a two-part structure:
- YAML Frontmatter - Metadata and configuration
- Markdown Body - Workflow instructions and context
Let’s examine a real skill:
head -30 plugins/abstract/skills/plugin-validator/SKILL.md
Sample Output:
---
name: plugin-validator
description: |
Validate plugin structure, metadata, and skill definitions.
Checks frontmatter, dependencies, and file organization.
category: validation
tags: [plugin, validation, quality]
tools: [Read, Glob, Bash]
complexity: medium
estimated_tokens: 800
dependencies:
- abstract:shared
---
# Plugin Validator Skill
Validates that a plugin follows the claude-night-market conventions...
Frontmatter Fields
| Field | Purpose | Example |
|---|---|---|
| name | Unique identifier | plugin-validator |
| description | What the skill does | Multi-line description |
| category | Skill category | validation, workflow, analysis |
| tags | Searchable keywords | [plugin, validation] |
| tools | Required Claude Code tools | [Read, Write, Bash] |
| complexity | Complexity level | low, medium, high |
| estimated_tokens | Approximate token usage | 800 |
| dependencies | Required skills | [abstract:shared] |
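Splitting a SKILL.md file into frontmatter and body can be sketched without a YAML dependency; a real validator would parse the frontmatter with a YAML library and then check the required fields listed above:

```python
def split_frontmatter(text: str) -> tuple:
    """Split a SKILL.md-style document into (frontmatter, body)."""
    if not text.startswith("---\n"):
        raise ValueError("missing frontmatter fence")
    # split on the two "---" fences: before, frontmatter, body
    _, frontmatter, body = text.split("---\n", 2)
    return frontmatter.strip(), body.strip()

doc = """---
name: plugin-validator
category: validation
---
# Plugin Validator Skill
"""
fm, body = split_frontmatter(doc)
```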
Progressive Loading
Some skills use progressive loading to reduce initial token cost:
progressive_loading: true
modules:
- manifest-parsing
- markdown-generation
- tape-validation
Modules are loaded on-demand when specific functionality is needed.
Part 3: Skill Validation
Why Validate Skills?
The abstract:plugin-validator skill verifies that skills follow project conventions. This validation checks for structural integrity by confirming required files exist, ensures that YAML frontmatter is well-formed, and resolves dependencies between skills. It also assesses documentation quality by checking for clear descriptions and examples.
Using the Validator
In Claude Code, invoke with:
Skill(abstract:plugin-validator, plugin_name='sanctum')
The validator performs these checks:
- Plugin structure: Confirms `skills/`, `commands/`, and `.claude-plugin/` exist
- Skill frontmatter: Validates YAML syntax and required fields
- Command definitions: Checks command markdown files are valid
- Dependencies: Verifies all referenced skills exist
Example Validation Output:
Plugin structure valid
19 skills found with valid frontmatter
12 commands defined correctly
All dependencies resolved
WARNING: skill-x missing 'estimated_tokens' field
Part 4: Skills in Real Workflows
Example: Git Workspace Review
The sanctum:git-workspace-review skill is commonly invoked at the start of development sessions:
Skill(sanctum:git-workspace-review)
What it does:
- Repository State: Runs `git status` to identify uncommitted changes
- Commit History: Runs `git log` to show recent commits and context
- File Analysis: Analyzes changed files to understand impact areas
- Session Context: Provides Claude Code with a full view of the current work
Value Proposition:
- Context Recovery: Quickly understand what’s in progress
- Change Impact: See which parts of the codebase are affected
- Commit Quality: Understand recent work to maintain consistency
Example: PR Preparation Workflow
Complex workflows compose multiple skills sequentially:
PR Preparation Workflow:
1. Skill(sanctum:git-workspace-review) - Understand changes
2. Skill(imbue:scope-guard) - Check scope drift
3. Skill(sanctum:commit-messages) - Generate commit message
4. Skill(sanctum:pr-prep) - Prepare PR description
Benefits of Skill Composition
Composing skills into workflows provides several advantages. Each skill maintains a focus on a single responsibility, which increases reusability across different projects and tasks. This modular approach maintains a consistent standard for complex operations like PR preparation and integrates quality gates that automatically check for scope drift and code quality issues.
Part 5: Skills Enable Workflow Automation
The Skills Philosophy
Skills transform the assistant’s capabilities by encoding team best practices directly into the workflow. This automation removes the need to manually describe repetitive tasks such as code review steps or documentation updates. By following the same process every time, skills maintain consistency across the project and provide the assistant with the necessary context to understand specific project structures and conventions.
Skill Composition Patterns
Sequential Composition
Skills execute in order, each building on the previous:
Skill(A) → Skill(B) → Skill(C)
Conditional Composition
Skills invoke others based on context:
if scope_drift_detected:
Skill(imbue:scope-guard)
Parallel Composition
Independent skills can run in parallel (conceptually):
Skill(pensive:api-review) + Skill(pensive:architecture-review)
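Treated as ordinary async functions, the three composition patterns reduce to plain control flow. The sketch below uses illustrative stand-ins for the skills named above; the return values are invented for the demo:

```python
import asyncio

# Illustrative stand-ins for skill invocations; return values are invented
async def git_workspace_review() -> dict:
    return {"changed_files": 12, "scope_drift": True}

async def scope_guard(state: dict) -> dict:
    # Flag the drift so later steps can react
    state["scope_ok"] = not state["scope_drift"]
    return state

async def api_review() -> str:
    return "api: ok"

async def architecture_review() -> str:
    return "architecture: ok"

async def pr_prep() -> dict:
    # Sequential: each step builds on the previous one's output
    state = await git_workspace_review()
    # Conditional: only invoked when drift was detected
    if state["scope_drift"]:
        state = await scope_guard(state)
    # Parallel: independent reviews run concurrently
    state["reviews"] = await asyncio.gather(api_review(), architecture_review())
    return state

state = asyncio.run(pr_prep())
```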
Key Insights
Design Principles
- Single Responsibility: Each skill does one thing well
- Clear Dependencies: Skills declare what they need
- Progressive Disclosure: Complex skills load modules on-demand
- Self-Documentation: Skills explain their purpose and usage
Quality Metrics
- 105 skills across 14 plugins
- Structured workflows for git, review, specs, testing
- Composable and reusable across projects
- Self-documenting with clear dependencies
- Validated structure supports overall quality
Workflow Value
- Git Operations: 19 skills in sanctum for branch management, commits, PRs
- Code Review: 12 skills in pensive for multi-discipline review
- Specification: 8 skills in spec-kit for spec-driven development
- Testing: 6 skills in parseltongue for Python test analysis
- Meta-Development: 5 skills in abstract for plugin creation
Further Reading
- Plugin Overview: Deep dive into plugin design
- Skills Reference: How skills work and skill catalog
- Workflows Reference: Common skill composition patterns
- Capabilities Reference: Full catalog of all capabilities
Duration: ~90 seconds Difficulty: Beginner Prerequisites: Basic understanding of Claude Code Tags: skills, workflows, claude-code, development, getting-started, architecture
Error Handling Tutorial
This tutorial provides practical guidance for implementing production-grade error handling in Claude Code skills and plugins. It covers real-world scenarios, code examples, and best practices.
Table of Contents
- Understanding Error Types
- Error Classification System
- Practical Error Handling Patterns
- Real-World Examples
- Debugging Techniques
- Testing Error Scenarios
- Monitoring and Observability
- Common Pitfalls and Solutions
Understanding Error Types
1. System Errors
These are errors caused by the underlying system environment:
- Network failures
- File system issues
- Memory exhaustion
- Database connection problems
2. Logic Errors
Errors in the program’s logic or flow:
- Invalid input handling
- Incorrect assumptions
- Boundary condition failures
- State inconsistencies
3. Integration Errors
Errors when interacting with external services:
- API failures
- Authentication issues
- Rate limiting
- Service unavailability
4. User Errors
Errors caused by user actions or input:
- Invalid configuration
- Incorrect usage patterns
- Permission issues
- Resource conflicts
Error Classification System
Based on the leyline:error-patterns standard:
Critical Errors (Halt Execution)
# E001-E009: Critical system failures
class CriticalError(Exception):
"""Error that requires immediate halt of execution"""
pass
class AuthenticationError(CriticalError):
"""Authentication has permanently failed"""
def __init__(self, service, message="Authentication failed"):
self.service = service
self.code = "E001"
super().__init__(f"[{self.code}] {service}: {message}")
Recoverable Errors (Retry or Secondary Strategy)
# E010-E019: Recoverable errors
class RecoverableError(Exception):
"""Error that might be resolved with retry or secondary strategy"""
pass
class NetworkTimeoutError(RecoverableError):
"""Network operation timed out"""
def __init__(self, operation, timeout):
self.operation = operation
self.timeout = timeout
self.code = "E010"
super().__init__(f"[{self.code}] {operation} timed out after {timeout}s")
Warnings (Continue with Logging)
# E020-E029: Warning conditions
class WarningError(Exception):
"""Warning condition that should be logged but doesn't halt execution"""
pass
class PerformanceWarning(WarningError):
"""Operation is slower than expected"""
def __init__(self, operation, duration, threshold):
self.operation = operation
self.duration = duration
self.threshold = threshold
self.code = "E020"
super().__init__(f"[{self.code}] {operation} took {duration:.2f}s (threshold: {threshold}s)")
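A caller can dispatch on these categories with `isinstance` checks. The sketch below re-declares minimal versions of the three base classes so it runs standalone:

```python
class CriticalError(Exception):
    """Halt execution immediately"""

class RecoverableError(Exception):
    """Retry or fall back to a secondary strategy"""

class WarningError(Exception):
    """Log and continue"""

def classify(error: Exception) -> str:
    # Check the most severe category first
    if isinstance(error, CriticalError):
        return "halt"
    if isinstance(error, RecoverableError):
        return "retry"
    if isinstance(error, WarningError):
        return "log"
    return "raise"  # unknown errors propagate untouched

assert classify(CriticalError("auth failed")) == "halt"
assert classify(RecoverableError("timeout")) == "retry"
assert classify(KeyError("x")) == "raise"
```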
Practical Error Handling Patterns
1. The Try-Except-Else-Finally Pattern
import logging
logger = logging.getLogger(__name__)
def robust_file_operation(filepath):
"""Pattern for file operations with detailed error handling"""
try:
# Try to open and process file
with open(filepath, 'r') as f:
data = f.read()
except FileNotFoundError:
logger.error(f"File not found: {filepath}")
raise FileNotFoundError(f"E002 File not found: {filepath}")
except PermissionError:
logger.error(f"Permission denied: {filepath}")
raise PermissionError(f"E006 Permission denied: {filepath}")
    except UnicodeDecodeError as e:
        logger.error(f"Encoding error in {filepath}: {e}")
        # Try alternative encoding
        try:
            with open(filepath, 'r', encoding='utf-8-sig') as f:
                data = f.read()
            logger.warning(f"Used alternative encoding for {filepath}")
            return data  # the else clause below only runs when no exception occurred
        except Exception:
            raise ValueError(f"E012 Cannot decode file: {filepath}")
else:
# File opened successfully
logger.info(f"Successfully read {filepath}")
return data
finally:
# Cleanup (if needed)
pass
2. Retry with Exponential Backoff
import random
import asyncio
import functools
from typing import Callable, Any

def retry_with_backoff(
    max_retries: int = 3,
    base_delay: float = 1.0,
    max_delay: float = 60.0,
    jitter: bool = True
) -> Callable:
    """
    Decorator factory: retry an async operation with exponential backoff.
    (Defined as a decorator so it can be applied with @retry_with_backoff(...),
    as in the API client example later in this tutorial.)
    """
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        async def wrapper(*args, **kwargs) -> Any:
            last_exception = None
            for attempt in range(max_retries + 1):
                try:
                    return await func(*args, **kwargs)
                except (ConnectionError, TimeoutError) as e:
                    last_exception = e
                    if attempt == max_retries:
                        break
                    # Calculate delay with exponential backoff
                    delay = min(base_delay * (2 ** attempt), max_delay)
                    # Add jitter to prevent thundering herd
                    if jitter:
                        delay *= (0.5 + random.random() * 0.5)
                    logger.warning(
                        f"Attempt {attempt + 1} failed, retrying in {delay:.2f}s: {e}"
                    )
                    await asyncio.sleep(delay)
                except Exception as e:
                    # Don't retry non-transient errors
                    logger.error(f"Non-retryable error: {e}")
                    raise
            raise last_exception
        return wrapper
    return decorator
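The backoff loop can be exercised against an operation that fails twice before succeeding. This is a condensed, self-contained sketch of the retry pattern, with delays shortened for the demo:

```python
import asyncio

async def retry_with_backoff(operation, max_retries=3, base_delay=0.01):
    """Condensed retry loop; delays shortened for the demo"""
    last_exception = None
    for attempt in range(max_retries + 1):
        try:
            return await operation()
        except (ConnectionError, TimeoutError) as e:
            last_exception = e
            if attempt == max_retries:
                break
            # Exponential backoff: 0.01s, 0.02s, 0.04s, ...
            await asyncio.sleep(base_delay * (2 ** attempt))
    raise last_exception

calls = {"n": 0}

async def flaky_fetch():
    # Fails twice, then succeeds: a classic transient failure
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "payload"

result = asyncio.run(retry_with_backoff(flaky_fetch))
```

A non-transient error (anything other than `ConnectionError` or `TimeoutError`) would propagate on the first attempt instead of being retried.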
3. Circuit Breaker Pattern
import time
from enum import Enum
from typing import Callable, Any
class CircuitState(Enum):
CLOSED = "closed"
OPEN = "open"
HALF_OPEN = "half_open"
class CircuitBreaker:
"""Circuit breaker to prevent cascading failures"""
def __init__(
self,
failure_threshold: int = 5,
timeout: float = 60.0,
expected_exception: type = Exception
):
self.failure_threshold = failure_threshold
self.timeout = timeout
self.expected_exception = expected_exception
self.failure_count = 0
self.last_failure_time = None
self.state = CircuitState.CLOSED
def __call__(self, func: Callable) -> Callable:
async def wrapper(*args, **kwargs):
if self.state == CircuitState.OPEN:
if time.time() - self.last_failure_time > self.timeout:
self.state = CircuitState.HALF_OPEN
else:
raise Exception("E015 Circuit breaker is OPEN")
try:
result = await func(*args, **kwargs)
if self.state == CircuitState.HALF_OPEN:
self.state = CircuitState.CLOSED
self.failure_count = 0
return result
except self.expected_exception as e:
self.failure_count += 1
self.last_failure_time = time.time()
if self.failure_count >= self.failure_threshold:
self.state = CircuitState.OPEN
raise
return wrapper
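The breaker's state transitions can be exercised end to end. This condensed, self-contained variant of the class above hard-codes `ConnectionError` as the expected exception and uses a short timeout for the demo:

```python
import asyncio
import time
from enum import Enum

class CircuitState(Enum):
    CLOSED = "closed"
    OPEN = "open"
    HALF_OPEN = "half_open"

class CircuitBreaker:
    """Condensed variant of the breaker above (ConnectionError hard-coded)"""
    def __init__(self, failure_threshold=3, timeout=0.1):
        self.failure_threshold = failure_threshold
        self.timeout = timeout
        self.failure_count = 0
        self.last_failure_time = None
        self.state = CircuitState.CLOSED

    def __call__(self, func):
        async def wrapper(*args, **kwargs):
            if self.state == CircuitState.OPEN:
                if time.time() - self.last_failure_time > self.timeout:
                    self.state = CircuitState.HALF_OPEN
                else:
                    raise RuntimeError("E015 Circuit breaker is OPEN")
            try:
                result = await func(*args, **kwargs)
                if self.state == CircuitState.HALF_OPEN:
                    self.state = CircuitState.CLOSED
                    self.failure_count = 0
                return result
            except ConnectionError:
                self.failure_count += 1
                self.last_failure_time = time.time()
                if self.failure_count >= self.failure_threshold:
                    self.state = CircuitState.OPEN
                raise
        return wrapper

breaker = CircuitBreaker(failure_threshold=3, timeout=0.1)

@breaker
async def flaky_call():
    raise ConnectionError("backend down")

async def demo():
    # Three consecutive failures trip the breaker ...
    for _ in range(3):
        try:
            await flaky_call()
        except ConnectionError:
            pass
    assert breaker.state == CircuitState.OPEN
    # ... and the next call is rejected fast, without touching the backend
    rejected = False
    try:
        await flaky_call()
    except RuntimeError as e:
        rejected = "OPEN" in str(e)
    return rejected

rejected = asyncio.run(demo())
```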
4. Graceful Degradation Pattern
from typing import Any, Callable, Dict, Optional
class GracefulDegradation:
"""Implement graceful degradation when services fail"""
def __init__(self):
self.secondary_actions = {}
def register_secondary(self, operation: str, secondary_func: Callable):
"""Register a secondary function for an operation"""
self.secondary_actions[operation] = secondary_func
async def execute(self, operation: str, primary_func: Callable, *args, **kwargs) -> Any:
"""
Execute primary function with secondary logic if primary fails
"""
try:
return await primary_func(*args, **kwargs)
except Exception as e:
logger.error(f"Primary operation failed: {e}")
if operation in self.secondary_actions:
logger.info(f"Using secondary logic for {operation}")
try:
return await self.secondary_actions[operation](*args, **kwargs)
except Exception as secondary_error:
logger.error(f"Secondary logic also failed: {secondary_error}")
raise Exception(f"E016 Both primary and secondary failed for {operation}")
else:
raise
# Usage example (fetch_from_api and fetch_from_cache are assumed to be
# async functions defined elsewhere; execute() must be awaited from async code)
degradation = GracefulDegradation()

# Register secondary logic
degradation.register_secondary(
    "fetch_data",
    fetch_from_cache  # Secondary: fetch from cache
)

# Execute with secondary logic
data = await degradation.execute(
    "fetch_data",
    fetch_from_api  # Primary function
)
Real-World Examples
Example 1: API Client with Resilient Error Handling
import aiohttp
import asyncio
from typing import Optional, Dict, Any

# Exception types used by the client below; RecoverableError comes from
# the error classification section earlier in this tutorial
class RateLimitError(RecoverableError):
    """API rate limit exceeded"""
    def __init__(self, service: str, retry_after: int):
        self.retry_after = retry_after
        self.code = "E011"
        super().__init__(f"[{self.code}] {service} rate limited, retry after {retry_after}s")

class ServerError(RecoverableError):
    """Upstream server returned a 5xx response"""

class APIError(Exception):
    """Unexpected API response or status"""
class RobustAPIClient:
    """API client with resilient error handling"""
def __init__(self, base_url: str, timeout: float = 30.0):
self.base_url = base_url
self.timeout = aiohttp.ClientTimeout(total=timeout)
self.session = None
async def __aenter__(self):
self.session = aiohttp.ClientSession(
timeout=self.timeout,
connector=aiohttp.TCPConnector(limit=10)
)
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
if self.session:
await self.session.close()
@retry_with_backoff(max_retries=3)
async def request(
self,
method: str,
endpoint: str,
**kwargs
) -> Dict[str, Any]:
"""Make HTTP request with detailed error handling"""
url = f"{self.base_url}/{endpoint}"
try:
async with self.session.request(method, url, **kwargs) as response:
# Handle HTTP status codes
if response.status == 200:
return await response.json()
elif response.status == 401:
raise AuthenticationError("API", "Invalid credentials")
elif response.status == 403:
raise PermissionError("E006 Access forbidden")
elif response.status == 429:
retry_after = int(response.headers.get('Retry-After', 60))
raise RateLimitError("API", retry_after)
elif response.status >= 500:
raise ServerError(f"E017 Server error: {response.status}")
else:
raise APIError(f"E018 Unexpected status: {response.status}")
except asyncio.TimeoutError:
raise NetworkTimeoutError(f"{method} {url}", self.timeout.total)
except aiohttp.ClientError as e:
raise ConnectionError(f"E019 Connection error: {e}")
except Exception as e:
raise APIError(f"E020 Unexpected error: {e}")
# Usage
async def fetch_user_data(user_id: int):
try:
async with RobustAPIClient("https://api.example.com") as client:
return await client.request("GET", f"users/{user_id}")
except AuthenticationError:
logger.error("API authentication failed")
return {"error": "authentication_required"}
except RateLimitError as e:
logger.warning(f"Rate limited, retry after {e.retry_after}s")
return {"error": "rate_limited", "retry_after": e.retry_after}
except NetworkTimeoutError:
logger.error("Network timeout")
return {"error": "timeout"}
except Exception as e:
logger.error(f"Failed to fetch user data: {e}")
return {"error": "unknown"}
Example 2: Data Processing Pipeline
import asyncio
import logging
from typing import List, Any, Optional
from dataclasses import dataclass
logger = logging.getLogger(__name__)
@dataclass
class ProcessingResult:
success: bool
data: Optional[Any] = None
error: Optional[str] = None
    warnings: Optional[List[str]] = None
def __post_init__(self):
if self.warnings is None:
self.warnings = []
class DataProcessor:
"""Production-grade data processing pipeline"""
def __init__(self, max_workers: int = 4):
self.max_workers = max_workers
self.processed_count = 0
self.error_count = 0
async def process_batch(self, items: List[Any]) -> List[ProcessingResult]:
"""Process a batch of items with error isolation"""
semaphore = asyncio.Semaphore(self.max_workers)
async def process_with_isolation(item):
async with semaphore:
return await self.process_item(item)
# Process all items concurrently
tasks = [process_with_isolation(item) for item in items]
results = await asyncio.gather(*tasks, return_exceptions=True)
# Convert exceptions to error results
processed_results = []
for i, result in enumerate(results):
if isinstance(result, Exception):
processed_results.append(
ProcessingResult(
success=False,
error=f"E021 Processing failed: {str(result)}"
)
)
self.error_count += 1
else:
processed_results.append(result)
if result.success:
self.processed_count += 1
else:
self.error_count += 1
return processed_results
async def process_item(self, item: Any) -> ProcessingResult:
"""Process single item with detailed error handling"""
warnings = []
try:
# Validate input
if not self.validate_input(item):
return ProcessingResult(
success=False,
error="E022 Invalid input format"
)
# Transform data
try:
transformed = await self.transform_data(item)
except TransformationError as e:
return ProcessingResult(
success=False,
error=f"E023 Transformation failed: {e}"
)
# Validate transformation
validation_warnings = self.validate_output(transformed)
warnings.extend(validation_warnings)
# Store result
try:
await self.store_result(transformed)
except StorageError as e:
# Try alternative storage
try:
await self.store_alternatively(transformed)
warnings.append("W001 Used alternative storage")
except Exception:
return ProcessingResult(
success=False,
error=f"E024 Storage failed: {e}"
)
return ProcessingResult(
success=True,
data=transformed,
warnings=warnings
)
except Exception as e:
logger.error(f"Unexpected error processing item: {e}")
return ProcessingResult(
success=False,
error=f"E025 Unexpected error: {e}"
)
def validate_input(self, item: Any) -> bool:
"""Validate input data"""
# Implementation depends on your data structure
return item is not None
async def transform_data(self, item: Any) -> Any:
"""Transform data with error handling"""
# Your transformation logic here
return item
def validate_output(self, data: Any) -> List[str]:
"""Validate output and return warnings"""
warnings = []
# Your validation logic here
return warnings
async def store_result(self, data: Any) -> None:
"""Store result"""
# Your storage logic here
pass
async def store_alternatively(self, data: Any) -> None:
"""Alternative storage method"""
# Fallback storage logic here
pass
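The error-isolation core of the pipeline, `asyncio.gather` with `return_exceptions=True`, can be exercised on its own; `process` here is a stand-in for `process_item`:

```python
import asyncio

async def process(item: int) -> int:
    # Stand-in for process_item: one bad item must not poison the batch
    if item < 0:
        raise ValueError(f"E022 Invalid input: {item}")
    return item * 2

async def run_batch(items):
    results = await asyncio.gather(
        *(process(i) for i in items),
        return_exceptions=True,  # exceptions come back as values, not raised
    )
    ok = [r for r in results if not isinstance(r, Exception)]
    failed = [r for r in results if isinstance(r, Exception)]
    return ok, failed

ok, failed = asyncio.run(run_batch([1, -2, 3]))
```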
Debugging Techniques
1. Structured Logging
import logging
import json
import traceback
from datetime import datetime
from typing import Dict, Any
class StructuredLogger:
"""Logger for structured error reporting"""
def __init__(self, name: str):
self.logger = logging.getLogger(name)
def log_error(
self,
error: Exception,
context: Dict[str, Any] = None,
user_id: str = None,
request_id: str = None
):
"""Log error with structured context"""
error_data = {
"timestamp": datetime.utcnow().isoformat(),
"error_type": type(error).__name__,
"error_message": str(error),
"error_code": getattr(error, 'code', 'UNKNOWN'),
"context": context or {},
"user_id": user_id,
"request_id": request_id,
"traceback": traceback.format_exc()
}
self.logger.error(json.dumps(error_data))
def log_warning(
self,
message: str,
context: Dict[str, Any] = None,
warning_code: str = "W000"
):
"""Log warning with context"""
warning_data = {
"timestamp": datetime.utcnow().isoformat(),
"message": message,
"warning_code": warning_code,
"context": context or {}
}
self.logger.warning(json.dumps(warning_data))
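A condensed, runnable version of `log_error`; it returns the JSON line so the structured record is easy to inspect (the production class above only logs it):

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("demo")

def log_error(error, context=None):
    # Mirrors StructuredLogger.log_error, minus user/request IDs and traceback
    payload = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "error_type": type(error).__name__,
        "error_message": str(error),
        "error_code": getattr(error, "code", "UNKNOWN"),
        "context": context or {},
    }
    line = json.dumps(payload)
    logger.error(line)
    return line  # returned so the demo can inspect the structured record

try:
    raise ValueError("bad input")
except ValueError as e:
    record = json.loads(log_error(e, {"stage": "validation"}))
```

Because every field is machine-parseable JSON, these records can be filtered by `error_code` or `context` keys instead of grepping free-form messages.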
2. Debug Decorator
import asyncio
import functools
import json
import logging
import time
import traceback
from typing import Callable, Any

logger = logging.getLogger(__name__)
def debug_errors(
log_args: bool = True,
log_result: bool = True,
log_traceback: bool = True
):
"""Decorator for debugging function errors"""
def decorator(func: Callable) -> Callable:
@functools.wraps(func)
async def async_wrapper(*args, **kwargs):
start_time = time.time()
try:
if log_args:
logger.debug(f"Calling {func.__name__} with args={args}, kwargs={kwargs}")
result = await func(*args, **kwargs)
if log_result:
logger.debug(f"{func.__name__} returned: {type(result)}")
return result
except Exception as e:
execution_time = time.time() - start_time
error_info = {
"function": func.__name__,
"execution_time": execution_time,
"error": str(e),
"error_type": type(e).__name__
}
if log_args:
error_info["args"] = args
error_info["kwargs"] = kwargs
if log_traceback:
error_info["traceback"] = traceback.format_exc()
logger.error(f"Error in {func.__name__}: {json.dumps(error_info)}")
raise
@functools.wraps(func)
def sync_wrapper(*args, **kwargs):
start_time = time.time()
try:
if log_args:
logger.debug(f"Calling {func.__name__} with args={args}, kwargs={kwargs}")
result = func(*args, **kwargs)
if log_result:
logger.debug(f"{func.__name__} returned: {type(result)}")
return result
except Exception as e:
execution_time = time.time() - start_time
error_info = {
"function": func.__name__,
"execution_time": execution_time,
"error": str(e),
"error_type": type(e).__name__
}
if log_args:
error_info["args"] = args
error_info["kwargs"] = kwargs
if log_traceback:
error_info["traceback"] = traceback.format_exc()
logger.error(f"Error in {func.__name__}: {json.dumps(error_info)}")
raise
if asyncio.iscoroutinefunction(func):
return async_wrapper
else:
return sync_wrapper
return decorator
# Usage
@debug_errors()
async def problematic_function(data):
# This function will have detailed error logging
return await process_data(data)
Testing Error Scenarios
1. Error Injection Testing
import pytest
from unittest.mock import patch, AsyncMock
from contextlib import asynccontextmanager
class ErrorInjector:
"""Inject errors for testing purposes"""
def __init__(self):
self.errors = {}
def inject_error(self, function_name: str, error: Exception):
"""Inject error for specific function"""
self.errors[function_name] = error
def should_error(self, function_name: str) -> bool:
"""Check if function should error"""
return function_name in self.errors
def get_error(self, function_name: str) -> Exception:
"""Get injected error"""
return self.errors[function_name]
# Test example
@pytest.mark.asyncio
async def test_api_client_with_errors():
injector = ErrorInjector()
# Test network timeout
injector.inject_error("request", asyncio.TimeoutError())
with patch('aiohttp.ClientSession.request') as mock_request:
mock_request.side_effect = injector.get_error("request")
async with RobustAPIClient("https://api.example.com") as client:
with pytest.raises(NetworkTimeoutError):
await client.request("GET", "test")
# Test server error
injector.errors = {}
mock_response = AsyncMock()
mock_response.status = 500
with patch('aiohttp.ClientSession.request') as mock_request:
mock_request.return_value.__aenter__.return_value = mock_response
async with RobustAPIClient("https://api.example.com") as client:
with pytest.raises(ServerError):
await client.request("GET", "test")
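Outside of aiohttp mocking, the injector works through any seam the code under test consults. Here the module-level `injector` is the illustrative seam, paired with a condensed copy of the class:

```python
class ErrorInjector:
    """Condensed copy of the injector above"""
    def __init__(self):
        self.errors = {}

    def inject_error(self, name, error):
        self.errors[name] = error

    def should_error(self, name):
        return name in self.errors

    def get_error(self, name):
        return self.errors[name]

injector = ErrorInjector()

def save_record(record):
    # Test seam: raise any injected error instead of touching real storage
    if injector.should_error("save_record"):
        raise injector.get_error("save_record")
    return {"saved": record}

injector.inject_error("save_record", TimeoutError("injected"))
try:
    save_record({"id": 1})
    outcome = "saved"
except TimeoutError:
    outcome = "timeout handled"
```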
2. Property-Based Testing
import pytest
from hypothesis import given, strategies as st
@given(st.lists(st.integers(), min_size=1, max_size=100))
def test_sort_with_error_handling(numbers):
"""Test sorting function with various inputs"""
try:
result = robust_sort(numbers)
assert result == sorted(numbers)
except ValueError as e:
# Should handle invalid inputs gracefully
assert "invalid" in str(e).lower()
except Exception as e:
# No other exceptions should occur
pytest.fail(f"Unexpected exception: {e}")
Monitoring and Observability
1. Error Metrics Collection
import time
from collections import defaultdict, deque
from typing import Any, Dict, List
class ErrorMetrics:
"""Collect and analyze error metrics"""
def __init__(self, window_size: int = 3600): # 1 hour window
self.window_size = window_size
self.error_counts = defaultdict(int)
self.error_history = deque()
self.recent_errors = deque(maxlen=100)
def record_error(
self,
error_code: str,
error_type: str,
context: Dict[str, Any] = None
):
"""Record an error occurrence"""
timestamp = time.time()
# Update counts
self.error_counts[error_code] += 1
self.error_counts[f"{error_type}_{error_code}"] += 1
# Add to history
error_record = {
"timestamp": timestamp,
"error_code": error_code,
"error_type": error_type,
"context": context or {}
}
self.error_history.append(error_record)
self.recent_errors.append(error_record)
# Clean old records
cutoff = timestamp - self.window_size
while self.error_history and self.error_history[0]["timestamp"] < cutoff:
self.error_history.popleft()
def get_error_rate(self, duration: float = 300) -> float:
"""Get error rate in the last duration (seconds)"""
cutoff = time.time() - duration
recent_errors = [
e for e in self.error_history
if e["timestamp"] > cutoff
]
return len(recent_errors) / duration
def get_top_errors(self, limit: int = 10) -> List[tuple]:
"""Get most frequent errors"""
return sorted(
self.error_counts.items(),
key=lambda x: x[1],
reverse=True
)[:limit]
    def check_error_spike(self, threshold: float = 2.0, window: int = 300) -> bool:
        """Check if error rate has spiked relative to the longer-term baseline"""
        current_rate = self.get_error_rate(window)
        # get_error_rate already normalizes by duration, so no extra division
        baseline_rate = self.get_error_rate(window * 2)
        return current_rate > baseline_rate * threshold
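The sliding-window bookkeeping can be verified deterministically by injecting timestamps. This is a condensed variant of `ErrorMetrics` with a `now` parameter added for testability:

```python
import time
from collections import defaultdict, deque

class ErrorWindow:
    """Condensed ErrorMetrics with an injectable clock for determinism"""
    def __init__(self, window_size=3600):
        self.window_size = window_size
        self.counts = defaultdict(int)
        self.history = deque()

    def record(self, code, now=None):
        now = time.time() if now is None else now
        self.counts[code] += 1
        self.history.append((now, code))
        # Drop records that fell out of the sliding window
        cutoff = now - self.window_size
        while self.history and self.history[0][0] < cutoff:
            self.history.popleft()

    def rate(self, duration, now=None):
        now = time.time() if now is None else now
        cutoff = now - duration
        return sum(1 for t, _ in self.history if t > cutoff) / duration

m = ErrorWindow(window_size=60)
base = 1_000.0
for t in (base, base + 1, base + 2):
    m.record("E010", now=t)
rate = m.rate(10, now=base + 2)  # 3 errors over a 10-second window
```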
2. Health Check System
import asyncio
import time
from dataclasses import dataclass
from enum import Enum
from typing import Any, Callable, Dict
class HealthStatus(Enum):
HEALTHY = "healthy"
DEGRADED = "degraded"
UNHEALTHY = "unhealthy"
@dataclass
class HealthCheck:
name: str
check_func: Callable
timeout: float = 5.0
critical: bool = True
class HealthMonitor:
"""Monitor system health"""
def __init__(self):
self.checks: Dict[str, HealthCheck] = {}
self.metrics = ErrorMetrics()
def register_check(self, health_check: HealthCheck):
"""Register a health check"""
self.checks[health_check.name] = health_check
async def run_check(self, check_name: str) -> Dict[str, Any]:
"""Run a specific health check"""
if check_name not in self.checks:
return {
"status": HealthStatus.UNHEALTHY,
"error": f"E026 Unknown health check: {check_name}"
}
check = self.checks[check_name]
try:
            # asyncio.timeout requires Python 3.11+; use asyncio.wait_for on older versions
            async with asyncio.timeout(check.timeout):
result = await check.check_func()
return {
"status": HealthStatus.HEALTHY,
"result": result,
"timestamp": time.time()
}
except asyncio.TimeoutError:
error_code = "E027"
self.metrics.record_error(error_code, "timeout", {"check": check_name})
return {
"status": HealthStatus.UNHEALTHY if check.critical else HealthStatus.DEGRADED,
"error": f"[{error_code}] Health check timed out",
"timestamp": time.time()
}
except Exception as e:
error_code = "E028"
self.metrics.record_error(error_code, "health_check", {
"check": check_name,
"error": str(e)
})
return {
"status": HealthStatus.UNHEALTHY if check.critical else HealthStatus.DEGRADED,
"error": f"[{error_code}] Health check failed: {e}",
"timestamp": time.time()
}
async def run_all_checks(self) -> Dict[str, Any]:
"""Run all health checks"""
results = {}
overall_status = HealthStatus.HEALTHY
for check_name in self.checks:
result = await self.run_check(check_name)
results[check_name] = result
# Update overall status
if result["status"] == HealthStatus.UNHEALTHY:
overall_status = HealthStatus.UNHEALTHY
elif result["status"] == HealthStatus.DEGRADED and overall_status == HealthStatus.HEALTHY:
overall_status = HealthStatus.DEGRADED
return {
"overall_status": overall_status,
"checks": results,
"timestamp": time.time(),
"error_rate": self.metrics.get_error_rate()
}
Common Pitfalls and Solutions
| Pitfall | Problem | Solution |
|---|---|---|
| Swallowing Exceptions | except: pass hides failures | Log and re-raise: logger.error(e); raise |
| Overly Broad Catching | except Exception catches everything | Catch specific types, re-raise unexpected |
| Missing Context | raise ValueError("Invalid") | Include field/value: f"E022 Invalid {field}: {value}" |
| Resource Leaks | Files/connections left open on error | Use with statements or try/finally |
| Inconsistent Handling | Mix of return None and raise | Define base exception, use consistent pattern |
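The first three pitfalls in one sketch: the broad, silent handler rewritten to catch narrowly, log with context, and re-raise:

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger(__name__)

def parse_port_bad(raw):
    try:
        return int(raw)
    except Exception:  # Pitfall: swallows everything; caller silently gets None
        pass

def parse_port(raw):
    try:
        return int(raw)
    except ValueError:
        # Narrow catch, context-rich message, then re-raise
        logger.error(f"E022 Invalid port value: {raw!r}")
        raise ValueError(f"E022 Invalid port value: {raw!r}") from None

assert parse_port_bad("x") is None  # the failure is invisible
assert parse_port("8080") == 8080
```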
Summary
Effective error handling classifies errors consistently and manages them through meaningful messages and recovery options. Include error scenarios in tests and monitor error patterns in production.
Cross-Plugin Collaboration Guide
This guide demonstrates how plugins in the Claude Night Market ecosystem work together through shared superpowers to create workflows that combine multiple domain specializations.
Overview
The Night Market plugin ecosystem is designed for collaboration. Each plugin specializes in a domain and exposes capabilities through skills:
- Abstract - Meta-infrastructure for skills, validation, and quality
- Sanctum - Git workflows, PR generation, and documentation
- Scry - Media generation (VHS terminal recordings, Playwright browser recordings)
- Conservation - Context optimization and resource management
Common Collaboration Patterns
Pattern 1: Quality Assurance Chain
Abstract (TDD/Bulletproofing) -> Sanctum (Quality Gates) -> Production
Abstract enforces TDD and skill structure, Sanctum validates before integration.
Pattern 2: Resource Optimization Loop
Conservation (Monitor) -> Abstract (Refactor) -> Conservation (Validate)
Conservation identifies issues, Abstract provides patterns to fix them.
Pattern 3: Automated Workflow Enhancement
Sanctum (Detect Need) -> Conservation (Optimize) -> Sanctum (Execute)
Sanctum recognizes resource constraints, Conservation optimizes, Sanctum proceeds.
Pattern 4: Media-Enhanced Documentation
Sanctum (Detect Tutorial Need) -> Scry (Generate Media) -> Sanctum (Update Docs)
Sanctum identifies documentation gaps, Scry generates GIFs, Sanctum integrates them.
Pattern 5: Cross-Plugin Dependencies
# Skills can depend on other plugins' capabilities
dependencies:
plugins:
- conservation:context-optimization
- sanctum:git-workspace-review
- abstract:modular-skills
- scry:vhs-recording
- scry:gif-generation
Two-Plugin Collaborations
Abstract + Sanctum: Skill-Driven PR Workflow
Use Case: Creating and integrating new skills with automated PR generation
Workflow Steps
Step 1: Create and Validate the Skill (Abstract)
/abstract:create-skill "my-awesome-skill"
# Creates: skill directory, implementation, tests, documentation
Step 2: Test and Bulletproof (Abstract)
/abstract:test-skill my-awesome-skill
/abstract:bulletproof-skill my-awesome-skill
# Validates: best practices, TDD methodology, edge case resistance
Step 3: Estimate Token Usage (Abstract)
/abstract:estimate-tokens my-awesome-skill
# Output: skill tokens, dependencies, total impact
Step 4: Generate PR (Sanctum)
/sanctum:pr
# Automatically: reviews workspace, runs quality gates, generates PR description
Generated PR Example
## Summary
- Add new `my-awesome-skill` skill for processing workflow data
- Implements TDD methodology with 95%+ test coverage
- Validated through Abstract's skill evaluation framework
## Testing
- Skill validation passed: `/abstract:validate-skill my-awesome-skill`
- TDD workflow passed: `/abstract:test-skill my-awesome-skill`
- Bulletproofing completed: `/abstract:bulletproof-skill my-awesome-skill`
- Project linting passed, all tests passing
Benefits
- Quality Assurance: Abstract validates skills are well-structured and tested
- Security: Bulletproofing prevents edge cases and bypass attempts
- Automation: Sanctum handles mechanical PR creation
- Consistency: Standardized PR format with all necessary information
Conservation + Abstract: Optimizing Meta-Skills
Use Case: Reducing context usage of complex evaluation skills without losing functionality
Initial Problem
# Skill consuming too much context
name: detailed-skill-eval
token_budget: 2500 # Too high!
estimated_tokens: 2300
progressive_loading: false # Loading everything at once
Optimization Workflow
Step 1: Analyze Context Usage (Conservation)
/conservation:analyze-growth
# Output: Current context usage: 45% (CRITICAL)
# Top consumer: detailed-skill-eval: 2300 tokens
Step 2: Estimate Token Impact (Abstract)
/abstract:estimate-tokens detailed-skill-eval
# Breakdown: Core logic 900, Examples 600, Validation 400, Error handling 300
Step 3: Optimize Context Structure (Conservation)
/conservation:optimize-context
# Suggestions: Enable progressive loading, split modules, lazy load examples
Step 4: Refactor with Abstract’s Patterns
/abstract:analyze-skill detailed-skill-eval
# Provides: Modular decomposition strategy, shared pattern extraction
Results Comparison
| Metric | Before | After | Improvement |
|---|---|---|---|
| Token Usage | 2300 | 750 | 67% reduction |
| Load Time | 2.3s | 0.8s | 65% faster |
| Memory Usage | 45% | 15% | Within MECW limits |
| Test Coverage | 95% | 95% | Maintained |
Optimized Skill Structure
name: skill-eval-hub
token_budget: 800 # down from 2500
progressive_loading: true
modules:
- core-eval
- validation-rules
- example-library
- error-handlers
shared_patterns:
- token-efficient-validation
- lazy-loading-examples
Conservation + Sanctum: Optimized Git Workflows
Use Case: Managing multiple large feature branches efficiently
Challenge
# Multiple active branches need processing
- feature/auth-refactor (2,340 files changed)
- feature/performance-boost (1,890 files changed)
- feature/ui-redesign (3,210 files changed)
# Traditional approach would exceed context limits
Workflow Steps
Step 1: Analyze Resource Requirements (Conservation)
/conservation:optimize-context
# Output: Context Status: CRITICAL (68% usage)
# Available for git operations: 32%
# Recommended: Process branches sequentially with optimization
Step 2: Process Branch with Optimization (Sanctum + Conservation)
/git-catchup feature/auth-refactor --context-optimized
# Optimization applied:
# 1. Use summary mode for large diffs
# 2. Progressive loading of file details
# 3. Focus on critical changes only
Step 3: Generate Optimized PR (Sanctum)
/sanctum:pr --optimize-context
# Applies: Compressed summaries, token-efficient descriptions, progressive loading
Performance Comparison
Without Conservation:
Total context used: 124%
Result: Context overflow, incomplete processing
Success rate: 33% (1/3 branches)
With Conservation:
Total context used: 38%
Result: All branches processed successfully
Success rate: 100% (3/3 branches)
Advanced Features
Adaptive Detail Loading
Initial PR: 200 tokens (summary only)
/sanctum:show-details src/auth/ # +150 tokens
/sanctum:show-details src/auth/token.js # +50 tokens
Total: 400 tokens (vs 2,000 without optimization)
Cross-Branch Pattern Recognition
# Conservation identifies patterns across branches
# Common changes consolidated into single documentation item
# Estimated savings: 800 tokens
Sanctum + Scry: Tutorial Generation Pipeline
Use Case: Creating and updating documentation tutorials with animated GIFs
Challenge
# Documentation needs visual demos
- Installation tutorials need terminal recordings
- Web UI guides need browser screen captures
- Combined workflows need multi-source compositions
# Manual process is time-consuming and inconsistent
Workflow Steps
Step 1: Identify Tutorial Needs (Sanctum)
/sanctum:update-tutorial --list
# Output:
# Available tutorials:
# quickstart assets/tapes/quickstart.tape
# mcp assets/tapes/mcp.manifest.yaml (terminal + browser)
# skill-debug assets/tapes/skill-debug.tape
Step 2: Generate Terminal Recordings (Scry)
# Sanctum's tutorial-updates skill orchestrates scry:vhs-recording
Skill(scry:vhs-recording) assets/tapes/quickstart.tape
# VHS processes tape file, generates optimized GIF
# Output: assets/gifs/quickstart.gif (1.2MB)
Step 3: Generate Browser Recordings (Scry)
# For web UI tutorials, Playwright captures video
Skill(scry:browser-recording) specs/dashboard.spec.ts
# Output: test-results/dashboard/video.webm
# Convert to optimized GIF
Skill(scry:gif-generation) --input video.webm --output dashboard.gif
# Output: assets/gifs/dashboard.gif (980KB)
Step 4: Compose Multi-Source Tutorials (Scry)
# For combined terminal + browser tutorials
Skill(scry:media-composition)
# Reads manifest, combines components
# Output: assets/gifs/mcp-combined.gif
Step 5: Generate Documentation (Sanctum)
/sanctum:update-tutorial quickstart mcp
# Sanctum generates dual-tone markdown:
# - docs/tutorials/quickstart.md (project docs, concise)
# - book/src/tutorials/quickstart.md (technical book, detailed)
# - Updates README.md demo section with GIF embeds
Manifest-Driven Composition
# assets/tapes/mcp.manifest.yaml
name: mcp
title: "MCP Server Integration"
components:
  - type: tape
    source: mcp.tape
    output: assets/gifs/mcp-terminal.gif
  - type: playwright
    source: browser/mcp-browser.spec.ts
    output: assets/gifs/mcp-browser.gif
    requires:
      - "npm run dev"  # Start server before recording
combine:
  output: assets/gifs/mcp-combined.gif
  layout: vertical
  options:
    padding: 10
    background: "#1a1a2e"
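Once parsed (e.g. with `yaml.safe_load`), a manifest like this can drive an ordered composition plan. The sketch below mirrors the example's keys but is an illustration, not the tutorial-updates skill's actual logic.

```python
# Sketch: turn a parsed manifest (e.g. from yaml.safe_load) into an ordered
# plan. Keys mirror the example above; the real pipeline may differ.
def composition_plan(manifest: dict) -> list[str]:
    steps = []
    for comp in manifest.get("components", []):
        for prereq in comp.get("requires", []):
            steps.append(f"run: {prereq}")  # e.g. start a dev server first
        steps.append(f"{comp['type']}: {comp['source']} -> {comp['output']}")
    if "combine" in manifest:
        combine = manifest["combine"]
        steps.append(f"combine ({combine['layout']}) -> {combine['output']}")
    return steps
```

Prerequisites are emitted before their component so recordings never start against a server that is not yet running.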
Results Comparison
| Metric | Manual | Automated |
|---|---|---|
| Time per tutorial | 30-60 min | 2-5 min |
| Consistency | Variable | 100% consistent |
| GIF optimization | Often skipped | Always optimized |
| Documentation sync | Often outdated | Always current |
Benefits
- Automation: End-to-end tutorial generation from tape files
- Consistency: All GIFs use same quality settings and themes
- Dual-Tone Output: Both project docs and technical book content
- Manifest-Driven: Declarative composition for complex tutorials
Three-Way Ecosystem: Complete Development Lifecycle
Use Case: End-to-end enterprise plugin development with full optimization
Phase 1: Planning and Analysis
# 1. Analyze current resource state (Conservation)
/conservation:analyze-growth
# Output: Context at 25%, optimal for new development
# 2. Plan skill architecture (Abstract)
/abstract:analyze-skill ecosystem-orchestrator
# Output: Recommends modular architecture with 5 interconnected skills
# 3. Check git workspace (Sanctum)
/git-catchup
# Output: Clean workspace, ready for new feature branch
Phase 2: Skill Creation with Built-in Optimization
# Create orchestrator with Conservation awareness
/abstract:create-skill ecosystem-orchestrator
# Abstract creates with Conservation-suggested limits:
{
  "name": "ecosystem-orchestrator",
  "token_budget": 800,
  "progressive_loading": true,
  "modules": [
    "skill-discovery",
    "dependency-resolution",
    "execution-planning",
    "resource-monitoring"
  ]
}
Phase 3: Development and Testing
# TDD development cycle for all skills (Abstract)
for skill in ecosystem-orchestrator skill-discovery dependency-resolution; do
  /abstract:test-skill $skill
  /abstract:bulletproof-skill $skill
done
# Conservation monitors and optimizes during development
/conservation:optimize-context
# Output: Applied shared patterns, saved 1,200 tokens total
# Validate entire plugin structure (Abstract)
/abstract:validate-plugin
Phase 4: Integration and Performance Tuning
# Estimate total impact (Abstract + Conservation)
/abstract:estimate-tokens ecosystem-orchestrator --include-dependencies
# Output: Core 750 tokens, Dependencies 1,100, Total 1,850 (within limits)
# Performance analysis (Conservation)
/conservation:analyze-growth
# Output: Growth pattern optimal, MECW compliant
Phase 5: Documentation and PR Generation
# Generate detailed PR (Sanctum)
/sanctum:pr --include-performance-report
# Sanctum automatically includes:
# - Change summary
# - Test results from Abstract's testing
# - Performance metrics from Conservation
# - Context optimization details
Real-World Impact
| Metric | Before Integration | After Integration |
|---|---|---|
| Development time | 2-3 weeks | 3-5 days |
| Quality issues | Frequent, discovered late | Caught early |
| Resource problems | Context overflow common | Eliminated |
| Documentation | Manual, often incomplete | Automatic, detailed |
Measurable Improvements:
- Development speed: 70% faster
- Bug reduction: 85% fewer production issues
- Resource efficiency: 42% less token usage
- Documentation quality: 100% compliance with standards
Integration Techniques
Shared State Management
# Conservation sets context budget
export CONTEXT_BUDGET=0.4
# Abstract respects budget in skill creation
/abstract:create-skill my-skill --context-limit $CONTEXT_BUDGET
# Sanctum generates PRs within budget
/sanctum:pr --respect-context-limit
Progressive Loading Framework
# Conservation provides framework
def progressive_load(module, priority):
    # context_available, load_module, and queue_for_later are provided by
    # Conservation's runtime; shown here as the framework contract.
    if context_available():
        load_module(module, priority)
    else:
        queue_for_later(module)
# Abstract implements for skills
# Sanctum implements for git operations
Quality Gates Integration
quality_gates:
- abstract:validate-skill
- abstract:test-skill
- sanctum:lint-check
- sanctum:security-scan
Measuring Collaboration Success
Development Metrics
- Speed: Time from idea to production
- Quality: Bug rates and test coverage
- Consistency: Code style and pattern adherence
- Documentation: Completeness and accuracy
Resource Metrics
- Context Usage: Token consumption optimization
- Performance: Response times and throughput
- Scalability: Concurrent operation capacity
- Efficiency: Resource utilization percentage
Collaboration Metrics
- Interoperability: How well plugins work together
- Integration: Clean handoffs between plugins
- Flexibility: Ability to adapt to different scenarios
- Maintainability: Long-term sustainability
Commands Reference
Abstract Commands
- `/abstract:create-skill`: Create new skill with proper structure
- `/abstract:test-skill`: Run TDD validation workflow
- `/abstract:bulletproof-skill`: Harden skill against edge cases
- `/abstract:estimate-tokens`: Calculate context impact
- `/abstract:analyze-skill`: Get optimization recommendations
- `/abstract:validate-plugin`: Validate quality after optimization
Sanctum Commands
- `/git-catchup`: Efficient git branch analysis
- `/sanctum:pr`: Generate detailed PR description
- `/sanctum:show-details <path>`: Progressive detail loading
- `/sanctum:update-tutorial`: Generate tutorials with media (uses Scry)
Conservation Commands
- `/conservation:analyze-growth`: Monitor resource usage trends
- `/conservation:optimize-context`: Apply MECW optimization principles
Scry Commands
- `/scry:record-terminal`: Record terminal sessions using VHS tape files
- `/scry:record-browser`: Record browser sessions using Playwright specs
Key Takeaways
- Synergy Over Silos: Plugins working together create more value than separate usage
- Complementary Strengths: Each plugin specializes in a domain, combined they cover the development lifecycle
- Adaptive Workflows: Collaboration enables workflows that adapt to constraints
- Quality at Scale: Maintain high quality even with complex, multi-plugin workflows
- Resource Efficiency: Optimize for both development speed and operational cost
The Claude Night Market ecosystem is designed for collaboration: composing specialized plugin capabilities produces workflows that are both efficient and maintainable.
See Also
- Superpowers Integration - Technical skill integration details
- Plugin Overview - Creating new plugins
- Skills Reference - Skill workflow patterns
Memory Palace Cache Modes
Learn how to configure Memory Palace’s research interceptor for different use cases.
Prerequisites
- Memory Palace plugin installed
- Familiarity with Memory Palace concepts
Objectives
By the end of this tutorial, you’ll:
- Understand the four cache modes
- Configure modes for different scenarios
- Debug interceptor decisions
- Monitor cache performance
Mode Overview
The research interceptor supports four modes:
| Mode | Behavior | Use Case |
|---|---|---|
cache_only | Block web when no confident match | Offline work, policy audits |
cache_first | Check cache, fall back to web | Default research (recommended) |
augment | Blend cache with live results | When freshness matters |
web_only | Bypass Memory Palace entirely | Incident response, debugging |
Step 1: Check Current Mode
View your current configuration:
cat plugins/memory-palace/hooks/memory-palace-config.yaml
Look for the research_mode setting:
research_mode: cache_first
Step 2: Understanding the Decision Matrix
The interceptor evaluates queries using:
Freshness Detection
Queries containing temporal keywords trigger augmentation:
- `latest`, `2025`, `today`, `this week`: trigger augmentation even with strong cache hits
Match Strength
| Score | Classification | Action |
|---|---|---|
| > 0.8 | Strong match | Use cache |
| 0.4-0.8 | Partial match | Mode-dependent |
| < 0.4 | Weak/no match | Fall back to web |
Autonomy Overrides
When autonomy level >= 2, partial matches auto-approve without flagging the intake queue.
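The decision matrix in this step can be sketched in Python. This is an illustrative model only; the function name, wiring of thresholds, and return values are assumptions, not the interceptor's actual implementation.

```python
# Hedged sketch of the Step 2 decision matrix; names and return values are
# illustrative, not the interceptor's real code.
FRESHNESS_KEYWORDS = {"latest", "2025", "today", "this week"}

def decide(query: str, match_score: float, mode: str, autonomy_level: int = 0) -> str:
    """Map a cache match score to an action for a given research mode."""
    if mode == "web_only":
        return "web"              # bypass Memory Palace entirely
    if any(kw in query.lower() for kw in FRESHNESS_KEYWORDS):
        return "augment"          # temporal keywords, even on strong hits
    if match_score > 0.8:
        return "cache"            # strong match
    if match_score >= 0.4:        # partial match: mode-dependent
        if autonomy_level >= 2:
            return "cache"        # auto-approve without flagging intake
        return {"cache_only": "blocked", "cache_first": "cache",
                "augment": "augment"}[mode]
    return "blocked" if mode == "cache_only" else "web"   # weak/no match
```

Note how the freshness check runs before the score check, matching the table's rule that temporal queries augment even on strong cache hits.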
Step 3: Changing Modes
Edit the configuration file:
# hooks/memory-palace-config.yaml
# For offline work
research_mode: cache_only
# For normal research (default)
research_mode: cache_first
# For real-time topics
research_mode: augment
# To bypass completely
research_mode: web_only
Restart Claude Code for changes to take effect.
Step 4: Monitoring Decisions
The interceptor logs decisions to telemetry:
cat plugins/memory-palace/data/telemetry/memory-palace.csv
Fields include:
- `decision`: cache_hit, cache_miss, augmented, blocked
- `novelty_score`: 0-1 score for new information
- `intake_delta_reasoning`: Why intake was triggered/skipped
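As a quick sketch, decisions can be tallied with the standard library; this assumes only that the CSV has a header row containing a `decision` column, as described above.

```python
# Tally interceptor decisions from the telemetry CSV. Assumes a header row
# with a "decision" column.
import csv
from collections import Counter

def summarize_decisions(path: str) -> Counter:
    with open(path, newline="") as f:
        return Counter(row["decision"] for row in csv.DictReader(f))
```

For example, `summarize_decisions("plugins/memory-palace/data/telemetry/memory-palace.csv")` returns a `Counter` keyed by decision type, which makes mode tuning easy to eyeball.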
Troubleshooting
Hook never fires
Check: Is cache_intercept enabled?
feature_flags:
  cache_intercept: true
Check: Is mode not web_only?
Legitimate query blocked in cache_only
Solution: Add missing entry to corpus
# Inspect keyword index
cat plugins/memory-palace/data/indexes/keyword-index.yaml
# Rebuild indexes
uv run python plugins/memory-palace/scripts/build_indexes.py
Too many augmentation messages
Solution: Adjust thresholds
# Raise intake threshold
intake_threshold: 0.6
# Or increase autonomy
autonomy_level: 2
Intake queue spam
Solution: Review duplicates
Check intakeFlagPayload.duplicate_entry_ids in telemetry and tidy corpus entries.
Operational Checklist
After configuring modes:
- Update `docs/curation-log.md` documenting the mode choice
- Keep `data/indexes/vitality-scores.yaml` fresh
- When changing defaults, gate with a feature flag
- Run interceptor tests: `pytest tests/hooks/test_research_interceptor.py`
Verification
Confirm your configuration works:
# Make a test query
# In Claude, ask about a topic in your corpus
# Check telemetry for decision
tail -1 plugins/memory-palace/data/telemetry/memory-palace.csv
Expected output shows cache_hit for known topics.
Next Steps
- Embedding Upgrade for semantic search
- Memory Palace Curation for intake workflow
Embedding Upgrade Guide
Add semantic search capabilities to Memory Palace for improved knowledge retrieval.
Prerequisites
- Memory Palace plugin installed
- Python environment with uv
- (Optional) `sentence-transformers` for high-quality embeddings
Objectives
By the end of this tutorial, you’ll:
- Build embedding indexes for your corpus
- Toggle between embedding providers
- Benchmark retrieval quality
- Configure production settings
Step 1: Choose an Embedding Provider
Memory Palace supports multiple providers:
| Provider | Quality | Dependencies | Use Case |
|---|---|---|---|
hash | Basic | None | CI, constrained environments |
local | High | sentence-transformers | Production, quality focus |
Step 2: Build Provider Slices
Navigate to the plugin directory:
cd plugins/memory-palace
Build Hash Embeddings (No Dependencies)
uv run python scripts/build_embeddings.py --provider hash
This creates deterministic 16-dimensional vectors using hashing.
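A minimal sketch of what a deterministic hash embedding could look like. This is an illustration of the technique, not the plugin's actual provider: hash the text, then map digest bytes to a fixed-length vector.

```python
# Illustrative only: build a deterministic 16-dimensional vector from a text
# digest. The plugin's real hash provider may differ.
import hashlib

def hash_embedding(text: str, dim: int = 16) -> list[float]:
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    # Map each digest byte to [-1.0, 1.0); identical text yields identical vectors
    return [(digest[i] - 128) / 128 for i in range(dim)]
```

Because the vector depends only on the input text, indexes built this way are reproducible across machines with no model downloads, which is exactly what makes the hash provider suitable for CI.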
Build Local Embeddings (High Quality)
First, install sentence-transformers:
uv pip install sentence-transformers
Then build:
uv run python scripts/build_embeddings.py --provider local
This creates 384-dimensional vectors using a local transformer model.
Step 3: Verify the Build
Check the generated index:
cat data/indexes/embeddings.yaml
Expected structure:
providers:
  hash:
    embeddings: {...}
    vector_dimension: 16
  local:
    embeddings: {...}
    vector_dimension: 384
metadata:
  default_provider: hash
Both providers are stored, so you can switch without rebuilding.
Step 4: Toggle at Runtime
Set the provider via environment variable:
# Use hash embeddings
export MEMORY_PALACE_EMBEDDINGS_PROVIDER=hash
# Use local embeddings
export MEMORY_PALACE_EMBEDDINGS_PROVIDER=local
The hooks automatically use the environment variable.
Step 5: Benchmark Quality
Run retrieval benchmarks:
uv run python scripts/build_embeddings.py \
--provider local \
--benchmark fixtures/semantic_queries.json \
--benchmark-top-k 3 \
--benchmark-only
The benchmark file should contain test queries:
{
  "queries": [
    {
      "query": "async context managers in Python",
      "expected": ["async-patterns.md", "context-managers.md"]
    }
  ]
}
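One common way to score such a benchmark is recall at k. The sketch below assumes the JSON shape above and takes the search function as a parameter, since the real script's internals are not shown here.

```python
# Sketch: score retrieval as recall@k against the queries file shown above.
# The search callable is a parameter; the real script's metrics may differ.
import json

def recall_at_k(benchmark_path: str, search, k: int = 3) -> float:
    """Fraction of expected documents that appear in the top-k results."""
    with open(benchmark_path) as f:
        queries = json.load(f)["queries"]
    hits = total = 0
    for q in queries:
        top = set(search(q["query"])[:k])
        hits += sum(1 for doc in q["expected"] if doc in top)
        total += len(q["expected"])
    return hits / total if total else 0.0
```

Running this once per provider gives a like-for-like quality comparison before committing to the heavier `local` model.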
Step 6: Using in Code
The CacheLookup class handles provider selection:
from memory_palace.cache_lookup import CacheLookup
lookup = CacheLookup(
    corpus_dir="plugins/memory-palace/docs/knowledge-corpus",
    index_dir="plugins/memory-palace/data/indexes",
    embedding_provider="env",  # Reads from environment
)
# Semantic search
results = lookup.search("gradient descent", mode="embeddings")
Fallback Strategy
If sentence-transformers is missing:
- System automatically falls back to hash provider
- CI environments always have a working provider
- Set `MEMORY_PALACE_EMBEDDINGS_PROVIDER=hash` to guarantee the fallback
Adding Custom Providers
Extend with additional providers:
# In your custom builder; external_api is a placeholder for your service client
def build_custom_embeddings(corpus):
    # Call the external embedding API
    embeddings = external_api.embed(corpus)
    return embeddings
# Builder preserves existing providers
uv run python scripts/build_embeddings.py \
--provider custom \
--custom-builder my_builder.py
Performance Considerations
| Provider | Memory | Latency | Accuracy |
|---|---|---|---|
| hash | ~1MB | <10ms | ~60% |
| local | ~500MB | ~100ms | ~90% |
For production:
- Use `local` for quality-critical retrieval
- Use `hash` for quick lookups and CI
Troubleshooting
sentence-transformers installation fails
# Try with specific version
uv pip install sentence-transformers==2.2.2
Embeddings not updating
# Force rebuild
rm data/indexes/embeddings.yaml
uv run python scripts/build_embeddings.py --provider local
Provider not found
Verify the environment variable is set correctly:
echo $MEMORY_PALACE_EMBEDDINGS_PROVIDER
Verification
Confirm semantic search works:
# In Python
from memory_palace.cache_lookup import CacheLookup
lookup = CacheLookup(
    corpus_dir="docs/knowledge-corpus",
    index_dir="data/indexes",
    embedding_provider="local",
)
results = lookup.search("your test query", mode="embeddings")
print(results)
Next Steps
- Memory Palace Curation for intake workflow
- Cache Modes for retrieval configuration
Memory Palace Curation Workflow
Learn how the research interceptor collaborates with knowledge-intake for effective curation.
Prerequisites
- Memory Palace plugin installed
- Understanding of cache modes (see Cache Modes)
Objectives
By the end of this tutorial, you’ll:
- Understand the intake flag payload
- Process the intake queue
- Use dual-output workflows
- Maintain curation quality
The Intake Flow
When a research query runs, Memory Palace evaluates whether new information should be captured:
Query Execution
|
v
[Hook Evaluation]
|
+-- Build IntakeFlagPayload
| - should_flag_for_intake
| - novelty_score
| - domain_alignment
| - duplicate_entry_ids
|
v
[Decision]
|
+-- High novelty + domain match --> Flag for intake
|
+-- Low novelty or duplicate --> Skip intake
|
v
[Output]
- Telemetry row
- Queue entry (if flagged)
Step 1: Understanding IntakeFlagPayload
The IntakeFlagPayload dataclass tracks three signals:
| Field | Description |
|---|---|
should_flag_for_intake | Should this query be queued? |
novelty_score | Heuristic for new information (0-1) |
domain_alignment | Matches against interests config |
# memory_palace/curation/models.py
from dataclasses import dataclass
from typing import List

@dataclass
class IntakeFlagPayload:
    should_flag_for_intake: bool
    novelty_score: float
    domain_alignment: List[str]
    duplicate_entry_ids: List[str]
    intake_delta_reasoning: str
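A hypothetical helper showing how these signals could combine into a flag decision. The threshold and reasoning strings are illustrative, and the dataclass is repeated so the sketch runs on its own.

```python
# Hypothetical combination of the signals into a flag decision; threshold and
# reasoning strings are illustrative, not the plugin's defaults.
from dataclasses import dataclass
from typing import List

@dataclass
class IntakeFlagPayload:      # repeated here for a self-contained example
    should_flag_for_intake: bool
    novelty_score: float
    domain_alignment: List[str]
    duplicate_entry_ids: List[str]
    intake_delta_reasoning: str

def build_payload(novelty: float, domains: List[str], duplicates: List[str],
                  threshold: float = 0.5) -> IntakeFlagPayload:
    # Flag only novel, domain-aligned, non-duplicate queries
    flag = novelty >= threshold and bool(domains) and not duplicates
    reason = ("High novelty content in aligned domain" if flag
              else "Low novelty, unaligned domain, or duplicate")
    return IntakeFlagPayload(flag, novelty, domains, duplicates, reason)
```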
Step 2: Monitoring the Hook
The hook outputs intake context to the runtime transcript:
[Memory Palace Intake]
Novelty: 0.75
Domains: python, async
Duplicates: []
Flag: True
Reasoning: High novelty content in aligned domain
Step 3: Processing the Intake Queue
When should_flag_for_intake=True, the hook writes to:
data/intake_queue.jsonl
Process the queue:
# View pending items
cat data/intake_queue.jsonl | jq .
# Process with CLI
uv run python skills/knowledge-intake/scripts/intake_cli.py \
--queue data/intake_queue.jsonl \
--review
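Since the queue is JSON Lines (one JSON object per line), it can also be inspected directly without the CLI; the fields on each entry are whatever the hook wrote.

```python
# Sketch: read pending intake items straight from the JSONL queue, skipping
# blank lines. Entry fields are whatever the hook serialized.
import json

def pending_items(queue_path: str) -> list[dict]:
    items = []
    with open(queue_path) as f:
        for line in f:
            line = line.strip()
            if line:
                items.append(json.loads(line))
    return items
```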
Step 4: Intake Decision Options
For each queued item:
| Action | When to Use |
|---|---|
| Accept | High value, unique information |
| Merge | Similar to existing entry |
| Reject | Low value or duplicate |
| Defer | Need more context |
# Accept item
uv run python intake_cli.py --item abc123 --accept
# Merge with existing
uv run python intake_cli.py --item abc123 --merge entry-456
# Reject
uv run python intake_cli.py --item abc123 --reject
Step 5: Using Dual-Output Mode
Generate both palace entry and developer documentation:
uv run python intake_cli.py \
--candidate /tmp/candidate.json \
--dual-output \
--prompt-pack marginal-value-dual \
--auto-accept
This creates:
- Palace entry: Stored in corpus
- Developer doc: Added to `docs/`
- Prompt artifact: Saved to `docs/prompts/<pack>.md`
Step 6: Telemetry Review
Check telemetry for intake patterns:
# View recent decisions
tail -20 data/telemetry/memory-palace.csv
Columns include:
- `novelty_score`
- `aligned_domains`
- `intake_delta_reasoning`
- `duplicate_entry_ids`
Curation Best Practices
Regular Review Cadence
- Daily: Check intake queue
- Weekly: Review telemetry patterns
- Monthly: KonMari session (prune low-value entries)
Document Decisions
Update docs/curation-log.md after each session:
## 2025-01-15 Curation Session
### Promoted
- async-patterns.md: High usage, evergreen
### Merged
- context-managers.md + async-context.md: Redundant
### Archived
- python-3.8-features.md: Outdated
Maintain Vitality Scores
Keep data/indexes/vitality-scores.yaml current:
entries:
  async-patterns:
    vitality: evergreen
    last_accessed: 2025-01-15
  python-3.8-features:
    vitality: probationary
    last_accessed: 2024-06-01
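Once that file is parsed (e.g. with `yaml.safe_load`), stale entries can be flagged mechanically; the 180-day cutoff below is illustrative, not a plugin default.

```python
# Sketch: flag entries whose last_accessed is older than a cutoff, using the
# structure shown above. The 180-day window is illustrative.
from datetime import date, timedelta

def stale_entries(entries: dict, today: date, max_age_days: int = 180) -> list[str]:
    cutoff = today - timedelta(days=max_age_days)
    # str() handles YAML parsers that return dates as either strings or date objects
    return [name for name, meta in entries.items()
            if date.fromisoformat(str(meta["last_accessed"])) < cutoff]

entries = {
    "async-patterns": {"vitality": "evergreen", "last_accessed": "2025-01-15"},
    "python-3.8-features": {"vitality": "probationary", "last_accessed": "2024-06-01"},
}
# stale_entries(entries, date(2025, 1, 15)) -> ["python-3.8-features"]
```

A monthly KonMari session could start from this list rather than a manual scan of the corpus.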
Troubleshooting
Too many intake flags
Solution: Raise intake threshold
# In config
intake_threshold: 0.7 # Higher = fewer flags
Missing domain alignment
Solution: Update domains of interest
# hooks/shared/config.py
domains_of_interest:
- python
- async
- testing
- architecture
Duplicate detection failing
Solution: Rebuild indexes
uv run python scripts/build_indexes.py
Verification
Confirm the workflow:
- Make a research query
- Check telemetry for decision
- View intake queue if flagged
- Process queue item
- Verify corpus update
# After processing
ls docs/knowledge-corpus/ | grep new-entry
Next Steps
- Return to Plugin Overview
- Explore Capabilities Reference
Capabilities Reference
Quick lookup table of all skills, commands, agents, and hooks in the Claude Night Market.
For full flag documentation and workflow examples: See Capabilities Reference Details.
Quick Reference Index
All Skills (Alphabetical)
| Skill | Plugin | Description |
|---|---|---|
api-review | pensive | API surface evaluation |
architecture-paradigm-client-server | archetypes | Client-server communication |
architecture-paradigm-cqrs-es | archetypes | CQRS and Event Sourcing |
architecture-paradigm-event-driven | archetypes | Asynchronous communication |
architecture-paradigm-functional-core | archetypes | Functional Core, Imperative Shell |
architecture-paradigm-hexagonal | archetypes | Ports & Adapters architecture |
architecture-paradigm-layered | archetypes | Traditional N-tier architecture |
architecture-paradigm-microkernel | archetypes | Plugin-based extensibility |
architecture-paradigm-microservices | archetypes | Independent distributed services |
architecture-paradigm-modular-monolith | archetypes | Single deployment with internal boundaries |
architecture-paradigm-pipeline | archetypes | Pipes-and-filters model |
architecture-paradigm-serverless | archetypes | Function-as-a-Service |
architecture-paradigm-service-based | archetypes | Coarse-grained SOA |
architecture-paradigm-space-based | archetypes | Data-grid architecture |
architecture-paradigms | archetypes | Orchestrator for paradigm selection |
agent-teams | conjure | Coordinate Claude Code Agent Teams through filesystem-based protocol |
architecture-aware-init | attune | Architecture-aware project initialization with research |
architecture-review | pensive | Architecture assessment |
authentication-patterns | leyline | Auth flow patterns |
bloat-detector | conserve | Detection algorithms for dead code, God classes, documentation duplication |
browser-recording | scry | Playwright browser recordings |
bug-review | pensive | Bug hunting |
catchup | imbue | Context recovery |
clear-context | conserve | Auto-clear workflow with session state persistence |
code-quality-principles | conserve | Core principles for AI-assisted code quality |
commit-messages | sanctum | Conventional commits |
context-optimization | conserve | MECW principles and 50% context rule |
cpu-gpu-performance | conserve | Resource monitoring and selective testing |
decisive-action | conserve | Decisive action patterns for efficient workflows |
delegation-core | conjure | Framework for delegation decisions |
diff-analysis | imbue | Semantic changeset analysis |
digital-garden-cultivator | memory-palace | Digital garden maintenance |
doc-consolidation | sanctum | Document merging |
doc-generator | scribe | Generate and remediate documentation |
doc-updates | sanctum | Documentation maintenance |
error-patterns | leyline | Standardized error handling |
escalation-governance | abstract | Model escalation decisions |
evaluation-framework | leyline | Decision thresholds |
evidence-logging | imbue | Capture methodology |
feature-review | imbue | Feature prioritization and gap analysis |
file-analysis | sanctum | File structure analysis |
do-issue | sanctum | GitHub issue resolution workflow |
fpf-review | pensive | FPF architecture review (Functional/Practical/Foundation) |
gemini-delegation | conjure | Gemini CLI integration |
gif-generation | scry | GIF processing and optimization |
git-platform | leyline | Cross-platform git forge detection and command mapping |
git-workspace-review | sanctum | Repo state analysis |
github-initiative-pulse | minister | Initiative progress tracking |
hook-authoring | abstract | Security-first hook development |
hooks-eval | abstract | Hook security scanning |
knowledge-intake | memory-palace | Intake and curation |
knowledge-locator | memory-palace | Spatial search |
makefile-dogfooder | abstract | Makefile analysis and enhancement |
makefile-generation | attune | Generate language-specific Makefiles |
makefile-review | pensive | Makefile best practices |
math-review | pensive | Mathematical correctness |
mcp-code-execution | conserve | MCP patterns for data pipelines |
methodology-curator | abstract | Surface expert frameworks for skill development |
media-composition | scry | Multi-source media stitching |
mission-orchestrator | attune | Unified lifecycle orchestrator for project development |
mecw-patterns | leyline | MECW implementation |
memory-palace-architect | memory-palace | Building virtual palaces |
modular-skills | abstract | Modular design patterns |
optimizing-large-skills | conserve | Large skill optimization |
performance-optimization | abstract | Progressive loading, token budgeting, and context-aware content delivery |
code-refinement | pensive | Duplication, algorithms, and clean code analysis |
damage-control | leyline | Agent-level error recovery for multi-agent coordination |
pr-prep | sanctum | PR preparation |
pr-review | sanctum | PR review workflows |
precommit-setup | attune | Set up pre-commit hooks |
progressive-loading | leyline | Dynamic content loading |
project-brainstorming | attune | Socratic ideation workflow |
project-execution | attune | Systematic implementation |
project-init | attune | Interactive project initialization |
project-planning | attune | Architecture and task breakdown |
project-specification | attune | Spec creation from brainstorm |
proof-of-work | imbue | Evidence-based work validation |
python-async | parseltongue | Async patterns |
python-packaging | parseltongue | Packaging with uv |
python-performance | parseltongue | Profiling and optimization |
python-testing | parseltongue | Pytest/TDD workflows |
pytest-config | leyline | Pytest configuration patterns |
qwen-delegation | conjure | Qwen MCP integration |
quota-management | leyline | Rate limiting and quotas |
release-health-gates | minister | Release readiness checks |
review-chamber | memory-palace | PR review knowledge capture and retrieval |
response-compression | conserve | Response compression patterns |
review-core | imbue | Scaffolding for detailed reviews |
risk-classification | leyline | Inline 4-tier risk classification for agent tasks |
rigorous-reasoning | imbue | Anti-sycophancy guardrails |
rule-catalog | hookify | Pre-built behavioral rule templates |
rust-review | pensive | Rust-specific checking |
safety-critical-patterns | pensive | NASA Power of 10 rules for robust code |
scope-guard | imbue | Anti-overengineering |
service-registry | leyline | Service discovery patterns |
session-management | sanctum | Session naming, checkpointing, and resume strategies |
session-palace-builder | memory-palace | Session-specific palaces |
shared-patterns | abstract | Reusable plugin development patterns |
shell-review | pensive | Shell script auditing for safety and portability |
slop-detector | scribe | Detect AI-generated content markers |
smart-sourcing | conserve | Balance accuracy with token efficiency |
skill-authoring | abstract | TDD methodology for skill creation |
skills-eval | abstract | Skill quality assessment |
spec-writing | spec-kit | Specification authoring |
speckit-orchestrator | spec-kit | Workflow coordination |
storage-templates | leyline | Storage abstraction patterns |
style-learner | scribe | Extract writing style from exemplar text |
structured-output | imbue | Formatting patterns |
task-planning | spec-kit | Task generation |
test-review | pensive | Test quality review |
subagent-testing | abstract | Testing patterns for subagent interactions |
test-updates | sanctum | Test maintenance |
testing-quality-standards | leyline | Test quality guidelines |
token-conservation | conserve | Token usage strategies |
tutorial-updates | sanctum | Tutorial maintenance and updates |
unified-review | pensive | Review orchestration |
update-readme | sanctum | README modernization |
usage-logging | leyline | Telemetry tracking |
version-updates | sanctum | Version bumping |
vhs-recording | scry | Terminal recordings with VHS |
war-room | attune | Multi-LLM expert council with Type 1/2 reversibility routing |
war-room-checkpoint | attune | Inline reversibility assessment for embedded escalation |
workflow-improvement | sanctum | Workflow retrospectives |
workflow-monitor | imbue | Workflow execution monitoring and issue creation |
workflow-setup | attune | Configure CI/CD pipelines |
writing-rules | hookify | Guide for authoring behavioral rules |
All Commands (Alphabetical)
| Command | Plugin | Description |
|---|---|---|
/ai-hygiene-audit | conserve | Audit codebase for AI-generated code quality issues (vibe coding, Tab bloat, slop) |
/aggregate-logs | abstract | Generate LEARNINGS.md from skill execution logs |
/analyze-growth | conserve | Analyzes skill growth patterns |
/analyze-hook | abstract | Analyzes hook for security/performance |
/bloat-scan | conserve | Progressive bloat detection (3-tier scan) |
/analyze-skill | abstract | Skill complexity analysis |
/analyze-tests | parseltongue | Test suite health report |
/api-review | pensive | API surface review |
/attune:brainstorm | attune | Brainstorm project ideas using Socratic questioning |
/attune:execute | attune | Execute implementation tasks systematically |
/attune:init | attune | Initialize new project with development infrastructure |
/attune:mission | attune | Run full project lifecycle as a single mission with state detection and recovery |
/attune:blueprint | attune | Plan architecture and break down tasks |
/attune:specify | attune | Create detailed specifications from brainstorm |
/attune:upgrade-project | attune | Add or update configurations in existing project |
/attune:validate | attune | Validate project structure against best practices |
/attune:war-room | attune | Multi-LLM expert deliberation with reversibility-based routing |
/architecture-review | pensive | Architecture assessment |
/bug-review | pensive | Bug hunting review |
/bulletproof-skill | abstract | Anti-rationalization workflow |
/catchup | imbue | Quick context recovery |
/check-async | parseltongue | Async pattern validation |
/close-issue | minister | Analyze if GitHub issues can be closed based on commits |
/commit-msg | sanctum | Generate commit message |
/context-report | abstract | Context optimization report |
/create-tag | sanctum | Create git tags for releases |
/create-command | abstract | Scaffold new command |
/create-hook | abstract | Scaffold new hook |
/create-issue | minister | Create GitHub issue with labels and references |
/create-skill | abstract | Scaffold new skill |
/doc-generate | scribe | Generate new documentation |
/doc-polish | scribe | Clean up AI-generated content |
/doc-verify | scribe | Validate documentation claims with proof-of-work |
/estimate-tokens | abstract | Token usage estimation |
/evaluate-skill | abstract | Evaluate skill execution quality |
/feature-review | imbue | Feature prioritization |
/do-issue | sanctum | Fix GitHub issues |
/fix-pr | sanctum | Address PR review comments |
/fix-workflow | sanctum | Workflow retrospective with automatic improvement context gathering |
/full-review | pensive | Unified code review |
/garden | memory-palace | Manage digital gardens |
/git-catchup | sanctum | Git repository catchup |
/hookify | hookify | Create behavioral rules to prevent unwanted actions |
/hookify:configure | hookify | Interactive rule enable/disable interface |
/hookify:help | hookify | Display hookify help and documentation |
/hookify:install | hookify | Install hookify rule from catalog |
/hookify:list | hookify | List all hookify rules with status |
/hooks-eval | abstract | Hook evaluation |
/improve-skills | abstract | Auto-improve skills from observability data |
/make-dogfood | abstract | Makefile enhancement |
/makefile-review | pensive | Makefile review |
/math-review | pensive | Mathematical review |
/merge-docs | sanctum | Consolidate ephemeral docs |
/navigate | memory-palace | Search palaces |
/optimize-context | conserve | Context optimization |
/palace | memory-palace | Manage palaces |
/plugin-review | abstract | Comprehensive plugin architecture review |
/pr | sanctum | Prepare pull request |
/prepare-pr | sanctum | Complete PR preparation with updates and validation |
/pr-review | sanctum | Enhanced PR review |
/record-browser | scry | Record browser session |
/record-terminal | scry | Create terminal recording |
/reinstall-all-plugins | leyline | Refresh all plugins |
/resolve-threads | sanctum | Resolve PR review threads |
/review-room | memory-palace | Manage PR review knowledge in palaces |
/run-profiler | parseltongue | Profile code execution |
/rust-review | pensive | Rust-specific review |
/shell-review | pensive | Shell script safety and portability review |
/skill-history | pensive | View recent skill executions with context |
/skill-logs | memory-palace | View and manage skill execution memories |
/skill-review | pensive | Analyze skill metrics and stability gaps |
/slop-scan | scribe | Scan files for AI slop markers |
/skills-eval | abstract | Skill quality assessment |
/speckit-analyze | spec-kit | Check artifact consistency |
/speckit-checklist | spec-kit | Generate checklist |
/speckit-clarify | spec-kit | Clarifying questions |
/speckit-constitution | spec-kit | Project constitution |
/speckit-implement | spec-kit | Execute tasks |
/speckit-plan | spec-kit | Generate plan |
/speckit-specify | spec-kit | Create specification |
/speckit-startup | spec-kit | Bootstrap workflow |
/speckit-tasks | spec-kit | Generate tasks |
/structured-review | imbue | Structured review workflow |
/style-learn | scribe | Create style profile from examples |
/test-review | pensive | Test quality review |
/test-skill | abstract | Skill testing workflow |
/unbloat | conserve | Safe bloat remediation with interactive approval |
/update-all-plugins | leyline | Update all plugins |
/update-dependencies | sanctum | Update project dependencies |
/update-docs | sanctum | Update documentation |
/update-labels | minister | Reorganize GitHub issue labels with professional taxonomy |
/update-plugins | sanctum | Audit plugin registrations + automatic performance analysis and improvement recommendations |
/update-readme | sanctum | Modernize README |
/update-tests | sanctum | Maintain tests |
/update-tutorial | sanctum | Update tutorial content |
/update-version | sanctum | Bump versions |
/validate-hook | abstract | Validate hook compliance |
/validate-plugin | abstract | Check plugin structure |
All Agents (Alphabetical)
| Agent | Plugin | Description |
|---|---|---|
ai-hygiene-auditor | conserve | Audit codebases for AI-generation warning signs |
architecture-reviewer | pensive | Principal-level architecture review |
bloat-auditor | conserve | Orchestrates bloat detection scans |
code-reviewer | pensive | Expert code review |
commit-agent | sanctum | Commit message generator |
context-optimizer | conserve | Context optimization |
continuation-agent | conserve | Continue work from session state checkpoint |
dependency-updater | sanctum | Dependency version management |
doc-editor | scribe | Interactive documentation editing |
doc-verifier | scribe | QA validation using proof-of-work methodology |
garden-curator | memory-palace | Digital garden maintenance |
git-workspace-agent | sanctum | Repository state analyzer |
implementation-executor | spec-kit | Task executor |
knowledge-librarian | memory-palace | Knowledge routing |
knowledge-navigator | memory-palace | Palace search |
media-recorder | scry | Autonomous media generation for demos and GIFs |
meta-architect | abstract | Plugin ecosystem design |
palace-architect | memory-palace | Palace design |
plugin-validator | abstract | Plugin validation |
pr-agent | sanctum | PR preparation |
project-architect | attune | Guides full-cycle workflow (brainstorm → plan) |
project-implementer | attune | Executes implementation with TDD |
python-linter | parseltongue | Strict ruff linting without bypasses |
python-optimizer | parseltongue | Performance optimization |
python-pro | parseltongue | Python 3.12+ expertise |
python-tester | parseltongue | Testing expertise |
review-analyst | imbue | Structured reviews |
rust-auditor | pensive | Rust security audit |
skill-auditor | abstract | Skill quality audit |
skill-evaluator | abstract | Skill execution evaluator |
skill-improver | abstract | Implements skill improvements from observability |
slop-hunter | scribe | Comprehensive AI slop detection |
spec-analyzer | spec-kit | Spec consistency |
task-generator | spec-kit | Task creation |
unbloat-remediator | conserve | Executes safe bloat remediation |
workflow-improvement-* | sanctum | Workflow improvement pipeline |
workflow-recreate-agent | sanctum | Workflow reconstruction |
All Hooks (Alphabetical)
| Hook | Plugin | Type | Description |
|---|---|---|---|
bridge.after_tool_use | conjure | PostToolUse | Suggests delegation for large output |
bridge.on_tool_start | conjure | PreToolUse | Suggests delegation for large input |
context_warning.py | conserve | PreToolUse | Context utilization monitoring |
detect-git-platform.sh | leyline | SessionStart | Detect git forge platform from remote URL |
homeostatic_monitor.py | abstract | PostToolUse | Stability gap monitoring, queues degrading skills for improvement |
local_doc_processor.py | memory-palace | PostToolUse | Processes local docs |
permission_request.py | conserve | PermissionRequest | Permission automation |
post-evaluation.json | abstract | Config | Quality scoring config |
post_implementation_policy.py | sanctum | SessionStart | Requires docs/tests updates |
pre-skill-load.json | abstract | Config | Pre-load validation |
pre_skill_execution.py | abstract | PreToolUse | Skill execution tracking |
research_interceptor.py | memory-palace | PreToolUse | Cache lookup before web |
security_pattern_check.py | sanctum | PreToolUse | Security anti-pattern detection |
session_complete_notify.py | sanctum | Stop | Cross-platform toast notifications |
session-start.sh | conserve/imbue | SessionStart | Session initialization |
skill_execution_logger.py | abstract | PostToolUse | Skill metrics logging |
skill_tracker_post.py | memory-palace | PostToolUse | Skill execution completion |
skill_tracker_pre.py | memory-palace | PreToolUse | Skill execution start tracking |
tdd_bdd_gate.py | imbue | PreToolUse | Iron Law enforcement at write-time |
url_detector.py | memory-palace | UserPromptSubmit | URL detection |
user-prompt-submit.sh | imbue | UserPromptSubmit | Scope validation |
verify_workflow_complete.py | sanctum | Stop | End-of-session workflow verification |
web_content_processor.py | memory-palace | PostToolUse | Web content processing |
Command Reference — Core Plugins
Flag and option documentation for core plugin commands (abstract, attune, conserve, imbue, sanctum).
Extended plugins: Memory Palace, Pensive, Parseltongue, Spec-Kit, Scribe, Scry, Hookify, Leyline
See also: Capabilities Reference | Skills | Agents | Hooks | Workflows
Command Syntax
/<plugin>:<command-name> [--flags] [positional-args]
Common Flag Patterns:
| Flag Pattern | Description | Example |
|---|---|---|
--verbose | Enable detailed output | /bloat-scan --verbose |
--dry-run | Preview without executing | /unbloat --dry-run |
--force | Skip confirmation prompts | /attune:init --force |
--report FILE | Output to file | /bloat-scan --report audit.md |
--level N | Set intensity/depth | /bloat-scan --level 3 |
--skip-X | Skip specific phase | /prepare-pr --skip-updates |
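These flag patterns follow conventional CLI semantics. As a minimal sketch (hypothetical, not the plugins' actual implementation), a script honoring them could be wired up with Python's `argparse`:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Parser accepting the common flag patterns listed above."""
    p = argparse.ArgumentParser(prog="bloat-scan")
    p.add_argument("--verbose", action="store_true", help="detailed output")
    p.add_argument("--dry-run", action="store_true", help="preview without executing")
    p.add_argument("--force", action="store_true", help="skip confirmation prompts")
    p.add_argument("--report", metavar="FILE", help="write report to FILE")
    p.add_argument("--level", type=int, choices=[1, 2, 3], default=1,
                   help="scan intensity/depth")
    return p

args = build_parser().parse_args(["--level", "3", "--report", "audit.md"])
print(args.level, args.report, args.dry_run)  # → 3 audit.md False
```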
Abstract Plugin
/abstract:validate-plugin
Validate plugin structure against ecosystem conventions.
# Usage
/abstract:validate-plugin [plugin-name] [--strict] [--fix]
# Options
--strict Fail on warnings (not just errors)
--fix Auto-fix correctable issues
--report FILE Output validation report
# Examples
/abstract:validate-plugin sanctum
/abstract:validate-plugin --strict conserve
/abstract:validate-plugin memory-palace --fix
/abstract:create-skill
Scaffold a new skill with proper frontmatter and structure.
# Usage
/abstract:create-skill <plugin>:<skill-name> [--template basic|modular] [--category]
# Options
--template Skill template type (basic or modular with modules/)
--category Skill category for classification
--interactive Guided creation flow
# Examples
/abstract:create-skill pensive:shell-review --template modular
/abstract:create-skill imbue:new-methodology --category workflow-methodology
/abstract:create-command
Scaffold a new command with hooks and documentation.
# Usage
/abstract:create-command <plugin>:<command-name> [--hooks] [--extends]
# Options
--hooks Include lifecycle hook templates
--extends Base command or skill to extend
--aliases Comma-separated command aliases
# Examples
/abstract:create-command sanctum:new-workflow --hooks
/abstract:create-command conserve:deep-clean --extends "conserve:bloat-scan"
/abstract:create-hook
Scaffold a new hook with security-first patterns.
# Usage
/abstract:create-hook <plugin>:<hook-name> [--type] [--lang]
# Options
--type Hook event type (PreToolUse|PostToolUse|SessionStart|Stop|UserPromptSubmit)
--lang Implementation language (bash|python)
--matcher Tool matcher pattern
# Examples
/abstract:create-hook memory-palace:cache-check --type PreToolUse --lang python
/abstract:create-hook sanctum:commit-validator --type PreToolUse --matcher "Bash"
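A hook scaffolded this way follows Claude Code's hook protocol: the event arrives as JSON on stdin and the exit status signals the decision (in current Claude Code, a nonzero "block" code such as 2 stops the tool call). The payload fields below are assumptions for illustration; a minimal PreToolUse decision function might look like:

```python
# Hypothetical PreToolUse hook logic; payload field names are assumed,
# not taken from the generated scaffold.
BLOCKED = ("rm -rf /", "git push --force")

def decide(payload: dict) -> tuple[int, str]:
    """Return (exit_code, message): 0 allows the tool call, 2 blocks it."""
    if payload.get("tool_name") != "Bash":
        return 0, ""  # not our tool; allow
    command = payload.get("tool_input", {}).get("command", "")
    for pattern in BLOCKED:
        if pattern in command:
            return 2, f"blocked dangerous pattern: {pattern}"
    return 0, ""

code, msg = decide({"tool_name": "Bash",
                    "tool_input": {"command": "rm -rf / tmp"}})
print(code, msg)  # → 2 blocked dangerous pattern: rm -rf /
```

In a real hook the script would read the payload with `json.load(sys.stdin)`, write `msg` to stderr, and `sys.exit(code)`.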
/abstract:analyze-skill
Analyze skill complexity and optimization opportunities.
# Usage
/abstract:analyze-skill <plugin>:<skill-name> [--metrics] [--suggest]
# Options
--metrics Show detailed token/complexity metrics
--suggest Generate optimization suggestions
--compare Compare against skill baselines
# Examples
/abstract:analyze-skill imbue:proof-of-work --metrics
/abstract:analyze-skill sanctum:pr-prep --suggest
/abstract:make-dogfood
Update Makefile demonstration targets to reflect current features.
# Usage
/abstract:make-dogfood [--check] [--update]
# Options
--check Verify Makefile is current (exit 1 if stale)
--update Apply updates to Makefile
--dry-run Show what would change
# Examples
/abstract:make-dogfood --check
/abstract:make-dogfood --update
/abstract:skills-eval
Evaluate skill quality across the ecosystem.
# Usage
/abstract:skills-eval [--plugin PLUGIN] [--threshold SCORE]
# Options
--plugin Limit to specific plugin
--threshold Minimum quality score (default: 70)
--output Output format (table|json|markdown)
# Examples
/abstract:skills-eval --plugin sanctum
/abstract:skills-eval --threshold 80 --output markdown
/abstract:hooks-eval
Evaluate hook security and performance.
# Usage
/abstract:hooks-eval [--plugin PLUGIN] [--security]
# Options
--plugin Limit to specific plugin
--security Focus on security patterns
--perf Focus on performance impact
# Examples
/abstract:hooks-eval --security
/abstract:hooks-eval --plugin memory-palace --perf
/abstract:evaluate-skill
Evaluate skill execution quality.
# Usage
/abstract:evaluate-skill <plugin>:<skill-name> [--metrics] [--suggestions]
# Options
--metrics Show detailed execution metrics
--suggestions Generate improvement suggestions
--compare Compare against baseline metrics
# Examples
/abstract:evaluate-skill imbue:proof-of-work --metrics
/abstract:evaluate-skill sanctum:pr-prep --suggestions
Attune Plugin
/attune:init
Initialize project with complete development infrastructure.
# Usage
/attune:init [--lang LANGUAGE] [--name NAME] [--author AUTHOR]
# Options
--lang LANGUAGE Project language: python|rust|typescript|go
--name NAME Project name (default: directory name)
--author AUTHOR Author name
--email EMAIL Author email
--python-version VER Python version (default: 3.10)
--description TEXT Project description
--path PATH Project path (default: .)
--force Overwrite existing files without prompting
--no-git Skip git initialization
# Examples
/attune:init --lang python --name my-cli
/attune:init --lang rust --author "Your Name" --force
/attune:brainstorm
Brainstorm project ideas using Socratic questioning.
# Usage
/attune:brainstorm [TOPIC] [--output FILE]
# Options
--output FILE Save brainstorm results to file
--rounds N Number of question rounds (default: 5)
--focus AREA Focus area: features|architecture|ux|technical
# Examples
/attune:brainstorm "CLI tool for data processing"
/attune:brainstorm --focus architecture --rounds 3
/attune:blueprint
Plan architecture and break down tasks.
# Usage
/attune:blueprint [--from BRAINSTORM] [--output FILE]
# Options
--from FILE Use brainstorm results as input
--output FILE Save plan to file
--depth LEVEL Planning depth: high|detailed|exhaustive
--include Include specific aspects: tests|ci|docs
# Examples
/attune:blueprint --from brainstorm.md --depth detailed
/attune:blueprint --include tests,ci
/attune:specify
Create detailed specifications from brainstorm or plan.
# Usage
/attune:specify [--from FILE] [--type TYPE]
# Options
--from FILE Input file (brainstorm or plan)
--type TYPE Spec type: technical|functional|api|data-model
--output DIR Output directory for specs
# Examples
/attune:specify --from plan.md --type technical
/attune:specify --type api --output .specify/
/attune:execute
Execute implementation tasks systematically.
# Usage
/attune:execute [--plan FILE] [--phase PHASE] [--task ID]
# Options
--plan FILE Task plan file (default: .specify/tasks.md)
--phase PHASE Execute specific phase: setup|tests|core|integration|polish
--task ID Execute specific task by ID
--parallel Enable parallel execution where marked [P]
--continue Resume from last checkpoint
# Examples
/attune:execute --plan tasks.md --phase setup
/attune:execute --task T1.2 --parallel
/attune:validate
Validate project structure against best practices.
# Usage
/attune:validate [--strict] [--fix]
# Options
--strict Fail on warnings
--fix Auto-fix correctable issues
--config Path to custom validation config
# Examples
/attune:validate --strict
/attune:validate --fix
/attune:upgrade-project
Add or update configurations in existing project.
# Usage
/attune:upgrade-project [--component COMPONENT] [--force]
# Options
--component Specific component: makefile|precommit|workflows|gitignore
--force Overwrite existing without prompting
--diff Show diff before applying
# Examples
/attune:upgrade-project --component makefile
/attune:upgrade-project --component workflows --force
Conserve Plugin
/conserve:bloat-scan
Progressive bloat detection for dead code and duplication.
# Usage
/bloat-scan [--level 1|2|3] [--focus TYPE] [--report FILE] [--dry-run]
# Options
--level 1|2|3 Scan tier: 1=quick, 2=targeted, 3=deep audit
--focus TYPE Focus area: code|docs|deps|all (default: all)
--report FILE Save report to file
--dry-run Preview findings without taking action
--exclude PATTERN Additional exclude patterns
# Scan Tiers
# Tier 1 (2-5 min): Large files, stale files, commented code, old TODOs
# Tier 2 (10-20 min): Dead code, duplicate patterns, import bloat
# Tier 3 (30-60 min): All above + cyclomatic complexity, dependency graphs
# Examples
/bloat-scan # Quick Tier 1 scan
/bloat-scan --level 2 --focus code # Targeted code analysis
/bloat-scan --level 3 --report Q1-audit.md # Deep audit with report
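The Tier 1 checks above (large files, commented-out code, stale TODOs) are simple static heuristics. A rough sketch of that kind of pass, with illustrative thresholds rather than the plugin's actual ones:

```python
import re
import tempfile
from pathlib import Path

def tier1_findings(root: Path, max_lines: int = 500) -> list[str]:
    """Flag large files and commented-out code (illustrative Tier 1 checks)."""
    findings = []
    for path in root.rglob("*.py"):
        lines = path.read_text().splitlines()
        if len(lines) > max_lines:
            findings.append(f"{path.name}: {len(lines)} lines (large file)")
        # Lines that look like disabled code rather than prose comments.
        commented = sum(1 for l in lines
                        if re.match(r"\s*#\s*(def |class |import )", l))
        if commented >= 3:
            findings.append(f"{path.name}: {commented} commented-out code lines")
    return findings

root = Path(tempfile.mkdtemp())
(root / "big.py").write_text("x = 1\n" * 600)
(root / "dead.py").write_text("# def old():\n# import os\n# class Gone:\n")
for finding in tier1_findings(root):
    print(finding)
```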
/conserve:unbloat
Safe bloat remediation with interactive approval.
# Usage
/unbloat [--approve LEVEL] [--dry-run] [--backup]
# Options
--approve LEVEL Auto-approve level: high|medium|low|all
--dry-run Show what would be removed
--backup Create backup branch before changes
--interactive Prompt for each item (default)
# Examples
/unbloat --dry-run # Preview all removals
/unbloat --approve high --backup # Auto-approve high priority, backup first
/unbloat --interactive # Approve each item manually
/conserve:optimize-context
Optimize context window usage.
# Usage
/optimize-context [--target PERCENT] [--scope PATH]
# Options
--target PERCENT Target context utilization (default: 50%)
--scope PATH Limit to specific directory
--suggest Only show suggestions, don't apply
--aggressive Apply all optimizations
# Examples
/optimize-context --target 40%
/optimize-context --scope plugins/sanctum/ --suggest
/conserve:analyze-growth
Analyze skill growth patterns.
# Usage
/analyze-growth [--plugin PLUGIN] [--days N] [--trend]
# Options
--plugin PLUGIN Limit to specific plugin
--days N Analysis period (default: 30)
--trend Show growth trend predictions
--alert Alert if growth exceeds threshold
# Examples
/analyze-growth --plugin conserve --days 60
/analyze-growth --trend --alert
Imbue Plugin
/imbue:catchup
Quick context recovery after session restart.
# Usage
/catchup [--depth LEVEL] [--focus AREA]
# Options
--depth LEVEL Recovery depth: shallow|standard|deep (default: standard)
--focus AREA Focus on: git|docs|issues|all
--since DATE Catch up from specific date
# Examples
/catchup # Standard recovery
/catchup --depth deep # Full context recovery
/catchup --focus git --since "3 days ago"
/imbue:feature-review
Feature prioritization and gap analysis.
# Usage
/feature-review [--scope BRANCH] [--against BASELINE]
# Options
--scope BRANCH Review specific branch
--against BASELINE Compare against baseline (main|tag|commit)
--gaps Focus on gap analysis
--priorities Generate priority rankings
# Examples
/feature-review --scope feature/new-api
/feature-review --gaps --against main
/imbue:structured-review
Structured review workflow with methodology options.
# Usage
/structured-review PATH [--methodology METHOD]
# Options
--methodology METHOD Review methodology: evidence-based|checklist|formal
--todos Generate TodoWrite items
--summary Include executive summary
# Examples
/structured-review plugins/sanctum/ --methodology evidence-based
/structured-review . --todos --summary
Sanctum Plugin
/sanctum:prepare-pr (alias: /pr)
Complete PR preparation workflow.
# Usage
/prepare-pr [--no-code-review] [--reviewer-scope SCOPE] [--skip-updates] [FILE]
/pr [options...] # Alias
# Options
--no-code-review Skip automated code review (faster)
--reviewer-scope SCOPE Review strictness: strict|standard|lenient
--skip-updates Skip documentation/test updates (Phase 0)
FILE Output file for PR description (default: pr_description.md)
# Reviewer Scope Levels
# strict - All suggestions must be addressed
# standard - Critical issues must be fixed, suggestions are recommendations
# lenient - Focus on blocking issues only
# Examples
/prepare-pr # Full workflow
/pr # Alias for full workflow
/prepare-pr --skip-updates # Skip Phase 0 updates
/prepare-pr --no-code-review # Skip code review
/prepare-pr --reviewer-scope strict # Strict review for critical changes
/prepare-pr --skip-updates --no-code-review # Fastest (legacy behavior)
/sanctum:commit-msg
Generate commit message.
# Usage
/commit-msg [--type TYPE] [--scope SCOPE]
# Options
--type TYPE Force commit type: feat|fix|docs|refactor|test|chore
--scope SCOPE Force commit scope
--breaking Include breaking change footer
--issue N Reference issue number
# Examples
/commit-msg
/commit-msg --type feat --scope api
/commit-msg --breaking --issue 42
/sanctum:do-issue
Fix GitHub issues.
# Usage
/do-issue ISSUE_NUMBER [--branch NAME]
# Options
--branch NAME Branch name (default: issue-N)
--auto-merge Attempt auto-merge after PR
--draft Create draft PR
# Examples
/do-issue 42
/do-issue 123 --branch fix/auth-bug
/do-issue 99 --draft
/sanctum:fix-pr
Address PR review comments.
# Usage
/fix-pr [PR_NUMBER] [--auto-resolve]
# Options
PR_NUMBER PR number (default: current branch's PR)
--auto-resolve Auto-resolve addressed comments
--batch Address all comments in batch
--interactive Address one comment at a time
# Examples
/fix-pr 42
/fix-pr --auto-resolve
/fix-pr 42 --batch
/sanctum:fix-workflow
Workflow retrospective with automatic improvement context.
# Usage
/fix-workflow [WORKFLOW_NAME] [--context]
# Options
WORKFLOW_NAME Specific workflow to analyze
--context Gather improvement context automatically
--lessons Generate lessons learned
--improvements Suggest workflow improvements
# Examples
/fix-workflow pr-review --context
/fix-workflow --lessons --improvements
/sanctum:pr-review
Enhanced PR review.
# Usage
/pr-review [PR_NUMBER] [--thorough]
# Options
PR_NUMBER PR to review (default: current)
--thorough Deep review with all checks
--quick Fast review of critical issues only
--security Security-focused review
# Examples
/pr-review 42
/pr-review --thorough
/pr-review --quick --security
/sanctum:update-docs
Update project documentation.
# Usage
/update-docs [--scope SCOPE] [--check]
# Options
--scope SCOPE Scope: all|api|readme|guides
--check Check only, don't modify
--sync Sync with code changes
# Examples
/update-docs
/update-docs --scope api
/update-docs --check
/sanctum:update-readme
Modernize README.
# Usage
/update-readme [--badges] [--toc]
# Options
--badges Update/add badges
--toc Update table of contents
--examples Update code examples
--full Full README refresh
# Examples
/update-readme
/update-readme --badges --toc
/update-readme --full
/sanctum:update-tests
Maintain tests.
# Usage
/update-tests [PATH] [--coverage]
# Options
PATH Test path to update
--coverage Ensure coverage targets
--missing Add missing tests
--modernize Update to modern patterns
# Examples
/update-tests tests/
/update-tests --missing --coverage
/sanctum:update-version
Bump versions.
# Usage
/update-version [VERSION] [--type TYPE]
# Options
VERSION Explicit version (e.g., 1.2.3)
--type TYPE Bump type: major|minor|patch|prerelease
--tag Create git tag
--push Push tag to remote
# Examples
/update-version 2.0.0
/update-version --type minor --tag
/update-version --type patch --tag --push
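The `--type major|minor|patch` arithmetic follows semantic versioning: bump one component, reset those to its right. A minimal sketch of just the version math (the command itself also edits manifests and can tag):

```python
def bump(version: str, kind: str) -> str:
    """Compute the next semantic version for --type major|minor|patch."""
    major, minor, patch = (int(part) for part in version.split("."))
    if kind == "major":
        return f"{major + 1}.0.0"
    if kind == "minor":
        return f"{major}.{minor + 1}.0"
    if kind == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump type: {kind}")

print(bump("1.2.3", "minor"))  # → 1.3.0
```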
/sanctum:update-dependencies
Update project dependencies.
# Usage
/update-dependencies [--type TYPE] [--dry-run]
# Options
--type TYPE Dependency type: all|prod|dev|security
--dry-run Preview updates without applying
--major Include major version updates
--security Security updates only
# Examples
/update-dependencies
/update-dependencies --dry-run
/update-dependencies --type security
/update-dependencies --major
/sanctum:git-catchup
Git repository catchup.
# Usage
/git-catchup [--since DATE] [--author AUTHOR]
# Options
--since DATE Start date for catchup
--author AUTHOR Filter by author
--branch BRANCH Specific branch
--format FORMAT Output format: summary|detailed|log
# Examples
/git-catchup --since "1 week ago"
/git-catchup --author "user@example.com"
/sanctum:create-tag
Create git tags for releases.
# Usage
/create-tag VERSION [--message MSG] [--sign]
# Options
VERSION Tag version (e.g., v1.0.0)
--message MSG Tag message
--sign Create signed tag
--push Push tag to remote
# Examples
/create-tag v1.0.0
/create-tag v1.0.0 --message "Release 1.0.0" --sign --push
Command Reference — Extended Plugins
Flag and option documentation for extended plugin commands (memory-palace, parseltongue, pensive, spec-kit, scribe, scry, hookify, leyline).
Core plugins: Abstract, Attune, Conserve, Imbue, Sanctum
See also: Capabilities Reference | Skills | Agents | Hooks | Workflows
Memory Palace Plugin
/memory-palace:garden
Manage digital gardens.
# Usage
/garden [ACTION] [--path PATH]
# Actions
tend Review and update garden entries
prune Remove stale/low-value entries
cultivate Add new entries from queue
status Show garden health metrics
# Options
--path PATH Garden path (default: docs/knowledge-corpus/)
--dry-run Preview changes
--score N Minimum score threshold for cultivation
# Examples
/garden tend # Review garden entries
/garden prune --dry-run # Preview what would be removed
/garden cultivate --score 70 # Add high-quality entries
/garden status # Show health metrics
/memory-palace:navigate
Search across knowledge palaces.
# Usage
/navigate QUERY [--scope SCOPE] [--type TYPE]
# Options
--scope SCOPE Search scope: local|corpus|all
--type TYPE Content type: docs|code|web|all
--limit N Maximum results (default: 10)
--relevance N Minimum relevance score
# Examples
/navigate "authentication patterns" --scope corpus
/navigate "pytest fixtures" --type docs --limit 5
/memory-palace:palace
Manage knowledge palaces.
# Usage
/palace [ACTION] [PALACE_NAME]
# Actions
create NAME Create new palace
list List all palaces
status NAME Show palace status
archive NAME Archive palace
# Options
--template TEMPLATE Palace template: session|project|topic
--from FILE Initialize from existing content
# Examples
/palace create project-x --template project
/palace list
/palace status project-x
/palace archive old-project
/memory-palace:review-room
Review items in the knowledge queue.
# Usage
/review-room [--status STATUS] [--source SOURCE]
# Options
--status STATUS Filter by status: pending|approved|rejected
--source SOURCE Filter by source: webfetch|websearch|manual
--batch N Review N items at once
--auto-score Auto-generate scores
# Examples
/review-room --status pending --batch 10
/review-room --source webfetch --auto-score
Parseltongue Plugin
/parseltongue:analyze-tests
Test suite health report.
# Usage
/analyze-tests [PATH] [--coverage] [--flaky]
# Options
--coverage Include coverage analysis
--flaky Detect potentially flaky tests
--slow N Flag tests slower than N seconds
--missing Find untested code
# Examples
/analyze-tests tests/ --coverage
/analyze-tests --flaky --slow 5
/analyze-tests src/api/ --missing
/parseltongue:run-profiler
Profile code execution.
# Usage
/run-profiler [COMMAND] [--type TYPE]
# Options
--type TYPE Profiler type: cpu|memory|line|call
--output FILE Output file for profile data
--flame Generate flame graph
--top N Show top N hotspots
# Examples
/run-profiler "python main.py" --type cpu
/run-profiler "pytest tests/" --type memory --flame
/run-profiler --type line --top 20
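A `--type cpu` run can be approximated with the standard library's `cProfile`; this is a sketch of the underlying mechanism, not necessarily what the command invokes:

```python
import cProfile
import io
import pstats

def hot() -> int:
    """A deliberately CPU-bound function to profile."""
    return sum(i * i for i in range(200_000))

profiler = cProfile.Profile()
profiler.enable()
hot()
profiler.disable()

# Render the top entries by cumulative time, as --top N might.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```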
/parseltongue:check-async
Async pattern validation.
# Usage
/check-async [PATH] [--strict]
# Options
--strict Strict async compliance
--suggest Suggest async improvements
--blocking Find blocking calls in async code
# Examples
/check-async src/ --strict
/check-async --blocking --suggest
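A `--blocking` check can be done statically by walking the AST for calls to known-blocking functions inside `async def` bodies. A sketch with an illustrative (not exhaustive) blocklist:

```python
import ast

BLOCKING = {"sleep", "urlopen"}  # illustrative blocking calls only

def blocking_in_async(source: str) -> list[str]:
    """Report calls to known-blocking functions inside async functions."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.AsyncFunctionDef):
            for call in ast.walk(node):
                if isinstance(call, ast.Call):
                    # Handle both time.sleep(...) and bare sleep(...).
                    name = getattr(call.func, "attr",
                                   getattr(call.func, "id", ""))
                    if name in BLOCKING:
                        hits.append(f"{node.name}: blocking call "
                                    f"{name}() at line {call.lineno}")
    return hits

sample = (
    "import time\n"
    "async def handler():\n"
    "    time.sleep(1)\n"
)
for hit in blocking_in_async(sample):
    print(hit)  # → handler: blocking call sleep() at line 3
```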
Pensive Plugin
/pensive:full-review
Unified code review.
# Usage
/full-review [PATH] [--scope SCOPE] [--output FILE]
# Options
--scope SCOPE Review scope: changed|staged|all
--output FILE Save review to file
--severity MIN Minimum severity: critical|high|medium|low
--categories Include categories: bugs|security|style|perf
# Examples
/full-review src/ --scope staged
/full-review --scope changed --severity high
/full-review . --output review.md --categories bugs,security
/pensive:code-review
Expert code review.
# Usage
/code-review [FILES...] [--focus FOCUS]
# Options
--focus FOCUS Focus area: bugs|api|tests|security|style
--evidence Include evidence logging
--lsp Enable LSP-enhanced review (requires ENABLE_LSP_TOOL=1)
# Examples
/code-review src/api.py --focus bugs
/code-review --focus security --evidence
ENABLE_LSP_TOOL=1 /code-review src/ --lsp
/pensive:architecture-review
Architecture assessment.
# Usage
/architecture-review [PATH] [--depth DEPTH]
# Options
--depth DEPTH Analysis depth: surface|standard|deep
--patterns Identify architecture patterns
--anti-patterns Flag anti-patterns
--suggestions Generate improvement suggestions
# Examples
/architecture-review src/ --depth deep
/architecture-review --patterns --anti-patterns
/pensive:rust-review
Rust-specific review.
# Usage
/rust-review [PATH] [--safety]
# Options
--safety Focus on unsafe code analysis
--lifetimes Analyze lifetime patterns
--memory Memory safety review
--perf Performance-focused review
# Examples
/rust-review src/lib.rs --safety
/rust-review --lifetimes --memory
/pensive:test-review
Test quality review.
# Usage
/test-review [PATH] [--coverage]
# Options
--coverage Include coverage analysis
--patterns Review test patterns (AAA, BDD)
--flaky Detect flaky test patterns
--gaps Find testing gaps
# Examples
/test-review tests/ --coverage
/test-review --patterns --gaps
/pensive:shell-review
Shell script safety and portability review.
# Usage
/shell-review [FILES...] [--strict]
# Options
--strict Strict POSIX compliance
--security Security-focused review
--portability Check cross-shell compatibility
# Examples
/shell-review scripts/*.sh --strict
/shell-review --security install.sh
/pensive:skill-review
Analyze skill runtime metrics and stability. This is the canonical command for skill performance analysis (execution counts, success rates, stability gaps).
For static quality analysis (frontmatter, structure), use abstract:skill-auditor.
# Usage
/skill-review [--plugin PLUGIN] [--recommendations]
# Options
--plugin PLUGIN Limit to specific plugin
--all-plugins Aggregate metrics across all plugins
--unstable-only Only show skills with stability_gap > 0.3
--skill NAME Deep-dive specific skill
--recommendations Generate improvement recommendations
# Examples
/skill-review --plugin sanctum
/skill-review --unstable-only
/skill-review --skill imbue:proof-of-work
/skill-review --all-plugins --recommendations
Spec-Kit Plugin
/speckit-startup
Bootstrap specification workflow.
# Usage
/speckit-startup [--dir DIR]
# Options
--dir DIR Specification directory (default: .specify/)
--template Use template structure
--minimal Minimal specification setup
# Examples
/speckit-startup
/speckit-startup --dir specs/
/speckit-startup --minimal
/speckit-clarify
Generate clarifying questions.
# Usage
/speckit-clarify [TOPIC] [--rounds N]
# Options
TOPIC Topic to clarify
--rounds N Number of question rounds
--depth Deep clarification
--technical Technical focus
# Examples
/speckit-clarify "user authentication"
/speckit-clarify --rounds 3 --technical
/speckit-specify
Create specification.
# Usage
/speckit-specify [--from FILE] [--output DIR]
# Options
--from FILE Input source (brainstorm, requirements)
--output DIR Output directory
--type TYPE Spec type: full|api|data|ui
# Examples
/speckit-specify --from requirements.md
/speckit-specify --type api --output .specify/
/speckit-plan
Generate implementation plan.
# Usage
/speckit-plan [--from SPEC] [--phases]
# Options
--from SPEC Source specification
--phases Include phase breakdown
--estimates Include time estimates
--dependencies Show task dependencies
# Examples
/speckit-plan --from .specify/spec.md
/speckit-plan --phases --estimates
/speckit-tasks
Generate task breakdown.
# Usage
/speckit-tasks [--from PLAN] [--parallel]
# Options
--from PLAN Source plan
--parallel Mark parallelizable tasks
--granularity Task granularity: coarse|medium|fine
--assignable Make tasks assignable
# Examples
/speckit-tasks --from .specify/plan.md
/speckit-tasks --parallel --granularity fine
/speckit-implement
Execute implementation plan.
# Usage
/speckit-implement [--phase PHASE] [--task ID] [--continue]
# Options
--phase PHASE Execute specific phase
--task ID Execute specific task
--continue Resume from checkpoint
--parallel Enable parallel execution
# Examples
/speckit-implement --phase setup
/speckit-implement --task T1.2
/speckit-implement --continue
/speckit-checklist
Generate implementation checklist.
# Usage
/speckit-checklist [--type TYPE] [--output FILE]
# Options
--type TYPE Checklist type: ux|test|security|deployment
--output FILE Output file
--interactive Interactive completion mode
# Examples
/speckit-checklist --type security
/speckit-checklist --type ux --output checklists/ux.md
/speckit-analyze
Check artifact consistency.
# Usage
/speckit-analyze [--strict] [--fix]
# Options
--strict Strict consistency checking
--fix Auto-fix inconsistencies
--report Generate consistency report
# Examples
/speckit-analyze
/speckit-analyze --strict --report
Scribe Plugin
/slop-scan
Scan files for AI-generated content markers.
# Usage
/slop-scan [PATH] [--fix] [--report FILE]
# Options
PATH File or directory to scan (default: current directory)
--fix Show fix suggestions
--report FILE Output to report file
# Examples
/slop-scan
/slop-scan docs/
/slop-scan README.md --fix
/slop-scan **/*.md --report slop-report.md
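A marker scan like this amounts to matching suspect phrasings line by line. A sketch with hypothetical markers (the real command's heuristics differ):

```python
import re

# Hypothetical slop markers for illustration only.
MARKERS = [
    r"\bdelve\b",
    r"\bIn conclusion\b",
    r"\bIt is important to note\b",
]

def slop_hits(text: str) -> list[tuple[int, str]]:
    """Return (line_number, marker) pairs for suspect phrasing."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in MARKERS:
            if re.search(pattern, line, flags=re.IGNORECASE):
                hits.append((lineno, pattern))
    return hits

doc = "Let's delve into the details.\nPlain sentence.\nIn conclusion, done.\n"
print(slop_hits(doc))
```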
/style-learn
Create style profile from examples.
# Usage
/style-learn [FILES] --name NAME
# Options
FILES Example files to learn from
--name NAME Profile name
--merge Merge with existing profile
# Examples
/style-learn good-examples/*.md --name house-style
/style-learn docs/api.md --name api-docs --merge
/doc-polish
Clean up AI-generated content.
# Usage
/doc-polish [FILES] [--style NAME] [--dry-run]
# Options
FILES Files to polish
--style NAME Apply learned style
--dry-run Preview changes without writing
# Examples
/doc-polish README.md
/doc-polish docs/*.md --style house-style
/doc-polish **/*.md --dry-run
/doc-generate
Generate new documentation.
# Usage
/doc-generate TYPE [--style NAME] [--output FILE]
# Options
TYPE Document type: readme|api|changelog|usage
--style NAME Apply learned style
--output FILE Output file path
# Examples
/doc-generate readme
/doc-generate api --style api-docs
/doc-generate changelog --output CHANGELOG.md
/doc-verify
Validate documentation claims with proof-of-work.
# Usage
/doc-verify [FILES] [--strict] [--report FILE]
# Options
FILES Files to verify
--strict Treat warnings as errors
--report FILE Output QA report
# Examples
/doc-verify README.md
/doc-verify docs/ --strict
/doc-verify **/*.md --report qa-report.md
Scry Plugin
/scry:record-terminal
Create terminal recording.
# Usage
/record-terminal [COMMAND] [--output FILE] [--format FORMAT]
# Options
COMMAND Command to record
--output FILE Output file (default: recording.gif)
--format FORMAT Output format: gif|svg|mp4|tape
--width N Terminal width
--height N Terminal height
--speed N Playback speed multiplier
# Examples
/record-terminal "make test" --output demo.gif
/record-terminal --format svg --width 80 --height 24
/scry:record-browser
Record browser session.
# Usage
/record-browser [URL] [--output FILE] [--actions FILE]
# Options
URL Starting URL
--output FILE Output file
--actions FILE Playwright actions script
--headless Run headless
--viewport WxH Viewport size
# Examples
/record-browser "http://localhost:3000" --output demo.mp4
/record-browser --actions test-flow.js --headless
Hookify Plugin
/hookify:install
Install hooks.
# Usage
/hookify:install [HOOK_NAME] [--plugin PLUGIN]
# Options
HOOK_NAME Specific hook to install
--plugin PLUGIN Install hooks from plugin
--all Install all available hooks
--dry-run Preview installation
# Examples
/hookify:install memory-palace-web-processor
/hookify:install --plugin conserve
/hookify:install --all --dry-run
/hookify:configure
Configure hook settings.
# Usage
/hookify:configure [HOOK_NAME] [--enable|--disable] [--set KEY=VALUE]
# Options
HOOK_NAME Hook to configure
--enable Enable hook
--disable Disable hook
--set KEY=VALUE Set configuration value
--reset Reset to defaults
# Examples
/hookify:configure memory-palace --set research_mode=cache_first
/hookify:configure context-warning --disable
/hookify:list
List installed hooks.
# Usage
/hookify:list [--plugin PLUGIN] [--status]
# Options
--plugin PLUGIN Filter by plugin
--status Show enabled/disabled status
--verbose Show full configuration
# Examples
/hookify:list
/hookify:list --plugin memory-palace --status
Leyline Plugin
/leyline:reinstall-all-plugins
Refresh all plugins.
# Usage
/reinstall-all-plugins [--force] [--clean]
# Options
--force Force reinstall even if up-to-date
--clean Clean install (remove then reinstall)
--verify Verify installation after reinstall
# Examples
/reinstall-all-plugins
/reinstall-all-plugins --clean --verify
/leyline:update-all-plugins
Update all plugins.
# Usage
/update-all-plugins [--check] [--exclude PLUGINS]
# Options
--check Check for updates only
--exclude PLUGINS Comma-separated plugins to skip
--major Include major version updates
# Examples
/update-all-plugins
/update-all-plugins --check
/update-all-plugins --exclude "experimental,beta"
Core plugins: Abstract, Attune, Conserve, Imbue, Sanctum
See also: Skills | Agents | Hooks | Workflows
Superpowers Integration
How Claude Night Market plugins integrate with the superpowers skills.
Overview
Many Night Market capabilities achieve their full potential when used alongside superpowers. While all plugins work standalone, superpowers provides foundational methodology skills that enhance workflows.
Installation
# Add the superpowers marketplace
/plugin marketplace add obra/superpowers
# Install the superpowers plugin
/plugin install superpowers@superpowers-marketplace
Dependency Matrix
| Plugin | Component | Type | Superpowers Dependency | Enhancement |
|---|---|---|---|---|
| abstract | /create-skill | Command | brainstorming | Socratic questioning |
| abstract | /create-command | Command | brainstorming | Concept development |
| abstract | /create-hook | Command | brainstorming | Security design |
| abstract | /test-skill | Command | test-driven-development | TDD methodology |
| sanctum | /pr | Command | receiving-code-review | PR validation |
| sanctum | /pr-review | Command | receiving-code-review | PR analysis |
| sanctum | /fix-pr | Command | receiving-code-review | Comment resolution |
| sanctum | /do-issue | Command | Multiple | Full workflow |
| spec-kit | /speckit-clarify | Command | brainstorming | Clarification |
| spec-kit | /speckit-plan | Command | writing-plans | Planning |
| spec-kit | /speckit-tasks | Command | executing-plans, systematic-debugging | Task breakdown |
| spec-kit | /speckit-implement | Command | executing-plans, systematic-debugging | Execution |
| spec-kit | /speckit-analyze | Command | systematic-debugging, verification-before-completion | Consistency |
| spec-kit | /speckit-checklist | Command | verification-before-completion | Validation |
| pensive | /full-review | Command | systematic-debugging, verification-before-completion | Debugging + evidence |
| parseltongue | python-testing | Skill | test-driven-development, testing-anti-patterns | TDD + anti-patterns |
| imbue | scope-guard | Skill | brainstorming, writing-plans, executing-plans | Anti-overengineering |
| imbue | /feature-review | Command | brainstorming | Feature prioritization |
| conserve | /optimize-context | Command | condition-based-waiting | Smart waiting |
| minister | issue-management | Skill | systematic-debugging | Bug investigation |
Superpowers Skills Referenced
| Skill | Purpose | Used By |
|---|---|---|
| brainstorming | Socratic questioning for idea refinement | abstract, spec-kit, imbue |
| test-driven-development | RED-GREEN-REFACTOR TDD cycle | abstract, sanctum, parseltongue |
| receiving-code-review | Technical rigor for evaluating suggestions | sanctum |
| requesting-code-review | Quality gates for code submission | sanctum |
| writing-plans | Structured implementation planning | spec-kit, imbue |
| executing-plans | Task execution with checkpoints | spec-kit |
| systematic-debugging | Four-phase debugging framework | spec-kit, pensive, minister |
| verification-before-completion | Evidence-based review standards | spec-kit, pensive, imbue |
| testing-anti-patterns | Common testing mistake prevention | parseltongue |
| condition-based-waiting | Smart polling/waiting strategies | conserve |
| subagent-driven-development | Autonomous subagent orchestration | sanctum |
| finishing-a-development-branch | Branch cleanup and finalization | sanctum |
Graceful Degradation
All Night Market plugins work without superpowers:
Without Superpowers
- Commands: Execute core functionality
- Skills: Provide standalone guidance
- Agents: Function with reduced automation
With Superpowers
- Commands: Enhanced methodology phases
- Skills: Integrated methodology patterns
- Agents: Full automation depth
Example: /do-issue Workflow
Without Superpowers
1. Parse issue
2. Analyze codebase
3. Implement fix
4. Create PR
With Superpowers
1. Parse issue
2. [subagent-driven-development] Plan subagent tasks
3. [writing-plans] Create structured plan
4. [test-driven-development] Write failing test
5. Implement fix
6. [requesting-code-review] Self-review
7. [finishing-a-development-branch] Cleanup
8. Create PR
Recommended Setup
For the full Night Market experience:
# 1. Add marketplaces
/plugin marketplace add obra/superpowers
/plugin marketplace add athola/claude-night-market
# 2. Install superpowers (foundational)
/plugin install superpowers@superpowers-marketplace
# 3. Install Night Market plugins
/plugin install sanctum@claude-night-market
/plugin install spec-kit@claude-night-market
/plugin install pensive@claude-night-market
Checking Integration
Verify superpowers is available:
/plugin list
# Should show superpowers@superpowers-marketplace
Commands will automatically detect and use superpowers when available.
Function Extraction Guidelines
Last Updated: 2025-12-06
Overview
This document provides standards and guidelines for function extraction and refactoring in the Claude Night Market plugin ecosystem. Following these guidelines produces maintainable, testable, and readable code.
Principles
1. Single Responsibility Principle (SRP)
A function should have only one reason to change.
2. Keep Functions Small
- Ideal: 10-20 lines of code
- Acceptable: 20-30 lines with clear logic
- Maximum: 50 lines with strong justification
- Never exceed 100 lines without splitting
3. Limited Parameters
- Ideal: 0-3 parameters
- Acceptable: 4-5 parameters with clear types
- Consider object parameter if 6+ parameters
4. Clear Naming
- Functions should be verbs that describe their action
- Use consistent naming conventions across the codebase
- Avoid abbreviations unless widely understood
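A minimal sketch illustrating the naming and responsibility principles above (both function names are hypothetical, invented for this example):

```python
# BAD - noun-like name, cryptic abbreviation, unclear purpose
def usr_nm(u):
    ...

# BETTER - verb phrase, one responsibility, typed parameter
def normalize_username(username: str) -> str:
    """Lowercase a username and strip surrounding whitespace."""
    return username.strip().lower()
```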
When to Extract Functions
Immediate Extraction Required
- Function exceeds 30 lines

# BAD - Too long
def process_large_content(content):
    lines = content.split('\n')
    filtered_lines = []
    for line in lines:
        if line.strip():
            if not line.startswith('#'):
                if len(line) < 100:
                    filtered_lines.append(line.strip())
    # ... 20 more lines

- Function has multiple responsibilities

# BAD - Multiple responsibilities
def analyze_and_optimize(content):
    # Analysis part
    complexity = calculate_complexity(content)
    quality = assess_quality(content)
    # Optimization part
    optimized = remove_redundancy(content)
    optimized = shorten_sentences(optimized)
    return optimized, complexity, quality

- Nested function depth exceeds 3 levels

# BAD - Too nested
def process_data(data):
    if data:
        for item in data:
            if item.valid:
                for subitem in item.children:
                    if subitem.active:
                        # Deep nesting - extract this
                        process_subitem(subitem)
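One way to flatten that nesting is a guard clause plus an extracted helper. This is a sketch, not the plugin's own code; `process_subitem` is stubbed here purely for illustration:

```python
def process_subitem(subitem):
    """Stub for illustration; the real work happens here."""
    subitem.processed = True

def process_data(data):
    # `data or []` replaces the outer `if data:` guard
    for item in data or []:
        if item.valid:
            _process_active_children(item)

def _process_active_children(item):
    # Extracted helper keeps nesting depth at two levels
    for subitem in item.children:
        if subitem.active:
            process_subitem(subitem)
```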
Consider Extraction
- Function has 4+ parameters

# CONSIDER - Many parameters
def create_report(title, content, author, date, format, include_header, include_footer):
    pass

# BETTER - Use configuration object
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReportConfig:
    title: str
    content: str
    author: str
    date: datetime
    format: str = "pdf"
    include_header: bool = True
    include_footer: bool = True

def create_report(config: ReportConfig):
    pass

- Complex conditional logic

# CONSIDER - Complex conditions
def calculate_rate(user, product, time, location, special_offer):
    if user.premium and product.category in ["electronics", "books"]:
        if time.hour < 12 and location.country == "US":
            if special_offer and not user.used_recently:
                return 0.9
    # ... more conditions

# BETTER - Extract condition checks
def _is_eligible_for_discount(user, product, time, location, special_offer):
    return (user.premium
            and product.category in ["electronics", "books"]
            and time.hour < 12
            and location.country == "US"
            and special_offer
            and not user.used_recently)
Extraction Patterns
1. Extract Method Pattern
Before:
def generate_report(data):
# Validate data
if not data:
raise ValueError("Data cannot be empty")
if not all(isinstance(item, dict) for item in data):
raise TypeError("All items must be dictionaries")
# Process data
processed = []
for item in data:
processed_item = {
'id': item.get('id'),
'name': item.get('name', '').title(),
'value': float(item.get('value', 0))
}
processed.append(processed_item)
# Calculate totals
total = sum(item['value'] for item in processed)
average = total / len(processed) if processed else 0
return {
'items': processed,
'summary': {
'total': total,
'average': average,
'count': len(processed)
}
}
After:
def generate_report(data):
"""Generate a report from data items."""
_validate_data(data)
processed_items = _process_data_items(data)
summary = _calculate_summary(processed_items)
return {
'items': processed_items,
'summary': summary
}
def _validate_data(data):
"""Validate input data."""
if not data:
raise ValueError("Data cannot be empty")
if not all(isinstance(item, dict) for item in data):
raise TypeError("All items must be dictionaries")
def _process_data_items(data):
"""Process individual data items."""
return [
{
'id': item.get('id'),
'name': item.get('name', '').title(),
'value': float(item.get('value', 0))
}
for item in data
]
def _calculate_summary(items):
"""Calculate summary statistics."""
total = sum(item['value'] for item in items)
return {
'total': total,
'average': total / len(items) if items else 0,
'count': len(items)
}
2. Strategy Pattern for Complex Logic
Before:
def optimize_content(content, strategy_type):
if strategy_type == "aggressive":
# Remove all emphasis
lines = content.split('\n')
cleaned = []
for line in lines:
if not line.strip().startswith('**'):
cleaned.append(line)
return '\n'.join(cleaned)
    elif strategy_type == "moderate":
        # Shorten code blocks
        # ... 20 lines of logic
        ...
    elif strategy_type == "gentle":
        # Only remove images
        # ... 20 lines of logic
        ...
After:
from abc import ABC, abstractmethod
class OptimizationStrategy(ABC):
"""Base class for content optimization strategies."""
@abstractmethod
def optimize(self, content: str) -> str:
"""Optimize content according to strategy."""
pass
class AggressiveOptimizationStrategy(OptimizationStrategy):
"""Aggressive content optimization."""
def optimize(self, content: str) -> str:
lines = content.split('\n')
cleaned = [
line for line in lines
if not line.strip().startswith('**')
]
return '\n'.join(cleaned)
class ModerateOptimizationStrategy(OptimizationStrategy):
"""Moderate content optimization."""
def optimize(self, content: str) -> str:
# Implementation for moderate optimization
pass
class GentleOptimizationStrategy(OptimizationStrategy):
"""Gentle content optimization."""
def optimize(self, content: str) -> str:
# Implementation for gentle optimization
pass
# Strategy registry
OPTIMIZATION_STRATEGIES = {
"aggressive": AggressiveOptimizationStrategy(),
"moderate": ModerateOptimizationStrategy(),
"gentle": GentleOptimizationStrategy()
}
def optimize_content(content: str, strategy_type: str) -> str:
"""Optimize content using specified strategy."""
if strategy_type not in OPTIMIZATION_STRATEGIES:
raise ValueError(f"Unknown strategy: {strategy_type}")
strategy = OPTIMIZATION_STRATEGIES[strategy_type]
return strategy.optimize(content)
3. Builder Pattern for Complex Construction
Before:
def create_complex_object(name, type, config, options, metadata):
obj = ComplexObject()
obj.name = name
obj.type = type
# Complex configuration
if config.get('enabled', True):
obj.enabled = True
obj.timeout = config.get('timeout', 30)
obj.retries = config.get('retries', 3)
# Options processing
for key, value in options.items():
if key.startswith('custom_'):
obj.custom_fields[key[7:]] = value
else:
setattr(obj, key, value)
# Metadata handling
obj.created_at = metadata.get('created_at', datetime.now())
obj.created_by = metadata.get('created_by', 'system')
return obj
After:
from datetime import datetime
from typing import Any, Dict

class ComplexObjectBuilder:
"""Builder for ComplexObject instances."""
def __init__(self):
self._object = ComplexObject()
def with_name(self, name: str) -> 'ComplexObjectBuilder':
self._object.name = name
return self
def with_type(self, obj_type: str) -> 'ComplexObjectBuilder':
self._object.type = obj_type
return self
def with_config(self, config: Dict[str, Any]) -> 'ComplexObjectBuilder':
self._object.enabled = config.get('enabled', True)
self._object.timeout = config.get('timeout', 30)
self._object.retries = config.get('retries', 3)
return self
def with_options(self, options: Dict[str, Any]) -> 'ComplexObjectBuilder':
for key, value in options.items():
if key.startswith('custom_'):
self._object.custom_fields[key[7:]] = value
else:
setattr(self._object, key, value)
return self
def with_metadata(self, metadata: Dict[str, Any]) -> 'ComplexObjectBuilder':
self._object.created_at = metadata.get('created_at', datetime.now())
self._object.created_by = metadata.get('created_by', 'system')
return self
def build(self) -> ComplexObject:
return self._object
# Usage
def create_complex_object(name, type, config, options, metadata):
return (ComplexObjectBuilder()
.with_name(name)
.with_type(type)
.with_config(config)
.with_options(options)
.with_metadata(metadata)
.build())
Testing Extracted Functions
1. Unit Test Each Extracted Function
import pytest

# Test for _validate_data
def test_validate_data_valid():
data = [{'id': 1, 'name': 'test'}]
# Should not raise
_validate_data(data)
def test_validate_data_empty():
with pytest.raises(ValueError, match="Data cannot be empty"):
_validate_data([])
def test_validate_data_invalid_type():
with pytest.raises(TypeError, match="All items must be dictionaries"):
_validate_data([{'id': 1}, "invalid"])
2. Test Strategy Implementations
def test_aggressive_optimization():
content = "**Bold text**\nNormal text\n**More bold**"
strategy = AggressiveOptimizationStrategy()
result = strategy.optimize(content)
assert "Normal text" in result
assert "**" not in result
3. Integration Tests
def test_generate_report_integration():
data = [
{'id': 1, 'name': 'test item', 'value': 100},
{'id': 2, 'name': 'another item', 'value': 200}
]
report = generate_report(data)
assert report['summary']['total'] == 300
assert report['summary']['average'] == 150
assert len(report['items']) == 2
Code Review Checklist
When reviewing code for function extraction:
Function Size
- Function is under 30 lines
- If over 30 lines, there’s a clear justification
- No function exceeds 100 lines
Responsibilities
- Function has a single, clear purpose
- Function name describes its purpose accurately
- Function doesn’t mix abstraction levels
Parameters
- Function has 0-5 parameters
- Parameters are well-typed
- Related parameters are grouped into objects
Complexity
- Cyclomatic complexity is under 10
- Nesting depth is under 4 levels
- No deeply nested ternary operators
Testability
- Function can be tested independently
- Function has no hidden dependencies
- Side effects are clearly documented
Documentation
- Function has a clear docstring
- Parameters are documented
- Return value is documented
- Exceptions are documented
Refactoring Workflow
1. Identify Refactoring Candidates
# Find the longest files (likely hosts of long functions)
find . -name "*.py" -exec wc -l {} \; | sort -n | tail -20
# Find complex functions (manual code review)
# Look for functions with:
# - Multiple return statements
# - Deep nesting
# - Many parameters
# - Mixed responsibilities
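Since `wc -l` only reports file lengths, a small `ast`-based script (a sketch, not part of the plugin tooling) can report per-function line counts directly:

```python
import ast

def long_functions(source: str, threshold: int = 30):
    """Yield (name, line_count) for defs longer than threshold lines."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is available on Python 3.8+
            length = node.end_lineno - node.lineno + 1
            if length > threshold:
                yield node.name, length
```

Run it over each file surfaced by the `find` command to get concrete extraction candidates.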
2. Create Tests First
# Write failing tests for the current behavior
def test_existing_behavior():
# Test the function as it exists now
pass
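A characterization test makes this concrete: it records today's behavior, quirks included, so the refactor can be verified against it. `legacy_slugify` below is a hypothetical function about to be refactored, not code from this project:

```python
def legacy_slugify(title):
    """Hypothetical legacy function targeted for refactoring."""
    return title.strip().lower().replace(" ", "-")

def test_legacy_slugify_current_behavior():
    assert legacy_slugify("  Hello World ") == "hello-world"
    # Pin the quirk: consecutive spaces become consecutive hyphens
    assert legacy_slugify("a  b") == "a--b"
```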
3. Extract Incrementally
- Extract small, private helper functions
- Run tests after each extraction
- Gradually extract larger functions
- Keep the public API stable
4. Optimize Imports and Dependencies
- Remove unused imports
- Group related imports
- Consider circular dependency issues
5. Update Documentation
- Update function docstrings
- Update API documentation
- Add examples for complex functions
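A docstring covering parameters, return value, and exceptions for the `generate_report` example above might look like this (Google style shown here as one option; the project's convention may differ):

```python
def generate_report(data):
    """Generate a report from data items.

    Args:
        data: Non-empty list of dicts with 'id', 'name', and 'value' keys.

    Returns:
        Dict with 'items' (processed entries) and 'summary'
        ('total', 'average', 'count').

    Raises:
        ValueError: If data is empty.
        TypeError: If any item is not a dict.
    """
    ...
```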
Tools and Automation
1. Complexity Analysis
# Using radon (complexity analyzer)
pip install radon
radon cc your_file.py -a
# Using flake8 (ships with the mccabe complexity checker)
pip install flake8
flake8 --max-complexity 10 your_file.py
2. Automated Refactoring Tools
# Using rope (refactoring library; driven from editors or scripts,
# e.g. via python-lsp-server, rather than a standalone CLI)
pip install rope
# Using black for formatting (maintains consistency)
pip install black
black your_file.py
3. Pre-commit Hooks
# .pre-commit-config.yaml
repos:
- repo: https://github.com/PyCQA/flake8
rev: 4.0.1
hooks:
- id: flake8
args: [--max-complexity=10, --max-line-length=100]
- repo: https://github.com/psf/black
rev: 22.3.0
hooks:
- id: black
language_version: python3
Examples from the Codebase
Before: GrowthController.generate_control_strategies()
The original function was 60+ lines and handled multiple responsibilities.
After Refactoring:
def generate_control_strategies(self, growth_rate: float) -> StrategyPlan:
"""Generate detailed control strategies for growth management."""
strategies = self._select_control_strategies(growth_rate)
monitoring = self._define_monitoring_needs(strategies)
implementation = self._plan_implementation(strategies, monitoring)
return StrategyPlan(strategies, monitoring, implementation)
def _select_control_strategies(self, growth_rate: float) -> List[Strategy]:
    """Select appropriate control strategies based on growth rate."""
    # Extracted strategy selection logic
    ...

def _define_monitoring_needs(self, strategies: List[Strategy]) -> MonitoringPlan:
    """Define monitoring requirements for selected strategies."""
    # Extracted monitoring logic
    ...

def _plan_implementation(self, strategies: List[Strategy],
                         monitoring: MonitoringPlan) -> ImplementationPlan:
    """Plan implementation steps for strategies and monitoring."""
    # Extracted implementation planning
    ...
This refactoring:
- Reduced main function to 5 lines
- Created three focused helper functions
- Made each function independently testable
- Improved readability and maintainability
Conclusion
Following these function extraction guidelines will:
- Improve Maintainability: Smaller, focused functions are easier to understand and modify
- Enhance Testability: Each function can be tested in isolation
- Increase Reusability: Extracted functions can be reused in different contexts
- Reduce Bugs: Simpler functions have fewer edge cases and are easier to verify
- Improve Code Review: Smaller functions are easier to review and understand
Remember: The goal is not just to make functions smaller, but to make the code more readable, maintainable, and testable.
Achievement System
Track your learning progress through the Claude Night Market documentation.
How It Works
As you explore the documentation, complete tutorials, and try plugins, you earn achievements. Progress is saved in your browser’s local storage.
Available Achievements
Getting Started
| Achievement | Description | Status |
|---|---|---|
| Marketplace Pioneer | Add the Night Market marketplace | |
| Skill Apprentice | Use your first skill | |
| PR Pioneer | Prepare your first pull request |
Documentation Explorer
| Achievement | Description | Status |
|---|---|---|
| Plugin Explorer | Read all plugin documentation pages | |
| Domain Master | Use all domain specialist plugins |
Tutorial Completion
| Achievement | Description | Status |
|---|---|---|
| Cache Commander | Complete the Cache Modes tutorial | |
| Semantic Scholar | Complete the Embedding Upgrade tutorial | |
| Knowledge Curator | Complete the Curation tutorial | |
| Tutorial Master | Complete all tutorials |
Plugin Mastery
| Achievement | Description | Status |
|---|---|---|
| Foundation Builder | Install all foundation layer plugins | |
| Utility Expert | Install all utility layer plugins | |
| Full Stack | Install all plugins |
Advanced
| Achievement | Description | Status |
|---|---|---|
| Spec Master | Complete a full spec-kit workflow | |
| Review Expert | Complete a full pensive review | |
| Palace Architect | Build your first memory palace |
Achievement Tiers
| Tier | Achievements | Badge |
|---|---|---|
| Bronze | 1-5 | Night Market Visitor |
| Silver | 6-10 | Night Market Regular |
| Gold | 11-14 | Night Market Expert |
| Platinum | 15 | Night Market Master |