Claude Night Market
Claude Night Market contains 16 plugins for Claude Code that automate git operations, code review, and specification-driven development. Each plugin operates independently, allowing you to install only the components required for your specific workflow.
Architecture
The ecosystem uses a layered architecture to manage dependencies and token usage.
- Domain Specialists: Plugins like pensive (code review) and minister (issue tracking) provide high-level task automation.
- Utility Layer: Provides resource management services, such as token conservation in conserve.
- Foundation Layer: Implements core mechanics used across the ecosystem, including permission handling in sanctum.
- Meta Layer: abstract provides tools for cross-plugin validation and enforcement of project standards.
Design Philosophy
The project prioritizes token efficiency through shallow dependency chains. Progressive loading ensures that plugin logic enters the system prompt only when a specific feature is active. We enforce a “specification-first” workflow, requiring a written design phase before code generation begins.
Claude Code Integration
Plugins require Claude Code 2.1.0 or later to use features like:
- Hot-reloading: Skills update immediately upon file modification.
- Context Forking: Risky operations run in isolated context windows.
- Lifecycle Hooks: Frontmatter hooks execute logic at specific execution points.
- Wildcard Permissions: Pre-approved tool access reduces manual confirmation prompts.
Integration with Superpowers
These plugins integrate with the superpowers marketplace. While Night Market handles high-level process and workflow orchestration, superpowers provides the underlying methodology for TDD, debugging, and execution analysis.
Quick Start
# 1. Add the marketplace
/plugin marketplace add athola/claude-night-market
# 2. Install a plugin
/plugin install sanctum@claude-night-market
# 3. Use a command
/pr
# 4. Invoke a skill
Skill(sanctum:git-workspace-review)
Getting Started
This section will guide you through setting up Claude Night Market and using your first plugins.
Overview
This section covers:
- Installing the marketplace and plugins
- Invoking skills, commands, and agents
- Plugin dependency structure
Prerequisites
- Claude Code installed and configured.
- A terminal.
- Git (for version control workflows).
Quick Overview
The Claude Night Market provides three types of capabilities:
| Type | Description | How to Use |
|---|---|---|
| Skills | Reusable methodology guides | Skill(plugin:skill-name) |
| Commands | Quick actions with slash syntax | /command-name |
| Agents | Autonomous task executors | Referenced in skill workflows |
Sections
- Installation: Add the marketplace and install plugins
- Your First Plugin: Hands-on tutorial with sanctum
- Quick Start Guide: Common workflows and patterns
Achievement: Getting Started
Complete the installation steps to unlock the Marketplace Pioneer badge.
Installation
This guide walks you through adding the Claude Night Market to your Claude Code setup.
Prerequisites
- Claude Code 2.1.16+ (2.1.32+ for agent teams features)
- Python 3.9+ — required for hook execution. macOS ships Python 3.9.6 as the system interpreter; hooks run under this rather than virtual environments. Plugin packages may target higher versions (3.10+, 3.12+) via uv.
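Because hooks run under the system interpreter rather than a project virtualenv, a hook script can guard its own version requirement explicitly. A minimal sketch (the guard itself is illustrative, not part of any plugin):

```python
import sys

# Hooks run under the system interpreter (Python 3.9 on stock macOS),
# not a project virtualenv, so check the version up front.
MIN_VERSION = (3, 9)

def check_interpreter(version_info=sys.version_info, minimum=MIN_VERSION):
    """Return True if the running interpreter meets the minimum version."""
    return tuple(version_info[:2]) >= minimum

if not check_interpreter():
    sys.exit("This hook requires Python %d.%d+" % MIN_VERSION)
```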
Step 1: Add the Marketplace
Open Claude Code and run:
/plugin marketplace add athola/claude-night-market
This registers the marketplace, making all plugins available for installation.
Step 2: Browse Available Plugins
View the marketplace contents:
/plugin marketplace list
You’ll see plugins organized by layer:
| Layer | Plugins | Purpose |
|---|---|---|
| Meta | abstract | Plugin infrastructure |
| Foundation | imbue, sanctum, leyline | Core workflows |
| Utility | conserve, conjure | Resource optimization |
| Domain | archetypes, pensive, parseltongue, memory-palace, spec-kit, minister, attune | Specialized tasks |
Step 3: Install Individual Plugins
Install plugins based on your needs:
# Git and workspace operations
/plugin install sanctum@claude-night-market
# Specification-driven development
/plugin install spec-kit@claude-night-market
# Code review toolkit
/plugin install pensive@claude-night-market
# Python development
/plugin install parseltongue@claude-night-market
Step 4: Verify Installation
Check that plugins loaded correctly:
/plugin list
Installed plugins appear with their available skills and commands.
Optional: Install Superpowers
For enhanced methodology integration:
# Add superpowers marketplace
/plugin marketplace add obra/superpowers
# Install superpowers
/plugin install superpowers@superpowers-marketplace
Superpowers provides TDD, debugging, and review patterns that enhance Night Market plugins.
Alternative: opkg (OpenPackage)
Each plugin ships an openpackage.yml manifest for installation via opkg:
opkg i gh@athola/claude-night-market --plugins sanctum
opkg i gh@athola/claude-night-market --plugins pensive,spec-kit
Plugins that depend on shared runtime skills (attune, conjure, imbue,
memory-palace, parseltongue,
sanctum) automatically pull packages/core as a dependency.
Recommended Plugin Sets
Minimal Setup
For basic git workflows:
/plugin install sanctum@claude-night-market
Development Setup
For active feature development:
/plugin install sanctum@claude-night-market
/plugin install imbue@claude-night-market
/plugin install spec-kit@claude-night-market
Full Setup
For detailed workflow coverage:
/plugin install abstract@claude-night-market
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market
/plugin install conserve@claude-night-market
/plugin install pensive@claude-night-market
/plugin install spec-kit@claude-night-market
Post-Installation Setup
Several plugins register Setup hooks that run one-time initialization (directory creation, index building, configuration). Trigger them after installing:
# One-time initialization
claude --init
# Periodic maintenance (weekly or monthly)
claude --maintenance
--init runs setup tasks like creating knowledge garden directories
(memory-palace) and initializing caches (conserve).
--maintenance handles heavier operations like rebuilding indexes,
cleaning stale captures, and rotating logs.
Neither runs automatically on every session.
Troubleshooting
Plugin not loading?
- Verify the marketplace was added: /plugin marketplace list
- Check for typos in the plugin name
- Restart Claude Code session
Conflicts between plugins?
Plugins are composable. If you experience issues:
- Check the plugin’s README for dependency requirements
- Verify that foundation plugins (imbue, leyline) are installed if using domain plugins
Next Steps
Continue to Your First Plugin for a hands-on tutorial.
Your First Plugin: sanctum
This hands-on tutorial walks you through using the sanctum plugin for git and workspace operations.
What You’ll Build
By the end of this tutorial, you’ll:
- Review your git workspace state
- Generate a conventional commit message
- Prepare a pull request description
Prerequisites
- sanctum plugin installed: /plugin install sanctum@claude-night-market
- A git repository with some uncommitted changes
Part 1: Workspace Review
Before any git operation, understand your current state.
Invoke the Skill
Skill(sanctum:git-workspace-review)
This skill runs a preflight checklist:
- Current branch and remote tracking
- Staged vs unstaged changes
- Recent commit history
- Untracked files
What to Expect
Claude will analyze your repository and report:
Repository: my-project
Branch: feature/add-login
Tracking: origin/feature/add-login (up to date)
Staged Changes:
M src/auth/login.ts
A src/auth/types.ts
Unstaged Changes:
M README.md
Untracked:
src/auth/tests/login.test.ts
Part 2: Commit Message Generation
Now generate a conventional commit message for your staged changes.
Using the Command
/commit-msg
Or invoke the skills directly:
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)
Understanding the Output
Claude analyzes staged changes and generates:
feat(auth): add login form with validation
- Implement LoginForm component with email/password fields
- Add form validation using zod schema
- Create auth types for login request/response
Closes #42
The commit follows Conventional Commits format:
- Type: feat, fix, docs, style, refactor, test, chore
- Scope: Optional context (auth, api, ui)
- Description: Imperative mood, present tense
- Body: Bullet points explaining what changed
- Footer: Issue references
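The header format described above can be checked mechanically. A hedged sketch of a validator for the listed types (illustrative only, not part of sanctum):

```python
import re

# Pattern for a Conventional Commits header: type(scope): description
HEADER_RE = re.compile(
    r"^(?P<type>feat|fix|docs|style|refactor|test|chore)"
    r"(?:\((?P<scope>[a-z0-9-]+)\))?"  # optional scope, e.g. (auth)
    r"!?: "                            # optional breaking-change marker
    r"(?P<description>.+)$"
)

def parse_commit_header(header: str):
    """Return (type, scope, description), or None if the header is invalid."""
    m = HEADER_RE.match(header)
    if m is None:
        return None
    return m.group("type"), m.group("scope"), m.group("description")
```

For example, `parse_commit_header("docs: fix typo")` yields a `None` scope, while a malformed header returns `None` outright.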
Part 3: PR Preparation
Finally, prepare a pull request description.
Using the Command
/pr
This runs the full PR preparation workflow:
- Workspace review
- Quality gates check
- Change summarization
- PR description generation
Quality Gates
Before generating the PR, Claude checks:
Quality Gates:
[x] Code compiles
[x] Tests pass
[x] Linting clean
[x] No console.log statements
[x] Documentation updated
Generated PR Description
## Summary
Add user authentication with login form validation.
## Changes
- **New Feature**: Login form component with email/password validation
- **Types**: Auth request/response type definitions
- **Tests**: Unit tests for login validation logic
## Testing
- [x] Manual testing of form submission
- [x] Unit tests pass (15 new tests)
- [x] Integration tests pass
## Screenshots
[Add screenshots if UI changes]
## Checklist
- [x] Tests added
- [x] Documentation updated
- [x] No breaking changes
Workflow Chaining
These skills work together. The recommended flow:
git-workspace-review (foundation)
├── commit-messages (depends on workspace state)
├── pr-prep (depends on workspace state)
├── doc-updates (depends on workspace state)
└── version-updates (depends on workspace state)
Always run git-workspace-review first to establish context.
Common Patterns
Pre-Commit Workflow
# Stage your changes
git add -p
# Review and commit
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)
# Apply the message
git commit -m "<generated message>"
Pre-PR Workflow
# Run quality checks
make fmt && make lint && make test
# Prepare PR
/pr
# Create on GitHub
gh pr create --title "<title>" --body "<generated body>"
Next Steps
- Read the Quick Start Guide for more workflow patterns
- Explore other plugins in the Plugin Overview
- Check the Capabilities Reference for all available skills
Achievements Earned
- Skill Apprentice: Used your first skill
- PR Pioneer: Prepared your first PR
Quick Start Guide
Common workflows and patterns for Claude Night Market plugins.
Workflow Recipes
Feature Development
Start features with a specification:
# (Optional) Resume persistent speckit context for this repo/session
/speckit-startup
# Create specification from idea
/speckit-specify Add user authentication with OAuth2
# Generate implementation plan
/speckit-plan
# Create ordered tasks
/speckit-tasks
# Execute tasks
/speckit-implement
# Verify artifacts stay consistent
/speckit-analyze
Code Review
Run a detailed code review:
# Full review with intelligent skill selection
/full-review
# Or specific review types
/architecture-review # Architecture assessment
/api-review # API surface evaluation
/bug-review # Bug hunting
/test-review # Test quality
/rust-review # Rust-specific (if applicable)
Context Recovery
Get up to speed on changes:
# Quick catchup on recent changes
/catchup
# Or with sanctum's git-specific variant
/git-catchup
Context Optimization
Monitor and optimize context usage:
# Analyze context window usage
/optimize-context
# Check skill growth patterns (consolidated into bloat-scan)
/bloat-scan
Skill Invocation Patterns
Basic Skill Usage
# Standard format
Skill(plugin:skill-name)
# Examples
Skill(sanctum:git-workspace-review)
Skill(imbue:diff-analysis)
Skill(conserve:context-optimization)
Skill Chaining
Some skills depend on others:
# Pensive depends on imbue and sanctum
Skill(sanctum:git-workspace-review)
Skill(imbue:review-core)
Skill(pensive:architecture-review)
Skill with Dependencies
Check a plugin’s README for dependency chains:
spec-kit depends on imbue
pensive depends on imbue + sanctum
sanctum depends on imbue (for some skills)
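The chains above imply an installation order: foundations first, dependents after. A sketch of resolving that order with a depth-first walk (the dependency map is hypothetical, mirroring the chains listed):

```python
# Hypothetical dependency map mirroring the chains described above.
DEPENDENCIES = {
    "imbue": [],
    "sanctum": ["imbue"],
    "spec-kit": ["imbue"],
    "pensive": ["imbue", "sanctum"],
}

def install_order(plugin, deps=DEPENDENCIES, seen=None):
    """Return plugins in dependency-first order (depth-first postorder)."""
    if seen is None:
        seen = []
    for dep in deps.get(plugin, []):
        install_order(dep, deps, seen)
    if plugin not in seen:
        seen.append(plugin)
    return seen
```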
Command Quick Reference
Git Operations (sanctum)
| Command | Purpose |
|---|---|
/commit-msg | Generate commit message |
/pr | Prepare pull request |
/fix-pr | Address PR review comments |
/do-issue | Fix GitHub issues |
/update-docs | Update documentation (includes README) |
/update-tests | Maintain tests |
/update-version | Bump versions |
Specification (spec-kit)
| Command | Purpose |
|---|---|
/speckit-specify | Create specification |
/speckit-plan | Generate plan |
/speckit-tasks | Create tasks |
/speckit-implement | Execute tasks |
/speckit-analyze | Check consistency |
/speckit-clarify | Ask clarifying questions |
Review (pensive)
| Command | Purpose |
|---|---|
/full-review | Unified review |
/architecture-review | Architecture check |
/api-review | API surface review |
/bug-review | Bug hunting |
/test-review | Test quality |
Analysis (imbue)
| Command | Purpose |
|---|---|
/catchup | Quick context recovery |
/structured-review | Structured review with evidence |
Skill(imbue:scope-guard) | Feature prioritization (consolidated into scope-guard) |
Plugin Management (leyline)
| Command | Purpose |
|---|---|
/reinstall-all-plugins | Refresh all plugins |
/update-all-plugins | Update all plugins |
Environment Variables
Some plugins support configuration via environment variables:
Conservation
# Skip optimization guidance for fast processing
CONSERVATION_MODE=quick claude
# Full guidance with extended allowance
CONSERVATION_MODE=deep claude
Memory Palace
# Set embedding provider
MEMORY_PALACE_EMBEDDINGS_PROVIDER=hash # or local
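A helper script might read these variables with fallbacks. A sketch (the "standard" default for conservation mode is an assumption, not documented behavior):

```python
import os

def load_config(environ=os.environ):
    """Collect plugin settings from the environment, with defaults."""
    return {
        # conserve: "quick" skips optimization guidance, "deep" extends it.
        # "standard" is an assumed fallback, not a documented value.
        "conservation_mode": environ.get("CONSERVATION_MODE", "standard"),
        # memory-palace: embedding backend, "hash" or "local"
        "embeddings_provider": environ.get(
            "MEMORY_PALACE_EMBEDDINGS_PROVIDER", "hash"
        ),
    }
```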
Tips
1. Start with Foundation
Install foundation plugins first:
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
Then add domain specialists as needed.
2. Use TodoWrite Integration
Most skills output TodoWrite items for tracking:
git-review:repo-confirmed
git-review:status-overview
pr-prep:quality-gates
Monitor these for workflow progress.
3. Chain Skills Intentionally
Don’t invoke all skills at once. Build understanding incrementally:
# First: understand state
Skill(sanctum:git-workspace-review)
# Then: perform action
Skill(sanctum:commit-messages)
4. Use Superpowers
If superpowers is installed, commands gain enhanced capabilities:
- /create-skill uses brainstorming
- /test-skill uses TDD methodology
- /pr uses code review patterns
Next Steps
- Explore individual plugins in the Plugins section
- Reference all capabilities in Capabilities Reference
Common Workflows Guide
When and how to use commands, skills, and subagents for typical development tasks.
Quick Reference
| Task | Primary Tool | Plugin |
|---|---|---|
| Initialize a project | /attune:arch-init | attune |
| Review a PR | /full-review | pensive |
| Fix PR feedback | /fix-pr | sanctum |
| Prepare a PR | /pr | sanctum |
| Catch up on changes | /catchup | imbue |
| Write specifications | /speckit-specify | spec-kit |
| Improve system | /speckit-analyze | spec-kit |
| Debug an issue | Skill(superpowers:systematic-debugging) | superpowers |
| Manage knowledge | /palace | memory-palace |
Initializing a New Project
When: Starting a new project from scratch or setting up a new codebase.
Step 1: Architecture-Aware Initialization
Start with an architecture-aware initialization to select the right project structure based on team size and domain complexity. This process guides you through project type selection, online research into best practices, and template customization.
# Interactive architecture selection with research
/attune:arch-init --name my-project
Output: Complete project structure with ARCHITECTURE.md, ADR, and paradigm-specific directories.
Step 2: Standard Initialization
If the architecture is decided, use standard initialization to generate language-specific boilerplate including Makefiles, CI/CD pipelines, and pre-commit hooks.
# Quick initialization when you know the architecture
/attune:init --lang python --name my-project
Step 3: Establish Persistent State
Establish a persistent state to manage artifacts and constraints across sessions. This maintains non-negotiable principles and supports consistent progress tracking.
# (Once) Define non-negotiable principles for the project
/speckit-constitution
# (Each Claude session) Load speckit context + progress tracking
/speckit-startup
Optional enhancements:
- Install spec-kit for spec-driven artifacts: /plugin install spec-kit@claude-night-market
- Install superpowers for rigorous methodology loops:
/plugin marketplace add obra/superpowers
/plugin install superpowers@superpowers-marketplace
Alternative: Brainstorming Workflow
For complex projects requiring exploration, begin by brainstorming the problem space and creating a detailed specification before planning the architecture and tasks.
# 1. Brainstorm the problem space
/attune:brainstorm --domain "my problem area"
# 2. Create detailed specification
/attune:specify
# 3. Plan architecture and tasks
/attune:blueprint
# 4. Initialize with chosen architecture
/attune:arch-init --name my-project
# 5. Execute implementation
/attune:execute
What You Get
| Artifact | Description |
|---|---|
pyproject.toml / Cargo.toml / package.json | Build configuration |
Makefile | Development targets (test, lint, format) |
.pre-commit-config.yaml | Code quality hooks |
.github/workflows/ | CI/CD pipelines |
ARCHITECTURE.md | Architecture overview |
docs/adr/ | Architecture decision records |
Reviewing a Pull Request
When: Reviewing code changes in a PR or before merging.
Full Multi-Discipline Review
# Full review with skill selection
/full-review
This orchestrates multiple specialized reviews:
- Architecture assessment
- API surface evaluation
- Bug hunting
- Test quality analysis
Specific Review Types
# Architecture-focused review
/architecture-review
# API surface evaluation
/api-review
# Bug hunting
/bug-review
# Test quality assessment
/test-review
# Rust-specific review (for Rust projects)
/rust-review
Using Skills Directly
For more control, invoke skills:
# First: understand the workspace state
Skill(sanctum:git-workspace-review)
# Then: run specific review
Skill(pensive:architecture-review)
Skill(pensive:api-review)
Skill(pensive:bug-review)
External PR Review
# Review a GitHub PR by URL
/pr-review https://github.com/org/repo/pull/123
# Or just the PR number in current repo
/pr-review 123
Fixing PR Feedback
When: Addressing review comments on your PR.
Quick Fix
# Address PR review comments
/fix-pr
# Or with specific PR reference
/fix-pr 123
This:
- Reads PR review comments
- Identifies actionable feedback
- Applies fixes systematically
- Prepares follow-up commit
Manual Workflow
# 1. Review the feedback
Skill(sanctum:git-workspace-review)
# 2. Apply fixes
# (make your changes)
# 3. Prepare commit message
/commit-msg
# 4. Update PR
git push
Preparing a Pull Request
When: Code is complete and ready for review.
Pre-PR Checklist
Run these commands before creating a PR:
# 1. Update documentation (includes README)
/sanctum:update-docs
# 2. Review and update tests
/sanctum:update-tests
# 3. Update Makefile demo targets (for plugins)
/abstract:make-dogfood
# 4. Final quality check
make lint && make test
Create the PR
# Full PR preparation
/pr
# This handles:
# - Branch status check
# - Commit message quality
# - Documentation updates
# - PR description generation
Using Skills for PR Prep
# Review workspace before PR
Skill(sanctum:git-workspace-review)
# Generate quality commit message
Skill(sanctum:commit-messages)
# Check PR readiness
Skill(sanctum:pr-preparation)
Catching Up on Changes
When: Returning to a project after time away, or joining an ongoing project.
Quick Catchup
# Standard catchup on recent changes
/catchup
# Git-specific catchup
/git-catchup
Detailed Understanding
# 1. Review workspace state
Skill(sanctum:git-workspace-review)
# 2. Analyze recent diffs
Skill(imbue:diff-analysis)
# 3. Understand branch context
Skill(sanctum:branch-comparison)
Session Recovery
# Resume a previous Claude session
claude --resume
# Or continue with context
claude --continue
Writing Specifications
When: Planning a feature before implementation.
Spec-Driven Development Workflow
# 1. Create specification from idea
/speckit-specify Add user authentication with OAuth2
# 2. Generate implementation plan
/speckit-plan
# 3. Create ordered tasks
/speckit-tasks
# 4. Execute tasks with tracking
/speckit-implement
Persistent Presence Loop (World Model + Agent Model)
Treat SDD artifacts as a self-modeling architecture where the repo state serves as the world model and the loaded skills as the agent model. Experiments are run with small diffs and verified through rigorous loops (tests, linters, repro scripts), while model updates refine both the code artifacts and the orchestration methodology to optimize future loops.
Curriculum generation via /speckit-tasks keeps actions grounded
and dependency-ordered, while the skill library
and iterative refinement ensure the plan adapts to reality.
The cycle moves from planning to action to reflection via /speckit-plan,
/speckit-implement, and /speckit-analyze.
Background reading:
- MineDojo: https://minedojo.org/ (internet-scale knowledge + benchmarks)
- Voyager: https://voyager.minedojo.org/ (arXiv: https://arxiv.org/abs/2305.16291) (automatic curriculum + skill library)
- GTNH_Agent: https://github.com/sefiratech/GTNH_Agent (persistent, modular Minecraft automation)
Clarification and Analysis
# Ask clarifying questions about requirements
/speckit-clarify
# Analyze specification consistency
/speckit-analyze
Using Skills
# Invoke spec writing skill directly
Skill(spec-kit:spec-writing)
# Task planning skill
Skill(spec-kit:task-planning)
Meta-Development
When: Improving claude-night-market itself (skills, commands, templates, orchestration).
When improving the system itself,
treat the repo as the world model and available tools as the agent model.
Run experiments with minimal diffs behind verification,
evaluate them with evidence-first methods like /speckit-analyze
and Skill(superpowers:verification-before-completion),
and update both the artifacts and the methodology so the next loop is cheaper.
Optional pattern: split roles (planner/critic/executor) for long-horizon work, similar to multi-role agent stacks used in open-ended Minecraft agents.
Useful tools:
# Use speckit to keep artifacts + principles explicit
/speckit-constitution
/speckit-analyze
# Use superpowers to enforce evidence
Skill(superpowers:systematic-debugging)
Skill(superpowers:verification-before-completion)
Debugging Issues
When: Investigating bugs or unexpected behavior.
With Superpowers Integration
# Systematic debugging methodology
Skill(superpowers:systematic-debugging)
# This provides:
# - Hypothesis formation
# - Evidence gathering
# - Root cause analysis
# - Fix validation
GitHub Issue Resolution
# Fix a GitHub issue
/do-issue 42
# Or with URL
/do-issue https://github.com/org/repo/issues/42
Analysis Tools
# Test analysis (parseltongue)
/analyze-tests
# Performance profiling
/run-profiler
# Context optimization
/optimize-context
Managing Knowledge
When: Capturing insights, decisions, or learnings.
Memory Palace
# Open knowledge management
/palace
# Access digital garden
/garden
Knowledge Capture
# Capture insight during work
Skill(memory-palace:knowledge-capture)
# Link related concepts
Skill(memory-palace:concept-linking)
Plugin Development
When: Creating or maintaining Night Market plugins.
Create a New Plugin
# Scaffold new plugin
make create-plugin NAME=my-plugin
# Or using attune for plugins
/attune:init --type plugin --name my-plugin
Validate Plugin Structure
# Check plugin structure
/abstract:validate-plugin
# Audit skill quality
/abstract:skill-audit
Update Plugin Documentation
# Update all documentation
/sanctum:update-docs
# Update Makefile demo targets
/abstract:make-dogfood
# Sync templates with reference projects
/attune:sync-templates
Testing
# Run plugin tests
make test
# Validate structure
make validate
# Full quality check
make lint && make test && make build
Context Management
When: Managing token usage or context window.
Monitor Usage
# Check context window usage
/context
# Analyze context optimization
/optimize-context
Reduce Context
# Clear context for fresh start
/clear
# Then catch up
/catchup
# Or scan for bloat
/bloat-scan
Optimization Skills
# Context optimization skill
Skill(conserve:context-optimization)
# Growth analysis (consolidated into bloat-scan)
/bloat-scan
Subagent Delegation
When: Delegating specialized work to focused agents.
Available Subagents
| Subagent | Purpose | When to Use |
|---|---|---|
abstract:plugin-validator | Validate plugin structure | Before publishing plugins |
abstract:skill-auditor | Audit skill quality | During skill development |
pensive:code-reviewer | Focused code review | Reviewing specific files |
attune:project-architect | Architecture design | Planning new features |
attune:project-implementer | Task execution | Systematic implementation |
Example: Code Review Delegation
# Delegate to specialized reviewer
Agent(pensive:code-reviewer) Review src/auth/ for security issues
Example: Plugin Validation
# Delegate validation to subagent
Agent(abstract:plugin-validator) Check plugins/my-plugin
End-to-End Example: New Feature
Here’s a complete workflow for adding a new feature:
# 1. PLANNING PHASE
/speckit-specify Add caching layer for API responses
/speckit-plan
/speckit-tasks
# 2. IMPLEMENTATION PHASE
# Create branch
git checkout -b feature/add-caching
# Implement with Iron Law TDD
Skill(imbue:proof-of-work) # Enforces: NO IMPLEMENTATION WITHOUT FAILING TEST FIRST
# Or with superpowers TDD
Skill(superpowers:tdd)
# Execute planned tasks
/speckit-implement
# 3. QUALITY PHASE
# Run reviews
/architecture-review
/test-review
# Fix any issues
# (make changes)
# 4. PR PREPARATION PHASE
/sanctum:update-docs
/sanctum:update-tests
make lint && make test
# 5. CREATE PR
/pr
Command vs Skill vs Agent
| Type | Syntax | When to Use |
|---|---|---|
| Command | /command-name | Quick actions, one-off tasks |
| Skill | Skill(plugin:skill-name) | Methodologies, detailed workflows |
| Agent | Agent(plugin:agent-name) | Delegated work, specialized focus |
Examples
# Command: Quick action
/pr
# Skill: Detailed methodology
Skill(sanctum:pr-preparation)
# Agent: Delegated specialized work
Agent(pensive:code-reviewer) Review authentication module
Skill Invocation: Secondary Strategy
The Skill tool is a Claude Code feature that may not be available in all
environments. When the Skill tool is unavailable:
Secondary Pattern:
# 1. If Skill tool fails or is unavailable, read the skill file directly:
Read plugins/{plugin}/skills/{skill-name}/SKILL.md
# 2. Follow the skill content as instructions
# The skill file contains the complete methodology to execute
Example:
# Instead of: Skill(sanctum:commit-messages)
# Secondary: Read plugins/sanctum/skills/commit-messages/SKILL.md
# Then follow the instructions in that file
Skill file locations:
- Plugin skills: plugins/{plugin}/skills/{skill-name}/SKILL.md
- User skills: ~/.claude/skills/{skill-name}/SKILL.md
This allows workflows to function across different environments.
Claude Code 2.1.0 Features
New Capabilities
| Feature | Description | Usage |
|---|---|---|
| Skill Hot-Reload | Skills auto-reload without restart | Edit SKILL.md, immediately available |
| Plan Mode Shortcut | Enter plan mode directly | /plan |
| Forked Context | Run skills in isolated context | context: fork in frontmatter |
| Agent Field | Specify agent for skill execution | agent: agent-name in frontmatter |
| Frontmatter Hooks | Lifecycle hooks in skills/agents | hooks: section in frontmatter |
| Wildcard Permissions | Flexible Bash patterns | Bash(npm *), Bash(* install) |
| Skill Visibility | Control slash menu visibility | user-invocable: false |
Skill Development Workflow (Hot-Reload)
With Claude Code 2.1.0, skill development is faster:
# 1. Create/edit skill
vim ~/.claude/skills/my-skill/SKILL.md
# 2. Save changes (no restart needed!)
# 3. Skill is immediately available
Skill(my-skill)
# 4. Iterate rapidly
Using Forked Context
For isolated operations that shouldn’t pollute main context:
# In skill frontmatter
---
name: isolated-analysis
context: fork # Runs in separate context
---
Use cases:
- Heavy file analysis that would bloat context
- Experimental operations that might fail
- Parallel workflows
Frontmatter Hooks
Define hooks scoped to skill/agent/command lifecycle:
---
name: validated-workflow
hooks:
PreToolUse:
- matcher: "Bash"
command: "./validate.sh"
once: true # Run only once per session
PostToolUse:
- matcher: "Write|Edit"
command: "./format.sh"
Stop:
- command: "./teardown.sh"
---
Permission Wildcards
New wildcard patterns for flexible permissions:
allowed-tools:
- Bash(npm *) # All npm commands
- Bash(* install) # Any install command
- Bash(git * main) # Git with main branch
Note (2.1.20+): Bash(*) is now treated as equivalent to plain Bash. Use scoped wildcards like Bash(npm *) for targeted permissions, or plain Bash for unrestricted access.
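Conceptually, these patterns behave like shell-style globs over the command string. A rough approximation using fnmatch (Claude Code's actual matcher may differ; this is illustrative only):

```python
from fnmatch import fnmatch

def bash_pattern_allows(pattern, command):
    """Illustrative glob-style check for Bash(...) permission patterns.

    This approximates the behavior with fnmatch; the real matcher
    in Claude Code may have different edge-case semantics.
    """
    return fnmatch(command, pattern)
```

For instance, the pattern `npm *` accepts `npm install` but rejects `yarn add`.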
Disabling Specific Agents
Control which agents can be invoked:
# Via CLI
claude --disallowedTools "Task(expensive-agent)"
# Via settings.json
{
"permissions": {
"deny": ["Task(expensive-agent)"]
}
}
Subagent Resilience
Subagents are designed to continue operations after a permission denial by attempting alternative approaches instead of failing immediately. This behavior results in more reliable agent workflows in restrictive environments.
Agent-Aware Hooks (2.1.2+)
SessionStart hooks receive agent_type field when launched with --agent:
import json
import sys

input_data = json.loads(sys.stdin.read())
agent_type = input_data.get("agent_type", "")
if agent_type in ["code-reviewer", "quick-query"]:
    context = "Minimal context"  # skip heavy context for lightweight agents
else:
    context = full_context  # placeholder: your project's full-context string
print(json.dumps({"hookSpecificOutput": {"additionalContext": context}}))
This reduces context overhead by 200-800 tokens for lightweight agents.
See Also
- Quick Start Guide - Condensed recipes
- Capabilities Reference - All commands and skills
- Plugin Catalog - Detailed plugin documentation
Technical Debt Migration Guide
Last Updated: 2025-12-06
Overview
Use this guide to migrate plugin code to shared constants and follow function extraction guidelines.
Quick Start
1. Update Your Plugin to Use Shared Constants
Replace scattered magic numbers with centralized constants:
# BEFORE
def check_file_size(content):
if len(content) > 15000: # Magic number!
return "File too large"
if len(content) > 5000: # Another magic number!
return "File is large"
# AFTER
from plugins.shared.constants import MAX_SKILL_FILE_SIZE, LARGE_SIZE_LIMIT
def check_file_size(content):
if len(content) > MAX_SKILL_FILE_SIZE:
return "File too large"
if len(content) > LARGE_SIZE_LIMIT:
return "File is large"
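The shared module centralizes those values in one place. A sketch of what it might contain, using the magic numbers from the BEFORE snippet (check the real plugins/shared/constants module for authoritative values):

```python
# plugins/shared/constants.py — illustrative sketch only.
# Values are taken from the magic numbers in the BEFORE example;
# the real shared module is the source of truth.

MAX_SKILL_FILE_SIZE = 15_000  # size that triggers "File too large"
LARGE_SIZE_LIMIT = 5_000      # size that triggers "File is large"
```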
2. Apply Function Extraction Guidelines
Use the patterns from the guidelines to refactor complex functions:
# BEFORE - Complex function with multiple responsibilities
def analyze_and_optimize_skill(content, strategy):
    # Validation
    if not content:
        raise ValueError("Content cannot be empty")
    # Analysis
    tokens = estimate_tokens(content)
    complexity = calculate_complexity(content)
    # Optimization
    if strategy == "aggressive":
        # 20 lines of optimization logic
        pass
    elif strategy == "moderate":
        # 20 lines of optimization logic
        pass
    return optimized_content, tokens, complexity

# AFTER - Extracted and organized
def analyze_and_optimize_skill(content: str, strategy: str) -> OptimizationResult:
    """Analyze and optimize skill content."""
    _validate_content(content)
    analysis = _analyze_content(content)
    optimized = _optimize_content(content, strategy)
    return OptimizationResult(optimized, analysis)

def _validate_content(content: str) -> None:
    """Validate input content."""
    if not content:
        raise ValueError("Content cannot be empty")

def _analyze_content(content: str) -> ContentAnalysis:
    """Analyze content properties."""
    tokens = estimate_tokens(content)
    complexity = calculate_complexity(content)
    return ContentAnalysis(tokens, complexity)

def _optimize_content(content: str, strategy: str) -> str:
    """Optimize content using specified strategy."""
    optimizer = get_strategy_optimizer(strategy)
    return optimizer.optimize(content)
Detailed Migration Steps
1. Audit Plugin
Find all magic numbers and complex functions:
# Find magic numbers (search for numeric literals in conditions)
grep -n -E "(if|when|while).*[0-9]+" your_plugin/**/*.py

# Find candidate files for long functions (file-level line counts as a rough proxy)
find your_plugin -name "*.py" -exec wc -l {} + | awk '$1 > 30 {print}'

# List function signatures, then inspect parameter counts by eye
grep -n -E "def [A-Za-z_]+\(" your_plugin/**/*.py
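The grep heuristics above are rough approximations. For exact per-function numbers, an `ast`-based sweep works better. A minimal sketch (`audit` is a hypothetical helper, not part of the plugins):

```python
# audit_functions.py -- hypothetical helper; flags functions that exceed the
# guide's thresholds (30 lines, 4 parameters) using exact AST positions.
import ast
import sys
from pathlib import Path

def audit(path, max_lines=30, max_params=4):
    """Return (name, line_count, param_count) for each offending function."""
    tree = ast.parse(Path(path).read_text())
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            n_lines = node.end_lineno - node.lineno + 1
            n_params = len(node.args.args) + len(node.args.kwonlyargs)
            if n_lines > max_lines or n_params > max_params:
                findings.append((node.name, n_lines, n_params))
    return findings

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        for name, n_lines, n_params in audit(source_file):
            print(f"{source_file}:{name}: {n_lines} lines, {n_params} params")
```

Run it as `python audit_functions.py your_plugin/**/*.py` to get a precise worklist for step 2.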
2. Plan Migration
Create a migration plan for your plugin:
1. Identify Constants
   - List all magic numbers
   - Categorize by purpose (timeouts, sizes, thresholds)
   - Check if they exist in shared constants
2. Identify Functions to Refactor
   - Functions > 30 lines
   - Functions with > 4 parameters
   - Functions with multiple responsibilities
3. Create Migration Tasks
   - Update constants first (lowest risk)
   - Refactor simple functions next
   - Tackle complex functions last
3. Replace Magic Numbers
File Size Constants
# Replace these patterns:
if len(content) > 15000:
if file_size > 100000:
if line_count > 200:
# With:
from plugins.shared.constants import (
MAX_SKILL_FILE_SIZE,
MAX_TOTAL_SKILL_SIZE,
LARGE_FILE_LINES
)
Timeout Constants
# Replace these patterns:
timeout=10
timeout=300
time.sleep(30)
# With:
from plugins.shared.constants import (
DEFAULT_SERVICE_CHECK_TIMEOUT,
DEFAULT_EXECUTION_TIMEOUT,
MEDIUM_TIMEOUT
)
Quality Thresholds
# Replace these patterns:
if quality_score > 70.0:
if quality_score > 80.0:
if quality_score > 90.0:
# With:
from plugins.shared.constants import (
MINIMUM_QUALITY_THRESHOLD,
HIGH_QUALITY_THRESHOLD,
EXCELLENT_QUALITY_THRESHOLD
)
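As an illustration, the three thresholds might gate a quality report like this. A sketch only: the 70/80/90 values are taken from the BEFORE patterns above, and `quality_label` is a hypothetical helper.

```python
# Hypothetical use of the shared quality thresholds; the values mirror the
# BEFORE patterns above and may differ from the real shared constants.
MINIMUM_QUALITY_THRESHOLD = 70.0
HIGH_QUALITY_THRESHOLD = 80.0
EXCELLENT_QUALITY_THRESHOLD = 90.0

def quality_label(score):
    """Map a numeric quality score onto the shared threshold bands."""
    if score > EXCELLENT_QUALITY_THRESHOLD:
        return "excellent"
    if score > HIGH_QUALITY_THRESHOLD:
        return "high"
    if score > MINIMUM_QUALITY_THRESHOLD:
        return "acceptable"
    return "below minimum"
```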
4. Refactor Complex Functions
Follow this iterative approach:
4.1 Write Tests First
# Test the current behavior
def test_function_to_refactor():
    result = your_complex_function(input_data)
    assert result.expected_field == expected_value
    # Add more assertions based on current behavior
4.2 Extract Small Helper Functions
# Start with small, obvious extractions
def _calculate_value(item):
    """Extract value calculation from complex function."""
    return item.base * item.multiplier + item.offset

def _validate_input(data):
    """Extract input validation."""
    if not data:
        raise ValueError("Data required")
    return True
4.3 Extract Strategy Classes
For functions with conditional logic:
# Before: Complex conditional function
def process_item(item, mode):
    if mode == "fast":
        # Fast processing logic
        pass
    elif mode == "thorough":
        # Thorough processing logic
        pass
    elif mode == "minimal":
        # Minimal processing logic
        pass

# After: Strategy pattern
from abc import ABC, abstractmethod

class ItemProcessor(ABC):
    @abstractmethod
    def process(self, item):
        pass

class FastProcessor(ItemProcessor):
    def process(self, item):
        # Fast processing implementation
        pass

class ThoroughProcessor(ItemProcessor):
    def process(self, item):
        # Thorough processing implementation
        pass

class MinimalProcessor(ItemProcessor):
    def process(self, item):
        # Minimal processing implementation
        pass

# Registry
PROCESSORS = {
    "fast": FastProcessor(),
    "thorough": ThoroughProcessor(),
    "minimal": MinimalProcessor(),
}

def process_item(item, mode):
    processor = PROCESSORS.get(mode)
    if not processor:
        raise ValueError(f"Unknown mode: {mode}")
    return processor.process(item)
5. Update Configuration
If your plugin has configuration files:
# config.yaml - Use shared defaults
plugin_name: your_plugin
# Import shared defaults and override only what's needed
shared_constants:
import: file_limits, timeouts, quality
# Plugin-specific settings
specific_settings:
custom_threshold: 42
feature_enabled: true
Migration Checklist
Pre-Migration
- Run existing tests to establish baseline
- Create backup of current code
- Document current behavior
- Identify all dependencies
Constants Migration
- List all magic numbers in your plugin
- Map to appropriate shared constants
- Update imports
- Replace magic numbers
- Run tests to verify no breaking changes
Function Refactoring
- Identify functions > 30 lines
- Write tests for each function
- Extract small helper functions first
- Apply strategy pattern where appropriate
- Keep public APIs stable
- Update documentation
Post-Migration
- Run full test suite
- Update documentation
- Verify performance
- Update CHANGELOG
- Create migration notes for users
Common Migration Patterns
1. Gradual Migration
Don’t refactor everything at once. Use feature flags:
import os

# Set this in config when ready
USE_NEW_IMPLEMENTATION = os.getenv("USE_NEW_IMPLEMENTATION", "false").lower() == "true"

# Gradually migrate to new implementation
def legacy_function(data):
    if USE_NEW_IMPLEMENTATION:
        return new_refactored_function(data)
    else:
        return old_implementation(data)
2. Adapter Pattern
Keep old API while using new implementation:
def old_api_function(param1, param2, param3):
    """Legacy API - delegates to new implementation."""
    config = LegacyConfig(param1, param2, param3)
    return new_refactored_function(config)

# New, cleaner API
def new_refactored_function(config: Config):
    """New, improved implementation."""
    pass
3. Parallel Implementation
Run both old and new implementations in parallel to verify:
def process_with_validation(data):
    """Run both implementations and compare."""
    old_result = old_implementation(data)
    new_result = new_implementation(data)
    if not results_equivalent(old_result, new_result):
        log_discrepancy(old_result, new_result)
        # Return old result for safety
        return old_result
    return new_result
Testing Your Migration
1. Property-Based Testing
Use hypothesis to test refactored functions:
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sort_refactor(data):
    """Test that refactored sort produces same result."""
    old_result = old_sort_function(data.copy())
    new_result = new_sort_function(data.copy())
    assert old_result == new_result
2. Integration Tests
Verify the whole workflow still works:
def test_complete_workflow():
    """Test that refactoring didn't break the workflow."""
    input_data = create_test_data()
    # Run through entire process
    result = your_plugin_workflow(input_data)
    # Verify key properties
    assert result is not None
    assert result.quality_score >= 70
    assert len(result.processed_data) > 0
3. Performance Tests
Verify refactoring didn’t hurt performance:
import time

def test_performance():
    """Verify refactoring didn't degrade performance."""
    data = create_large_dataset()
    start = time.time()
    old_result = old_implementation(data)
    old_time = time.time() - start
    start = time.time()
    new_result = new_implementation(data)
    new_time = time.time() - start
    # New implementation shouldn't be more than 10% slower
    assert new_time < old_time * 1.1
Rollback Plan
If Migration Fails
1. Immediate Rollback
   git revert <migration-commit>
2. Partial Rollback
   - Keep constants migration
   - Revert function refactoring
   - Fix issues and retry
3. Feature Flag Rollback
   # Disable new implementation
   os.environ["USE_NEW_IMPLEMENTATION"] = "false"
Documenting Issues
If you encounter problems:
- Document the specific issue
- Note the affected functionality
- Create a bug report with:
- Migration step that failed
- Error messages
- Minimal reproduction case
- Expected vs actual behavior
Getting Help
Resources
Support
- Create an issue for migration problems
- Join the #migration Slack channel
- Review example migrations in other plugins
Contributing
- Share your migration experience
- Suggest improvements to guidelines
- Add new shared constants as needed
Migration Examples
Example: Memory Palace Plugin
Challenges:
- 15 magic numbers scattered across files
- Functions averaging 45 lines
- Complex conditional logic
Solution:
- Replaced all magic numbers with shared constants
- Refactored 8 functions using extraction patterns
- Introduced strategy pattern for content processing
Results:
- 40% reduction in code complexity
- Improved test coverage from 60% to 85%
- Easier to add new content types
Example: Parseltongue Plugin
Challenges:
- Complex analysis functions with 8+ parameters
- Duplicated logic across multiple analyzers
- Hard to test individual components
Solution:
- Extracted configuration objects for parameters
- Created shared analysis utilities
- Applied builder pattern for complex objects
Results:
- Functions reduced to average 15 lines
- Parameter count reduced to 3-4 per function
- 100% test coverage for core logic
Conclusion
Migrating to shared constants and following function extraction guidelines improves code quality and maintainability.
Key Steps:
- Migrate incrementally: Don’t try to do everything at once.
- Test thoroughly: Verify behavior doesn’t change.
- Document changes: Help others understand the migration.
- Ask for help: Use the community’s experience.
Plugin Overview
The Claude Night Market organizes plugins into four layers, each building on the foundations below.
Architecture
graph TB
subgraph Meta[Meta Layer]
abstract[abstract<br/>Plugin infrastructure]
end
subgraph Foundation[Foundation Layer]
imbue[imbue<br/>Intelligent workflows]
sanctum[sanctum<br/>Git & workspace ops]
leyline[leyline<br/>Pipeline building blocks]
end
subgraph Utility[Utility Layer]
conserve[conserve<br/>Resource optimization]
conjure[conjure<br/>External delegation]
end
subgraph Domain[Domain Specialists]
archetypes[archetypes<br/>Architecture patterns]
pensive[pensive<br/>Code review toolkit]
parseltongue[parseltongue<br/>Python development]
memory_palace[memory-palace<br/>Spatial memory]
spec_kit[spec-kit<br/>Spec-driven dev]
minister[minister<br/>Release management]
attune[attune<br/>Full-cycle development]
scribe[scribe<br/>Documentation review]
cartograph[cartograph<br/>Codebase visualization]
end
abstract --> leyline
pensive --> imbue
pensive --> sanctum
sanctum --> imbue
conjure --> leyline
spec_kit --> imbue
scribe --> imbue
scribe --> conserve
style Meta fill:#fff3e0,stroke:#e65100
style Foundation fill:#e1f5fe,stroke:#01579b
style Utility fill:#f3e5f5,stroke:#4a148c
style Domain fill:#e8f5e8,stroke:#1b5e20
Layer Summary
| Layer | Purpose | Plugins |
|---|---|---|
| Meta | Plugin infrastructure and evaluation | abstract |
| Foundation | Core workflow methodologies | imbue, sanctum, leyline |
| Utility | Resource optimization and delegation | conserve, conjure |
| Domain | Specialized task execution | archetypes, pensive, parseltongue, memory-palace, spec-kit, minister, attune, scribe, cartograph |
Dependency Rules
- Downward Only: Plugins depend on lower layers, never upward
- Foundation First: Most domain plugins work better with foundation plugins installed
- Graceful Degradation: Plugins function standalone but gain capabilities with dependencies
Quick Installation
Minimal (Git Workflows)
/plugin install sanctum@claude-night-market
Standard (Development)
/plugin install sanctum@claude-night-market
/plugin install imbue@claude-night-market
/plugin install spec-kit@claude-night-market
Full (All Capabilities)
/plugin install abstract@claude-night-market
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market
/plugin install conserve@claude-night-market
/plugin install conjure@claude-night-market
/plugin install archetypes@claude-night-market
/plugin install pensive@claude-night-market
/plugin install parseltongue@claude-night-market
/plugin install memory-palace@claude-night-market
/plugin install spec-kit@claude-night-market
/plugin install minister@claude-night-market
/plugin install attune@claude-night-market
/plugin install scribe@claude-night-market
/plugin install cartograph@claude-night-market
Browse by Layer
- Meta Layer - Plugin infrastructure
- Foundation Layer - Core workflows
- Utility Layer - Resource optimization
- Domain Specialists - Specialized tasks
Browse by Plugin
| Plugin | Description |
|---|---|
| abstract | Meta-skills for plugin development |
| imbue | Analysis and evidence gathering |
| sanctum | Git and workspace operations |
| leyline | Infrastructure building blocks |
| conserve | Context and resource optimization |
| conjure | External LLM delegation |
| archetypes | Architecture paradigms |
| pensive | Code review toolkit |
| parseltongue | Python development |
| memory-palace | Knowledge organization |
| spec-kit | Specification-driven development |
| minister | Release management |
| attune | Full-cycle project development |
| scribe | Documentation review and AI slop detection |
| cartograph | Codebase visualization |
Meta Layer
The meta layer provides infrastructure for building, evaluating, and maintaining plugins themselves.
Purpose
While other layers focus on user-facing workflows, the meta layer focuses on:
- Plugin Development: Tools for creating new skills, commands, and hooks
- Quality Assurance: Evaluation frameworks for plugin quality
- Architecture Guidance: Patterns for modular, maintainable plugins
Plugins
| Plugin | Description |
|---|---|
| abstract | Meta-skills infrastructure for plugin development |
When to Use
Use meta layer plugins when:
- Creating a new plugin for the marketplace
- Evaluating existing skill quality
- Refactoring large skills into modules
- Validating plugin structure before publishing
Key Capabilities
Plugin Validation
/validate-plugin [path]
Checks plugin structure against official requirements.
Skill Creation
/create-skill
Scaffolds new skills using best practices and TDD methodology.
Quality Assessment
/skills-eval
Scores skill quality and suggests improvements.
Architecture Position
Meta Layer
|
v
Foundation Layer (imbue, sanctum, leyline)
|
v
Utility Layer (conserve, conjure)
|
v
Domain Specialists
The meta layer sits above all others, providing tools to build and maintain the entire ecosystem.
abstract
Meta-skills infrastructure for the plugin ecosystem - skill authoring, hook development, and quality evaluation.
Overview
The abstract plugin provides tools for building, evaluating, and maintaining Claude Code plugins. It’s the toolkit for plugin developers.
Installation
/plugin install abstract@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
skill-authoring | TDD methodology with Iron Law enforcement | Creating new skills with quality standards |
hook-authoring | Security-first hook development | Building safe, effective hooks |
modular-skills | Modular design patterns | Breaking large skills into modules |
rules-eval | Claude Code rules validation | Auditing .claude/rules/ for frontmatter, glob patterns, and content quality |
skills-eval | Skill quality assessment | Auditing skills for token efficiency |
hooks-eval | Hook security scanning | Verifying hook safety |
escalation-governance | Model escalation decisions | Deciding when to escalate models |
methodology-curator | Expert framework curation | Grounding skills in proven methodologies |
shared-patterns | Plugin development patterns | Reusable templates |
subagent-testing | Subagent test patterns | Testing subagent interactions |
Commands
| Command | Description |
|---|---|
/validate-plugin [path] | Check plugin structure against requirements |
/create-skill | Scaffold new skill with best practices |
/create-command | Scaffold new command |
/create-hook | Scaffold hook with security-first design |
/analyze-skill | Get modularization recommendations |
/bulletproof-skill | Anti-rationalization workflow for hardening |
/context-report | Context optimization report |
/hooks-eval | Detailed hook evaluation |
/make-dogfood | Analyze and enhance Makefiles |
/rules-eval | Evaluate Claude Code rules quality |
/skills-eval | Run skill quality assessment |
/test-skill | Skill testing with TDD methodology |
/validate-hook | Validate hook compliance |
Agents
| Agent | Description |
|---|---|
meta-architect | Designs plugin ecosystem architectures |
plugin-validator | Validates plugin structure |
skill-auditor | Audits skills for quality and compliance |
Hooks
| Hook | Type | Description |
|---|---|---|
homeostatic_monitor.py | PostToolUse | Reads stability gap metrics, queues degrading skills for auto-improvement |
aggregate_learnings_daily.py | UserPromptSubmit | Daily learning aggregation with severity-based issue creation |
pre_skill_execution.py | PreToolUse | Skill execution tracking |
skill_execution_logger.py | PostToolUse | Skill metrics logging |
post-evaluation.json | Config | Quality scoring and improvement tracking |
pre-skill-load.json | Config | Pre-load validation for dependencies |
Insight Engine
The insight engine transforms raw skill execution metrics into diverse findings posted to GitHub Discussions. Four trigger points feed a pluggable lens architecture through a deduplication registry.
Architecture
Stop Hook (lightweight) ──┐
Scheduled agent (deep) ───┤
/pr-review ───────────────┤
/code-refinement ─────────┘
│
v
insight_analyzer.py
(loads lenses, runs analysis)
│
v
InsightRegistry
(content-hash dedup, 30-day expiry)
│
v
post_insights_to_discussions.py
(posts to "Insights" category)
Lenses
Four built-in lightweight lenses run on every Stop hook:
| Lens | What it detects |
|---|---|
| TrendLens | Degradation or improvement over time |
| PatternLens | Shared failure modes across skills |
| HealthLens | Unused skills, orphaned hooks, config drift |
| DeltaLens | Changes since the last posted snapshot |
LLM-augmented lenses (BugLens, OptimizationLens, ImprovementLens) run in the scheduled agent only.
Custom lenses drop into scripts/lenses/ and are auto-discovered via the LENS_META + analyze() convention.
Deduplication
Findings pass through four layers before posting:
- Content hash: deterministic SHA-256 from type, skill, and summary prevents re-posting identical findings.
- Snapshot diff: DeltaLens compares current metrics to the last snapshot and only surfaces changes.
- Staleness expiry: hashes expire after 30 days so persistent problems resurface with fresh data.
- Semantic dedup: Jaccard similarity against existing Discussions links related findings or skips near-duplicates.
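The first and fourth layers can be sketched in a few lines. The names below are illustrative; the real InsightRegistry API may differ.

```python
# Sketch of dedup layers 1 and 4 (illustrative; not the InsightRegistry API).
import hashlib

def finding_hash(finding_type, skill, summary):
    """Layer 1: deterministic SHA-256 over type, skill, and summary, so an
    identical finding always maps to the same digest."""
    key = f"{finding_type}|{skill}|{summary}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

def jaccard(a, b):
    """Layer 4: word-level Jaccard similarity for spotting near-duplicates."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0
```

A registry would skip a finding whose hash already exists (and is younger than 30 days), and link rather than repost findings whose Jaccard score against an existing Discussion exceeds a similarity threshold.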
Insight Types
| Type | Prefix | Source |
|---|---|---|
| Trend | [Trend] | Script |
| Pattern | [Pattern] | Script |
| Bug Alert | [Bug Alert] | Agent |
| Optimization | [Optimization] | Agent |
| Improvement | [Improvement] | Agent |
| PR Finding | [PR Finding] | PR review |
| Health Check | [Health Check] | Script |
See ADR 0007 for the GitHub Discussions integration design and the palace bridge for cross-plugin knowledge flow.
Self-Adapting System
A closed-loop system that monitors skill health and auto-triggers improvements:
- homeostatic_monitor.py checks the stability gap after each Skill invocation
- Skills with a gap > 0.3 are queued in improvement_queue.py
- After 3+ flags, the skill-improver agent runs automatically
- skill_versioning.py tracks changes via YAML frontmatter
- rollback_reviewer.py creates GitHub issues if regressions are detected
- experience_library.py stores successful trajectories for future context
Cross-plugin dependency: the monitor reads stability metrics from memory-palace’s .history.json.
Usage Examples
Create a New Skill
/create-skill
# Claude will:
# 1. Use brainstorming for idea refinement
# 2. Apply TDD methodology
# 3. Generate skill scaffold
# 4. Create tests
Evaluate Skill Quality
Skill(abstract:skills-eval)
# Scores skills on:
# - Token efficiency
# - Documentation quality
# - Trigger clarity
# - Modular structure
Validate Plugin Structure
/validate-plugin /path/to/my-plugin
# Checks:
# - plugin.json structure
# - Required files present
# - Skill format compliance
# - Command syntax
Best Practices
Skill Design
- Single Responsibility: Each skill does one thing well
- Clear Triggers: Include “Use when…” in descriptions
- Token Efficiency: Keep skills under 2000 tokens
- TodoWrite Integration: Output actionable items
Hook Security
- No Secrets: Never log sensitive data
- Fail Safe: Default to allowing operations
- Minimal Scope: Request only needed permissions
- Audit Trail: Log decisions for review
- Agent-Aware (2.1.2+): SessionStart hooks receive agent_type to customize context
Superpowers Integration
When superpowers is installed:
| Command | Enhancement |
|---|---|
/create-skill | Uses brainstorming for idea refinement |
/create-command | Uses brainstorming for concept development |
/create-hook | Uses brainstorming for security design |
/test-skill | Uses test-driven-development for TDD cycles |
Related Plugins
- leyline: Infrastructure patterns abstract builds on
- imbue: Review patterns for skill evaluation
Foundation Layer
The foundation layer provides core workflow methodologies that other plugins build upon.
Purpose
Foundation plugins establish:
- Analysis Patterns: How to approach investigation and review tasks
- Workspace Operations: Git and file system interactions
- Infrastructure Utilities: Reusable patterns for building plugins
Plugins
| Plugin | Description | Key Use Case |
|---|---|---|
| imbue | Workflow methodologies | Analysis, evidence gathering |
| sanctum | Git operations | Commits, PRs, documentation |
| leyline | Building blocks | Error handling, authentication |
Dependency Flow
imbue (standalone)
|
sanctum --> imbue
|
leyline (standalone)
- imbue: No dependencies, purely methodology
- sanctum: Uses imbue for review patterns
- leyline: No dependencies, infrastructure patterns
When to Use
imbue
Use when you need to:
- Structure a detailed review
- Analyze changes systematically
- Capture evidence for decisions
- Prevent overengineering (scope-guard)
sanctum
Use when you need to:
- Understand repository state
- Generate commit messages
- Prepare pull requests
- Update documentation
leyline
Use when you need to:
- Implement error handling patterns
- Add authentication flows
- Build plugin infrastructure
- Standardize testing approaches
Key Workflows
Pre-Commit Flow
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)
Review Flow
Skill(imbue:review-core)
Skill(imbue:proof-of-work)
Skill(imbue:structured-output)
PR Preparation
Skill(sanctum:git-workspace-review)
Skill(sanctum:pr-prep)
Installation
# Minimal foundation
/plugin install imbue@claude-night-market
# Full foundation
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market
imbue
Workflow methodologies for analysis, evidence gathering, and structured output.
Overview
Imbue provides reusable patterns for approaching analysis tasks. It’s a methodology plugin - the patterns apply to various inputs (git diffs, specs, logs) and chain together for complex workflows.
Core Philosophy: “NO IMPLEMENTATION WITHOUT A FAILING TEST FIRST” - The Iron Law enforced through proof-of-work validation.
Installation
/plugin install imbue@claude-night-market
Principles
- Generalizable: Patterns work across different input types
- Composable: Skills chain together naturally
- Evidence-based: Emphasizes capturing proof for reproducibility
- TDD-First: Iron Law enforcement prevents cargo cult testing
Skills
Review Patterns
| Skill | Description | When to Use |
|---|---|---|
review-core | Scaffolding for detailed reviews | Starting architecture, security, or code quality reviews |
structured-output | Output formatting patterns | Preparing final reports |
Analysis Methods
| Skill | Description | When to Use |
|---|---|---|
diff-analysis | Semantic changeset analysis | Understanding impact of changes |
catchup | Context recovery | Getting up to speed after time away |
Workflow Guards
| Skill | Description | When to Use |
|---|---|---|
scope-guard | Anti-overengineering with RICE+WSJF scoring | Evaluating features, sprint planning, roadmap reviews |
proof-of-work | Evidence-based validation with output-contracts and retry-protocol modules | Enforcing Iron Law TDD discipline |
rigorous-reasoning | Anti-sycophancy guardrails | Analyzing conflicts, evaluating contested claims |
Workflow Automation
| Skill | Description | When to Use |
|---|---|---|
workflow-monitor | Execution monitoring and issue creation | After workflow failures or inefficiencies |
Commands
| Command | Description |
|---|---|
/catchup | Quick context recovery from recent changes |
/structured-review | Start structured review workflow with evidence logging |
Agents
| Agent | Description |
|---|---|
review-analyst | Autonomous structured reviews with evidence gathering |
Hooks
| Hook | Type | Description |
|---|---|---|
session-start.sh | SessionStart | Initializes scope-guard, Iron Law, and learning mode |
user-prompt-submit.sh | UserPromptSubmit | Validates prompts against scope thresholds |
tdd_bdd_gate.py | PreToolUse | Enforces Iron Law at write-time |
pre-pr-scope-check.sh | Manual | Checks scope before PR creation |
proof-enforcement.md | Design | Iron Law TDD compliance enforcement |
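As a rough illustration of what a write-time Iron Law gate can check, here is a sketch under assumptions: the real tdd_bdd_gate.py logic and the exact hook payload fields may differ, and `should_block` plus the test-file naming convention are hypothetical.

```python
# Hypothetical PreToolUse gate: block writes to implementation files that
# have no test_*.py sibling. Not the actual tdd_bdd_gate.py.
import json
import sys
from pathlib import Path

def should_block(tool_name, file_path):
    """Return True when a Write targets a .py file with no matching test file."""
    if tool_name != "Write":
        return False
    p = Path(file_path)
    if p.suffix != ".py" or p.name.startswith("test_"):
        return False
    candidates = [p.parent / f"test_{p.name}", p.parent / "tests" / f"test_{p.name}"]
    return not any(c.exists() for c in candidates)

def main():
    event = json.load(sys.stdin)  # hook event arrives as JSON on stdin
    path = event.get("tool_input", {}).get("file_path", "")
    if should_block(event.get("tool_name", ""), path):
        print("Iron Law: write a failing test first", file=sys.stderr)
        sys.exit(2)  # non-zero exit with stderr feedback blocks the tool call
```

The payload field names (`tool_name`, `tool_input.file_path`) follow the Claude Code hook protocol as documented; verify against the current hooks reference before relying on them.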
Usage Examples
Structured Review
Skill(imbue:review-core)
# Required TodoWrite items:
# 1. review-core:context-established
# 2. review-core:scope-inventoried
# 3. review-core:evidence-captured
# 4. review-core:deliverables-structured
# 5. review-core:contingencies-documented
Diff Analysis
Skill(imbue:diff-analysis)
# Answers: "What changed and why does it matter?"
# - Categorizes changes by function
# - Assesses risks
# - Summarizes implications
Quick Catchup
/catchup
# Summarizes:
# - Recent commits
# - Changed files
# - Key decisions
# - Action items
Scope Guard
The scope-guard skill prevents overengineering via five components:
| Component | Purpose |
|---|---|
decision-framework | Worthiness formula and scoring |
anti-overengineering | Rules to prevent scope creep |
branch-management | Threshold monitoring (lines, commits, days) |
github-integration | Issue creation and optional Discussion linking for deferrals |
baseline-scenarios | Validated test scenarios |
Iron Law TDD Enforcement
The proof-of-work skill enforces the Iron Law:
NO IMPLEMENTATION WITHOUT A FAILING TEST FIRST
This prevents “Cargo Cult TDD” where tests validate pre-conceived implementations.
Self-Check Protocol
| Thought Pattern | Violation | Action |
|---|---|---|
| “Let me plan the implementation first” | Skipping RED | Write failing test FIRST |
| “I know what tests we need” | Pre-conceived impl | Document failure, THEN design |
| “The design is straightforward” | Skipping uncertainty | Let design EMERGE from tests |
TodoWrite Items
proof:iron-law-red - Failing test documented
proof:iron-law-green - Minimal code to pass
proof:iron-law-refactor - Code improved, tests green
proof:iron-law-coverage - Coverage gates verified
See iron-law-enforcement.md module for full enforcement patterns.
Rigorous Reasoning
The rigorous-reasoning skill prevents sycophantic patterns through structured analysis:
| Component | Purpose |
|---|---|
priority-signals | Override principles (no courtesy agreement, checklist over intuition) |
conflict-analysis | Harm/rights checklist for interpersonal conflicts |
debate-methodology | Truth claims and contested territory handling |
red-flag monitoring | Detect sycophantic thought patterns |
Red Flag Self-Check
| Thought Pattern | Reality Check | Action |
|---|---|---|
| “I agree that…” | Did you validate? | Apply harm/rights checklist |
| “You’re right that…” | Is this proven? | Check for evidence |
| “That’s a fair point” | Fair by what standard? | Specify the standard |
TodoWrite Integration
All skills output TodoWrite items for progress tracking:
review-core:context-established
review-core:scope-inventoried
diff-analysis:baseline-established
diff-analysis:changes-categorized
catchup:context-confirmed
catchup:delta-captured
Integration Pattern
Imbue is foundational - other plugins build on it:
# Sanctum uses imbue for review patterns
Skill(imbue:review-core)
Skill(sanctum:git-workspace-review)
# Pensive uses imbue for evidence gathering
Skill(imbue:proof-of-work)
Skill(pensive:architecture-review)
Superpowers Integration
| Skill | Enhancement |
|---|---|
scope-guard | Uses brainstorming, writing-plans, execute-plan |
Related Plugins
- sanctum: Uses imbue for review scaffolding
- pensive: Uses imbue for evidence gathering
- spec-kit: Uses imbue for analysis patterns
sanctum
Git and workspace operations for active development workflows.
Overview
Sanctum handles the practical side of development: commits, PRs, documentation updates, and version management. It’s the plugin you’ll use most during active coding.
Installation
/plugin install sanctum@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
git-workspace-review | Preflight repo state analysis | Before any git operation |
file-analysis | Codebase structure mapping | Understanding project layout |
commit-messages | Conventional commit generation | After staging changes |
pr-prep | PR preparation with quality gates | Before creating PRs |
pr-review | PR analysis and feedback, supports --local for file output | Reviewing others’ PRs |
doc-consolidation | Merge ephemeral docs | Consolidating LLM-generated docs |
doc-updates | Documentation maintenance | Syncing docs with code |
test-updates | Test generation and enhancement | Maintaining test suites |
version-updates | Version bumping | Managing semantic versions |
workflow-improvement | Workflow retrospectives | Improving development processes |
tutorial-updates | Tutorial maintenance | Keeping tutorials current |
Commands
| Command | Description |
|---|---|
/git-catchup | Git repository catchup |
/commit-msg | Draft conventional commit message |
/pr | Prepare PR with quality gates |
/pr-review | Enhanced PR review |
/fix-pr | Address PR review comments |
/do-issue | Fix GitHub issues systematically |
/fix-workflow | Improve recent workflow |
/merge-docs | Consolidate ephemeral docs |
/update-docs | Update documentation |
/update-plugins | Audit and sync plugin.json registrations |
/update-tests | Maintain tests |
/update-tutorial | Update tutorial content |
/update-version | Bump versions |
/update-dependencies | Update project dependencies |
/create-tag | Create git tags for releases |
/resolve-threads | Resolve PR review threads |
Agents
| Agent | Description |
|---|---|
git-workspace-agent | Repository state analysis |
commit-agent | Commit message generation |
pr-agent | PR preparation specialist |
workflow-recreate-agent | Workflow slice reconstruction |
workflow-improvement-* | Workflow improvement pipeline |
dependency-updater | Dependency version management |
Hooks
| Hook | Type | Description |
|---|---|---|
post_implementation_policy.py | SessionStart | Requires docs/tests/readme updates |
security_pattern_check.py | PreToolUse | Security anti-pattern detection on Write/Edit |
deferred_item_watcher.py | PostToolUse | Detect deferred items in Skill output |
config_change_audit.py | ConfigChange | Audit configuration changes |
verify_workflow_complete.py | Stop | Verifies workflow completion |
session_complete_notify.py | Stop | Toast notification when awaiting input |
deferred_item_sweep.py | Stop | Sweep session ledger and file GitHub issues |
Usage Examples
Pre-Commit Workflow
# Stage changes
git add -p
# Review workspace
Skill(sanctum:git-workspace-review)
# Generate commit message
Skill(sanctum:commit-messages)
# Apply
git commit -m "<generated message>"
PR Preparation
# Run quality checks first
make fmt && make lint && make test
# Prepare PR
/pr
# Creates:
# - Summary
# - Change list
# - Testing checklist
# - Quality gate results
Fix PR Review Comments
/fix-pr
# Claude will:
# 1. Read PR comments
# 2. Triage by priority
# 3. Implement fixes
# 4. Resolve threads on GitHub
Fix GitHub Issue
/do-issue 42
# Uses subagent-driven-development:
# 1. Analyze issue
# 2. Create plan
# 3. Implement fix
# 4. Test
# 5. Prepare PR
Shared Modules
Sanctum uses shared modules under commands/shared/ to deduplicate logic across commands.
| Module | Used By | Purpose |
|---|---|---|
test-plan-injection | /fix-pr, /pr-review | Detect, generate, and inject test plans into PR descriptions |
The test plan injection module checks whether a PR description already contains a test plan section (recognized heading + 3 or more checkbox items). When missing, it generates one from triage data and injects it before the review summary or appends it to the body.
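The detection step described above could look roughly like this. The regex patterns and `has_test_plan` are illustrative, not the module's actual API.

```python
# Sketch of test-plan detection: a recognized heading plus 3+ checkbox items.
# Hypothetical helper; the shared module's real implementation may differ.
import re

TEST_PLAN_HEADING = re.compile(r"^#{1,6}\s*Test Plan\b", re.IGNORECASE | re.MULTILINE)
CHECKBOX = re.compile(r"^\s*[-*]\s*\[[ xX]\]", re.MULTILINE)

def has_test_plan(pr_body, min_items=3):
    """A PR body 'has' a test plan when a recognized heading exists and at
    least min_items checkbox items follow it."""
    m = TEST_PLAN_HEADING.search(pr_body)
    if not m:
        return False
    tail = pr_body[m.end():]
    return len(CHECKBOX.findall(tail)) >= min_items
```

When this check fails, the module generates a plan from triage data and injects it before the review summary (or appends it to the body).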
Skill Dependencies
Most sanctum skills depend on git-workspace-review:
git-workspace-review (foundation)
├── commit-messages
├── pr-prep
├── doc-updates
└── version-updates
file-analysis (standalone)
Always run git-workspace-review first to establish context.
TodoWrite Integration
Sanctum skills emit TodoWrite checkpoints as they progress through a workflow:
git-review:repo-confirmed
git-review:status-overview
git-review:diff-stat
git-review:diff-details
pr-prep:workspace-reviewed
pr-prep:quality-gates
pr-prep:changes-summarized
pr-prep:testing-documented
pr-prep:pr-drafted
Workflow Patterns
Pre-Commit
git add -p
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)
Pre-PR
make fmt && make lint && make test
Skill(sanctum:git-workspace-review)
Skill(sanctum:pr-prep)
Post-Review
/fix-pr
# Implements fixes, resolves threads
Release
Skill(sanctum:git-workspace-review)
Skill(sanctum:version-updates)
Skill(sanctum:doc-updates)
git commit && git tag
Superpowers Integration
| Command | Enhancement |
|---|---|
| /pr | Uses receiving-code-review for validation |
| /pr-review | Uses receiving-code-review for analysis |
| /fix-pr | Uses receiving-code-review for resolution |
| /do-issue | Uses multiple superpowers for full workflow |
Related Plugins
- imbue: Provides review scaffolding sanctum uses
- pensive: Code review complements sanctum’s git operations
leyline
Infrastructure and pipeline building blocks for plugins.
Overview
Leyline provides reusable infrastructure patterns that other plugins build on. Think of it as a standard library for plugin development - error handling, authentication, storage, and testing patterns.
Installation
/plugin install leyline@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| quota-management | Rate limiting and quotas | Building services that consume APIs |
| usage-logging | Telemetry tracking | Logging tool usage for analytics |
| service-registry | Service discovery patterns | Managing external tool connections |
| error-patterns | Standardized error handling patterns | Production-grade error recovery |
| damage-control | Recovery protocols for broken agent state | Crash recovery, context overflow, merge conflicts |
| content-sanitization | Sanitization for external content | Loading Issues, PRs, Discussions, or WebFetch results |
| markdown-formatting | Line wrapping and style conventions | Generating or editing markdown prose |
| authentication-patterns | Auth flow patterns | Handling API keys and OAuth |
| evaluation-framework | Decision thresholds | Building evaluation criteria |
| progressive-loading | Dynamic content loading | Lazy loading strategies |
| risk-classification | Inline 4-tier risk classification for agent tasks | Risk-based task routing with war-room escalation |
| pytest-config | Pytest configuration | Standardized test configuration |
| storage-templates | Storage abstraction | File and database patterns |
| stewardship | Cross-cutting stewardship principles with five virtues (Care, Curiosity, Humility, Diligence, Foresight) | Working with project health, codebase improvement, or virtue-aligned development |
| testing-quality-standards | Test quality guidelines | Ensuring high-quality tests |
| deferred-capture | Contract for unified deferred-item capture across plugins | Implementing or testing deferred-capture wrappers |
| git-platform | Git platform detection and cross-platform commands | Abstracting GitHub/GitLab/Bitbucket differences |
| supply-chain-advisory | Known-bad version detection, lockfile auditing, incident response | After supply chain advisories, dependency audits, or suspected compromise |
| sem-integration | sem CLI detection, install-on-first-use, fallback patterns | Skills consuming git diff output that benefit from entity-level diffs |
Commands
| Command | Description |
|---|---|
| /reinstall-all-plugins | Uninstall and reinstall all plugins to refresh cache |
| /update-all-plugins | Update all installed plugins from marketplaces |
| /verify-plugin | Verify plugin trust via ERC-8004 Reputation Registry |
Usage Examples
Plugin Management
# Refresh all plugins (fixes version mismatches)
/reinstall-all-plugins
# Update to latest versions
/update-all-plugins
Using as Dependencies
Leyline skills are typically used as dependencies in other plugins:
# In your skill's SKILL.md frontmatter
dependencies:
- leyline:error-patterns
- leyline:quota-management
Error Handling Pattern
Skill(leyline:error-patterns)
# Provides:
# - Structured error types
# - Recovery strategies
# - Logging standards
# - User-friendly messages
Authentication Pattern
Skill(leyline:authentication-patterns)
# Covers:
# - API key management
# - OAuth flows
# - Token refresh
# - Secret storage
Testing Standards
Skill(leyline:testing-quality-standards)
# Enforces:
# - Test naming conventions
# - Coverage requirements
# - Mocking guidelines
# - Fixture patterns
Modules
frontmatter
Canonical YAML frontmatter parser shared across plugins.
from leyline.frontmatter import parse_frontmatter
content = """---
name: my-skill
category: testing
---
# My Skill
"""
meta = parse_frontmatter(content)
# {'name': 'my-skill', 'category': 'testing'}
When PyYAML is installed, it uses yaml.safe_load. When unavailable,
it falls back to a minimal key-value parser that handles simple key: value
pairs (no nested structures). Returns None for content without frontmatter.
Other plugins should import this instead of reimplementing frontmatter parsing.
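The fallback behavior described above can be sketched as follows. This is a minimal illustration of the documented contract, not the actual `leyline.frontmatter` source, and details may differ:

```python
# Minimal sketch of the documented parse_frontmatter behavior:
# PyYAML when available, a flat key: value parser otherwise,
# and None when the content has no frontmatter block.
def parse_frontmatter(content: str):
    if not content.startswith("---"):
        return None  # no frontmatter block
    try:
        _, block, _ = content.split("---", 2)
    except ValueError:
        return None  # unterminated frontmatter
    try:
        import yaml  # preferred when PyYAML is installed
        return yaml.safe_load(block)
    except ImportError:
        # Fallback: simple "key: value" pairs only, no nesting.
        meta = {}
        for line in block.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
        return meta
```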
Pattern Categories
Rate Limiting
# quota-management pattern
from leyline import QuotaManager
manager = QuotaManager(
daily_limit=1000,
hourly_limit=100,
burst_limit=10
)
if manager.can_proceed():
# Make API call
manager.record_usage()
Telemetry
# usage-logging pattern
from leyline import UsageLogger
logger = UsageLogger(output="telemetry.csv")
logger.log_tool_use("WebFetch", tokens=500, latency_ms=1200)
Storage Abstraction
# storage-templates pattern
from leyline import Storage
storage = Storage.from_config()
storage.save("key", data)
data = storage.load("key")
Discussion Operations (GitHub Only)
The git-platform skill’s command-mapping module provides GraphQL templates
for GitHub Discussions. These templates are consumed by attune (war room
publishing), imbue (scope-guard linking), memory-palace (knowledge promotion),
and minister (playbook rituals).
Supported operations: create, comment, threaded reply, mark-as-answer, search,
get-by-number, update, and list-by-category.
Category resolution from slug to nodeId is included as a prerequisite step.
On non-GitHub platforms (GitLab, Bitbucket), all Discussion operations are skipped with a warning.
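The category-resolution prerequisite maps a human-readable slug to the GraphQL node ID the mutation templates need. A toy parsing helper is sketched below; the response shape mirrors GitHub's `discussionCategories` connection (as returned by, say, `gh api graphql`), but the helper name and error handling are our own:

```python
import json

# Illustrative slug -> nodeId resolution for Discussion categories.
def resolve_category_id(response_json: str, slug: str) -> "str | None":
    data = json.loads(response_json)
    nodes = data["data"]["repository"]["discussionCategories"]["nodes"]
    return next((n["id"] for n in nodes if n["slug"] == slug), None)
```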
A fetch-recent-discussions.sh SessionStart hook queries the 5 most recent
“Decisions” discussions at session start
and injects a summary (<600 tokens) so that new sessions can discover prior
deliberations.
An auto-star-repo.sh SessionStart hook stars the repository if not already
starred. The hook is idempotent (checks status before acting), never unstars,
and fails silently if no auth method is available.
Integration
Leyline is used by:
- abstract: Plugin validation uses error patterns
- conjure: Delegation uses quota management
- conservation: Context optimization uses MECW patterns
Best Practices
- Don’t Duplicate: Use leyline patterns instead of reimplementing
- Compose Patterns: Combine multiple patterns for complex needs
- Test with Standards: Use pytest-config for consistent testing
- Log Everything: Use usage-logging for debugging and analytics
Related Plugins
- abstract: Uses leyline for plugin infrastructure
- conjure: Uses leyline for quota and service management
- conservation: Uses leyline for MECW implementation
Utility Layer
The utility layer provides resource optimization and external integration capabilities.
Purpose
Utility plugins handle:
- Resource Management: Context window optimization, token conservation
- External Delegation: Offloading tasks to external LLM services
- Performance Monitoring: CPU/GPU and memory tracking
Plugins
| Plugin | Description | Key Use Case |
|---|---|---|
| conserve | Resource optimization | Context management |
| conjure | External delegation | Long-context tasks |
| hookify | Behavioral rules | Preventing unwanted actions |
When to Use
conserve
Use when you need to:
- Monitor context window usage
- Optimize token consumption
- Handle large codebases efficiently
- Track resource usage patterns
conjure
Use when you need to:
- Process files too large for Claude’s context
- Delegate bulk processing tasks
- Use specialized external models
- Manage API quotas across services
hookify
Use when you need to:
- Prevent accidental destructive actions (force push, etc.)
- Enforce coding standards via pattern matching
- Create project-specific behavioral constraints
- Add safety guardrails for automated workflows
Key Capabilities
Context Optimization
/optimize-context
Analyzes current context usage and suggests MECW (Minimum Effective Context Window) strategies.
Growth Analysis
/bloat-scan
Predicts context budget impact of skill growth patterns.
(The former standalone growth-analysis command has been consolidated into /bloat-scan.)
External Delegation
make delegate-auto PROMPT="Summarize" FILES="src/"
Auto-selects the best external service for a task.
Conserve Modes
The conserve plugin supports different modes via environment variables:
| Mode | Command | Behavior |
|---|---|---|
| Normal | claude | Full conservation guidance |
| Quick | CONSERVE_MODE=quick claude | Skip guidance for fast tasks |
| Deep | CONSERVE_MODE=deep claude | Extended resource allowance |
Key Thresholds
Context Usage
- < 30%: LOW - Normal operation
- 30-50%: MODERATE - Consider optimization
- > 50%: CRITICAL - Optimize immediately
Token Quotas
- 5-hour rolling cap
- Weekly cap
- Check with /status
Installation
# Resource optimization
/plugin install conserve@claude-night-market
# External delegation
/plugin install conjure@claude-night-market
Integration with Other Layers
Utility plugins enhance all other layers:
Domain Specialists
|
v
Utility Layer (optimization, delegation)
|
v
Foundation Layer
For example, conjure can delegate large file processing before sanctum analyzes the results.
conserve
Resource optimization and performance monitoring for context window management.
Overview
Conserve helps you work efficiently within Claude’s context limits. It automatically loads optimization guidance at session start and provides tools for monitoring and reducing context usage.
Installation
/plugin install conserve@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| context-optimization | MECW principles, 50% context rule, findings-format, memory-tiers, session-routing modules | When context usage > 30% |
| token-conservation | Token usage strategies and quota tracking | Session start, before heavy loads |
| cpu-gpu-performance | Resource monitoring and selective testing | Before builds, tests, or training |
| mcp-code-execution | MCP patterns for data pipelines | Processing data outside context |
| bloat-detector | Detect bloated documentation, dead code, dead wrappers | During documentation reviews, code cleanup |
| clear-context | Context window management strategies | When approaching context limits |
Commands
| Command | Description |
|---|---|
| /bloat-scan | Detect code bloat, dead code, and dead wrapper scripts |
| /unbloat | Remove detected bloat with progressive analysis |
| /optimize-context | Analyze and optimize context window usage |
Agents
| Agent | Description |
|---|---|
| context-optimizer | Autonomous context optimization and MECW compliance |
Hooks
| Hook | Type | Description |
|---|---|---|
| session-start.sh | SessionStart | Loads conservation guidance at startup |
Usage Examples
Context Optimization
/optimize-context
# Analyzes:
# - Current context usage
# - Token distribution
# - Compression opportunities
# - MECW compliance
Manual Skill Invocation
Skill(conservation:context-optimization)
# Provides:
# - MECW principles
# - 50% context rule
# - Compression strategies
# - Eviction priorities
Bypass Modes
Control conservation behavior via environment variables:
| Mode | Command | Behavior |
|---|---|---|
| Normal | claude | Full conservation guidance |
| Quick | CONSERVATION_MODE=quick claude | Skip guidance for fast processing |
| Deep | CONSERVATION_MODE=deep claude | Extended resource allowance |
Examples
# Quick mode for simple tasks
CONSERVATION_MODE=quick claude
# Deep mode for complex analysis
CONSERVATION_MODE=deep claude
Key Thresholds
Context Usage
| Level | Usage | Action |
|---|---|---|
| LOW | < 30% | Normal operation |
| MODERATE | 30-50% | Consider optimization |
| CRITICAL | > 50% | Optimize immediately |
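The threshold table maps directly to a small classifier. The function below is illustrative (the name is ours, not conserve's API), but the boundaries match the table:

```python
# Map a context-usage fraction to the conserve threshold levels.
def context_level(usage_fraction: float) -> str:
    if usage_fraction < 0.30:
        return "LOW"       # normal operation
    if usage_fraction <= 0.50:
        return "MODERATE"  # consider optimization
    return "CRITICAL"      # optimize immediately
```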
Token Quotas
- 5-hour rolling cap: Prevents burst usage
- Weekly cap: Enforces sustainable usage
- Check status: Use /status to see current usage
MECW Principles
Minimum Effective Context Window strategies:
- Summarize Early: Compress large outputs before they accumulate
- Load on Demand: Fetch file contents only when needed
- Evict Stale: Remove information no longer relevant
- Prioritize Recent: Weight recent context higher than old
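The "Evict Stale" and "Prioritize Recent" principles combine into an eviction ordering: irrelevant items go first, and among equally relevant items, the least recently used goes first. The sketch below is a toy illustration; the field names are assumptions, not conserve's data model:

```python
# Order context items for eviction: stale and irrelevant first.
# Each item is a dict with a boolean "relevant" flag and a
# "last_used" timestamp (larger = more recent).
def eviction_order(items: list) -> list:
    return sorted(items, key=lambda it: (it["relevant"], it["last_used"]))
```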
Optimization Strategies
For Large Files
# Don't load entire file
# Instead, use targeted reads
Read file.py --offset 100 --limit 50
For Search Results
# Limit search output
Grep --head_limit 20
For Git Operations
# Use stats instead of full diffs
git diff --stat
git log --oneline -10
CPU/GPU Performance
The cpu-gpu-performance skill monitors resource usage:
Skill(conservation:cpu-gpu-performance)
# Provides:
# - Baseline establishment
# - Resource monitoring
# - Selective test execution
# - Build optimization
MCP Code Execution
For processing data too large for context:
Skill(conservation:mcp-code-execution)
# Patterns for:
# - External data processing
# - Pipeline optimization
# - Result summarization
Superpowers Integration
| Command | Enhancement |
|---|---|
| /optimize-context | Uses condition-based-waiting for smart optimization |
Related Plugins
- leyline: Provides MECW pattern implementations
- abstract: Uses conservation for skill optimization
- conjure: Delegates to external services when context limited
conjure
Delegation to external LLM services for long-context or bulk tasks.
Overview
Conjure provides a framework for delegating tasks to external LLM services (Gemini, Qwen) when Claude’s context window is insufficient or when specialized models are better suited.
Installation
/plugin install conjure@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| delegation-core | Framework for delegation decisions | Assessing if tasks should be offloaded |
| gemini-delegation | Gemini CLI integration | Processing massive context windows |
| qwen-delegation | Qwen MCP integration | Tasks with specific privacy requirements |
Commands (Makefile)
| Command | Description | Example |
|---|---|---|
| make delegate-auto | Auto-select best service | make delegate-auto PROMPT="Summarize" FILES="src/" |
| make quota-status | Show current quota usage | make quota-status |
| make usage-report | Summarize token usage and costs | make usage-report |
Hooks
| Hook | Type | Description |
|---|---|---|
| bridge.on_tool_start | PreToolUse | Suggests delegation when files exceed thresholds |
| bridge.after_tool_use | PostToolUse | Suggests delegation if output is truncated |
Usage Examples
Auto-Delegation
make delegate-auto PROMPT="Summarize all files" FILES="src/"
# Conjure will:
# 1. Assess file sizes
# 2. Check quota availability
# 3. Select optimal service
# 4. Execute delegation
# 5. Return results
Check Quota Status
make quota-status
# Output:
# Gemini: 450/1000 tokens used (5h rolling)
# Qwen: 200/500 tokens used (5h rolling)
Usage Report
make usage-report
# Output:
# This week:
# Gemini: 2,500 tokens, $0.05
# Qwen: 800 tokens, $0.02
# Total: 3,300 tokens, $0.07
Manual Service Selection
# Force Gemini for large context
Skill(conjure:gemini-delegation)
# Force Qwen for privacy-sensitive tasks
Skill(conjure:qwen-delegation)
Delegation Decision Framework
The delegation-core skill evaluates:
| Factor | Weight | Description |
|---|---|---|
| Context Size | High | Does input exceed Claude’s context? |
| Task Type | Medium | Is task better suited for another model? |
| Privacy Needs | High | Are there data residency requirements? |
| Quota Available | High | Do we have capacity on target service? |
| Cost | Low | Is delegation cost-effective? |
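One way to read the factor table is as a weighted score: each factor that favors delegation contributes its weight, and delegation is recommended past a threshold. The weights and threshold below are illustrative assumptions, not delegation-core's actual numbers:

```python
# Hypothetical weighted scoring over the delegation factors above.
# High-weight factors: context size, privacy, quota; medium: task
# fit; low: cost.
WEIGHTS = {"context_size": 3, "privacy": 3, "quota": 3, "task_fit": 2, "cost": 1}

def should_delegate(signals: dict, threshold: int = 5) -> bool:
    """signals maps factor name -> True when that factor favors delegation."""
    score = sum(WEIGHTS[name] for name, favors in signals.items() if favors)
    return score >= threshold
```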
Service Comparison
| Service | Strengths | Best For |
|---|---|---|
| Gemini | Large context (1M+ tokens) | Bulk file processing, long documents |
| Qwen | Local/private inference | Sensitive data, offline work |
Hook Behavior
Pre-Tool Use Hook
When reading large files:
[Conjure Bridge] File exceeds context threshold
Suggested action: Delegate to Gemini
Estimated tokens: 125,000
Quota available: Yes
Post-Tool Use Hook
When output is truncated:
[Conjure Bridge] Output truncated at 50,000 chars
Suggested action: Re-run with delegation
Recommended service: Gemini
Configuration
Environment Variables
# Gemini API key
export GEMINI_API_KEY=your-key
# Qwen MCP endpoint
export QWEN_MCP_ENDPOINT=http://localhost:8080
Quota Configuration
Edit conjure/config/quotas.yaml:
gemini:
hourly_limit: 1000
daily_limit: 10000
qwen:
hourly_limit: 500
daily_limit: 5000
Integration Patterns
With Conservation
# Conservation detects high context usage
# Suggests delegation via conjure
Skill(conservation:context-optimization)
# -> Recommends: Skill(conjure:delegation-core)
With Sanctum
# Large repo analysis
Skill(sanctum:git-workspace-review)
# If repo too large:
# -> Suggests: make delegate-auto FILES="."
Dependencies
Conjure uses leyline for infrastructure:
conjure
|
v
leyline (quota-management, service-registry)
Best Practices
- Check Quota First: Run make quota-status before large delegations
- Use Auto Mode: Let conjure select the optimal service
- Monitor Costs: Review make usage-report weekly
- Cache Results: Store delegation results locally to avoid repeat calls
Related Plugins
- leyline: Provides quota management and service registry
- conservation: Detects when delegation is beneficial
hookify
Create custom behavioral rules through markdown configuration files.
Overview
Hookify provides a framework for defining behavioral rules that prevent unwanted actions through pattern matching. Rules are defined in markdown files and can be enabled, disabled, or customized per project.
Installation
/plugin install hookify@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| writing-rules | Guide for authoring behavioral rules | Creating new rules |
| rule-catalog | Pre-built behavioral rule templates | Installing common rules |
Commands
| Command | Description |
|---|---|
| /hookify | Create behavioral rules to prevent unwanted actions |
| /hookify:install | Install hookify rule from catalog |
| /hookify:list | List all hookify rules with status |
| /hookify:configure | Interactive rule enable/disable interface |
| /hookify:help | Display hookify help and documentation |
Usage Examples
Install a Rule
# Install from catalog
/hookify:install no-force-push
# List installed rules
/hookify:list --status
Create Custom Rule
# Create a new rule interactively
/hookify
# Configure existing rule
/hookify:configure no-force-push --disable
Rule Structure
Rules are markdown files with frontmatter:
---
name: no-force-push
trigger: PreToolUse
matcher: Bash
pattern: "git push.*--force"
action: block
message: "Force push blocked. Use --force-with-lease instead."
---
# No Force Push Rule
Prevents accidental force pushes that could overwrite remote history.
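Conceptually, applying a rule like the one above means checking the tool name against `matcher` and the tool input against `pattern`. The sketch below is a toy illustration of that flow, not hookify's actual hook runner:

```python
import re

# Apply a hookify-style rule to a tool invocation. Returns the
# block decision when the rule fires, None otherwise.
def apply_rule(rule: dict, tool: str, command: str):
    if tool != rule["matcher"]:
        return None  # rule does not apply to this tool
    if re.search(rule["pattern"], command):
        return {"action": rule["action"], "message": rule["message"]}
    return None

# Frontmatter fields from the no-force-push example above.
NO_FORCE_PUSH = {
    "matcher": "Bash",
    "pattern": r"git push.*--force",
    "action": "block",
    "message": "Force push blocked. Use --force-with-lease instead.",
}
```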
Integration
Hookify integrates with:
- abstract: Rule validation and testing
- imbue: Scope guard integration
- sanctum: Git workflow protection
egregore
Autonomous agent orchestrator for full development lifecycles with session budget management and crash recovery.
Overview
Egregore spawns autonomous Claude Code sessions that execute multi-step development tasks without human input. It manages session budgets, provides crash recovery via a watchdog daemon, and validates output quality before merging.
Installation
/plugin install egregore@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| summon | Spawn autonomous session with budget | Delegating full tasks |
| quality-gate | Pre-merge quality validation | Before merging autonomous work |
| install-watchdog | Install crash-recovery watchdog | Setting up monitoring |
| uninstall-watchdog | Remove watchdog | Cleaning up monitoring |
Commands
| Command | Description |
|---|---|
| /summon | Spawn autonomous agent session |
| /dismiss | Terminate autonomous session |
| /status | Check session status |
| /install-watchdog | Install crash-recovery daemon |
| /uninstall-watchdog | Remove watchdog daemon |
Agents
| Agent | Description |
|---|---|
| orchestrator | Manages autonomous development lifecycle |
| sentinel | Watchdog agent for crash recovery |
Usage Examples
Spawn an Autonomous Session
# Summon with default budget
/summon "Implement feature X"
# Check status
/status
# Dismiss when done
/dismiss
Install Watchdog
# Set up crash recovery monitoring
/install-watchdog
# Remove when no longer needed
/uninstall-watchdog
Hooks
| Hook | Event | Description |
|---|---|---|
| session_start_hook.py | SessionStart | Injects manifest context into new sessions |
| user_prompt_hook.py | UserPromptSubmit | Reminds orchestrator to resume after user interrupts |
| stop_hook.py | Stop | Prevents early exit while work items remain |
The UserPromptSubmit hook lets users interact with a
running egregore session without breaking the orchestration
loop. After handling the user’s request, the orchestrator
re-reads the manifest and resumes where it left off.
Self-Healing Heartbeat
A recurring cron (*/5 * * * *) detects stalled pipelines
and re-enters the orchestration loop automatically.
This catches edge cases where context compaction or
unexpected errors break the loop despite the hooks.
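A stall detector like the one the heartbeat cron would run reduces to comparing the last heartbeat timestamp against a timeout. The sketch below is an assumption about how such a check could look (the timeout and the idea of reading a heartbeat file's mtime are ours, not egregore's documented configuration):

```python
# Decide whether a pipeline is stalled based on its last heartbeat.
# last_heartbeat is a Unix timestamp (e.g. a heartbeat file's
# mtime), or None when no heartbeat has ever been recorded.
def is_stalled(last_heartbeat, now: float, timeout_s: float = 600.0) -> bool:
    if last_heartbeat is None:
        return True  # no heartbeat recorded: treat as stalled
    return now - last_heartbeat > timeout_s
```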
Architecture
Egregore uses a convention-based approach where
autonomous sessions follow project conventions stored
in conventions/. The orchestrator agent manages the
session lifecycle, while the sentinel agent monitors
for crashes and restarts sessions as needed.
Parallel Execution
Independent work items run concurrently via git
worktrees (up to 3 by default). Within the quality
stage, independent steps execute in parallel waves
using dependency-graph scheduling from
stage_parallel.py.
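Dependency-graph wave scheduling of this kind can be expressed with the standard library's topological sorter: each wave is the set of steps whose prerequisites have all completed. This is a sketch of the general technique, not `stage_parallel.py`'s actual code:

```python
from graphlib import TopologicalSorter

# Group steps into parallel waves. deps maps each step to the set
# of steps it depends on; steps within a wave are independent.
def waves(deps: dict) -> list:
    ts = TopologicalSorter(deps)
    ts.prepare()
    result = []
    while ts.is_active():
        ready = set(ts.get_ready())  # everything runnable right now
        result.append(ready)
        for step in ready:
            ts.done(step)
    return result
```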
Agent Specialization
Specialist agents (reviewer, documenter, tester)
handle specific pipeline steps and accumulate context
across sessions. Profiles persist in
.egregore/specialists/.
Cross-Item Learning
The learning module extracts patterns from decision
logs (tech stack choices, failure modes, architecture
decisions) and generates briefings for new work items
based on historical success rates.
Multi-Repository Support
RepoRegistry manages work across multiple
repositories, routing items by labels and tracking
per-repo configuration in .egregore/repos.json.
GitHub Discussions Publishing
Discoveries, insights, and retrospectives from autonomous sessions are published to GitHub Discussions with rate limiting and deduplication.
Related Plugins
- herald: Notification library extracted from egregore
herald
Shared notification library for Claude Code plugins.
Overview
Herald was extracted from egregore to provide independent notification capabilities. Any plugin can send alerts through herald without depending on the full egregore orchestrator.
Herald is a pure library plugin: it declares no skills, commands, agents, or hooks. Plugins import its Python API directly (guarded with try/except per ADR-0001).
Installation
/plugin install herald@claude-night-market
Features
- GitHub issue creation via the gh CLI
- Webhook delivery to Slack, Discord, or generic endpoints
- SSRF protection with URL validation
- Configurable source labels for multi-plugin use
Alert Events
| Event | Value | Description |
|---|---|---|
| CRASH | crash | Process or agent crash |
| RATE_LIMIT | rate_limit | API quota exceeded |
| PIPELINE_FAILURE | pipeline_failure | Build or deploy failure |
| COMPLETION | completion | Task finished |
| WATCHDOG_RELAUNCH | watchdog_relaunch | Watchdog restarted agent |
Usage
from notify import AlertEvent, alert
alert(
event=AlertEvent.CRASH,
detail="Worker process crashed",
source="my-plugin",
)
See the herald README for webhook examples.
oracle
Local ONNX Runtime inference daemon for ML-enhanced plugin capabilities.
Overview
Oracle runs a sidecar HTTP daemon that serves ONNX model inference
on localhost. Plugins opt in explicitly by writing a sentinel file.
It uses a dedicated Python 3.11+ venv managed by uv and does not
touch the system Python environment.
Installation
/plugin install oracle@claude-night-market
Skills
- setup: Install and configure the oracle ONNX inference daemon
Commands
- /oracle-setup: Install and configure the oracle ONNX inference daemon, including venv creation and model placement
Domain Specialists
Domain specialist plugins provide deep expertise in specific areas of software development.
Purpose
Domain plugins offer:
- Deep Expertise: Specialized knowledge for specific domains
- Workflow Automation: End-to-end processes for common tasks
- Best Practices: Curated patterns and anti-patterns
Plugins
| Plugin | Domain | Key Use Case |
|---|---|---|
| cartograph | Visualization | Codebase diagrams via Mermaid |
| archetypes | Architecture | Paradigm selection |
| pensive | Code Review | Multi-faceted reviews |
| parseltongue | Python | Modern Python development |
| phantom | Desktop | Computer use automation |
| memory-palace | Knowledge | Spatial memory organization |
| spec-kit | Specifications | Spec-driven development |
| minister | Releases | Initiative tracking |
| attune | Projects | Full-cycle project development |
| scry | Media | Documentation recordings |
| scribe | Documentation | AI slop detection and cleanup |
When to Use
archetypes
Use when you need to:
- Choose an architecture for a new system
- Evaluate trade-offs between patterns
- Get implementation guidance for a paradigm
pensive
Use when you need to:
- Conduct thorough code reviews
- Audit security and architecture
- Review APIs, tests, or Makefiles
parseltongue
Use when you need to:
- Write modern Python (3.12+)
- Implement async patterns
- Package projects with uv
- Profile and optimize performance
phantom
Use when you need to:
- Drive desktop environments through vision and action
- Automate GUI interactions with screenshot capture
- Control mouse and keyboard programmatically
- Run autonomous desktop agent loops
memory-palace
Use when you need to:
- Organize complex knowledge
- Build spatial memory structures
- Maintain digital gardens
- Cache research efficiently
spec-kit
Use when you need to:
- Define features before implementation
- Generate structured task lists
- Maintain specification consistency
- Track implementation progress
minister
Use when you need to:
- Track GitHub initiatives
- Monitor release readiness
- Generate stakeholder reports
attune
Use when you need to:
- Brainstorm project ideas
- Create specifications from concepts
- Plan architecture and tasks
- Initialize projects with tooling
- Execute systematic implementation
scry
Use when you need to:
- Record terminal demos with VHS
- Capture browser sessions with Playwright
- Generate GIFs for documentation
- Compose multi-source tutorials
scribe
Use when you need to:
- Detect AI-generated content markers
- Clean up documentation slop
- Learn and apply writing styles
- Verify documentation accuracy
Dependencies
Most domain plugins depend on foundation layers:
archetypes (standalone)
pensive --> imbue, sanctum
parseltongue (standalone)
phantom (standalone)
memory-palace (standalone)
spec-kit --> imbue
minister (standalone)
attune --> spec-kit, imbue
scry (standalone)
scribe --> imbue, conserve
Example Workflows
Architecture Decision
Skill(archetypes:architecture-paradigms)
# Interactive paradigm selection
# Returns: Detailed implementation guide
Full Code Review
/full-review
# Runs multiple review types:
# - architecture-review
# - api-review
# - bug-review
# - test-review
Python Project Setup
Skill(parseltongue:python-packaging)
Skill(parseltongue:python-testing)
Feature Development
/speckit-specify Add user authentication
/speckit-plan
/speckit-tasks
/speckit-implement
Full Project Lifecycle
/attune:brainstorm
# Socratic questioning to explore project idea
/attune:specify
# Create specification from brainstorm
/attune:blueprint
# Design architecture and break down tasks
/attune:init
# Initialize project with tooling
/attune:execute
# Execute implementation with TDD
Media Recording
/record-terminal
# Creates VHS tape script and records terminal to GIF
/record-browser
# Records browser session with Playwright
Documentation Cleanup
Skill(scribe:slop-detector)
# Scans for AI-generated content markers
/doc-polish README.md
# Interactive cleanup of AI slop
Agent(scribe:doc-verifier)
# Validates documentation claims
Installation
Install based on your needs:
# Architecture work
/plugin install archetypes@claude-night-market
# Code review
/plugin install pensive@claude-night-market
# Python development
/plugin install parseltongue@claude-night-market
# Desktop automation
/plugin install phantom@claude-night-market
# Knowledge management
/plugin install memory-palace@claude-night-market
# Specification-driven development
/plugin install spec-kit@claude-night-market
# Release management
/plugin install minister@claude-night-market
# Full-cycle project development
/plugin install attune@claude-night-market
# Media recording
/plugin install scry@claude-night-market
# Documentation review
/plugin install scribe@claude-night-market
cartograph
Codebase visualization through architecture, data flow, dependency, workflow, and class diagrams rendered via Mermaid Chart MCP.
Overview
Cartograph analyzes code structure and generates Mermaid diagrams. A codebase explorer agent extracts modules, imports, and relationships, then diagram-specific skills convert the structural model into rendered visuals.
Installation
/plugin install cartograph@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| architecture-diagram | Component relationship diagrams | System structure, plugin architecture |
| data-flow | Data movement between components | Request paths, API flows |
| dependency-graph | Import and dependency relationships | Coupling analysis, circular deps |
| workflow-diagram | Process steps and state transitions | CI/CD pipelines, dev workflows |
| class-diagram | Classes, interfaces, inheritance | OOP structure, type hierarchies |
Commands
| Command | Description |
|---|---|
| /visualize | Generate a codebase diagram |
Agents
| Agent | Description |
|---|---|
| codebase-explorer | Analyzes modules, imports, and relationships |
Usage Examples
Architecture Diagram
/visualize architecture plugins/sanctum
Generates a flowchart showing component relationships within the specified scope.
Dependency Graph
/visualize dependency plugins/
Shows import relationships between modules. Useful for spotting circular dependencies or tight coupling.
Data Flow
/visualize data-flow plugins/conserve
Produces a sequence diagram tracing data movement through the system.
Workflow Diagram
/visualize workflow
Maps process steps, decision points, and state transitions for development workflows or CI/CD pipelines.
Class Diagram
/visualize class plugins/gauntlet
Shows classes, interfaces, inheritance, and composition within a module.
How It Works
1. The /visualize command routes to a diagram skill
2. The skill dispatches the codebase-explorer agent
3. The agent analyzes code structure and produces a JSON structural model
4. The skill generates Mermaid syntax from the model
5. The Mermaid Chart MCP server renders the diagram
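Step 4, generating Mermaid syntax from a structural model, can be illustrated with a toy converter. The model shape (`edges` with `from`/`to` keys) is an assumption for illustration; cartograph's real generator is richer:

```python
# Toy conversion from a JSON structural model to Mermaid
# flowchart syntax, one edge per relationship.
def to_mermaid(model: dict) -> str:
    lines = ["flowchart TD"]
    for edge in model["edges"]:
        lines.append(f'    {edge["from"]} --> {edge["to"]}')
    return "\n".join(lines)
```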
Requirements
- Mermaid Chart MCP server (included with Claude Code)
Related Plugins
- scry: Terminal and browser recordings for demos
- pensive: Architecture review complements visual diagrams with written assessments
- archetypes: Architecture paradigm selection pairs with architectural visualization
archetypes
Architecture paradigm selection and implementation planning.
Overview
Archetypes helps you choose the right architecture for your system. It provides an interactive paradigm selector and detailed implementation guides for 13 architectural patterns.
Installation
/plugin install archetypes@claude-night-market
Skills
Orchestrator
| Skill | Description | When to Use |
|---|---|---|
| `architecture-paradigms` | Interactive paradigm selector | Choosing architecture for new systems |
Paradigm Guides
| Skill | Architecture | Best For |
|---|---|---|
| `architecture-paradigm-layered` | N-tier | Simple web apps, internal tools |
| `architecture-paradigm-hexagonal` | Ports & Adapters | Infrastructure independence |
| `architecture-paradigm-microservices` | Distributed services | Large-scale enterprise |
| `architecture-paradigm-event-driven` | Async communication | Real-time processing |
| `architecture-paradigm-serverless` | Function-as-a-Service | Event-driven with minimal infra |
| `architecture-paradigm-pipeline` | Pipes-and-filters | ETL, media processing |
| `architecture-paradigm-cqrs-es` | CQRS + Event Sourcing | Audit trails, event replay |
| `architecture-paradigm-microkernel` | Plugin-based | Minimal core with extensions |
| `architecture-paradigm-modular-monolith` | Internal boundaries | Module separation without distribution |
| `architecture-paradigm-space-based` | Data-grid | High-scale stateful workloads |
| `architecture-paradigm-service-based` | Coarse-grained SOA | Modular without microservices |
| `architecture-paradigm-functional-core` | Functional Core, Imperative Shell | High testability |
| `architecture-paradigm-client-server` | Client-server | Clear client/server responsibilities |
Usage Examples
Interactive Selection
Skill(archetypes:architecture-paradigms)
# Claude will:
# 1. Ask about your requirements
# 2. Evaluate trade-offs
# 3. Recommend paradigms
# 4. Provide implementation guidance
Direct Paradigm Access
# Get specific paradigm details
Skill(archetypes:architecture-paradigm-hexagonal)
# Returns:
# - Core concepts
# - When to use
# - Implementation patterns
# - Example code
# - Trade-offs
Paradigm Comparison
By Complexity
| Level | Paradigms |
|---|---|
| Low | Layered, Client-Server |
| Medium | Modular Monolith, Service-Based, Functional Core |
| High | Microservices, Event-Driven, CQRS-ES, Space-Based |
By Team Size
| Team | Recommended |
|---|---|
| 1-3 | Layered, Functional Core, Modular Monolith |
| 4-10 | Hexagonal, Service-Based, Pipeline |
| 10+ | Microservices, Event-Driven |
By Scalability Need
| Need | Paradigms |
|---|---|
| Single server | Layered, Modular Monolith |
| Horizontal | Microservices, Serverless |
| Extreme | Space-Based, Event-Driven |
Selection Criteria
The paradigm selector evaluates:
- Team size and structure
- Scalability requirements
- Deployment constraints
- Data consistency needs
- Development velocity priorities
- Operational maturity
Example Output
Hexagonal Architecture
## Hexagonal Architecture (Ports & Adapters)
### Core Concepts
- Domain logic at center
- Ports define interfaces
- Adapters implement ports
- Infrastructure is pluggable
### When to Use
- Need to swap databases/frameworks
- Test-driven development focus
- Long-lived applications
- Multiple integration points
### Structure
src/
├── domain/          # Pure business logic
│   ├── models/
│   └── services/
├── ports/           # Interface definitions
│   ├── inbound/
│   └── outbound/
└── adapters/        # Implementations
    ├── web/
    ├── persistence/
    └── external/
### Trade-offs
+ Easy testing via port mocking
+ Framework-agnostic domain
+ Clear dependency direction
- More initial structure
- Learning curve
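The structure above can be reduced to a minimal sketch: a port as a `Protocol`, domain logic that depends only on the port, and one adapter. The names (`UserRepository`, `RegisterUser`, `InMemoryRepo`) are illustrative, not part of the plugin:

```python
from typing import Protocol


class UserRepository(Protocol):
    """Outbound port: the domain defines the interface it needs."""
    def save(self, name: str) -> None: ...


class RegisterUser:
    """Domain service: pure logic, no knowledge of infrastructure."""
    def __init__(self, repo: UserRepository) -> None:
        self.repo = repo

    def execute(self, name: str) -> str:
        if not name:
            raise ValueError("name required")
        self.repo.save(name)
        return f"registered {name}"


class InMemoryRepo:
    """Adapter: one pluggable implementation of the port."""
    def __init__(self) -> None:
        self.users: list[str] = []

    def save(self, name: str) -> None:
        self.users.append(name)
```

Swapping `InMemoryRepo` for a database adapter requires no change to `RegisterUser`, which is the pattern's point: tests mock the port, and the domain stays framework-agnostic.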
Best Practices
- Start Simple: Begin with layered, evolve as needed
- Match Team: Don’t use microservices with a small team
- Consider Ops: Complex architectures need operational maturity
- Plan Evolution: Design for change, not perfection
Decision Tree
Start
|
v
Simple CRUD? --> Yes --> Layered
|
No
|
v
Need testability? --> Yes --> Functional Core or Hexagonal
|
No
|
v
High scale? --> Yes --> Event-Driven or Space-Based
|
No
|
v
Multiple teams? --> Yes --> Microservices or Service-Based
|
No
|
v
Modular Monolith
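The same tree reads as a first-match-wins function; a sketch for illustration:

```python
def recommend_paradigm(
    simple_crud: bool = False,
    needs_testability: bool = False,
    high_scale: bool = False,
    multiple_teams: bool = False,
) -> str:
    """Walk the decision tree above; the first matching branch wins."""
    if simple_crud:
        return "Layered"
    if needs_testability:
        return "Functional Core or Hexagonal"
    if high_scale:
        return "Event-Driven or Space-Based"
    if multiple_teams:
        return "Microservices or Service-Based"
    return "Modular Monolith"  # the default when no branch matches
```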
Related Plugins
- pensive: Architecture review complements paradigm selection
- spec-kit: Use after paradigm selection for implementation planning
pensive
Code review and analysis toolkit with specialized review skills.
Overview
Pensive provides deep code review capabilities across multiple dimensions: architecture, APIs, bugs, tests, and more. It orchestrates reviews intelligently, selecting the right skills for each codebase.
Installation
/plugin install pensive@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| `unified-review` | Review orchestration | Starting reviews (Claude picks tools) |
| `api-review` | API surface evaluation | Reviewing OpenAPI specs, library exports |
| `architecture-review` | Architecture assessment | Checking ADR alignment, design principles |
| `bug-review` | Bug hunting | Systematic search for logic errors |
| `rust-review` | Rust-specific checking | Auditing unsafe code, borrow patterns |
| `test-review` | Test quality review | Ensuring tests verify behavior |
| `makefile-review` | Makefile best practices | Reviewing Makefile quality |
| `math-review` | Mathematical correctness | Reviewing mathematical logic |
| `shell-review` | Shell script auditing | Exit codes, portability, safety patterns |
| `safety-critical-patterns` | NASA Power of 10 rules | Robust, verifiable code with context-appropriate rigor |
| `code-refinement` | Code quality analysis | Duplication, efficiency, clean code violations |
| `tiered-audit` | Three-tier escalation audit | Codebase audits starting from git history |
Commands
| Command | Description |
|---|---|
| `/full-review` | Unified review with intelligent skill selection |
| `/api-review` | Run API surface review |
| `/architecture-review` | Run architecture assessment |
| `/bug-review` | Run bug hunting |
| `/rust-review` | Run Rust-specific review |
| `/test-review` | Run test quality review |
| `/makefile-review` | Run Makefile review |
| `/math-review` | Run mathematical review |
| `/shell-review` | Run shell script safety review |
| `/skill-review` | Analyze skill runtime metrics and stability gaps (canonical) |
| `/skill-history` | View recent skill executions |
Note: For static skill quality analysis (frontmatter, structure), use `abstract:skill-auditor` instead.
Agents
| Agent | Description |
|---|---|
| `code-reviewer` | Expert code review for bugs, security, quality |
| `architecture-reviewer` | Principal-level architecture specialist |
| `rust-auditor` | Expert Rust security and safety auditor |
Usage Examples
Full Review
/full-review
# Claude will:
# 1. Analyze codebase structure
# 2. Select relevant review skills
# 3. Execute reviews in priority order
# 4. Synthesize findings
# 5. Provide actionable recommendations
Specific Reviews
# Architecture review
/architecture-review
# API review
/api-review
# Bug hunting
/bug-review
# Test quality
/test-review
Manual Skill Invocation
Skill(pensive:architecture-review)
# Checks:
# - ADR compliance
# - Dependency direction
# - Layer violations
# - Design pattern usage
Review Depth
Each review skill operates at multiple levels:
| Level | Description | Time |
|---|---|---|
| Quick | High-level scan | 1-2 min |
| Standard | Thorough review | 5-10 min |
| Deep | Exhaustive analysis | 15+ min |
Specify depth when invoking:
/architecture-review --depth deep
Review Categories
Architecture Review
- ADR alignment
- Dependency analysis
- Layer boundary violations
- Pattern consistency
- Coupling metrics
API Review
- Endpoint consistency
- Error response patterns
- Authentication/authorization
- Versioning strategy
- Documentation completeness
Bug Review
- Logic errors
- Edge cases
- Race conditions
- Resource leaks
- Error handling gaps
Test Review
- Coverage gaps
- Test isolation
- Assertion quality
- Mocking patterns
- Edge case coverage
Rust Review
- Unsafe code audit
- Borrow checker patterns
- Memory safety
- Concurrency safety
- Idiomatic usage
- Silent return value checks
- Collection type selection
- SQL injection detection
- `#[cfg(test)]` misuse patterns
- Error message quality
- Duplicate validator detection
- Builtin preference (From/Into/TryFrom/Default over helpers)
Dependencies
Pensive builds on foundation plugins:
pensive
|
+--> imbue (review-core, proof-of-work)
|
+--> sanctum (git-workspace-review)
Workflow Integration
Pre-PR Review
# Before opening PR
Skill(sanctum:git-workspace-review)
/full-review
# Address findings
# Then create PR
Post-Merge Review
# After merge, deep review
/architecture-review --depth deep
Targeted Review
# Review specific area
/api-review src/api/
Superpowers Integration
| Command | Enhancement |
|---|---|
| `/full-review` | Uses systematic-debugging for four-phase analysis |
| `/full-review` | Uses verification-before-completion for evidence |
Output Format
Reviews produce structured output:
## Review Summary
### Critical Issues
1. [BUG] Race condition in UserService.update()
- Location: src/services/user.ts:45
- Impact: Data corruption under load
- Recommendation: Add mutex lock
### Warnings
1. [ARCH] Layer violation detected
- Controllers importing from repositories
- Recommendation: Add service layer
### Suggestions
1. [TEST] Missing edge case coverage
- UserService.delete() lacks null check test
Related Plugins
- imbue: Provides review scaffolding
- sanctum: Provides workspace context
- archetypes: Paradigm context for architecture review
phantom
Computer use toolkit for desktop automation via Claude’s vision and action API.
Overview
Phantom enables Claude to interact with desktop environments through screenshot capture, mouse/keyboard control, and an autonomous agent loop. It wraps Claude’s Computer Use API for sandboxed GUI automation workflows.
Security Precautions
Computer use grants Claude direct control over mouse, keyboard, and screen reading. Follow these precautions:
- Run in a sandboxed environment (VM, container, or dedicated machine). Never run on a machine with access to production systems or sensitive credentials.
- Review tasks before execution. The `/control-desktop` command displays the planned actions. Confirm before allowing execution.
- Limit network access. Restrict outbound connections from the sandbox to prevent data exfiltration if the agent navigates to an unintended URL.
- Do not store credentials in the sandbox environment. If a workflow requires login, use temporary tokens with narrow scope.
- Monitor active sessions. The desktop-pilot agent runs autonomously. Watch for unexpected navigation or input actions and terminate if behavior deviates.
Installation
/plugin install phantom@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| `computer-control` | Desktop automation via screenshot capture, mouse/keyboard control | Automating GUI tasks in sandboxed environments |
Commands
| Command | Description |
|---|---|
| `/control-desktop` | Run a computer use task on the desktop |
Agents
| Agent | Description |
|---|---|
| `desktop-pilot` | Autonomous desktop control with multi-step GUI workflows |
Usage Examples
Control a Desktop
# Run a GUI automation task
/control-desktop "Open the browser and navigate to example.com"
# Use the agent for multi-step workflows
Agent(phantom:desktop-pilot)
parseltongue
Modern Python development suite for testing, performance, async patterns, and packaging.
Overview
Parseltongue brings Python 3.12+ best practices to your workflow. It covers the full development lifecycle: testing with pytest, performance optimization, async patterns, and modern packaging with uv.
Installation
/plugin install parseltongue@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| `python-testing` | Pytest and TDD workflows | Writing and running tests |
| `python-performance` | Profiling and optimization | Debugging slow code |
| `python-async` | Async programming patterns | Implementing asyncio |
| `python-packaging` | Modern packaging with uv | Managing pyproject.toml |
Commands
| Command | Description |
|---|---|
| `/analyze-tests` | Report on test suite health |
| `/run-profiler` | Profile code execution |
| `/check-async` | Validate async patterns |
Agents
| Agent | Description |
|---|---|
| `python-pro` | Master Python 3.12+ with modern features |
| `python-tester` | Expert testing for pytest, TDD, mocking |
| `python-optimizer` | Expert performance optimization |
Usage Examples
Test Analysis
/analyze-tests
# Reports:
# - Coverage metrics
# - Test distribution
# - Slow tests
# - Missing coverage areas
# - Anti-patterns detected
Profiling
/run-profiler src/heavy_function.py
# Outputs:
# - CPU time breakdown
# - Memory usage
# - Hot paths
# - Optimization suggestions
Async Validation
/check-async src/async_module.py
# Checks:
# - Proper await usage
# - Event loop handling
# - Async context managers
# - Concurrency patterns
Skill Invocation
Skill(parseltongue:python-testing)
# Provides:
# - Pytest configuration patterns
# - TDD workflow guidance
# - Mocking strategies
# - Fixture patterns
Python 3.12+ Features
Parseltongue emphasizes modern Python:
Type Hints
# Modern syntax (3.10+)
def process(data: list[str] | None) -> dict[str, int]:
    ...
Pattern Matching
# Structural pattern matching (3.10+)
match response:
    case {"status": "ok", "data": data}:
        return data
    case {"status": "error", "message": msg}:
        raise ValueError(msg)
Exception Groups
# Exception groups (3.11+)
try:
    async with asyncio.TaskGroup() as tg:
        tg.create_task(task1())
        tg.create_task(task2())
except* ValueError as eg:
    for exc in eg.exceptions:
        handle(exc)
Testing Patterns
TDD Workflow
Skill(parseltongue:python-testing)
# RED-GREEN-REFACTOR:
# 1. Write failing test
# 2. Implement minimal code
# 3. Refactor with tests green
Fixture Patterns
# Recommended patterns
@pytest.fixture
def db_session(tmp_path):
    """Database fixture backed by a temporary path."""
    db = Database(tmp_path / "test.db")
    yield db
    db.close()

@pytest.fixture
def user(db_session):
    """User fixture depending on db."""
    return db_session.create_user("test")
Mocking Strategies
# Strategic mocking
def test_api_call(mocker):
    mock_response = mocker.patch("requests.get")
    mock_response.return_value.json.return_value = {"status": "ok"}
    result = fetch_data()
    assert result["status"] == "ok"
Performance Optimization
Profiling Tools
# cProfile integration
python -m cProfile -s cumtime script.py
# Memory profiling
from memory_profiler import profile
@profile
def memory_heavy():
    ...
Optimization Patterns
- Generators over lists: Save memory
- Local variables: Faster lookup
- Built-in functions: C-optimized
- Lazy evaluation: Defer computation
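A quick demonstration of the first pattern: the generator object stays a fixed-size wrapper, while the equivalent list holds every element up front.

```python
import sys

# A list comprehension materializes all 100,000 elements immediately.
squares_list = [i * i for i in range(100_000)]
# A generator expression computes elements on demand.
squares_gen = (i * i for i in range(100_000))

# The generator is a small constant-size object; the list is not.
assert sys.getsizeof(squares_gen) < sys.getsizeof(squares_list)

# Both produce identical values when consumed.
assert sum(squares_gen) == sum(squares_list)
```

The trade-off: a generator can only be consumed once and does not support indexing, so reach for it when you iterate a sequence a single time.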
Async Patterns
Recommended Structure
async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url) for url in urls]
        results = await asyncio.gather(*tasks)
        return results

if __name__ == "__main__":
    asyncio.run(main())
Anti-Patterns to Avoid
- Blocking calls in async functions
- Creating event loops inside coroutines
- Ignoring exceptions in fire-and-forget tasks
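The last anti-pattern has a concrete fix: attach a done-callback so exceptions from background tasks are recorded instead of silently dropped. A minimal sketch; the helper names are illustrative:

```python
import asyncio

failures: list[BaseException] = []

def log_failure(task: asyncio.Task) -> None:
    """Done-callback that retrieves and records a task's exception."""
    if not task.cancelled() and task.exception() is not None:
        failures.append(task.exception())

async def flaky() -> None:
    raise RuntimeError("boom")

async def main() -> None:
    task = asyncio.create_task(flaky())
    task.add_done_callback(log_failure)  # instead of dropping the task handle
    await asyncio.sleep(0)               # let the task run and fail...
    await asyncio.sleep(0)               # ...and let the callback fire

asyncio.run(main())
```

Retrieving the exception in the callback also suppresses Python's "Task exception was never retrieved" warning; `asyncio.TaskGroup` (shown above) is the structured alternative when the tasks' lifetimes are bounded.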
Packaging with uv
pyproject.toml
[project]
name = "my-package"
version = "1.0.0"
dependencies = ["requests>=2.28"]
[project.optional-dependencies]
dev = ["pytest", "ruff", "mypy"]
[tool.uv]
index-url = "https://pypi.org/simple"
Commands
# Install with uv
uv pip install -e ".[dev]"
# Lock dependencies
uv pip compile pyproject.toml -o requirements.lock
# Sync environment
uv pip sync requirements.lock
Superpowers Integration
| Skill | Enhancement |
|---|---|
| `python-testing` | Uses test-driven-development for TDD cycles |
| `python-testing` | Uses testing-anti-patterns for detection |
Related Plugins
- leyline: Provides pytest-config patterns
- sanctum: Test updates integrate with test-updates skill
memory-palace
Knowledge organization using spatial memory techniques.
Overview
Memory Palace applies the ancient method of loci to digital knowledge management. It helps you build “palaces”: structured knowledge repositories that use spatial metaphors for organization and retrieval.
Installation
/plugin install memory-palace@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| `memory-palace-architect` | Building virtual palaces | Organizing complex concepts |
| `knowledge-locator` | Spatial search | Finding stored information |
| `knowledge-intake` | Intake and curation | Processing new information |
| `digital-garden-cultivator` | Digital garden maintenance | Long-term knowledge base care |
| `session-palace-builder` | Session-specific palaces | Temporary working knowledge |
Commands
| Command | Description |
|---|---|
| `/palace` | Manage memory palaces |
| `/garden` | Manage digital gardens |
| `/navigate` | Search and traverse palaces |
Agents
| Agent | Description |
|---|---|
| `palace-architect` | Designs memory palace architectures |
| `knowledge-navigator` | Searches and retrieves from palaces |
| `knowledge-librarian` | Evaluates and routes knowledge |
| `garden-curator` | Maintains digital gardens |
Hooks
| Hook | Type | Description |
|---|---|---|
| `research_interceptor.py` | PreToolUse | Checks local knowledge before web searches |
| `url_detector.py` | UserPromptSubmit | Detects URLs for intake |
| `local_doc_processor.py` | PostToolUse | Processes local docs after reads |
| `web_research_handler.py` | PostToolUse | Processes web content and prompts for knowledge storage |
Usage Examples
Create a Palace
/palace create "Python Async Patterns"
# Creates:
# - Palace structure
# - Entry rooms
# - Navigation paths
Add Knowledge
Skill(memory-palace:knowledge-intake)
# Processes:
# - New information
# - Categorization
# - Spatial placement
# - Cross-references
Navigate Knowledge
/navigate "async context managers"
# Returns:
# - Matching rooms
# - Related concepts
# - Cross-references
# - Source citations
Maintain Garden
/garden cultivate
# Performs:
# - Pruning outdated content
# - Strengthening connections
# - Identifying gaps
# - Suggesting additions
Cache Modes
The research interceptor supports four modes:
| Mode | Behavior | Use Case |
|---|---|---|
| `cache_only` | Deny web when no cache match | Offline work, audits |
| `cache_first` | Check cache, fall back to web | Default research |
| `augment` | Blend cache with live results | When freshness matters |
| `web_only` | Bypass Memory Palace | Incident response |
Set the mode in `hooks/memory-palace-config.yaml`:
research_mode: cache_first
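The four modes reduce to a small dispatch. This sketch uses a plain dict as a stand-in cache and placeholder strings for live results; the function name and signature are illustrative, not the hook's real internals:

```python
def route_research(mode: str, query: str, cache: dict[str, str]) -> tuple[str, str]:
    """Return (source, result) for a query under the given research mode."""
    cached = cache.get(query)
    live = f"live results for {query!r}"  # placeholder for a real web search
    if mode == "web_only":
        return ("web", live)
    if mode == "cache_only":
        if cached is None:
            raise PermissionError("web access denied: no cache match")
        return ("cache", cached)
    if mode == "cache_first":
        return ("cache", cached) if cached is not None else ("web", live)
    if mode == "augment":
        return ("cache+web", f"{cached}\n{live}" if cached else live)
    raise ValueError(f"unknown research_mode: {mode}")
```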
Palace Architecture
Palaces use spatial metaphors:
Palace: "Python Async"
├── Entry Hall
│   └── Overview concepts
├── Library Wing
│   ├── asyncio basics
│   ├── coroutines
│   └── event loops
├── Practice Room
│   ├── code examples
│   └── exercises
└── Reference Archive
    ├── official docs
    └── external sources
Knowledge Intake Flow
New Information
|
v
[Semantic Dedup] --> Near-duplicate? --> Increment counter, skip
|
No
v
[Domain Alignment] --> Matches interests? --> Flag for intake
|
Yes
v
[Palace Placement] --> Store in appropriate room
|
v
[Cross-Reference] --> Link to related concepts
The `SemanticDeduplicator` uses FAISS cosine similarity (threshold: 0.8) to detect near-duplicate content before storage. When FAISS is unavailable, it falls back to Jaccard word-set similarity. Suppressed duplicates increment a counter rather than being stored, keeping the corpus information-dense.
Semantic Deduplication
FAISS-based duplicate detection is included as a mandatory dependency. The `SemanticDeduplicator.should_store()` API uses cosine similarity on L2-normalized vectors to detect near-duplicates before storage.
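The decision can be sketched using only the Jaccard fallback path; the class name and `should_store` signature below are simplified from the description above, and the FAISS embedding path is omitted:

```python
def jaccard(a: str, b: str) -> float:
    """Word-set similarity: |intersection| / |union| of lowercase words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)


class SimpleDeduplicator:
    """Jaccard-only stand-in for the sketched dedup decision."""
    def __init__(self, threshold: float = 0.8) -> None:
        self.threshold = threshold
        self.corpus: list[str] = []
        self.suppressed = 0

    def should_store(self, text: str) -> bool:
        for existing in self.corpus:
            if jaccard(text, existing) >= self.threshold:
                self.suppressed += 1  # count the near-duplicate, don't store it
                return False
        self.corpus.append(text)
        return True
```

The real implementation replaces `jaccard` with cosine similarity over L2-normalized embedding vectors when FAISS is available; the gating logic is the same shape.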
Embedding Support
Optional semantic search via embeddings:
# Build embeddings
cd plugins/memory-palace
uv run python scripts/build_embeddings.py --provider local
# Toggle at runtime
export MEMORY_PALACE_EMBEDDINGS_PROVIDER=local
Telemetry
Track research decisions:
# data/telemetry/memory-palace.csv
timestamp,query,decision,novelty_score,domains,duplicates
2025-01-15,async patterns,cache_hit,0.2,python,entry-123
Curation Workflow
Regular maintenance keeps palaces useful:
- Review intake queue: `data/intake_queue.jsonl`
- Approve/reject items: Based on value and fit
- Update vitality scores: Mark evergreen vs. probationary
- Prune stale content: Archive outdated information
- Document in curation log: `docs/curation-log.md`
Digital Gardens
Unlike palaces (structured), gardens are organic:
/garden status
# Shows:
# - Growth rate
# - Connection density
# - Orphan nodes
# - Suggested links
Knowledge Promotion to Discussions
Evergreen corpus entries can be promoted to a GitHub Discussion in the
“Knowledge” (Q&A) category.
The discussion-promotion module in knowledge-intake checks entry maturity —
only entries at the evergreen lifecycle stage are eligible.
Promotion creates a structured Discussion with title, summary, key findings, and source references. Entries that already have a `discussion_url` field are updated rather than duplicated.
Related Plugins
- conserve: Memory Palace helps reduce redundant web fetches
- imbue: Evidence logging integrates with knowledge intake
spec-kit
Specification-Driven Development (SDD) toolkit for structured feature development.
Overview
Spec-Kit enforces “define before implement”: you write specifications first, generate plans, create tasks, then execute. This reduces wasted effort and validates that features match requirements.
Installation
/plugin install spec-kit@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| `spec-writing` | Specification authoring | Writing requirements from ideas |
| `task-planning` | Task generation | Breaking specs into tasks |
| `speckit-orchestrator` | Workflow coordination | Managing spec-to-code lifecycle |
Commands
| Command | Description |
|---|---|
| `/speckit-specify` | Create a new specification |
| `/speckit-plan` | Generate implementation plan |
| `/speckit-tasks` | Generate ordered tasks |
| `/speckit-implement` | Execute tasks |
| `/speckit-analyze` | Check artifact consistency |
| `/speckit-checklist` | Generate custom checklist |
| `/speckit-clarify` | Ask clarifying questions |
| `/speckit-constitution` | Create project constitution |
| `/speckit-startup` | Bootstrap workflow at session start |
Agents
| Agent | Description |
|---|---|
| `spec-analyzer` | Validates artifact consistency |
| `task-generator` | Creates implementation tasks |
| `implementation-executor` | Executes tasks and writes code |
Usage Examples
Full SDD Workflow
# 1. Create specification
/speckit-specify Add user authentication with OAuth2
# 2. Clarify requirements
/speckit-clarify
# 3. Generate plan
/speckit-plan
# 4. Create tasks
/speckit-tasks
# 5. Execute implementation
/speckit-implement
# 6. Verify consistency
/speckit-analyze
Quick Specification
/speckit-specify Add dark mode toggle
# Claude will:
# 1. Ask clarifying questions
# 2. Generate spec.md
# 3. Identify dependencies
# 4. Suggest next steps
Session Startup
/speckit-startup
# Loads:
# - Existing spec.md
# - Current plan.md
# - Outstanding tasks
# - Progress status
# - Constitution (principles/constraints)
Artifact Structure
Spec-Kit creates three main artifacts:
spec.md
# Feature: User Authentication
## Overview
OAuth2-based authentication for web application.
## Requirements
- [ ] Google OAuth integration
- [ ] Session management
- [ ] Token refresh
## Acceptance Criteria
1. Users can sign in with Google
2. Sessions persist for 7 days
3. Tokens refresh automatically
## Non-Functional Requirements
- Login latency < 2s
- 99.9% availability
plan.md
# Implementation Plan
## Phase 1: OAuth Setup
- Configure Google OAuth credentials
- Implement OAuth callback handler
## Phase 2: Session Management
- Design session schema
- Implement token storage
## Phase 3: Integration
- Connect to frontend
- Add logout functionality
tasks.md
# Tasks
## Phase 1 Tasks
- [ ] Create OAuth config module
- [ ] Implement /auth/login endpoint
- [ ] Implement /auth/callback endpoint
## Phase 2 Tasks
- [ ] Design session table schema
- [ ] Create session service
- [ ] Implement token refresh logic
Constitution
Project constitution defines principles:
/speckit-constitution
# Creates:
# - Coding standards
# - Architecture principles
# - Testing requirements
# - Documentation standards
Consistency Analysis
/speckit-analyze
# Checks:
# - spec.md requirements map to plan.md
# - plan.md phases map to tasks.md
# - No orphan tasks
# - No missing implementations
Checklist Generation
/speckit-checklist
# Generates custom checklist:
# - [ ] All acceptance criteria met
# - [ ] Tests written
# - [ ] Documentation updated
# - [ ] Security reviewed
Dependencies
Spec-Kit uses imbue for analysis:
spec-kit
|
v
imbue (diff-analysis, proof-of-work)
Superpowers Integration
| Command | Enhancement |
|---|---|
| `/speckit-clarify` | Uses brainstorming for questions |
| `/speckit-plan` | Uses writing-plans for structure |
| `/speckit-tasks` | Uses executing-plans, systematic-debugging |
| `/speckit-implement` | Uses executing-plans, systematic-debugging |
| `/speckit-analyze` | Uses systematic-debugging, verification-before-completion |
| `/speckit-checklist` | Uses verification-before-completion |
Best Practices
- Specify First: Never skip the specification phase
- Clarify Ambiguity: Use `/speckit-clarify` liberally
- Small Tasks: Break into 1-2 hour chunks
- Verify Often: Run `/speckit-analyze` after changes
- Update Artifacts: Keep spec/plan/tasks in sync
Workflow Tips
Starting New Feature
/speckit-specify [feature description]
/speckit-clarify
/speckit-plan
Resuming Work
/speckit-startup
# Review current state
/speckit-implement
Before PR
/speckit-analyze
/speckit-checklist
Related Plugins
- imbue: Provides analysis patterns
- sanctum: Integrates for PR preparation after implementation
minister
GitHub initiative tracking and release management.
Overview
Minister helps you track project initiatives, monitor release readiness, and generate stakeholder reports. It bridges the gap between development work and project management.
Installation
/plugin install minister@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| `github-initiative-pulse` | Initiative progress tracking | Weekly status reports |
| `release-health-gates` | Release readiness checks | Before releasing |
Scripts
| Script | Description |
|---|---|
| `tracker.py` | CLI for initiative database and reporting |
Usage Examples
Initiative Tracking
Skill(minister:github-initiative-pulse)
# Generates:
# - Issue completion rates
# - Milestone progress
# - Velocity trends
# - Risk flags
Release Readiness
Skill(minister:release-health-gates)
# Checks:
# - CI status
# - Documentation completeness
# - Breaking change inventory
# - Risk assessment
CLI Usage
# List initiatives
python tracker.py list
# Show initiative details
python tracker.py show auth-v2
# Generate weekly report
python tracker.py report --week
# Update status
python tracker.py update auth-v2 --status in-progress
Initiative Structure
Initiatives track work across issues and PRs:
initiative:
  id: auth-v2
  title: "Authentication v2"
  status: in-progress
  milestones:
    - name: "OAuth Setup"
      due: 2025-01-30
      issues: ["#42", "#43", "#44"]
    - name: "Session Management"
      due: 2025-02-15
      issues: ["#45", "#46"]
  metrics:
    velocity: 3.5 issues/week
    completion: 65%
    risk: low
Health Gates
Release health gates verify readiness:
| Gate | Checks |
|---|---|
| CI | All checks passing, no flaky tests |
| Docs | README updated, CHANGELOG complete |
| Breaking | Breaking changes documented |
| Security | No critical vulnerabilities |
| Coverage | Test coverage above threshold |
Gate Output
## Release Health: v2.0.0
### CI Status: PASS
- All 156 tests passing
- Build time: 3m 42s
- No flaky tests detected
### Documentation: PASS
- README updated
- CHANGELOG has v2.0.0 section
- API docs generated
### Breaking Changes: WARN
- 2 breaking changes identified
- Migration guide needed for UserService API
### Security: PASS
- No critical/high vulnerabilities
- Dependencies up to date
### Coverage: PASS
- 87% coverage (threshold: 80%)
## Recommendation: CONDITIONAL RELEASE
Address breaking change documentation before release.
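A plausible rule for collapsing per-gate results into the recommendation line of such a report (a sketch, not the skill's actual logic):

```python
def overall_recommendation(gates: dict[str, str]) -> str:
    """Combine per-gate results (PASS/WARN/FAIL) into a release call."""
    results = set(gates.values())
    if "FAIL" in results:
        return "DO NOT RELEASE"       # any hard failure blocks the release
    if "WARN" in results:
        return "CONDITIONAL RELEASE"  # warnings need explicit sign-off
    return "RELEASE"
```

Under this rule, the report above (four PASS gates plus one WARN for breaking changes) yields CONDITIONAL RELEASE, matching the recommendation shown.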
Reporting
Weekly Report
python tracker.py report --week
# Outputs:
# - Initiatives summary
# - This week's completions
# - Next week's focus
# - Blockers and risks
Stakeholder Summary
python tracker.py report --stakeholder
# Generates executive summary:
# - High-level progress
# - Key achievements
# - Timeline updates
# - Resource needs
Integration with GitHub
Minister reads from GitHub:
# Sync initiative from GitHub milestone
python tracker.py sync --milestone "v2.0"
# Pull issue status
python tracker.py refresh auth-v2
Superpowers Integration
| Skill | Enhancement |
|---|---|
issue-management | Uses systematic-debugging for investigation |
Configuration
tracker.yaml
github:
  repo: athola/my-project
  token_env: GITHUB_TOKEN

initiatives_dir: .minister/initiatives
reports_dir: .minister/reports

health_gates:
  coverage_threshold: 80
  max_critical_vulns: 0
  require_changelog: true
Workflow Examples
Sprint Planning
# Check initiative status
python tracker.py list
# Update priorities
python tracker.py update auth-v2 --priority high
# Generate planning report
python tracker.py report --planning
Release Preparation
# Run health gates
Skill(minister:release-health-gates)
# Address any failures
# Then re-run until all pass
# Tag release
git tag v2.0.0
Weekly Standup
# Generate pulse report
Skill(minister:github-initiative-pulse)
# Share with team
# Update tracker based on discussion
Playbooks
Minister includes operational playbooks in `docs/playbooks/`:
| Playbook | Purpose |
|---|---|
| `github-program-rituals.md` | Weekly cadences: Risk Radar, Velocity Digest, Executive Packet |
| `release-train-health.md` | Release gate checklists for CI, docs, and support signals |
These playbooks use GitHub Discussions via GraphQL mutations (not the non-existent `gh discussion` CLI subcommand). Discussion creation and commenting follow the templates in `leyline:git-platform`'s command-mapping module.
Related Plugins
- sanctum: PR preparation integrates with release workflow
- imbue: Feature review complements initiative tracking
attune
Full-cycle project development from ideation to implementation.
Overview
Attune integrates the brainstorm-plan-execute workflow from superpowers with spec-driven development from spec-kit to provide a complete project lifecycle.
Workflow
graph LR
A[Brainstorm] --> B[War Room]
B --> C[Specify]
C --> D[Plan]
D --> E[Initialize]
E --> F[Execute]
style A fill:#e1f5fe
style B fill:#fff9c4
style C fill:#f3e5f5
style D fill:#fff3e0
style E fill:#e8f5e8
style F fill:#fce4ec
Commands
| Command | Phase | Description |
|---|---|---|
| `/attune:brainstorm` | 1. Ideation | Socratic questioning to explore problem space |
| `/attune:war-room` | 2. Deliberation | Multi-LLM expert deliberation with reversibility-based routing |
| `/attune:specify` | 3. Specification | Create detailed specs from war-room decision |
| `/attune:blueprint` | 4. Planning | Design architecture and break down tasks |
| `/attune:init` | 5. Initialization | Generate or update project structure with tooling |
| `/attune:execute` | 6. Implementation | Execute tasks with TDD discipline |
| `/attune:upgrade-project` | Maintenance | Add configs to existing projects |
| `/attune:mission` | Full Cycle | Run entire lifecycle as a single mission with state detection |
| `/attune:validate` | Quality | Validate project structure |
Supported Languages
- Python: uv, pytest, ruff, mypy, pre-commit
- Rust: cargo, clippy, rustfmt, CI workflows
- TypeScript/React: npm/pnpm/yarn, vite, jest, eslint, prettier
What Gets Configured
- ✅ Git initialization with a detailed .gitignore
- ✅ GitHub Actions workflows (test, lint, typecheck, publish)
- ✅ Pre-commit hooks (formatting, linting, security)
- ✅ Makefile with standard development targets
- ✅ Dependency management (uv/cargo/package managers)
- ✅ Project structure (src/, tests/, README.md)
Quick Start
New Python Project
# Interactive mode
/attune:init
# Non-interactive
/attune:init --lang python --name my-project --author "Your Name"
Full Cycle Workflow
# 1. Brainstorm the idea
/attune:brainstorm
# 2. War room deliberation (auto-routes by complexity)
/attune:war-room --from-brainstorm
# 3. Create specification
/attune:specify
# 4. Plan architecture
/attune:blueprint
# 5. Initialize project
/attune:init
# 6. Execute implementation
/attune:execute
Skills
| Skill | Purpose |
|---|---|
| project-brainstorming | Socratic ideation workflow |
| war-room | Multi-LLM expert council with Type 1/2 decision routing |
| war-room-checkpoint | Inline RS assessment for embedded escalation during workflow |
| project-specification | Spec creation from war-room decision |
| project-planning | Architecture and task breakdown |
| project-init | Interactive project initialization |
| project-execution | Systematic implementation |
| makefile-generation | Generate language-specific Makefiles |
| mission-orchestrator | Unified brainstorm-specify-plan-execute lifecycle orchestrator |
| workflow-setup | Configure CI/CD pipelines |
| precommit-setup | Set up code quality hooks |
Agents
| Agent | Role |
|---|---|
| project-architect | Guides full-cycle workflow (brainstorm → plan) |
| project-implementer | Executes implementation with TDD |
Integration
Attune combines capabilities from:
- superpowers: Brainstorming, planning, execution workflows
- spec-kit: Specification-driven development
- abstract: Plugin and skill authoring for plugin projects
War Room Integration
The war room is a mandatory phase after brainstorming. It automatically routes to the appropriate deliberation intensity based on Reversibility Score (RS):
| Mode | RS Range | Duration | Description |
|---|---|---|---|
| Express | ≤ 0.40 | <2 min | Quick decision by Chief Strategist |
| Lightweight | 0.41-0.60 | 5-10 min | 3-expert panel |
| Full Council | 0.61-0.80 | 15-30 min | 7-expert deliberation |
| Delphi | > 0.80 | 30-60 min | Iterative consensus for critical decisions |
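The routing above can be sketched as a single threshold function. This is a minimal illustration: the mode names and RS cutoffs mirror the table, while the function itself is hypothetical.

```python
# Hypothetical sketch of reversibility-based routing. Mode names and
# thresholds come from the table above; everything else is illustrative.
def route_deliberation(rs: float) -> str:
    """Map a Reversibility Score (0.0-1.0) to a war-room mode."""
    if rs <= 0.40:
        return "express"       # quick decision by the Chief Strategist
    if rs <= 0.60:
        return "lightweight"   # 3-expert panel
    if rs <= 0.80:
        return "full-council"  # 7-expert deliberation
    return "delphi"            # iterative consensus for critical decisions

print(route_deliberation(0.35))  # express
print(route_deliberation(0.72))  # full-council
```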
The war-room-checkpoint skill can also trigger additional deliberation during
planning or execution when high-stakes decisions arise.
Discussion Publishing
After the Supreme Commander synthesis (Phase 7), the war room offers to publish the decision to a GitHub Discussion in the “Decisions” category. This requires user approval and checks for prior decisions on the same topic to avoid duplicates. The published Discussion includes the full decision record with alternatives considered, scoring breakdown, and implementation guidance. Local strategeion files remain the primary record; the Discussion is an additional cross-session discovery channel.
Examples
Initialize Python CLI Project
/attune:init --lang python --type cli
Creates:
- `pyproject.toml` with uv configuration
- `Makefile` with test/lint/format targets
- GitHub Actions workflows
- Pre-commit hooks for ruff and mypy
- Basic CLI structure
Upgrade Existing Project
# Add missing configs
/attune:upgrade-project
# Validate structure
/attune:validate
Configuration
Custom Templates
Place custom templates in:
- `~/.claude/attune/templates/` (user-level)
- `.attune/templates/` (project-level)
- `$ATTUNE_TEMPLATES_PATH` (environment variable)
Reference Projects
Templates sync from reference projects:
- `simple-resume` (Python)
- `skrills` (multi-language)
- `importobot` (automation)
scribe
Documentation review, cleanup, and generation with AI slop detection.
Overview
Scribe helps maintain high-quality documentation by detecting AI-generated content patterns (“slop”), learning writing styles from exemplars, and generating or remediating documentation. It integrates with sanctum’s documentation workflows.
Installation
/plugin install scribe@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
| slop-detector | Detect AI-generated content markers | Scanning docs for AI tells |
| style-learner | Extract writing style from exemplar text | Creating style profiles |
| doc-generator | Generate/remediate documentation | Writing or fixing docs |
| doc-importer | Import external documents (PDF, DOCX, PPTX) to markdown | Converting non-markdown files for editing |
| tech-tutorial | Plan, draft, and refine technical tutorials | Writing step-by-step developer guides |
| session-to-post | Convert sessions into blog posts or case studies | Sharing session outcomes |
| session-replay | Convert session JSONL into GIF/MP4/WebM replays | Creating animated session recordings |
Commands
| Command | Description |
|---|---|
| /style-learn | Create style profile from examples |
| /doc-polish | Clean up AI-generated content |
| /doc-generate | Generate new documentation |
| /session-to-post | Convert current session into a blog post or case study |
| /session-replay | Generate GIF/MP4/WebM replay from session JSONL |
Agents
| Agent | Description |
|---|---|
| doc-editor | Interactive documentation editing |
| slop-hunter | Full-document slop detection |
| doc-verifier | QA validation using proof-of-work methodology |
Usage Examples
Detect AI Slop
# Scan using the slop-detector skill
Skill(scribe:slop-detector)
# Or use the slop-hunter agent for thorough detection
Agent(scribe:slop-hunter)
Clean Up Content
# Interactive polish
/doc-polish docs/guide.md
# Polish all markdown files
/doc-polish **/*.md
Learn a Style
# Create style profile from examples
/style-learn good-examples/*.md --name house-style
# Generate with learned style
/doc-generate readme --style house-style
Replay a Session
# Generate a GIF replay from a Claude Code session
/session-replay ~/.claude/projects/myproject/sessions/abc123.jsonl
# Codex sessions are auto-detected
/session-replay codex-session.jsonl --format mp4
Verify Documentation
# Verify README claims and commands (agent-only)
Agent(scribe:doc-verifier)
# For targeted verification, use the doc-generator skill
Skill(scribe:doc-generator)
AI Slop Detection
Scribe detects patterns that reveal AI-generated content:
Tier 1 Words (Highest Confidence)
Words that appear far more often in AI text than human text.
See Skill(scribe:slop-detector) for the full word list
and scoring weights.
Phrase Patterns
Formulaic constructions: vapid openers, empty emphasis, and attribution cliches. The detector scores these at 2-4 points each.
Structural Markers
Overuse of em dashes, excessive bullet points, uniform sentence length, perfect grammar without contractions.
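A minimal sketch of how a detector might combine these signals. The word lists, patterns, and weights below are placeholders; the real lists and scoring weights live in Skill(scribe:slop-detector).

```python
import re

# Illustrative scorer only: tiny placeholder lists stand in for the real
# tier-1 vocabulary and phrase patterns maintained by slop-detector.
TIER1_WORDS = {"leverage", "robust", "seamless", "delve"}      # hypothetical
VAPID_OPENERS = [r"^in today's .* landscape", r"^it's worth noting"]

def slop_score(text: str) -> int:
    lower = text.lower()
    score = sum(3 for w in TIER1_WORDS if w in lower)          # tier-1 hits
    score += sum(4 for p in VAPID_OPENERS if re.search(p, lower))
    score += 2 * max(0, lower.count("—") - 1)                  # em-dash overuse
    return score

print(slop_score("In today's fast-paced landscape, leverage robust tooling."))
```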
Writing Principles
Scribe enforces these principles:
- Ground every claim: Use specifics, not adjectives
- Trim crutches: No formulaic openers or closers
- Show perspective: Include reasoning and trade-offs
- Vary structure: Mix sentence lengths, balance bullets with prose
- Use active voice: Direct statements over passive constructions
Vocabulary Substitutions
Scribe suggests plain replacements for flagged words.
See Skill(scribe:slop-detector) for the full
substitution table with context-aware alternatives.
Examples
These examples show slop remediation in practice. Each pair includes a score reduction from the detector.
Example 1: Vocabulary Slop (8/10 to 1/10)
A sentence with five Tier 1 words was reduced to plain language. The fix replaced jargon verbs with “uses” and “check,” and removed unnecessary adjectives.
After:
“This solution uses modern tools to check documentation quality.”
Example 2: Structural Patterns (7/10 to 1/10)
Four em dashes in a single sentence were collapsed into one flowing statement using “and” and “to.”
After:
“The system processes requests and handles validation to ensure data integrity before returning results.”
Example 3: Phrase Patterns (9/10 to 1/10)
A vapid opener, a filler hedge, and an empty emphasis phrase were all removed. The rewrite states the tool’s purpose directly.
After:
“This tool improves documentation quality by detecting and flagging AI-generated patterns.”
Integration
Scribe integrates with sanctum documentation workflows:
| Sanctum Command | Scribe Integration |
|---|---|
| /pr-review | Runs slop-detector on changed .md files |
| /update-docs | Runs slop-detector on edited docs |
| /update-docs --readme | Runs slop-detector on README |
| /prepare-pr | Verifies PR descriptions with slop-detector |
Dependencies
Scribe uses skills from other plugins:
- imbue:proof-of-work: Evidence-based verification (used by doc-verifier)
- conserve:bloat-detector: Token optimization
scry
Media generation for terminal recordings, browser recordings, GIF processing, and media composition.
Overview
Scry creates documentation assets through terminal recordings (VHS), browser automation recordings (Playwright), GIF processing, and multi-source media composition. Use it to build tutorials, demos, and README assets.
Installation
/plugin install scry@claude-night-market
Skills
| Skill | Description | When to Use |
|---|---|---|
vhs-recording | Terminal recordings using VHS tape scripts | CLI demos, tool tutorials |
browser-recording | Browser recordings using Playwright | Web UI walkthroughs |
gif-generation | GIF processing and optimization | README assets, docs |
media-composition | Combine multiple media sources | Full tutorials |
Commands
| Command | Description |
|---|---|
/record-terminal | Create terminal recording with VHS |
/record-browser | Record browser session with Playwright |
Usage Examples
Terminal Recording
/record-terminal
# Or use the skill directly
Skill(scry:vhs-recording)
Creates a VHS tape script and records terminal output to GIF or video.
Browser Recording
/record-browser
# Or use the skill directly
Skill(scry:browser-recording)
Records browser sessions with Playwright for web UI documentation.
GIF Generation
Skill(scry:gif-generation)
# Optimizes recordings for documentation:
# - Resize for README display
# - Compress file size
# - Adjust frame rate
Media Composition
Skill(scry:media-composition)
# Combines assets:
# - Terminal + browser recordings
# - Multiple clips into tutorials
# - Add transitions and captions
VHS Tape Script Example
VHS uses tape scripts to define recordings:
# demo.tape
Output demo.gif
Set FontSize 16
Set Width 1200
Set Height 600
Type "echo 'Hello, World!'"
Sleep 500ms
Enter
Sleep 2s
Run with:
vhs demo.tape
Dependencies
VHS (Terminal Recording)
macOS:
brew install charmbracelet/tap/vhs
brew install ttyd ffmpeg
Linux (Debian/Ubuntu):
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg
echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list
sudo apt update && sudo apt install vhs
sudo apt install ffmpeg
Playwright (Browser Recording)
npm install -g playwright
npx playwright install
FFmpeg (Media Processing)
Required for GIF generation and media composition.
# macOS
brew install ffmpeg
# Linux
sudo apt install ffmpeg
Workflow Patterns
Tutorial Creation
1. Record terminal demo with `vhs-recording`
2. Record web UI walkthrough with `browser-recording`
3. Combine with `media-composition`
4. Optimize output with `gif-generation`
Quick Demo
/record-terminal
# Creates demo.gif ready for README
Documentation Assets
# Generate multiple GIFs for docs
Skill(scry:vhs-recording)
Skill(scry:gif-generation)
# Move outputs to docs/images/
Integration with sanctum
Scry integrates with sanctum for PR and documentation workflows:
# Generate demo for PR description
/record-terminal
# Include in PR body
/sanctum:pr
Related Plugins
- sanctum: PR preparation uses scry for demo assets
- memory-palace: Store and organize media assets
gauntlet
Codebase learning through knowledge extraction, challenges, and spaced repetition.
Overview
Gauntlet prevents knowledge atrophy for experienced developers and accelerates onboarding for new ones. It extracts knowledge from the codebase and tests understanding through adaptive challenges.
Installation
/plugin install gauntlet@claude-night-market
Skills
- `extract` - Analyze codebase and build a knowledge base
- `challenge` - Adaptive difficulty challenge session
- `onboard` - Guided five-stage onboarding path
- `curate` - Add or edit knowledge annotations
Commands
- `/gauntlet` - Run an ad-hoc challenge session
- `/gauntlet-extract` - Rebuild the knowledge base
- `/gauntlet-progress` - Show accuracy stats and streak
- `/gauntlet-onboard` - Start or resume onboarding
- `/gauntlet-curate` - Add or edit a knowledge annotation
ML Scoring
Gauntlet uses a pluggable Scorer protocol to evaluate answers.
Two implementations ship by default:
- YamlScorer (default): heuristic scoring based on YAML rule files. Always available, no external dependencies.
- OnnxSidecarScorer: upgrades scoring quality by calling the oracle sidecar daemon for ONNX model inference. Activates automatically when oracle is running.
The scorer selection is automatic. When oracle’s port file exists and the health check passes, gauntlet uses the sidecar scorer with configurable blend weights. When the sidecar is unavailable, it falls back to YamlScorer with no user intervention.
See oracle for daemon setup and ADR-0009 for the discovery pattern.
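The fallback might be sketched as follows. The scorer class names match the description above, but the port-file location and the health-check signature are assumptions, not oracle's actual API.

```python
from pathlib import Path

# Sketch of automatic scorer selection with graceful fallback.
class YamlScorer:
    name = "yaml"  # heuristic scoring; always available

class OnnxSidecarScorer:
    name = "onnx-sidecar"  # delegates to the oracle daemon

def select_scorer(port_file: Path, healthy) -> object:
    """Prefer the sidecar when oracle is reachable, else fall back."""
    if port_file.exists() and healthy(port_file):
        return OnnxSidecarScorer()
    return YamlScorer()  # no external dependencies, no user intervention

# With no running daemon, selection silently falls back:
scorer = select_scorer(Path("/tmp/oracle.port"), healthy=lambda p: False)
```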
Code Knowledge Graph
The graph module builds a SQLite-backed knowledge graph using
Tree-sitter parsing. GraphStore supports context manager
usage for safe resource cleanup. Community detection groups
related nodes, and blast radius analysis scores the risk of
code changes using security keywords from constants.py.
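A hypothetical sketch of the context-manager usage mentioned above, assuming a minimal GraphStore wrapper around sqlite3. The real GraphStore constructor, schema, and method names may differ; only the context-manager pattern itself comes from the description.

```python
import sqlite3

# Hypothetical shape of a context-managed SQLite-backed store.
class GraphStore:
    def __init__(self, path: str):
        self.conn = sqlite3.connect(path)

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.conn.close()  # cleanup runs even if the body raises
        return False       # never swallow exceptions

with GraphStore(":memory:") as store:
    store.conn.execute("CREATE TABLE nodes (id TEXT PRIMARY KEY)")
```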
Problem Bank
Curated algorithm problems in data/problems/*.yaml cover
arrays, graphs, trees, dynamic programming, and 15 other
categories. Each entry includes difficulty level and pattern
metadata. The challenge engine draws from this bank for
targeted practice sessions.
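An entry in the problem bank might look like the following. The field names here are assumptions for illustration, not the exact schema.

```yaml
# data/problems/arrays.yaml (illustrative entry; exact schema may differ)
- id: two-sum
  category: arrays
  difficulty: easy
  pattern: hash-map-lookup
  prompt: >
    Given an array of integers and a target, return the indices of the
    two numbers that add up to the target.
```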
Agents
- `extractor` - Autonomous knowledge extraction agent
tome
Multi-source research plugin for code archaeology, community discourse, academic literature, and TRIZ cross-domain analysis.
Overview
Tome orchestrates research across four channels: GitHub code search, community discourse (HN, Lobsters, Reddit), academic literature (arXiv, Semantic Scholar), and TRIZ analogical reasoning. It classifies domains and adapts search depth automatically.
Installation
/plugin install tome@claude-night-market
Commands
| Command | Description |
|---|---|
/tome:research | Run multi-source research session |
/tome:dig | Refine results interactively |
/tome:cite | Generate formatted bibliography |
/tome:export | Export findings for knowledge-intake |
Skills
- `research` – orchestrate a full research session
- `code-search` – search GitHub implementations
- `discourse` – scan community discussions
- `papers` – search academic literature
- `triz` – cross-domain analogical reasoning
- `synthesize` – merge and rank findings
- `dig` – interactive refinement
Agents
- `code-searcher` – GitHub code search
- `discourse-scanner` – community discussion scanning
- `literature-reviewer` – academic paper review
- `triz-analyst` – cross-domain analysis
Tutorials
Workflow-driven tutorials for real developer scenarios. Each tutorial walks through an actual task using real commands.
Available Tutorials
| Tutorial | Description | Level |
|---|---|---|
| Your First Session | Install, explore skills, run your first command | Beginner |
| Feature Development Lifecycle | Spec → implement → test → PR end-to-end | Intermediate |
| Code Review and PR Workflow | Review, commit, PR, and address feedback | Beginner |
| Debugging and Issue Resolution | Triage a GitHub issue, debug, fix, verify | Intermediate |
| Memory Palace: Knowledge Management | Build and maintain a persistent knowledge base | Intermediate |
Suggested Path
New Users
- Your First Session - understand skills, commands, and plugins
- Code Review and PR Workflow - the most common daily workflow
- Feature Development Lifecycle - full feature development cycle
Experienced Users
- Debugging and Issue Resolution - issue triage and resolution
- Memory Palace: Knowledge Management - persistent knowledge base
Prerequisites
- Claude Code installed
- Night Market plugins installed (see Installation)
- A git repository to work in
Your First Session
You’ve just installed Claude Night Market. This tutorial walks through your first real session: discovering what’s available, running your first skill, and seeing how plugins work together.
Scenario
You’ve followed the installation guide and have Night Market plugins installed. You open Claude Code in a project and want to explore what you can do.
Step 1: See What’s Available
Start by asking Claude Code what skills are available:
What skills do I have installed?
Claude reads the installed plugins and lists available skills. You’ll see entries like:
- sanctum:commit-msg - Draft a conventional commit message
- sanctum:prepare-pr - Complete PR preparation
- pensive:code-reviewer - Code review agent
- imbue:catchup - Quickly understand recent changes
- abstract:validate-plugin - Validate plugin structure
Each skill is identified by plugin:skill-name.
The plugin tells you which domain it belongs to,
and the skill name tells you what it does.
Step 2: Explore a Plugin
Pick a plugin to understand what it offers. For example, sanctum handles git workflows:
What commands does the sanctum plugin provide?
You’ll see commands like:
| Command | What it does |
|---|---|
| /commit-msg | Generate a conventional commit message from staged changes |
| /prepare-pr | Run quality gates and prepare a PR description |
| /do-issue | Implement a GitHub issue end-to-end |
| /fix-pr | Address PR review feedback |
| /git-catchup | Catch up on repository changes |
Commands (prefixed with /) are the main way you interact with skills.
They’re shorthand: /commit-msg invokes the sanctum:commit-msg skill behind
the scenes.
Step 3: Run Your First Skill
Let’s use /catchup to understand the current state of the repository:
/catchup
This invokes the imbue:catchup skill, which:
- Reads recent git history
- Analyzes what changed and why
- Summarizes the current state of the project
The output gives you a summary of recent commits, active branches, what areas of the code changed, and what work is in progress.
Step 4: Try a Review
If you have uncommitted changes or a branch with work on it, try a code review:
/code-review
This invokes the pensive plugin’s review system.
It analyzes your changes and reports findings by category: bugs, style issues,
architecture concerns, test coverage gaps.
For a more targeted review, you can use specific variants:
/bug-review # Focus on potential bugs
/architecture-review # Focus on design patterns
/test-review # Focus on test quality
Step 5: Understand How Skills Compose
Skills often work together. For example, preparing a PR typically involves:
1. `/commit-msg` - generate a commit message for staged changes
2. `/prepare-pr` - run quality gates and create the PR description
The PR preparation skill runs workspace analysis, checks for scope drift, and produces a PR description, all by composing underlying skills.
This composition happens on its own. You don’t need to orchestrate it. Just invoke the top-level command and the skill handles the rest.
What You’ve Learned
- Skills are the building blocks. Each does one thing well.
- Commands (`/command`) are the main interface for invoking skills.
- Plugins group related skills by domain (git, review, analysis, etc.).
- Composition lets skills chain together into workflows without manual orchestration.
Next Steps
| Tutorial | When to read it |
|---|---|
| Feature Development Lifecycle | You want to build a feature from spec to PR |
| Code Review and PR Workflow | You’re ready to review code and submit PRs |
| Debugging and Issue Resolution | You need to triage and fix a bug |
| Memory Palace: Knowledge Management | You want to build a persistent knowledge base |
Difficulty: Beginner
Prerequisites: Claude Code installed, Night Market plugins installed
Duration: 5 minutes
Feature Development Lifecycle
Walk through building a feature from specification to merged PR. This tutorial covers the full development cycle using real commands across multiple plugins.
Scenario
You’ve been asked to add a new capability to your project. You need to specify what you’re building, plan the implementation, write the code, and get it reviewed and merged.
Step 1: Start with a Specification
Don’t jump straight to code. Start by defining what you’re building:
/speckit-specify Add rate limiting to the API endpoints
This invokes the spec-kit plugin’s specification skill. It will:
- Ask clarifying questions about requirements (limits, scope, behavior)
- Create a `spec.md` with user stories, acceptance criteria, and constraints
- Identify edge cases you might not have considered
The spec becomes the source of truth for the feature.
Refine the Spec
If the spec needs clarification:
/speckit-clarify
This asks targeted questions to resolve ambiguities. “Should rate limits be per-user or per-IP?” “What HTTP status code for rate-limited requests?”
Step 2: Plan the Implementation
With a clear spec, generate an implementation plan:
/speckit-plan
This produces a phased plan showing:
- Which files to create or modify
- Dependencies between changes
- Test strategy for each phase
- Estimated scope per phase
Generate Tasks
Break the plan into ordered tasks:
/speckit-tasks
This creates a tasks.md with dependency-ordered implementation steps.
Each task is specific enough to implement independently.
Step 3: Implement
Execute the tasks:
/speckit-implement
This processes tasks from tasks.md in dependency order. For each task, it:
- Reads the task requirements
- Writes a failing test (TDD approach)
- Implements the minimum code to pass
- Moves to the next task
You can also implement tasks selectively:
/speckit-implement --phase 1
Check Consistency
After implementing, verify the spec, plan, and code are aligned:
/speckit-analyze
This cross-checks all artifacts: spec requirements against tests, plan phases against implementation, task completion against acceptance criteria.
Step 4: Review Your Work
Before committing, review what you’ve built:
/code-review
This runs pensive’s review system against your changes. For a feature like this, you might also run:
/architecture-review
This checks whether your implementation fits the existing architecture. Are you adding rate limiting in the right layer? Does it follow existing patterns?
Step 5: Commit and Create a PR
Stage your changes and generate a commit message:
/commit-msg
This analyzes staged changes and drafts a conventional commit message. It classifies the change type (feat, fix, refactor) and summarizes the intent.
Then prepare the pull request:
/prepare-pr
This runs quality gates (tests, lint, scope check) and generates a PR description with:
- Summary of changes
- Test plan
- Breaking changes (if any)
What You’ve Learned
- spec-kit handles the specification → plan → tasks → implementation pipeline
- pensive provides code review before you commit
- sanctum handles git operations: commits, PRs, quality gates
- Plugins collaborate through the workflow. You don’t orchestrate them manually.
Command Reference
| Phase | Command | Plugin |
|---|---|---|
| Specify | /speckit-specify | spec-kit |
| Clarify | /speckit-clarify | spec-kit |
| Plan | /speckit-plan | spec-kit |
| Tasks | /speckit-tasks | spec-kit |
| Implement | /speckit-implement | spec-kit |
| Analyze | /speckit-analyze | spec-kit |
| Review | /code-review | pensive |
| Commit | /commit-msg | sanctum |
| PR | /prepare-pr | sanctum |
Difficulty: Intermediate
Prerequisites: Your First Session
Duration: 15 minutes (following along with a real feature)
Code Review and PR Workflow
The most common daily workflow: review your changes, commit them cleanly, create a PR, and address reviewer feedback.
Scenario
You’ve finished working on a feature branch. You have uncommitted changes and need to get them reviewed, committed, and merged.
Step 1: Understand What Changed
Start by catching up on your own work:
/catchup
This summarizes recent changes: which files were modified, what the commit history looks like, and what’s currently unstaged. Useful even for your own branch, especially after stepping away.
Step 2: Self-Review Before Committing
Run a code review on your changes before anyone else sees them:
/code-review
This analyzes your uncommitted and staged changes. The review covers:
- Bugs: Logic errors, off-by-one mistakes, null handling
- Style: Naming, formatting, consistency with existing patterns
- Architecture: Does the change fit the codebase design?
- Tests: Are changes covered by tests?
Fix any issues found before proceeding.
Targeted Reviews
If your change is in a specific domain, use a focused review:
/bug-review # Focus on defect detection
/test-review # Evaluate test coverage and quality
/architecture-review # Check design patterns and structure
Step 3: Commit with a Clean Message
Stage your changes and generate a commit message:
/commit-msg
This analyzes staged changes and produces a conventional commit message. It:
- Classifies the change type (feat, fix, refactor, docs, test)
- Identifies the appropriate scope
- Writes a concise description of the intent (why, not what)
Example output:
feat(api): add rate limiting to public endpoints
Implements per-user rate limiting with configurable thresholds.
Requests exceeding the limit receive 429 responses with retry-after headers.
You review the message and approve or edit it before the commit is created.
Step 4: Prepare the Pull Request
With your changes committed, prepare a PR:
/prepare-pr
This runs a multi-step workflow:
- Workspace analysis - reviews all commits on the branch
- Quality gates - runs tests and lint checks
- Scope check - flags if the branch has drifted beyond its original intent
- PR description - generates a description with summary, test plan, and checklist
The PR is created with a description that reviewers can actually use.
Step 5: Address Review Feedback
After reviewers comment on your PR, use:
/fix-pr
This reads the PR review comments and works through them:
- Fetches all unresolved review threads
- Groups feedback by type (required changes, suggestions, questions)
- Addresses each item: makes code changes, responds to questions
- Resolves threads as changes are made
Resolve Threads in Bulk
After addressing feedback, resolve all completed threads:
/resolve-threads
This batch-resolves review threads that have been addressed by code changes.
Step 6: Review a Teammate’s PR
You can also review PRs from others:
/pr-review 123
This reviews PR #123:
- Reads the PR description and all changed files
- Checks changes against the stated scope
- Identifies potential issues organized by severity
- Produces a review with specific feedback
What You’ve Learned
- Self-review before committing catches issues early
- Conventional commits via `/commit-msg` maintain a clean git history
- PR preparation via `/prepare-pr` automates quality gates and descriptions
- Feedback handling via `/fix-pr` works through review comments one by one
- PR review via `/pr-review` gives you a thorough analysis of others' work
Command Reference
| Step | Command | Plugin |
|---|---|---|
| Catch up | /catchup | imbue |
| Self-review | /code-review | pensive |
| Commit | /commit-msg | sanctum |
| Create PR | /prepare-pr | sanctum |
| Fix feedback | /fix-pr | sanctum |
| Resolve threads | /resolve-threads | sanctum |
| Review others | /pr-review | sanctum |
Difficulty: Beginner
Prerequisites: Your First Session
Duration: 10 minutes
Debugging and Issue Resolution
Walk through the process of triaging a GitHub issue, debugging the problem, implementing a fix, and verifying the solution.
Scenario
A user has filed GitHub issue #42: “API returns 500 when request body is empty.” You need to investigate, fix it, and close the issue.
Step 1: Understand the Context
Before diving in, catch up on recent changes that might be related:
/git-catchup
This shows recent commits, active branches, and areas of change. If someone recently modified the API layer, that context is immediately relevant.
Step 2: Implement the Issue End-to-End
For well-defined issues, use the issue resolution command:
/do-issue 42
This reads the GitHub issue and orchestrates the full fix:
- Reads the issue - title, description, labels, comments
- Plans the approach - identifies affected files and tests needed
- Creates a branch - based on the issue number
- Implements the fix - with tests written first (TDD approach)
- Prepares a PR - linking back to the issue
This is the fastest path from issue to PR. It handles the orchestration so you focus on reviewing the result.
Step 3: Manual Debugging (When Needed)
Sometimes issues need investigation before you can fix them. For complex bugs, work through the problem step by step.
Investigate the Problem
Start by reading the issue and understanding the reproduction steps. Then explore the relevant code:
Show me the API endpoint handlers that process request bodies
Claude will search the codebase, read the relevant files, and explain the code flow.
Find the Root Cause
Ask Claude to trace the execution path:
Trace what happens when an empty POST body hits the /api/data endpoint
Claude reads the handler code, middleware, and validation layers to identify where the 500 error originates.
Verify the Fix
After implementing a fix, verify it works:
Run the tests for the API endpoint module
Claude runs the relevant test suite and reports results. If tests fail, it analyzes the failure and suggests corrections.
Step 4: Create the Issue (When You Find Bugs)
If you discover a bug while working, create an issue to track it:
/create-issue
This creates a formatted GitHub issue with:
- Clear title and description
- Reproduction steps
- Expected vs. actual behavior
- Labels and assignees
Step 5: Close Resolved Issues
After your PR is merged, check if the issue can be closed:
/close-issue 42
This analyzes whether the issue’s requirements have been met by reviewing the linked PR and test evidence.
Debugging Tips
Use Catchup for Context
When you inherit a bug you didn’t create,
/catchup gives you the recent history that led to the current state.
This often reveals what change introduced the bug.
Use Targeted Reviews
If you suspect a specific type of issue:
/bug-review # Systematic bug hunting in recent changes
/test-review # Check if tests actually cover the bug scenario
Work Incrementally
For complex bugs:
- Reproduce the bug (confirm you can trigger it)
- Write a failing test that captures the bug
- Fix the code until the test passes
- Run the full test suite to check for regressions
What You’ve Learned
- `/do-issue` handles the full lifecycle: read → plan → implement → PR
- `/create-issue` formats new issues with proper structure
- `/close-issue` verifies issues are resolved before closing
- `/git-catchup` provides historical context for debugging
- Targeted reviews (`/bug-review`, `/test-review`) focus analysis on specific concerns
Command Reference
| Step | Command | Plugin |
|---|---|---|
| Context | /git-catchup | sanctum |
| Full fix | /do-issue 42 | sanctum |
| Create issue | /create-issue | minister |
| Close issue | /close-issue 42 | minister |
| Bug review | /bug-review | pensive |
| Test review | /test-review | pensive |
| Catchup | /catchup | imbue |
Difficulty: Intermediate
Prerequisites: Your First Session
Duration: 10 minutes
Memory Palace: Knowledge Management
Build a persistent knowledge base that grows with your work. This tutorial covers the core Memory Palace workflows: capturing knowledge, organizing it in palaces, maintaining it over time, and finding what you need.
Scenario
You’re working on a project with technologies you’ll reference repeatedly: API patterns, architecture decisions, library quirks. Instead of re-researching every session, you want a knowledge base that remembers what you’ve learned.
Step 1: Create a Palace
A palace is a themed container for knowledge. Create one for your project’s domain:
/palace create "API Patterns" "rest-api" --metaphor library
This creates a palace named “API Patterns” in the rest-api domain using the
library metaphor. Metaphors determine how knowledge is organized:
| Metaphor | Best for |
|---|---|
library | Research, documentation |
workshop | Practical skills, tools |
garden | Evolving knowledge |
fortress | Security, production systems |
building | General organization (default) |
Check what you have:
/palace list
This shows all palaces with entry counts and last-modified dates.
Step 2: Capture Knowledge
Knowledge enters the palace through two paths.
Automatic Capture
When you research topics during a Claude Code session (web searches, reading docs, analyzing code), the Memory Palace hooks queue findings for later processing. This happens in the background. You don’t need to do anything special.
Check the queue:
/palace status
This shows total palaces, entry counts, and the intake queue size (how many items are waiting to be processed).
Manual Intake
To explicitly capture something you’ve learned:
/garden seed ~/my-garden.json "OAuth2 PKCE Flow" --section auth --links "Authentication,Security"
This adds a new entry with links to related concepts, which helps with navigation later.
Step 3: Process the Queue
Queued research needs to be synced into palaces. Preview first:
/palace sync --dry-run
This shows what would be processed: which items match existing palaces, which would create new entries, and which have no matching palace.
When it looks right:
/palace sync
Items are matched to palaces by domain and tags, then organized into districts within each palace.
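The matching step can be sketched as follows. This is a simplified illustration with assumed entry and palace structures, not the plugin's actual implementation:

```python
# Simplified sketch of matching queued items to palaces by domain and
# tags. Data structures and weights are assumptions for illustration;
# the plugin's real matching logic may differ.

def match_palace(item, palaces):
    """Return the best-matching palace for a queued item, or None."""
    best, best_score = None, 0
    for palace in palaces:
        score = 0
        if palace["domain"] == item["domain"]:
            score += 2  # a domain match outweighs tag overlap
        score += len(set(palace["tags"]) & set(item["tags"]))
        if score > best_score:
            best, best_score = palace, score
    return best

palaces = [
    {"name": "API Patterns", "domain": "rest-api", "tags": ["auth", "http"]},
    {"name": "Infra Notes", "domain": "devops", "tags": ["ci", "docker"]},
]
item = {"domain": "rest-api", "tags": ["auth", "oauth2"]}
print(match_palace(item, palaces)["name"])  # → API Patterns
```

Items that score zero against every palace correspond to the "no matching palace" category shown by the dry run.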
Step 4: Find What You Know
Search across all your palaces:
/navigate search "rate limiting" --type semantic
This searches by meaning, not just keywords. It returns matches with:
- Which palace and district contains the result
- Relevance score
- Related concepts nearby
For a specific concept:
/navigate locate "OAuth 2.0"
To explore connections between concepts:
/navigate path "OAuth" "JWT"
This shows the navigation path between two concepts, revealing how your knowledge connects.
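To see how the search modes differ, the sketch below contrasts exact and fuzzy matching over a few assumed entry titles. Semantic search additionally matches by meaning via embeddings, which is not shown:

```python
# Contrast of exact vs fuzzy lookup over entry titles. Titles are
# assumed for the example; semantic (meaning-based) search requires
# embeddings and is beyond this sketch.
import difflib

titles = ["OAuth 2.0", "Rate Limiting", "JWT Rotation"]

exact = [t for t in titles if t == "rate limiting"]  # case-sensitive miss
fuzzy = difflib.get_close_matches("rate limiting", titles, n=1, cutoff=0.6)

print(exact)  # []
print(fuzzy)  # ['Rate Limiting']
```

Fuzzy matching tolerates small differences in casing and spelling; exact matching does not.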
Step 5: Maintain the Garden
Knowledge goes stale. Regularly check palace health:
/garden health ~/my-garden.json
This reports metrics like link density (are entries well-connected?) and freshness (when were entries last updated?).
Prune stale entries:
/palace prune --stale-days 90
This identifies entries older than 90 days, low-quality entries, and duplicates. It shows recommendations and asks for your approval before making any changes.
After reviewing:
/palace prune --apply
Garden Metrics
Track the health of your knowledge base over time:
/garden metrics ~/my-garden.json --format brief
Output: plots=42 link_density=3.2 avg_days_since_tend=4.5
Healthy gardens have link density above 2.0 and average staleness under 7 days.
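Both metrics can be approximated directly from the entries. The sketch below assumes a simple entry structure rather than the plugin's real garden schema:

```python
# Rough illustration of the two garden health metrics reported above.
# The entry structure is assumed for the example only.
from datetime import date

entries = [
    {"links": ["JWT", "OAuth"], "last_tended": date(2024, 6, 1)},
    {"links": ["OAuth", "PKCE", "JWT", "CORS"], "last_tended": date(2024, 6, 4)},
]
today = date(2024, 6, 5)

# link density: average number of links per entry
link_density = sum(len(e["links"]) for e in entries) / len(entries)

# staleness: average days since an entry was last tended
avg_days = sum((today - e["last_tended"]).days for e in entries) / len(entries)

print(link_density)  # 3.0 — above the 2.0 health threshold
print(avg_days)      # 2.5 — under the 7-day staleness threshold
```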
Step 6: Use Knowledge in Reviews
The Memory Palace integrates with PR reviews through the review chamber:
/review-room
This captures review patterns and decisions, building a knowledge base of your team’s code review preferences over time.
What You’ve Learned
- Palaces organize knowledge by domain with architectural metaphors
- Automatic capture queues findings from research sessions
- Sync processes the queue into organized palace entries
- Navigation finds knowledge using semantic, exact, or fuzzy search
- Maintenance keeps the knowledge base healthy through pruning and metrics
Command Reference
| Task | Command | Description |
|---|---|---|
| Create | /palace create <name> <domain> | Create a new palace |
| List | /palace list | See all palaces |
| Status | /palace status | Queue size and health |
| Sync | /palace sync | Process intake queue |
| Search | /navigate search "<query>" | Find across palaces |
| Locate | /navigate locate "<concept>" | Find specific concept |
| Path | /navigate path "<from>" "<to>" | Show concept connections |
| Health | /garden health <path> | Assess garden health |
| Prune | /palace prune | Clean stale entries |
| Metrics | /garden metrics <path> | Track garden health |
Difficulty: Intermediate | Prerequisites: Your First Session, Memory Palace plugin installed | Duration: 15 minutes
Capabilities Reference
Quick lookup table of all skills, commands, agents, and hooks in the Claude Night Market.
For full flag documentation and workflow examples: See Capabilities Reference Details.
Quick Reference Index
All Skills (Alphabetical)
| Skill | Plugin | Description |
|---|---|---|
agent-expenditure | conserve | Per-agent token usage tracking |
agent-teams | conjure | Coordinate Claude Code Agent Teams through filesystem-based protocol |
api-review | pensive | API surface evaluation |
architecture-aware-init | attune | Architecture-aware project initialization with research |
architecture-diagram | cartograph | Component relationship diagrams |
architecture-paradigm-client-server | archetypes | Client-server communication |
architecture-paradigm-cqrs-es | archetypes | CQRS and Event Sourcing |
architecture-paradigm-event-driven | archetypes | Asynchronous communication |
architecture-paradigm-functional-core | archetypes | Functional Core, Imperative Shell |
architecture-paradigm-hexagonal | archetypes | Ports & Adapters architecture |
architecture-paradigm-layered | archetypes | Traditional N-tier architecture |
architecture-paradigm-microkernel | archetypes | Plugin-based extensibility |
architecture-paradigm-microservices | archetypes | Independent distributed services |
architecture-paradigm-modular-monolith | archetypes | Single deployment with internal boundaries |
architecture-paradigm-pipeline | archetypes | Pipes-and-filters model |
architecture-paradigm-serverless | archetypes | Function-as-a-Service |
architecture-paradigm-service-based | archetypes | Coarse-grained SOA |
architecture-paradigm-space-based | archetypes | Data-grid architecture |
architecture-paradigms | archetypes | Orchestrator for paradigm selection |
architecture-review | pensive | Architecture assessment |
authentication-patterns | leyline | Auth flow patterns |
blast-radius | pensive | Code change blast radius analysis with risk scoring |
bloat-detector | conserve | Detection algorithms for dead code, God classes, documentation duplication |
browser-recording | scry | Playwright browser recordings |
bug-review | pensive | Bug hunting |
call-chain | cartograph | Trace execution paths through code knowledge graph |
catchup | imbue | Context recovery |
challenge | gauntlet | Adaptive difficulty challenge session for codebase knowledge testing |
class-diagram | cartograph | Class and interface diagrams |
clear-context | conserve | Auto-clear workflow with session state persistence |
code-communities | cartograph | Detect architectural clusters via community detection |
code-quality-principles | conserve | Core principles for AI-assisted code quality |
code-refinement | pensive | Duplication, algorithms, and clean code analysis |
code-search | tome | GitHub implementation search |
commit-messages | sanctum | Conventional commits |
compression-strategy | conserve | Context compression analysis and recommendations |
computer-control | phantom | Desktop automation via Claude’s vision and action API |
content-sanitization | leyline | External content sanitization |
context-map | conserve | Pre-scan project structure to reduce exploration token waste |
context-optimization | conserve | MECW principles and 50% context rule |
cpu-gpu-performance | conserve | Resource monitoring and selective testing |
curate | gauntlet | Add or edit knowledge annotations with tribal context |
damage-control | leyline | Agent crash recovery and state reconciliation |
data-flow | cartograph | Data movement diagrams |
decisive-action | conserve | Decisive action patterns for efficient workflows |
deferred-capture | leyline | Contract for unified deferred-item capture across plugins |
delegation-core | conjure | Framework for delegation decisions |
dependency-graph | cartograph | Import and dependency diagrams |
diff-analysis | imbue | Semantic changeset analysis |
dig | tome | Interactive research refinement |
digital-garden-cultivator | memory-palace | Digital garden maintenance |
discourse | tome | Community discussion scanning |
do-issue | sanctum | GitHub issue resolution workflow |
doc-consolidation | sanctum | Document merging |
doc-generator | scribe | Generate and remediate documentation |
doc-importer | scribe | Import external documents to markdown |
doc-updates | sanctum | Documentation maintenance |
document-conversion | leyline | Universal document-to-markdown conversion |
dorodango | attune | Iterative code polishing workflow |
error-patterns | leyline | Standardized error handling |
escalation-governance | abstract | Model escalation decisions |
evaluation-framework | leyline | Decision thresholds |
extract | gauntlet | Analyze codebase and build a knowledge base |
feature-review | imbue | Feature prioritization with RICE/WSJF/Kano scoring and optional research enrichment via tome (--research) |
file-analysis | sanctum | File structure analysis |
gemini-delegation | conjure | Gemini CLI integration |
gif-generation | scry | GIF processing and optimization |
git-platform | leyline | Cross-platform git forge detection and command mapping |
git-workspace-review | sanctum | Repo state analysis |
github-initiative-pulse | minister | Initiative progress tracking |
graph-build | gauntlet | Build or update the code knowledge graph |
graph-search | gauntlet | FTS5 search of the code knowledge graph |
hook-authoring | abstract | Security-first hook development |
hooks-eval | abstract | Hook security scanning |
install-watchdog | egregore | Install crash-recovery watchdog |
justify | imbue | Anti-additive-bias change audit |
knowledge-intake | memory-palace | Intake and curation |
knowledge-locator | memory-palace | Spatial search |
latent-space-engineering | imbue | Agent behavior shaping through instruction framing |
makefile-generation | attune | Generate language-specific Makefiles |
makefile-review | pensive | Makefile best practices |
markdown-formatting | leyline | Line wrapping and style conventions |
math-review | pensive | Mathematical correctness |
mcp-code-execution | conserve | MCP patterns for data pipelines |
media-composition | scry | Multi-source media stitching |
memory-palace-architect | memory-palace | Building virtual palaces |
metacognitive-self-mod | abstract | Hyperagents self-improvement analysis |
methodology-curator | abstract | Surface expert frameworks for skill development |
mission-orchestrator | attune | Unified lifecycle orchestrator for project development |
modular-skills | abstract | Modular design patterns |
onboard | gauntlet | Guided five-stage onboarding path through a codebase |
papers | tome | Academic literature search |
plugin-review | abstract | Tiered plugin quality review with dependency-aware scoping |
pr-prep | sanctum | PR preparation |
pr-review | sanctum | PR review workflows |
precommit-setup | attune | Set up pre-commit hooks |
progressive-loading | leyline | Dynamic content loading |
project-brainstorming | attune | Socratic ideation workflow |
project-execution | attune | Systematic implementation |
project-init | attune | Interactive project initialization |
project-planning | attune | Architecture and task breakdown |
project-specification | attune | Spec creation from brainstorm |
proof-of-work | imbue | Evidence-based work validation |
pytest-config | leyline | Pytest configuration patterns |
python-async | parseltongue | Async patterns |
python-packaging | parseltongue | Packaging with uv |
python-performance | parseltongue | Profiling and optimization |
python-testing | parseltongue | Pytest/TDD workflows |
quality-gate | egregore | Pre-merge quality validation for autonomous sessions |
quota-management | leyline | Rate limiting and quotas |
qwen-delegation | conjure | Qwen MCP integration |
release-health-gates | minister | Release readiness checks |
research | tome | Multi-source research orchestration |
response-compression | conserve | Response compression patterns |
palace-diagram | memory-palace | Visual palace structure diagrams |
review-chamber | memory-palace | PR review knowledge capture and retrieval |
review-core | imbue | Scaffolding for detailed reviews |
rigorous-reasoning | imbue | Anti-sycophancy guardrails |
risk-classification | leyline | Inline 4-tier risk classification for agent tasks |
rule-catalog | hookify | Pre-built behavioral rule templates |
rules-eval | abstract | Evaluate and validate Claude Code rules in .claude/rules/ directories |
rust-review | pensive | Rust-specific checking |
safety-critical-patterns | pensive | NASA Power of 10 rules for robust code |
scope-guard | imbue | Anti-overengineering |
sem-integration | leyline | Semantic diff CLI detection and fallback |
service-registry | leyline | Service discovery patterns |
session-management | sanctum | Session naming, checkpointing, and resume strategies |
session-palace-builder | memory-palace | Session-specific palaces |
session-replay | scribe | Convert session JSONL into GIF/MP4/WebM replays via VHS |
session-to-post | scribe | Convert sessions into shareable blog posts or case studies |
setup | oracle | Install and configure the oracle ONNX inference daemon |
shared-patterns | abstract | Reusable plugin development patterns |
shell-review | pensive | Shell script auditing for safety and portability |
skill-authoring | abstract | TDD methodology for skill creation |
skills-eval | abstract | Skill quality assessment |
slop-detector | scribe | Detect AI-generated content markers |
smart-sourcing | conserve | Balance accuracy with token efficiency |
spec-writing | spec-kit | Specification authoring |
speckit-orchestrator | spec-kit | Workflow coordination |
stewardship | leyline | Cross-cutting stewardship principles with layer-specific guidance |
storage-templates | leyline | Storage abstraction patterns |
structured-output | imbue | Formatting patterns |
style-learner | scribe | Extract writing style from exemplar text |
subagent-testing | abstract | Testing patterns for subagent interactions |
summon | egregore | Spawn autonomous agent session with budget |
supply-chain-advisory | leyline | Known-bad version detection, lockfile auditing, incident response |
synthesize | tome | Research findings synthesis |
task-planning | spec-kit | Task generation |
tech-tutorial | scribe | Plan, draft, and refine technical tutorials |
test-review | pensive | Test quality review |
test-updates | sanctum | Test maintenance |
testing-quality-standards | leyline | Test quality guidelines |
tiered-audit | pensive | Three-tier escalation audit (git history, targeted, full) |
token-conservation | conserve | Token usage strategies |
triz | tome | TRIZ cross-domain analogical reasoning |
tutorial-updates | sanctum | Tutorial maintenance and updates |
unified-review | pensive | Review orchestration |
uninstall-watchdog | egregore | Remove crash-recovery watchdog |
update-readme | sanctum | README maintenance and updates |
usage-logging | leyline | Telemetry tracking |
utility | leyline | Utility-guided action selection for orchestration |
version-updates | sanctum | Version bumping |
vhs-recording | scry | Terminal recordings with VHS |
voice-extract | scribe | SICO comparative extraction from writing samples |
voice-generate | scribe | Generate text in learned writing voice |
voice-learn | scribe | Learning loop from manual edits |
voice-review | scribe | Dual-gate review against voice profile |
war-room | attune | Multi-LLM expert council with Type 1/2 reversibility routing |
war-room-checkpoint | attune | Inline reversibility assessment for embedded escalation |
workflow-diagram | cartograph | Process and state transition diagrams |
workflow-improvement | sanctum | Workflow retrospectives |
workflow-monitor | imbue | Workflow execution monitoring and issue creation |
workflow-setup | attune | Configure CI/CD pipelines |
writing-rules | hookify | Guide for authoring behavioral rules |
All Commands (Alphabetical)
| Command | Plugin | Description |
|---|---|---|
/acp | sanctum | Add, commit, push to current branch |
/aggregate-logs | abstract | Generate LEARNINGS.md from skill execution logs |
/ai-hygiene-audit | conserve | Audit codebase for AI-generated code quality issues (vibe coding, Tab bloat, slop) |
/analyze-skill | abstract | Skill complexity analysis |
/analyze-tests | parseltongue | Test suite health report |
/api-review | pensive | API surface review |
/architecture-review | pensive | Architecture assessment |
/attune:arch-init | attune | Initialize with architecture-aware templates |
/attune:blueprint | attune | Plan architecture and break down tasks |
/attune:brainstorm | attune | Brainstorm project ideas using Socratic questioning |
/attune:execute | attune | Execute implementation tasks systematically |
/attune:mission | attune | Run full project lifecycle as a single mission with state detection and recovery |
/attune:project-init | attune | Initialize project with development infrastructure |
/attune:specify | attune | Create detailed specifications from brainstorm |
/attune:upgrade-project | attune | Add or update configurations in existing project |
/attune:validate | attune | Validate project structure against best practices |
/attune:war-room | attune | Multi-LLM expert deliberation with reversibility-based routing |
/bloat-scan | conserve | Progressive bloat detection (3-tier scan) |
/bug-review | pensive | Bug hunting review |
/bulletproof-skill | abstract | Anti-rationalization workflow |
/catchup | imbue | Quick context recovery |
/check-async | parseltongue | Async pattern validation |
/close-issue | minister | Analyze if GitHub issues can be closed based on commits |
/commit-msg | sanctum | Generate commit message |
/context-report | abstract | Context optimization report |
/control-desktop | phantom | Run a computer use task on the desktop |
/create-command | abstract | Scaffold new command |
/create-hook | abstract | Scaffold new hook |
/create-issue | minister | Create GitHub issue with labels and references |
/create-skill | abstract | Scaffold new skill |
/create-tag | sanctum | Create git tags for releases |
/dismiss | egregore | Terminate autonomous agent session |
/do-issue | sanctum | Fix GitHub issues |
/doc-generate | scribe | Generate new documentation |
/doc-polish | scribe | Clean up AI-generated content |
/evaluate-skill | abstract | Evaluate skill execution quality |
/fix-pr | sanctum | Address PR review comments |
/fix-workflow | sanctum | Workflow retrospective with automatic improvement context gathering |
/full-review | pensive | Unified code review |
/garden | memory-palace | Manage digital gardens |
/gauntlet | gauntlet | Run an ad-hoc challenge session (5 questions, random scope) |
/gauntlet-curate | gauntlet | Add or edit a knowledge annotation |
/gauntlet-extract | gauntlet | Rebuild the knowledge base from the current codebase |
/gauntlet-graph | gauntlet | Build, search, and query the code knowledge graph |
/gauntlet-onboard | gauntlet | Start or resume a guided onboarding path |
/gauntlet-progress | gauntlet | Show challenge accuracy stats, weak areas, and streak |
/git-catchup | sanctum | Git repository catchup |
/hookify | hookify | Create behavioral rules to prevent unwanted actions |
/hookify:configure | hookify | Interactive rule enable/disable interface |
/hookify:from-hook | hookify | Convert Python SDK hooks to declarative rules |
/hookify:help | hookify | Display hookify help and documentation |
/hookify:install | hookify | Install hookify rule from catalog |
/hookify:list | hookify | List all hookify rules with status |
/hooks-eval | abstract | Hook evaluation |
/improve-skills | abstract | Auto-improve skills from observability data |
/install-watchdog | egregore | Install crash-recovery watchdog |
/justify | imbue | Audit changes for additive bias |
/make-dogfood | abstract | Makefile enhancement |
/makefile-review | pensive | Makefile review |
/math-review | pensive | Mathematical review |
/merge-docs | sanctum | Consolidate ephemeral docs |
/navigate | memory-palace | Search palaces |
/optimize-context | conserve | Context optimization |
/oracle-setup | oracle | Install and configure the oracle ONNX inference daemon |
/palace | memory-palace | Manage palaces |
/plugin-review | abstract | Tiered plugin quality review (branch/pr/release) |
/pr-review | sanctum | Enhanced PR review |
/prepare-pr | sanctum | Complete PR preparation with updates and validation |
/promote-discussions | abstract | Promote highly-voted community learnings from Discussions to Issues |
/record-browser | scry | Record browser session |
/record-terminal | scry | Create terminal recording |
/refine-code | pensive | Analyze and improve living code quality |
/reinstall-all-plugins | leyline | Refresh all plugins |
/resolve-threads | sanctum | Resolve PR review threads |
/review-room | memory-palace | Manage PR review knowledge in palaces |
/rules-eval | abstract | Evaluate Claude Code rules for frontmatter, glob patterns, and content quality |
/run-profiler | parseltongue | Profile code execution |
/rust-review | pensive | Rust-specific review |
/session-replay | scribe | Generate GIF/MP4/WebM replay from session JSONL |
/session-to-post | scribe | Convert session into blog post or case study |
/shell-review | pensive | Shell script safety and portability review |
/skill-history | pensive | View recent skill executions with context |
/skill-logs | memory-palace | View skill execution logs |
/skill-review | pensive | Analyze skill metrics and stability gaps |
/skills-eval | abstract | Skill quality assessment |
/speckit-analyze | spec-kit | Check artifact consistency |
/speckit-checklist | spec-kit | Generate checklist |
/speckit-clarify | spec-kit | Clarifying questions |
/speckit-constitution | spec-kit | Project constitution |
/speckit-implement | spec-kit | Execute tasks |
/speckit-plan | spec-kit | Generate plan |
/speckit-specify | spec-kit | Create specification |
/speckit-startup | spec-kit | Bootstrap workflow |
/speckit-tasks | spec-kit | Generate tasks |
/speckit-taskstoissues | spec-kit | Convert tasks.md entries to GitHub Issues |
/status | egregore | Check autonomous session status |
/stewardship-health | imbue | Display stewardship health dimensions for plugins |
/structured-review | imbue | Structured review workflow |
/style-learn | scribe | Create style profile from examples |
/summon | egregore | Spawn autonomous agent session with budget |
/sync-capabilities | sanctum | Detect and fix drift between plugin.json and docs |
/test-review | pensive | Test quality review |
/test-skill | abstract | Skill testing workflow |
/tome:cite | tome | Generate formatted bibliography |
/tome:dig | tome | Refine research results interactively |
/tome:export | tome | Export research findings |
/tome:research | tome | Run multi-source research session |
/unbloat | conserve | Safe bloat remediation with interactive approval |
/uninstall-watchdog | egregore | Remove crash-recovery watchdog |
/update-all-plugins | leyline | Update all plugins |
/update-ci | sanctum | Update pre-commit hooks and CI/CD workflows |
/update-dependencies | sanctum | Update project dependencies |
/update-docs | sanctum | Update documentation |
/update-labels | minister | Reorganize GitHub issue labels with professional taxonomy |
/update-plugins | sanctum | Audit plugin registrations + automatic performance analysis and improvement recommendations |
/update-tests | sanctum | Maintain tests |
/update-tutorial | sanctum | Update tutorial content |
/update-version | sanctum | Bump versions |
/validate-hook | abstract | Validate hook compliance |
/validate-plugin | abstract | Check plugin structure |
/verify-plugin | leyline | Verify plugin behavioral contract history via GitHub Attestations |
/visualize | cartograph | Generate codebase diagrams via Mermaid Chart MCP |
/voice-extract | scribe | Extract writing voice from samples |
/voice-generate | scribe | Generate text in trained voice |
/voice-learn | scribe | Learn from manual edits |
/voice-review | scribe | Review text against voice profile |
All Agents (Alphabetical)
| Agent | Plugin | Description |
|---|---|---|
ai-hygiene-auditor | conserve | Audit codebases for AI-generation warning signs |
architecture-reviewer | pensive | Principal-level architecture review |
blast-radius-reviewer | pensive | Graph-aware code review using blast radius analysis |
bloat-auditor | conserve | Orchestrates bloat detection scans |
code-refiner | pensive | Code quality refinement orchestrator |
code-reviewer | pensive | Expert code review |
code-searcher | tome | GitHub code search |
codebase-explorer | cartograph | Codebase structure analysis for diagrams
craft-reviewer | scribe | Writing craft evaluation (naming, structure, anchoring)
commit-agent | sanctum | Commit message generator |
context-optimizer | conserve | Context optimization |
continuation-agent | conserve | Continue work from session state checkpoint |
dependency-updater | sanctum | Dependency version management |
desktop-pilot | phantom | Autonomous desktop control via Computer Use API |
discourse-scanner | tome | Community discourse scanning |
doc-editor | scribe | Interactive documentation editing |
doc-verifier | scribe | QA validation using proof-of-work methodology |
extractor | gauntlet | Autonomous knowledge extraction agent for gauntlet knowledge base |
garden-curator | memory-palace | Digital garden maintenance |
git-workspace-agent | sanctum | Repository state analyzer |
implementation-executor | spec-kit | Task executor |
knowledge-librarian | memory-palace | Knowledge routing |
knowledge-navigator | memory-palace | Palace search |
literature-reviewer | tome | Academic literature review |
media-recorder | scry | Autonomous media generation for demos and GIFs |
meta-architect | abstract | Plugin ecosystem design |
orchestrator | egregore | Autonomous development lifecycle agent |
palace-architect | memory-palace | Palace design |
plugin-validator | abstract | Plugin validation |
pr-agent | sanctum | PR preparation |
prose-reviewer | scribe | AI patterns, banned phrases, voice drift detection |
project-architect | attune | Guides full-cycle workflow (brainstorm to plan) |
project-implementer | attune | Executes implementation with TDD |
python-linter | parseltongue | Strict ruff linting without bypasses |
python-optimizer | parseltongue | Performance optimization |
python-pro | parseltongue | Python 3.9+ expertise |
python-tester | parseltongue | Testing expertise |
review-analyst | imbue | Structured reviews |
rust-auditor | pensive | Rust security audit |
sentinel | egregore | Watchdog agent for crash recovery |
skill-auditor | abstract | Skill quality audit |
skill-evaluator | abstract | Skill execution evaluator |
skill-improver | abstract | Implements skill improvements from observability |
insight-engine | abstract | Deep analysis for bugs, optimizations, and improvements |
slop-hunter | scribe | Full-document AI slop detection |
spec-analyzer | spec-kit | Spec consistency |
task-generator | spec-kit | Task creation |
triz-analyst | tome | TRIZ cross-domain analysis |
unbloat-remediator | conserve | Executes safe bloat remediation |
workflow-improvement-analysis-agent | sanctum | Workflow improvement analysis |
workflow-improvement-implementer-agent | sanctum | Workflow improvement implementation |
workflow-improvement-planner-agent | sanctum | Workflow improvement planning |
workflow-improvement-validator-agent | sanctum | Workflow improvement validation |
workflow-recreate-agent | sanctum | Workflow reconstruction |
All Hooks (Alphabetical)
| Hook | Plugin | Type | Description |
|---|---|---|---|
aggregate_learnings_daily.py | abstract | UserPromptSubmit | Daily learning aggregation (24h cadence) with severity-based issue creation |
auto-star-repo.sh | leyline | SessionStart | Auto-star the repo if not already starred |
config_change_audit.py | sanctum | ConfigChange | Audit configuration changes |
context_warning.py | conserve | PreToolUse | Context utilization monitoring |
daemon_lifecycle.py | oracle | SessionStart, Stop | Oracle daemon lifecycle management |
deferred_item_sweep.py | sanctum | Stop | Sweep session ledger and file deferred items as GitHub issues |
deferred_item_watcher.py | sanctum | PostToolUse | Detect deferred items in Skill output and write to session ledger |
detect-git-platform.sh | leyline | SessionStart | Detect git forge platform from remote URL |
fetch-recent-discussions.sh | leyline | SessionStart | Fetch recent GitHub Discussions |
graph_auto_update.py | gauntlet | PostToolUse | Auto-update code graph after git commits |
graph_community_refresh.py | cartograph | PostToolUse | Refresh community detection after graph builds |
homeostatic_monitor.py | abstract | PostToolUse | Stability gap monitoring, queues degrading skills for improvement |
local_doc_processor.py | memory-palace | PostToolUse | Processes local docs |
noqa_guard.py | leyline | PreToolUse | Block inline lint suppression directives |
permission_denied_logger.py | conserve | PermissionDenied | Log auto-mode permission denials for observability |
permission_request.py | conserve | PermissionRequest | Permission automation |
post-evaluation.json | abstract | Config | Quality scoring config |
post_implementation_policy.py | sanctum | SessionStart | Requires docs/tests updates |
post_learnings_stop.py | abstract | Stop | Post learnings to GitHub Discussions on session stop |
pr_blast_radius.py | pensive | PreToolUse | Surface blast radius context on PR creation |
pre-skill-load.json | abstract | Config | Pre-load validation |
pre_compact.py | tome | PreCompact | Checkpoint active research session |
pre_compact_preserve.py | conserve | PreCompact | Preserve critical context before compression |
pre_skill_execution.py | abstract | PreToolUse | Skill execution tracking |
precommit_gate.py | gauntlet | PreToolUse | Pre-commit quality gate for gauntlet |
research_interceptor.py | memory-palace | PreToolUse | Cache lookup before web |
sanitize_external_content.py | leyline | PostToolUse | Sanitize external content for prompt injection |
security_pattern_check.py | sanctum | PreToolUse | Security anti-pattern detection |
session-start.sh | conserve, imbue | SessionStart | Session initialization |
session_complete_notify.py | sanctum | Stop, UserPromptSubmit | Cross-platform toast notifications and state management |
session_lifecycle.py | memory-palace | Stop | Session lifecycle management |
session_start.py | tome | SessionStart | Check for active research sessions |
session_start_hook.py | egregore | SessionStart | Inject manifest context into new sessions |
setup.sh | conserve | Setup | Environment initialization |
setup.sh | memory-palace | Setup | Palace directory initialization |
skill_execution_logger.py | abstract | PostToolUse | Skill metrics logging |
stop_hook.py | egregore | Stop | Prevent early exit while work items remain |
supply_chain_check.py | leyline | SessionStart | Warn about known-compromised package versions in lockfiles |
task_created_tracker.py | sanctum | TaskCreated | Track task creation for workflow completeness monitoring |
tdd_bdd_gate.py | imbue | PreToolUse | Iron Law enforcement at write-time |
tool_output_summarizer.py | conserve | PostToolUse | Monitor and warn about tool output bloat |
url_detector.py | memory-palace | UserPromptSubmit | URL detection |
user-prompt-submit.sh | imbue | UserPromptSubmit | Scope validation |
user_prompt_hook.py | egregore | UserPromptSubmit | Resume orchestration after user interrupts |
verify_workflow_complete.py | sanctum | Stop | End-of-session workflow verification |
web_research_handler.py | memory-palace | PostToolUse | Web research processing and storage prompting |
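The hooks above are small scripts run at lifecycle events. A PreToolUse guard might follow a shape like the sketch below; the payload field names are assumptions for illustration, not a documented schema:

```python
# Sketch of a PreToolUse hook body. A real hook reads a JSON payload
# from stdin and signals a block via its exit status; field names
# here are illustrative assumptions.

def check(payload: dict) -> bool:
    """Allow everything except Bash commands containing 'rm -rf'."""
    if payload.get("tool_name") != "Bash":
        return True
    command = payload.get("tool_input", {}).get("command", "")
    return "rm -rf" not in command

# A real hook would do roughly:
#   payload = json.load(sys.stdin)
#   if not check(payload):
#       print("Blocked: destructive command", file=sys.stderr)
#       sys.exit(2)  # non-zero exit signals the block
sample = {"tool_name": "Bash", "tool_input": {"command": "rm -rf /tmp/x"}}
print(check(sample))  # False
```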
Command Reference — Core Plugins
Flag and option documentation for core plugin commands (abstract, attune, conserve, imbue, sanctum).
Extended plugins: Memory Palace, Pensive, Parseltongue, Spec-Kit, Scribe, Scry, Hookify, Leyline
See also: Capabilities Reference | Skills | Agents | Hooks | Workflows
Command Syntax
/<plugin>:<command-name> [--flags] [positional-args]
Common Flag Patterns:
| Flag Pattern | Description | Example |
|---|---|---|
--verbose | Enable detailed output | /bloat-scan --verbose |
--dry-run | Preview without executing | /unbloat --dry-run |
--force | Skip confirmation prompts | /attune:init --force |
--report FILE | Output to file | /bloat-scan --report audit.md |
--level N | Set intensity/depth | /bloat-scan --level 3 |
--skip-X | Skip specific phase | /prepare-pr --skip-updates |
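For illustration only, here is how that flag vocabulary maps onto a conventional argument parser. The plugins implement their own flag handling; this code is not part of any plugin:

```python
# Illustration of the common flag vocabulary using argparse.
# This is a sketch, not how the plugins actually parse flags.
import argparse

parser = argparse.ArgumentParser(prog="/bloat-scan")
parser.add_argument("--verbose", action="store_true", help="detailed output")
parser.add_argument("--dry-run", action="store_true", help="preview only")
parser.add_argument("--report", metavar="FILE", help="write report to file")
parser.add_argument("--level", type=int, default=1, help="scan depth")

args = parser.parse_args(["--level", "3", "--report", "audit.md"])
print(args.level, args.report)  # 3 audit.md
```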
Abstract Plugin
/abstract:validate-plugin
Validate plugin structure against ecosystem conventions.
# Usage
/abstract:validate-plugin [plugin-name] [--strict] [--fix]
# Options
--strict Fail on warnings (not just errors)
--fix Auto-fix correctable issues
--report FILE Output validation report
# Examples
/abstract:validate-plugin sanctum
/abstract:validate-plugin --strict conserve
/abstract:validate-plugin memory-palace --fix
/abstract:create-skill
Scaffold a new skill with proper frontmatter and structure.
# Usage
/abstract:create-skill <plugin>:<skill-name> [--template basic|modular] [--category]
# Options
--template Skill template type (basic or modular with modules/)
--category Skill category for classification
--interactive Guided creation flow
# Examples
/abstract:create-skill pensive:shell-review --template modular
/abstract:create-skill imbue:new-methodology --category workflow-methodology
/abstract:create-command
Scaffold a new command with hooks and documentation.
# Usage
/abstract:create-command <plugin>:<command-name> [--hooks] [--extends]
# Options
--hooks Include lifecycle hook templates
--extends Base command or skill to extend
--aliases Comma-separated command aliases
# Examples
/abstract:create-command sanctum:new-workflow --hooks
/abstract:create-command conserve:deep-clean --extends "conserve:bloat-scan"
/abstract:create-hook
Scaffold a new hook with security-first patterns.
# Usage
/abstract:create-hook <plugin>:<hook-name> [--type] [--lang]
# Options
--type Hook event type (PreToolUse|PostToolUse|SessionStart|Stop|UserPromptSubmit)
--lang Implementation language (bash|python)
--matcher Tool matcher pattern
# Examples
/abstract:create-hook memory-palace:cache-check --type PreToolUse --lang python
/abstract:create-hook sanctum:commit-validator --type PreToolUse --matcher "Bash"
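Scaffolded hooks follow Claude Code's hook contract: the event arrives as JSON on stdin, and exit code 2 blocks the tool call. A minimal PreToolUse hook in that shape might look like the sketch below; the deny-list and matching logic are hypothetical, not what /abstract:create-hook actually generates:

```python
import json
import sys

# Hypothetical deny-list; a real hook would load policy from configuration.
BLOCKED_FRAGMENTS = ("git push --force", "rm -rf /")

def verdict(event: dict) -> int:
    """Return 0 to allow the tool call, 2 to block it."""
    if event.get("tool_name") != "Bash":
        return 0
    command = event.get("tool_input", {}).get("command", "")
    return 2 if any(f in command for f in BLOCKED_FRAGMENTS) else 0

def main() -> int:
    # Claude Code pipes the event as JSON on stdin; a real hook
    # would end with sys.exit(main()).
    return verdict(json.load(sys.stdin))
```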
/abstract:analyze-skill
Analyze skill complexity and optimization opportunities.
# Usage
/abstract:analyze-skill <plugin>:<skill-name> [--metrics] [--suggest]
# Options
--metrics Show detailed token/complexity metrics
--suggest Generate optimization suggestions
--compare Compare against skill baselines
# Examples
/abstract:analyze-skill imbue:proof-of-work --metrics
/abstract:analyze-skill sanctum:pr-prep --suggest
/abstract:make-dogfood
Update Makefile demonstration targets to reflect current features.
# Usage
/abstract:make-dogfood [--check] [--update]
# Options
--check Verify Makefile is current (exit 1 if stale)
--update Apply updates to Makefile
--dry-run Show what would change
# Examples
/abstract:make-dogfood --check
/abstract:make-dogfood --update
/abstract:skills-eval
Evaluate skill quality across the ecosystem.
# Usage
/abstract:skills-eval [--plugin PLUGIN] [--threshold SCORE]
# Options
--plugin Limit to specific plugin
--threshold Minimum quality score (default: 70)
--output Output format (table|json|markdown)
# Examples
/abstract:skills-eval --plugin sanctum
/abstract:skills-eval --threshold 80 --output markdown
/abstract:hooks-eval
Evaluate hook security and performance.
# Usage
/abstract:hooks-eval [--plugin PLUGIN] [--security]
# Options
--plugin Limit to specific plugin
--security Focus on security patterns
--perf Focus on performance impact
# Examples
/abstract:hooks-eval --security
/abstract:hooks-eval --plugin memory-palace --perf
/abstract:evaluate-skill
Evaluate skill execution quality.
# Usage
/abstract:evaluate-skill <plugin>:<skill-name> [--metrics] [--suggestions]
# Options
--metrics Show detailed execution metrics
--suggestions Generate improvement suggestions
--compare Compare against baseline metrics
# Examples
/abstract:evaluate-skill imbue:proof-of-work --metrics
/abstract:evaluate-skill sanctum:pr-prep --suggestions
Attune Plugin
/attune:init
Initialize project with complete development infrastructure.
# Usage
/attune:init [--lang LANGUAGE] [--name NAME] [--author AUTHOR]
# Options
--lang LANGUAGE Project language: python|rust|typescript|go
--name NAME Project name (default: directory name)
--author AUTHOR Author name
--email EMAIL Author email
--python-version VER Python version (default: 3.10)
--description TEXT Project description
--path PATH Project path (default: .)
--force Overwrite existing files without prompting
--no-git Skip git initialization
# Examples
/attune:init --lang python --name my-cli
/attune:init --lang rust --author "Your Name" --force
/attune:brainstorm
Brainstorm project ideas using Socratic questioning.
# Usage
/attune:brainstorm [TOPIC] [--output FILE]
# Options
--output FILE Save brainstorm results to file
--rounds N Number of question rounds (default: 5)
--focus AREA Focus area: features|architecture|ux|technical
# Examples
/attune:brainstorm "CLI tool for data processing"
/attune:brainstorm --focus architecture --rounds 3
/attune:blueprint
Plan architecture and break down tasks.
# Usage
/attune:blueprint [--from BRAINSTORM] [--output FILE]
# Options
--from FILE Use brainstorm results as input
--output FILE Save plan to file
--depth LEVEL Planning depth: high|detailed|exhaustive
--include Include specific aspects: tests|ci|docs
# Examples
/attune:blueprint --from brainstorm.md --depth detailed
/attune:blueprint --include tests,ci
/attune:specify
Create detailed specifications from brainstorm or plan.
# Usage
/attune:specify [--from FILE] [--type TYPE]
# Options
--from FILE Input file (brainstorm or plan)
--type TYPE Spec type: technical|functional|api|data-model
--output DIR Output directory for specs
# Examples
/attune:specify --from plan.md --type technical
/attune:specify --type api --output .specify/
/attune:execute
Execute implementation tasks systematically.
# Usage
/attune:execute [--plan FILE] [--phase PHASE] [--task ID]
# Options
--plan FILE Task plan file (default: .specify/tasks.md)
--phase PHASE Execute specific phase: setup|tests|core|integration|polish
--task ID Execute specific task by ID
--parallel Enable parallel execution where marked [P]
--continue Resume from last checkpoint
# Examples
/attune:execute --plan tasks.md --phase setup
/attune:execute --task T1.2 --parallel
/attune:validate
Validate project structure against best practices.
# Usage
/attune:validate [--strict] [--fix]
# Options
--strict Fail on warnings
--fix Auto-fix correctable issues
--config Path to custom validation config
# Examples
/attune:validate --strict
/attune:validate --fix
/attune:upgrade-project
Add or update configurations in existing project.
# Usage
/attune:upgrade-project [--component COMPONENT] [--force]
# Options
--component Specific component: makefile|precommit|workflows|gitignore
--force Overwrite existing without prompting
--diff Show diff before applying
# Examples
/attune:upgrade-project --component makefile
/attune:upgrade-project --component workflows --force
Conserve Plugin
/conserve:bloat-scan
Progressive bloat detection for dead code and duplication.
# Usage
/bloat-scan [--level 1|2|3] [--focus TYPE] [--report FILE] [--dry-run]
# Options
--level 1|2|3 Scan tier: 1=quick, 2=targeted, 3=deep audit
--focus TYPE Focus area: code|docs|deps|all (default: all)
--report FILE Save report to file
--dry-run Preview findings without taking action
--exclude PATTERN Additional exclude patterns
# Scan Tiers
# Tier 1 (2-5 min): Large files, stale files, commented code, old TODOs
# Tier 2 (10-20 min): Dead code, duplicate patterns, import bloat
# Tier 3 (30-60 min): All above + cyclomatic complexity, dependency graphs
# Examples
/bloat-scan # Quick Tier 1 scan
/bloat-scan --level 2 --focus code # Targeted code analysis
/bloat-scan --level 3 --report Q1-audit.md # Deep audit with report
/conserve:unbloat
Safe bloat remediation with interactive approval.
# Usage
/unbloat [--approve LEVEL] [--dry-run] [--backup]
# Options
--approve LEVEL Auto-approve level: high|medium|low|all
--dry-run Show what would be removed
--backup Create backup branch before changes
--interactive Prompt for each item (default)
# Examples
/unbloat --dry-run # Preview all removals
/unbloat --approve high --backup # Auto-approve high priority, backup first
/unbloat --interactive # Approve each item manually
/conserve:optimize-context
Optimize context window usage.
# Usage
/optimize-context [--target PERCENT] [--scope PATH]
# Options
--target PERCENT Target context utilization (default: 50%)
--scope PATH Limit to specific directory
--suggest Only show suggestions, don't apply
--aggressive Apply all optimizations
# Examples
/optimize-context --target 40%
/optimize-context --scope plugins/sanctum/ --suggest
/conserve:analyze-growth
Consolidated: this command has been merged into /bloat-scan.
Analyze skill growth patterns.
# Usage (now use /bloat-scan instead)
/bloat-scan [--level 1|2|3] [--focus TYPE] [--report FILE]
# Previous /analyze-growth options are covered by:
/bloat-scan --level 2 --focus code # Growth pattern analysis
Imbue Plugin
/imbue:justify
Audit changes for AI additive bias and Iron Law compliance.
# Usage
/justify [--scope staged|branch|file] [path...]
# Examples
/justify # Audit all branch changes
/justify --scope staged # Only staged changes
/justify src/auth.py # Specific files
/imbue:catchup
Quick context recovery after session restart.
# Usage
/catchup [--depth LEVEL] [--focus AREA]
# Options
--depth LEVEL Recovery depth: shallow|standard|deep (default: standard)
--focus AREA Focus on: git|docs|issues|all
--since DATE Catch up from specific date
# Examples
/catchup # Standard recovery
/catchup --depth deep # Full context recovery
/catchup --focus git --since "3 days ago"
/imbue:feature-review
Consolidated: this command has been merged into Skill(imbue:scope-guard).
Feature prioritization and gap analysis.
# Usage (now use Skill(imbue:scope-guard) instead)
Skill(imbue:scope-guard)
# scope-guard covers feature prioritization, gap analysis,
# and anti-overengineering evaluation
/imbue:structured-review
Structured review workflow with methodology options.
# Usage
/structured-review PATH [--methodology METHOD]
# Options
--methodology METHOD Review methodology: evidence-based|checklist|formal
--todos Generate TodoWrite items
--summary Include executive summary
# Examples
/structured-review plugins/sanctum/ --methodology evidence-based
/structured-review . --todos --summary
Sanctum Plugin
/sanctum:prepare-pr (alias: /pr)
Complete PR preparation workflow.
# Usage
/prepare-pr [--no-code-review] [--reviewer-scope SCOPE] [--skip-updates] [FILE]
/pr [options...] # Alias
# Options
--no-code-review Skip automated code review (faster)
--reviewer-scope SCOPE Review strictness: strict|standard|lenient
--skip-updates Skip documentation/test updates (Phase 0)
FILE Output file for PR description (default: pr_description.md)
# Reviewer Scope Levels
# strict - All suggestions must be addressed
# standard - Critical issues must be fixed, suggestions are recommendations
# lenient - Focus on blocking issues only
# Examples
/prepare-pr # Full workflow
/pr # Alias for full workflow
/prepare-pr --skip-updates # Skip Phase 0 updates
/prepare-pr --no-code-review # Skip code review
/prepare-pr --reviewer-scope strict # Strict review for critical changes
/prepare-pr --skip-updates --no-code-review # Fastest (legacy behavior)
/sanctum:acp
Add, commit, push. Stages all changes, generates a conventional commit message, commits, and pushes to the current branch.
# Usage
/acp
/sanctum:commit-msg
Generate commit message.
# Usage
/commit-msg [--type TYPE] [--scope SCOPE]
# Options
--type TYPE Force commit type: feat|fix|docs|refactor|test|chore
--scope SCOPE Force commit scope
--breaking Include breaking change footer
--issue N Reference issue number
# Examples
/commit-msg
/commit-msg --type feat --scope api
/commit-msg --breaking --issue 42
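The flags map onto the Conventional Commits subject shape, `type(scope)!: description`. The formatter and validation pattern below are an assumed sketch of that mapping, not sanctum's implementation (in particular, --breaking may emit a footer rather than the `!` marker shown here):

```python
import re

# type(scope)!: description — the shape /commit-msg's flags map onto.
SUBJECT_RE = re.compile(
    r"^(feat|fix|docs|refactor|test|chore)"  # --type
    r"(\([a-z0-9-]+\))?"                     # --scope
    r"(!)?: .+"                              # "!" marks a breaking change
)

def format_subject(ctype, description, scope=None, breaking=False):
    """Build a Conventional Commits subject line from flag values."""
    subject = f"{ctype}({scope})" if scope else ctype
    if breaking:
        subject += "!"
    return f"{subject}: {description}"
```

For example, `format_subject("feat", "add token refresh", scope="api")` returns `feat(api): add token refresh`, which SUBJECT_RE accepts.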
/sanctum:do-issue
Fix GitHub issues.
# Usage
/do-issue ISSUE_NUMBER [--branch NAME]
# Options
--branch NAME Branch name (default: issue-N)
--auto-merge Attempt auto-merge after PR
--draft Create draft PR
# Examples
/do-issue 42
/do-issue 123 --branch fix/auth-bug
/do-issue 99 --draft
/sanctum:fix-pr
Address PR review comments.
# Usage
/fix-pr [PR_NUMBER] [--auto-resolve]
# Options
PR_NUMBER PR number (default: current branch's PR)
--auto-resolve Auto-resolve addressed comments
--batch Address all comments in batch
--interactive Address one comment at a time
# Examples
/fix-pr 42
/fix-pr --auto-resolve
/fix-pr 42 --batch
/sanctum:fix-workflow
Workflow retrospective with automatic improvement context.
# Usage
/fix-workflow [WORKFLOW_NAME] [--context]
# Options
WORKFLOW_NAME Specific workflow to analyze
--context Gather improvement context automatically
--lessons Generate lessons learned
--improvements Suggest workflow improvements
# Examples
/fix-workflow pr-review --context
/fix-workflow --lessons --improvements
/sanctum:pr-review
Enhanced PR review.
# Usage
/pr-review [PR_NUMBER] [--thorough]
# Options
PR_NUMBER PR to review (default: current)
--thorough Deep review with all checks
--quick Fast review of critical issues only
--security Security-focused review
# Examples
/pr-review 42
/pr-review --thorough
/pr-review --quick --security
/sanctum:update-docs
Update project documentation.
# Usage
/update-docs [--scope SCOPE] [--check]
# Options
--scope SCOPE Scope: all|api|readme|guides
--check Check only, don't modify
--sync Sync with code changes
# Examples
/update-docs
/update-docs --scope api
/update-docs --check
/sanctum:update-readme
Consolidated: this command has been merged into /update-docs; use /update-docs --scope readme for README-specific updates.
Modernize README.
# Usage (now use /update-docs instead)
/update-docs --scope readme
# Previous /update-readme options are covered by /update-docs:
/update-docs --scope readme # README-specific updates
/update-docs --scope all # Full documentation refresh
/sanctum:update-tests
Maintain tests.
# Usage
/update-tests [PATH] [--coverage]
# Options
PATH Test path to update
--coverage Ensure coverage targets
--missing Add missing tests
--modernize Update to modern patterns
# Examples
/update-tests tests/
/update-tests --missing --coverage
/sanctum:update-version
Bump versions.
# Usage
/update-version [VERSION] [--type TYPE]
# Options
VERSION Explicit version (e.g., 1.2.3)
--type TYPE Bump type: major|minor|patch|prerelease
--tag Create git tag
--push Push tag to remote
# Examples
/update-version 2.0.0
/update-version --type minor --tag
/update-version --type patch --tag --push
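The bump types follow semantic versioning. Setting aside prerelease handling, the arithmetic is as sketched below (an illustration of the semver rules, not the command's implementation):

```python
def bump(version: str, part: str) -> str:
    """Compute the next semantic version for a major/minor/patch bump."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"   # reset minor and patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # reset patch
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump type: {part}")
```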
/sanctum:update-dependencies
Update project dependencies.
# Usage
/update-dependencies [--type TYPE] [--dry-run]
# Options
--type TYPE Dependency type: all|prod|dev|security
--dry-run Preview updates without applying
--major Include major version updates
--security Security updates only
# Examples
/update-dependencies
/update-dependencies --dry-run
/update-dependencies --type security
/update-dependencies --major
/sanctum:git-catchup
Git repository catchup.
# Usage
/git-catchup [--since DATE] [--author AUTHOR]
# Options
--since DATE Start date for catchup
--author AUTHOR Filter by author
--branch BRANCH Specific branch
--format FORMAT Output format: summary|detailed|log
# Examples
/git-catchup --since "1 week ago"
/git-catchup --author "user@example.com"
/sanctum:create-tag
Create git tags for releases.
# Usage
/create-tag VERSION [--message MSG] [--sign]
# Options
VERSION Tag version (e.g., v1.0.0)
--message MSG Tag message
--sign Create signed tag
--push Push tag to remote
# Examples
/create-tag v1.0.0
/create-tag v1.0.0 --message "Release 1.0.0" --sign --push
Extended plugins: Memory Palace, Pensive, Parseltongue, Spec-Kit, Scribe, Scry, Hookify, Leyline
See also: Skills | Agents | Hooks | Workflows
Command Reference — Extended Plugins
Flag and option documentation for extended plugin commands (memory-palace, parseltongue, pensive, spec-kit, scribe, scry, hookify, leyline).
Core plugins: Abstract, Attune, Conserve, Imbue, Sanctum
See also: Capabilities Reference | Skills | Agents | Hooks | Workflows
Memory Palace Plugin
/memory-palace:garden
Manage digital gardens.
# Usage
/garden [ACTION] [--path PATH]
# Actions
tend Review and update garden entries
prune Remove stale/low-value entries
cultivate Add new entries from queue
status Show garden health metrics
# Options
--path PATH Garden path (default: docs/knowledge-corpus/)
--dry-run Preview changes
--score N Minimum score threshold for cultivation
# Examples
/garden tend # Review garden entries
/garden prune --dry-run # Preview what would be removed
/garden cultivate --score 70 # Add high-quality entries
/garden status # Show health metrics
/memory-palace:navigate
Search across knowledge palaces.
# Usage
/navigate QUERY [--scope SCOPE] [--type TYPE]
# Options
--scope SCOPE Search scope: local|corpus|all
--type TYPE Content type: docs|code|web|all
--limit N Maximum results (default: 10)
--relevance N Minimum relevance score
# Examples
/navigate "authentication patterns" --scope corpus
/navigate "pytest fixtures" --type docs --limit 5
/memory-palace:palace
Manage knowledge palaces.
# Usage
/palace [ACTION] [PALACE_NAME]
# Actions
create NAME Create new palace
list List all palaces
status NAME Show palace status
archive NAME Archive palace
# Options
--template TEMPLATE Palace template: session|project|topic
--from FILE Initialize from existing content
# Examples
/palace create project-x --template project
/palace list
/palace status project-x
/palace archive old-project
/memory-palace:review-room
Review items in the knowledge queue.
# Usage
/review-room [--status STATUS] [--source SOURCE]
# Options
--status STATUS Filter by status: pending|approved|rejected
--source SOURCE Filter by source: webfetch|websearch|manual
--batch N Review N items at once
--auto-score Auto-generate scores
# Examples
/review-room --status pending --batch 10
/review-room --source webfetch --auto-score
Parseltongue Plugin
/parseltongue:analyze-tests
Test suite health report.
# Usage
/analyze-tests [PATH] [--coverage] [--flaky]
# Options
--coverage Include coverage analysis
--flaky Detect potentially flaky tests
--slow N Flag tests slower than N seconds
--missing Find untested code
# Examples
/analyze-tests tests/ --coverage
/analyze-tests --flaky --slow 5
/analyze-tests src/api/ --missing
/parseltongue:run-profiler
Profile code execution.
# Usage
/run-profiler [COMMAND] [--type TYPE]
# Options
--type TYPE Profiler type: cpu|memory|line|call
--output FILE Output file for profile data
--flame Generate flame graph
--top N Show top N hotspots
# Examples
/run-profiler "python main.py" --type cpu
/run-profiler "pytest tests/" --type memory --flame
/run-profiler --type line --top 20
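For --type cpu on Python code, the underlying mechanics resemble the standard-library cProfile flow below; this is a sketch of the technique, not parseltongue's implementation:

```python
import cProfile
import io
import pstats

def hotspot() -> int:
    """Deliberately busy function so it shows up in the profile."""
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
hotspot()
profiler.disable()

# Summarize the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```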
/parseltongue:check-async
Async pattern validation.
# Usage
/check-async [PATH] [--strict]
# Options
--strict Strict async compliance
--suggest Suggest async improvements
--blocking Find blocking calls in async code
# Examples
/check-async src/ --strict
/check-async --blocking --suggest
Pensive Plugin
/pensive:full-review
Unified code review.
# Usage
/full-review [PATH] [--scope SCOPE] [--output FILE]
# Options
--scope SCOPE Review scope: changed|staged|all
--output FILE Save review to file
--severity MIN Minimum severity: critical|high|medium|low
--categories Include categories: bugs|security|style|perf
# Examples
/full-review src/ --scope staged
/full-review --scope changed --severity high
/full-review . --output review.md --categories bugs,security
/pensive:code-review
Expert code review.
# Usage
/code-review [FILES...] [--focus FOCUS]
# Options
--focus FOCUS Focus area: bugs|api|tests|security|style
--evidence Include evidence logging
--lsp Enable LSP-enhanced review (requires ENABLE_LSP_TOOL=1)
# Examples
/code-review src/api.py --focus bugs
/code-review --focus security --evidence
ENABLE_LSP_TOOL=1 /code-review src/ --lsp
/pensive:architecture-review
Architecture assessment.
# Usage
/architecture-review [PATH] [--depth DEPTH]
# Options
--depth DEPTH Analysis depth: surface|standard|deep
--patterns Identify architecture patterns
--anti-patterns Flag anti-patterns
--suggestions Generate improvement suggestions
# Examples
/architecture-review src/ --depth deep
/architecture-review --patterns --anti-patterns
/pensive:rust-review
Rust-specific review.
# Usage
/rust-review [PATH] [--safety]
# Options
--safety Focus on unsafe code analysis
--lifetimes Analyze lifetime patterns
--memory Memory safety review
--perf Performance-focused review
# Examples
/rust-review src/lib.rs --safety
/rust-review --lifetimes --memory
/pensive:test-review
Test quality review.
# Usage
/test-review [PATH] [--coverage]
# Options
--coverage Include coverage analysis
--patterns Review test patterns (AAA, BDD)
--flaky Detect flaky test patterns
--gaps Find testing gaps
# Examples
/test-review tests/ --coverage
/test-review --patterns --gaps
/pensive:shell-review
Shell script safety and portability review.
# Usage
/shell-review [FILES...] [--strict]
# Options
--strict Strict POSIX compliance
--security Security-focused review
--portability Check cross-shell compatibility
# Examples
/shell-review scripts/*.sh --strict
/shell-review --security install.sh
/pensive:skill-review
Analyze skill runtime metrics and stability. This is the canonical command for skill performance analysis (execution counts, success rates, stability gaps). For static quality analysis (frontmatter, structure), use abstract:skill-auditor.
# Usage
/skill-review [--plugin PLUGIN] [--recommendations]
# Options
--plugin PLUGIN Limit to specific plugin
--all-plugins Aggregate metrics across all plugins
--unstable-only Only show skills with stability_gap > 0.3
--skill NAME Deep-dive specific skill
--recommendations Generate improvement recommendations
# Examples
/skill-review --plugin sanctum
/skill-review --unstable-only
/skill-review --skill imbue:proof-of-work
/skill-review --all-plugins --recommendations
Spec-Kit Plugin
/speckit-startup
Bootstrap specification workflow.
# Usage
/speckit-startup [--dir DIR]
# Options
--dir DIR Specification directory (default: .specify/)
--template Use template structure
--minimal Minimal specification setup
# Examples
/speckit-startup
/speckit-startup --dir specs/
/speckit-startup --minimal
/speckit-clarify
Generate clarifying questions.
# Usage
/speckit-clarify [TOPIC] [--rounds N]
# Options
TOPIC Topic to clarify
--rounds N Number of question rounds
--depth Deep clarification
--technical Technical focus
# Examples
/speckit-clarify "user authentication"
/speckit-clarify --rounds 3 --technical
/speckit-specify
Create specification.
# Usage
/speckit-specify [--from FILE] [--output DIR]
# Options
--from FILE Input source (brainstorm, requirements)
--output DIR Output directory
--type TYPE Spec type: full|api|data|ui
# Examples
/speckit-specify --from requirements.md
/speckit-specify --type api --output .specify/
/speckit-plan
Generate implementation plan.
# Usage
/speckit-plan [--from SPEC] [--phases]
# Options
--from SPEC Source specification
--phases Include phase breakdown
--estimates Include time estimates
--dependencies Show task dependencies
# Examples
/speckit-plan --from .specify/spec.md
/speckit-plan --phases --estimates
/speckit-tasks
Generate task breakdown.
# Usage
/speckit-tasks [--from PLAN] [--parallel]
# Options
--from PLAN Source plan
--parallel Mark parallelizable tasks
--granularity Task granularity: coarse|medium|fine
--assignable Make tasks assignable
# Examples
/speckit-tasks --from .specify/plan.md
/speckit-tasks --parallel --granularity fine
/speckit-implement
Execute implementation plan.
# Usage
/speckit-implement [--phase PHASE] [--task ID] [--continue]
# Options
--phase PHASE Execute specific phase
--task ID Execute specific task
--continue Resume from checkpoint
--parallel Enable parallel execution
# Examples
/speckit-implement --phase setup
/speckit-implement --task T1.2
/speckit-implement --continue
/speckit-checklist
Generate implementation checklist.
# Usage
/speckit-checklist [--type TYPE] [--output FILE]
# Options
--type TYPE Checklist type: ux|test|security|deployment
--output FILE Output file
--interactive Interactive completion mode
# Examples
/speckit-checklist --type security
/speckit-checklist --type ux --output checklists/ux.md
/speckit-analyze
Check artifact consistency.
# Usage
/speckit-analyze [--strict] [--fix]
# Options
--strict Strict consistency checking
--fix Auto-fix inconsistencies
--report Generate consistency report
# Examples
/speckit-analyze
/speckit-analyze --strict --report
Scribe Plugin
/slop-scan
Consolidated: this command wrapper has been removed; slop-scan is now agent-only. Invoke it with Agent(scribe:slop-hunter).
Scan files for AI-generated content markers.
# Usage (now agent-only)
Agent(scribe:slop-hunter)
# Or use the slop-detector skill directly:
Skill(scribe:slop-detector)
/style-learn
Create style profile from examples.
# Usage
/style-learn [FILES] --name NAME
# Options
FILES Example files to learn from
--name NAME Profile name
--merge Merge with existing profile
# Examples
/style-learn good-examples/*.md --name house-style
/style-learn docs/api.md --name api-docs --merge
/doc-polish
Clean up AI-generated content.
# Usage
/doc-polish [FILES] [--style NAME] [--dry-run]
# Options
FILES Files to polish
--style NAME Apply learned style
--dry-run Preview changes without writing
# Examples
/doc-polish README.md
/doc-polish docs/*.md --style house-style
/doc-polish **/*.md --dry-run
/doc-generate
Generate new documentation.
# Usage
/doc-generate TYPE [--style NAME] [--output FILE]
# Options
TYPE Document type: readme|api|changelog|usage
--style NAME Apply learned style
--output FILE Output file path
# Examples
/doc-generate readme
/doc-generate api --style api-docs
/doc-generate changelog --output CHANGELOG.md
/doc-verify
Consolidated: this command wrapper has been removed; doc-verify is now agent-only. Invoke it with Agent(scribe:doc-verifier).
Validate documentation claims with proof-of-work.
# Usage (now agent-only)
Agent(scribe:doc-verifier)
# Or use the doc-generator skill with verification mode:
Skill(scribe:doc-generator)
Scry Plugin
/scry:record-terminal
Create terminal recording.
# Usage
/record-terminal [COMMAND] [--output FILE] [--format FORMAT]
# Options
COMMAND Command to record
--output FILE Output file (default: recording.gif)
--format FORMAT Output format: gif|svg|mp4|tape
--width N Terminal width
--height N Terminal height
--speed N Playback speed multiplier
# Examples
/record-terminal "make test" --output demo.gif
/record-terminal --format svg --width 80 --height 24
/scry:record-browser
Record browser session.
# Usage
/record-browser [URL] [--output FILE] [--actions FILE]
# Options
URL Starting URL
--output FILE Output file
--actions FILE Playwright actions script
--headless Run headless
--viewport WxH Viewport size
# Examples
/record-browser "http://localhost:3000" --output demo.mp4
/record-browser --actions test-flow.js --headless
Hookify Plugin
/hookify:install
Install hooks.
# Usage
/hookify:install [HOOK_NAME] [--plugin PLUGIN]
# Options
HOOK_NAME Specific hook to install
--plugin PLUGIN Install hooks from plugin
--all Install all available hooks
--dry-run Preview installation
# Examples
/hookify:install memory-palace-web-processor
/hookify:install --plugin conserve
/hookify:install --all --dry-run
/hookify:configure
Configure hook settings.
# Usage
/hookify:configure [HOOK_NAME] [--enable|--disable] [--set KEY=VALUE]
# Options
HOOK_NAME Hook to configure
--enable Enable hook
--disable Disable hook
--set KEY=VALUE Set configuration value
--reset Reset to defaults
# Examples
/hookify:configure memory-palace --set research_mode=cache_first
/hookify:configure context-warning --disable
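The --set flag accepts repeated KEY=VALUE pairs. Parsing them amounts to splitting on the first `=`, as in this hypothetical sketch (not hookify's actual parser):

```python
def parse_settings(pairs):
    """Turn ["key=value", ...] flag arguments into a settings dict."""
    settings = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")  # split on the first "=" only
        if not sep or not key:
            raise ValueError(f"expected KEY=VALUE, got: {pair}")
        settings[key] = value
    return settings
```

Splitting on the first `=` lets values themselves contain `=` characters.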
/hookify:list
List installed hooks.
# Usage
/hookify:list [--plugin PLUGIN] [--status]
# Options
--plugin PLUGIN Filter by plugin
--status Show enabled/disabled status
--verbose Show full configuration
# Examples
/hookify:list
/hookify:list --plugin memory-palace --status
Leyline Plugin
/leyline:reinstall-all-plugins
Refresh all plugins.
# Usage
/reinstall-all-plugins [--force] [--clean]
# Options
--force Force reinstall even if up-to-date
--clean Clean install (remove then reinstall)
--verify Verify installation after reinstall
# Examples
/reinstall-all-plugins
/reinstall-all-plugins --clean --verify
/leyline:update-all-plugins
Update all plugins.
# Usage
/update-all-plugins [--check] [--exclude PLUGINS]
# Options
--check Check for updates only
--exclude PLUGINS Comma-separated plugins to skip
--major Include major version updates
# Examples
/update-all-plugins
/update-all-plugins --check
/update-all-plugins --exclude "experimental,beta"
Core plugins: Abstract, Attune, Conserve, Imbue, Sanctum
See also: Skills | Agents | Hooks | Workflows
Superpowers Integration
How Claude Night Market plugins integrate with the superpowers skills.
Last synced: superpowers v5.0.7 (2026-03-31)
Overview
Many Night Market capabilities work best alongside superpowers. While every plugin works standalone, superpowers provides foundational methodology skills that enhance their workflows.
Since v4.0.0, superpowers enforces workflows via hard gates, DOT flowcharts, and mandatory checklists rather than simply describing them. Since v5.0.6, inline self-review replaces subagent review loops, cutting review overhead from ~25 minutes to ~30 seconds.
Installation
# Add the superpowers marketplace
/plugin marketplace add obra/superpowers
# Install the superpowers plugin
/plugin install superpowers@superpowers-marketplace
Dependency Matrix
| Plugin | Component | Type | Superpowers Dependency | Enhancement |
|---|---|---|---|---|
| abstract | /create-skill | Command | brainstorming | Socratic questioning |
| abstract | /create-command | Command | brainstorming | Concept development |
| abstract | /create-hook | Command | brainstorming | Security design |
| abstract | /test-skill | Command | test-driven-development | TDD methodology |
| sanctum | /pr | Command | receiving-code-review, requesting-code-review | PR validation |
| sanctum | /pr-review | Command | receiving-code-review | PR analysis |
| sanctum | /fix-pr | Command | receiving-code-review | Comment resolution |
| sanctum | /do-issue | Command | subagent-driven-development, dispatching-parallel-agents, using-git-worktrees | Full workflow |
| spec-kit | /speckit-clarify | Command | brainstorming | Clarification |
| spec-kit | /speckit-plan | Command | writing-plans | Planning |
| spec-kit | /speckit-tasks | Command | executing-plans, systematic-debugging | Task breakdown |
| spec-kit | /speckit-implement | Command | executing-plans, systematic-debugging | Execution |
| spec-kit | /speckit-analyze | Command | systematic-debugging, verification-before-completion | Consistency |
| spec-kit | /speckit-checklist | Command | verification-before-completion | Validation |
| pensive | /full-review | Command | systematic-debugging, verification-before-completion | Debugging + evidence |
| parseltongue | python-testing | Skill | test-driven-development (includes testing-anti-patterns) | TDD + anti-patterns |
| imbue | scope-guard, proof-of-work | Skill | brainstorming, writing-plans, executing-plans, verification-before-completion | Anti-overengineering, evidence-based completion |
| conserve | /optimize-context | Command | systematic-debugging (includes condition-based-waiting) | Smart waiting |
| minister | issue-management | Skill | systematic-debugging | Bug investigation |
Superpowers Skills Referenced
| Skill | Purpose | Used By |
|---|---|---|
| brainstorming | Socratic questioning with hard gates and visual companion | abstract, spec-kit, imbue |
| test-driven-development | RED-GREEN-REFACTOR TDD cycle (includes testing-anti-patterns) | abstract, sanctum, parseltongue |
| receiving-code-review | Technical rigor for evaluating suggestions | sanctum |
| requesting-code-review | Quality gates for code submission | sanctum |
| writing-plans | Structured planning with inline self-review | spec-kit, imbue |
| executing-plans | Continuous task execution (no longer batches) | spec-kit |
| systematic-debugging | Four-phase framework (includes root-cause-tracing, defense-in-depth, condition-based-waiting) | spec-kit, pensive, minister, conserve |
| verification-before-completion | Evidence-based review standards | spec-kit, pensive, imbue |
| subagent-driven-development | Autonomous subagent orchestration (mandatory on capable harnesses) | sanctum |
| dispatching-parallel-agents | Parallel task dispatch for 2+ independent tasks | sanctum |
| using-git-worktrees | Isolated implementation in feature branches | sanctum |
| finishing-a-development-branch | Branch cleanup, merge strategy, and finalization | sanctum |
| writing-skills | Skill authoring with description trap guidance | abstract |
Graceful Degradation
All Night Market plugins work without superpowers:
Without Superpowers
- Commands: Execute core functionality
- Skills: Provide standalone guidance
- Agents: Function with reduced automation
With Superpowers
- Commands: Enhanced methodology phases
- Skills: Integrated methodology patterns
- Agents: Full automation depth
Skill Consolidation Notes (v4.0.0+)
Several standalone skills were merged into parent skills:
| Former Standalone | Now Bundled In | Access |
|---|---|---|
| testing-anti-patterns | test-driven-development | Module file within TDD skill |
| root-cause-tracing | systematic-debugging | Module file within debugging skill |
| defense-in-depth | systematic-debugging | Module file within debugging skill |
| condition-based-waiting | systematic-debugging | Module file within debugging skill |
Deprecated Commands
These superpowers slash commands have shown deprecation notices since v5.0.0. Use the skill equivalents instead:
| Deprecated | Replacement |
|---|---|
/brainstorm | Skill(superpowers:brainstorming) |
/write-plan | Skill(superpowers:writing-plans) |
/execute-plan | Skill(superpowers:executing-plans) |
Key Patterns
Inline Self-Review (v5.0.6)
Superpowers replaced subagent review loops with inline self-review checklists. This cut review time from ~25 minutes to ~30 seconds with comparable defect detection. Night Market review workflows (pensive, sanctum, imbue) should follow this pattern when delegating to superpowers.
SUBAGENT-STOP Gate
Superpowers skills include <SUBAGENT-STOP> blocks that prevent subagents from activating full skill workflows. Night Market dispatch patterns (sanctum:do-issue, conserve:clear-context) should be aware of this when delegating work to subagents with superpowers installed.
Instruction Priority Hierarchy
Superpowers enforces: User instructions > Superpowers skills > Default system prompt. Night Market commands respect this ordering when combining skill invocations.
Context Isolation
All superpowers delegation skills now scope subagent context explicitly. Night Market’s parallel execution patterns should follow the same principle.
Example: /do-issue Workflow
Without Superpowers
1. Parse issue
2. Analyze codebase
3. Implement fix
4. Create PR
With Superpowers
1. Parse issue
2. [using-git-worktrees] Create isolated worktree
3. [subagent-driven-development] Plan subagent tasks
4. [dispatching-parallel-agents] Dispatch parallel work
5. [writing-plans] Create structured plan
6. [test-driven-development] Write failing test
7. Implement fix
8. [requesting-code-review] Inline self-review
9. [finishing-a-development-branch] Cleanup and merge
10. Create PR
Recommended Setup
For the full Night Market experience:
# 1. Add marketplaces
/plugin marketplace add obra/superpowers
/plugin marketplace add athola/claude-night-market
# 2. Install superpowers (foundational)
/plugin install superpowers@superpowers-marketplace
# 3. Install Night Market plugins
/plugin install sanctum@claude-night-market
/plugin install spec-kit@claude-night-market
/plugin install pensive@claude-night-market
Checking Integration
Verify superpowers is available:
/plugin list
# Should show superpowers@superpowers-marketplace
Commands will automatically detect and use superpowers when available.
Function Extraction Guidelines
Last Updated: 2025-12-06
Overview
This document defines standards and guidelines for function extraction and refactoring in the Claude Night Market plugin ecosystem. Following these guidelines keeps code maintainable, testable, and readable.
Principles
1. Single Responsibility Principle (SRP)
A function should have only one reason to change.
2. Keep Functions Small
- Ideal: 10-20 lines of code
- Acceptable: 20-30 lines with clear logic
- Maximum: 50 lines with strong justification
- Never exceed 100 lines without splitting
3. Limited Parameters
- Ideal: 0-3 parameters
- Acceptable: 4-5 parameters with clear types
- Consider object parameter if 6+ parameters
4. Clear Naming
- Functions should be verbs that describe their action
- Use consistent naming conventions across the codebase
- Avoid abbreviations unless widely understood
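The size thresholds above can be checked mechanically. A minimal sketch using Python's ast module (the names here are illustrative, not part of the codebase):

```python
import ast

MAX_LINES = 30  # "immediate extraction" threshold from these guidelines

def find_long_functions(source: str, max_lines: int = MAX_LINES):
    """Return (name, line_count) pairs for functions exceeding max_lines."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                offenders.append((node.name, length))
    return offenders
```

Running this over a module surfaces extraction candidates before review. Note that decorators sit above `node.lineno`, so decorated functions report slightly short.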
When to Extract Functions
Immediate Extraction Required
- Function exceeds 30 lines

  # BAD - Too long
  def process_large_content(content):
      lines = content.split('\n')
      filtered_lines = []
      for line in lines:
          if line.strip():
              if not line.startswith('#'):
                  if len(line) < 100:
                      filtered_lines.append(line.strip())
      # ... 20 more lines

- Function has multiple responsibilities

  # BAD - Multiple responsibilities
  def analyze_and_optimize(content):
      # Analysis part
      complexity = calculate_complexity(content)
      quality = assess_quality(content)
      # Optimization part
      optimized = remove_redundancy(content)
      optimized = shorten_sentences(optimized)
      return optimized, complexity, quality

- Nested function depth exceeds 3 levels

  # BAD - Too nested
  def process_data(data):
      if data:
          for item in data:
              if item.valid:
                  for subitem in item.children:
                      if subitem.active:
                          # Deep nesting - extract this
                          process_subitem(subitem)
Consider Extraction
- Function has 4+ parameters

  # CONSIDER - Many parameters
  def create_report(title, content, author, date, format, include_header, include_footer):
      pass

  # BETTER - Use configuration object
  from dataclasses import dataclass
  from datetime import datetime

  @dataclass
  class ReportConfig:
      title: str
      content: str
      author: str
      date: datetime
      format: str = "pdf"
      include_header: bool = True
      include_footer: bool = True

  def create_report(config: ReportConfig):
      pass

- Complex conditional logic

  # CONSIDER - Complex conditions
  def calculate_rate(user, product, time, location, special_offer):
      if user.premium and product.category in ["electronics", "books"]:
          if time.hour < 12 and location.country == "US":
              if special_offer and not user.used_recently:
                  return 0.9
      # ... more conditions

  # BETTER - Extract condition checks
  def _is_eligible_for_discount(user, product, time, location, special_offer):
      return (user.premium
              and product.category in ["electronics", "books"]
              and time.hour < 12
              and location.country == "US"
              and special_offer
              and not user.used_recently)
Extraction Patterns
1. Extract Method Pattern
Before:
def generate_report(data):
# Validate data
if not data:
raise ValueError("Data cannot be empty")
if not all(isinstance(item, dict) for item in data):
raise TypeError("All items must be dictionaries")
# Process data
processed = []
for item in data:
processed_item = {
'id': item.get('id'),
'name': item.get('name', '').title(),
'value': float(item.get('value', 0))
}
processed.append(processed_item)
# Calculate totals
total = sum(item['value'] for item in processed)
average = total / len(processed) if processed else 0
return {
'items': processed,
'summary': {
'total': total,
'average': average,
'count': len(processed)
}
}
After:
def generate_report(data):
"""Generate a report from data items."""
_validate_data(data)
processed_items = _process_data_items(data)
summary = _calculate_summary(processed_items)
return {
'items': processed_items,
'summary': summary
}
def _validate_data(data):
"""Validate input data."""
if not data:
raise ValueError("Data cannot be empty")
if not all(isinstance(item, dict) for item in data):
raise TypeError("All items must be dictionaries")
def _process_data_items(data):
"""Process individual data items."""
return [
{
'id': item.get('id'),
'name': item.get('name', '').title(),
'value': float(item.get('value', 0))
}
for item in data
]
def _calculate_summary(items):
"""Calculate summary statistics."""
total = sum(item['value'] for item in items)
return {
'total': total,
'average': total / len(items) if items else 0,
'count': len(items)
}
2. Strategy Pattern for Complex Logic
Before:
def optimize_content(content, strategy_type):
if strategy_type == "aggressive":
# Remove all emphasis
lines = content.split('\n')
cleaned = []
for line in lines:
if not line.strip().startswith('**'):
cleaned.append(line)
return '\n'.join(cleaned)
elif strategy_type == "moderate":
# Shorten code blocks
# ... 20 lines of logic
elif strategy_type == "gentle":
# Only remove images
# ... 20 lines of logic
After:
from abc import ABC, abstractmethod
class OptimizationStrategy(ABC):
"""Base class for content optimization strategies."""
@abstractmethod
def optimize(self, content: str) -> str:
"""Optimize content according to strategy."""
pass
class AggressiveOptimizationStrategy(OptimizationStrategy):
"""Aggressive content optimization."""
def optimize(self, content: str) -> str:
lines = content.split('\n')
cleaned = [
line for line in lines
if not line.strip().startswith('**')
]
return '\n'.join(cleaned)
class ModerateOptimizationStrategy(OptimizationStrategy):
"""Moderate content optimization."""
def optimize(self, content: str) -> str:
# Implementation for moderate optimization
pass
class GentleOptimizationStrategy(OptimizationStrategy):
"""Gentle content optimization."""
def optimize(self, content: str) -> str:
# Implementation for gentle optimization
pass
# Strategy registry
OPTIMIZATION_STRATEGIES = {
"aggressive": AggressiveOptimizationStrategy(),
"moderate": ModerateOptimizationStrategy(),
"gentle": GentleOptimizationStrategy()
}
def optimize_content(content: str, strategy_type: str) -> str:
"""Optimize content using specified strategy."""
if strategy_type not in OPTIMIZATION_STRATEGIES:
raise ValueError(f"Unknown strategy: {strategy_type}")
strategy = OPTIMIZATION_STRATEGIES[strategy_type]
return strategy.optimize(content)
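A variation worth considering: the registry can populate itself through a registration decorator, so adding a new strategy never touches the dispatch code. A self-contained sketch with simplified names (not the module's actual API):

```python
from abc import ABC, abstractmethod

STRATEGIES = {}

def register(name: str):
    """Class decorator: instantiate the strategy and add it to the registry."""
    def wrap(cls):
        STRATEGIES[name] = cls()
        return cls
    return wrap

class Strategy(ABC):
    @abstractmethod
    def optimize(self, content: str) -> str: ...

@register("aggressive")
class Aggressive(Strategy):
    """Drop lines that start with bold markers."""
    def optimize(self, content: str) -> str:
        return "\n".join(
            line for line in content.split("\n")
            if not line.strip().startswith("**")
        )

def optimize_content(content: str, strategy_type: str) -> str:
    if strategy_type not in STRATEGIES:
        raise ValueError(f"Unknown strategy: {strategy_type}")
    return STRATEGIES[strategy_type].optimize(content)
```

The trade-off is import-order sensitivity: a strategy only exists in the registry once its module has been imported.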
3. Builder Pattern for Complex Construction
Before:
def create_complex_object(name, type, config, options, metadata):
obj = ComplexObject()
obj.name = name
obj.type = type
# Complex configuration
if config.get('enabled', True):
obj.enabled = True
obj.timeout = config.get('timeout', 30)
obj.retries = config.get('retries', 3)
# Options processing
for key, value in options.items():
if key.startswith('custom_'):
obj.custom_fields[key[7:]] = value
else:
setattr(obj, key, value)
# Metadata handling
obj.created_at = metadata.get('created_at', datetime.now())
obj.created_by = metadata.get('created_by', 'system')
return obj
After:
from datetime import datetime
from typing import Any, Dict

class ComplexObjectBuilder:
"""Builder for ComplexObject instances."""
def __init__(self):
self._object = ComplexObject()
def with_name(self, name: str) -> 'ComplexObjectBuilder':
self._object.name = name
return self
def with_type(self, obj_type: str) -> 'ComplexObjectBuilder':
self._object.type = obj_type
return self
def with_config(self, config: Dict[str, Any]) -> 'ComplexObjectBuilder':
self._object.enabled = config.get('enabled', True)
self._object.timeout = config.get('timeout', 30)
self._object.retries = config.get('retries', 3)
return self
def with_options(self, options: Dict[str, Any]) -> 'ComplexObjectBuilder':
for key, value in options.items():
if key.startswith('custom_'):
self._object.custom_fields[key[7:]] = value
else:
setattr(self._object, key, value)
return self
def with_metadata(self, metadata: Dict[str, Any]) -> 'ComplexObjectBuilder':
self._object.created_at = metadata.get('created_at', datetime.now())
self._object.created_by = metadata.get('created_by', 'system')
return self
def build(self) -> ComplexObject:
return self._object
# Usage
def create_complex_object(name, type, config, options, metadata):
return (ComplexObjectBuilder()
.with_name(name)
.with_type(type)
.with_config(config)
.with_options(options)
.with_metadata(metadata)
.build())
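To exercise the builder in isolation, a minimal stand-in for ComplexObject is enough. The dataclass below is hypothetical; the real class lives elsewhere in the codebase:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict

@dataclass
class ComplexObject:
    # Hypothetical stand-in exposing only the fields the builder touches
    name: str = ""
    type: str = ""
    enabled: bool = True
    timeout: int = 30
    retries: int = 3
    custom_fields: Dict[str, Any] = field(default_factory=dict)
    created_at: datetime = field(default_factory=datetime.now)
    created_by: str = "system"
```

With this stub in scope, the builder chain shown above runs end to end in a unit test without any real dependencies.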
Testing Extracted Functions
1. Unit Test Each Extracted Function
# Test for _validate_data
def test_validate_data_valid():
data = [{'id': 1, 'name': 'test'}]
# Should not raise
_validate_data(data)
def test_validate_data_empty():
with pytest.raises(ValueError, match="Data cannot be empty"):
_validate_data([])
def test_validate_data_invalid_type():
with pytest.raises(TypeError, match="All items must be dictionaries"):
_validate_data([{'id': 1}, "invalid"])
2. Test Strategy Implementations
def test_aggressive_optimization():
content = "**Bold text**\nNormal text\n**More bold**"
strategy = AggressiveOptimizationStrategy()
result = strategy.optimize(content)
assert "Normal text" in result
assert "**" not in result
3. Integration Tests
def test_generate_report_integration():
data = [
{'id': 1, 'name': 'test item', 'value': 100},
{'id': 2, 'name': 'another item', 'value': 200}
]
report = generate_report(data)
assert report['summary']['total'] == 300
assert report['summary']['average'] == 150
assert len(report['items']) == 2
Code Review Checklist
When reviewing code for function extraction:
Function Size
- Function is under 30 lines
- If over 30 lines, there’s a clear justification
- No function exceeds 100 lines
Responsibilities
- Function has a single, clear purpose
- Function name describes its purpose accurately
- Function doesn’t mix abstraction levels
Parameters
- Function has 0-5 parameters
- Parameters are well-typed
- Related parameters are grouped into objects
Complexity
- Cyclomatic complexity is under 10
- Nesting depth is under 4 levels
- No deeply nested ternary operators
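The complexity bound can be approximated without external tools. A rough estimator that counts branch points (a simplification of what radon computes, not its exact metric):

```python
import ast

# Node types that add a decision point (approximate; radon's rules differ slightly)
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.IfExp, ast.BoolOp, ast.comprehension)

def estimate_complexity(func_source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of branch points."""
    tree = ast.parse(func_source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
```

Anything scoring near 10 under even this crude count is a strong extraction candidate.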
Testability
- Function can be tested independently
- Function has no hidden dependencies
- Side effects are clearly documented
Documentation
- Function has a clear docstring
- Parameters are documented
- Return value is documented
- Exceptions are documented
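A docstring that satisfies all four checklist items, shown in Google style (one common convention; match whatever the codebase already uses). The function itself is a hypothetical example:

```python
def parse_rate(raw: str, default: float = 0.0) -> float:
    """Parse a rate value from user input.

    Args:
        raw: Raw string to parse, e.g. "0.15" or "15%".
        default: Value returned when raw is empty.

    Returns:
        The parsed rate as a float.

    Raises:
        ValueError: If raw is non-empty but not a valid number.
    """
    raw = raw.strip()
    if not raw:
        return default
    if raw.endswith('%'):
        return float(raw[:-1]) / 100
    return float(raw)
```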
Refactoring Workflow
1. Identify Refactoring Candidates
# Find long files (the likely homes of long functions)
find . -name "*.py" -exec wc -l {} \; | sort -n | tail -20
# Find complex functions (manual code review)
# Look for functions with:
# - Multiple return statements
# - Deep nesting
# - Many parameters
# - Mixed responsibilities
2. Create Tests First
# Write passing characterization tests that pin down current behavior
def test_existing_behavior():
    # Exercise the function as it exists now and assert on its output
    data = [{'id': 1, 'name': 'test', 'value': 50}]
    report = generate_report(data)
    assert report['summary']['total'] == 50
3. Extract Incrementally
- Extract small, private helper functions
- Run tests after each extraction
- Gradually extract larger functions
- Keep the public API stable
4. Optimize Imports and Dependencies
- Remove unused imports
- Group related imports
- Consider circular dependency issues
5. Update Documentation
- Update function docstrings
- Update API documentation
- Add examples for complex functions
Tools and Automation
1. Complexity Analysis
# Using radon (complexity analyzer)
pip install radon
radon cc your_file.py -a
# Using flake8 (bundles the mccabe complexity checker)
pip install flake8
flake8 --max-complexity 10 your_file.py
2. Automated Refactoring Tools
# Using rope (refactoring library; it has no standalone CLI,
# so use it through editor/IDE integrations or its Python API)
pip install rope
# Using black for formatting (maintains consistency)
pip install black
black your_file.py
3. Pre-commit Hooks
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 4.0.1
    hooks:
      - id: flake8
        args: [--max-complexity=10, --max-line-length=100]
  - repo: https://github.com/psf/black
    rev: 22.3.0
    hooks:
      - id: black
        language_version: python3
Examples from the Codebase
Before: GrowthController.generate_control_strategies()
The original function was 60+ lines and handled multiple responsibilities.
After Refactoring:
def generate_control_strategies(self, growth_rate: float) -> StrategyPlan:
"""Generate detailed control strategies for growth management."""
strategies = self._select_control_strategies(growth_rate)
monitoring = self._define_monitoring_needs(strategies)
implementation = self._plan_implementation(strategies, monitoring)
return StrategyPlan(strategies, monitoring, implementation)
def _select_control_strategies(self, growth_rate: float) -> List[Strategy]:
"""Select appropriate control strategies based on growth rate."""
# Extracted strategy selection logic
def _define_monitoring_needs(self, strategies: List[Strategy]) -> MonitoringPlan:
"""Define monitoring requirements for selected strategies."""
# Extracted monitoring logic
def _plan_implementation(self, strategies: List[Strategy],
monitoring: MonitoringPlan) -> ImplementationPlan:
"""Plan implementation steps for strategies and monitoring."""
# Extracted implementation planning
This refactoring:
- Reduced main function to 5 lines
- Created three focused helper functions
- Made each function independently testable
- Improved readability and maintainability
Conclusion
Following these function extraction guidelines will:
- Improve Maintainability: Smaller, focused functions are easier to understand and modify
- Enhance Testability: Each function can be tested in isolation
- Increase Reusability: Extracted functions can be reused in different contexts
- Reduce Bugs: Simpler functions have fewer edge cases and are easier to verify
- Improve Code Review: Smaller functions are easier to review and understand
Remember: The goal is not just to make functions smaller, but to make the code more readable, maintainable, and testable.
Achievement System
Track your learning progress through the Claude Night Market documentation.
How It Works
As you explore the documentation, complete tutorials, and try plugins, you earn achievements. Progress is saved in your browser’s local storage.
Your Progress
Available Achievements
Getting Started
| Achievement | Description | Status |
|---|---|---|
| Marketplace Pioneer | Add the Night Market marketplace | |
| Skill Apprentice | Use your first skill | |
| PR Pioneer | Prepare your first pull request |
Documentation Explorer
| Achievement | Description | Status |
|---|---|---|
| Plugin Explorer | Read all plugin documentation pages | |
| Domain Master | Use all domain specialist plugins |
Tutorial Completion
| Achievement | Description | Status |
|---|---|---|
| First Steps | Complete Your First Session | |
| Full Cycle | Complete Feature Development Lifecycle | |
| PR Pro | Complete Code Review and PR Workflow | |
| Bug Squasher | Complete Debugging and Issue Resolution | |
| Knowledge Keeper | Complete Memory Palace tutorial | |
| Tutorial Master | Complete all tutorials |
Plugin Mastery
| Achievement | Description | Status |
|---|---|---|
| Foundation Builder | Install all foundation layer plugins | |
| Utility Expert | Install all utility layer plugins | |
| Full Stack | Install all plugins |
Advanced
| Achievement | Description | Status |
|---|---|---|
| Spec Master | Complete a full spec-kit workflow | |
| Review Expert | Complete a full pensive review | |
| Palace Architect | Build your first memory palace |
Reset Progress
Warning: This cannot be undone.
Achievement Tiers
| Tier | Achievements | Badge |
|---|---|---|
| Bronze | 1-5 | Night Market Visitor |
| Silver | 6-10 | Night Market Regular |
| Gold | 11-14 | Night Market Expert |
| Platinum | 15 | Night Market Master |