
Claude Night Market

Claude Night Market contains 16 plugins for Claude Code that automate git operations, code review, and specification-driven development. Each plugin operates independently, allowing you to install only the components required for your specific workflow.

Architecture

The ecosystem uses a layered architecture to manage dependencies and token usage.

  1. Domain Specialists: Plugins like pensive (code review) and minister (issue tracking) provide high-level task automation.
  2. Utility Layer: Provides resource management services, such as token conservation in conserve.
  3. Foundation Layer: Implements core mechanics used across the ecosystem, including permission handling in sanctum.
  4. Meta Layer: abstract provides tools for cross-plugin validation and enforcement of project standards.

Design Philosophy

The project prioritizes token efficiency through shallow dependency chains. Progressive loading ensures that plugin logic enters the system prompt only when a specific feature is active. We enforce a “specification-first” workflow, requiring a written design phase before code generation begins.

Claude Code Integration

Plugins require Claude Code 2.1.0 or later to use features like:

  • Hot-reloading: Skills update immediately upon file modification.
  • Context Forking: Risky operations run in isolated context windows.
  • Lifecycle Hooks: Frontmatter hooks execute logic at specific execution points.
  • Wildcard Permissions: Pre-approved tool access reduces manual confirmation prompts.

Integration with Superpowers

These plugins integrate with the superpowers marketplace. While Night Market handles high-level process and workflow orchestration, superpowers provides the underlying methodology for TDD, debugging, and execution analysis.

Quick Start

# 1. Add the marketplace
/plugin marketplace add athola/claude-night-market

# 2. Install a plugin
/plugin install sanctum@claude-night-market

# 3. Use a command
/pr

# 4. Invoke a skill
Skill(sanctum:git-workspace-review)

Getting Started

This section will guide you through setting up Claude Night Market and using your first plugins.

Overview

This section covers:

  • Installing the marketplace and plugins
  • Invoking skills, commands, and agents
  • Plugin dependency structure

Prerequisites

  1. Claude Code installed and configured.
  2. A terminal.
  3. Git (for version control workflows).

Quick Overview

The Claude Night Market provides three types of capabilities:

| Type | Description | How to Use |
|------|-------------|------------|
| Skills | Reusable methodology guides | Skill(plugin:skill-name) |
| Commands | Quick actions with slash syntax | /command-name |
| Agents | Autonomous task executors | Referenced in skill workflows |

Sections

  1. Installation: Add the marketplace and install plugins
  2. Your First Plugin: Hands-on tutorial with sanctum
  3. Quick Start Guide: Common workflows and patterns

Achievement: Getting Started

Complete the installation steps to unlock the Marketplace Pioneer badge.

Install the marketplace to unlock: Marketplace Pioneer

Installation

This guide walks you through adding the Claude Night Market to your Claude Code setup.

Prerequisites

  • Claude Code 2.1.16+ (2.1.32+ for agent teams features)
  • Python 3.9+ — required for hook execution. macOS ships Python 3.9.6 as the system interpreter; hooks run under this rather than virtual environments. Plugin packages may target higher versions (3.10+, 3.12+) via uv.

Step 1: Add the Marketplace

Open Claude Code and run:

/plugin marketplace add athola/claude-night-market

This registers the marketplace, making all plugins available for installation.

Achievement Unlocked: Marketplace Pioneer

Step 2: Browse Available Plugins

View the marketplace contents:

/plugin marketplace list

You’ll see plugins organized by layer:

| Layer | Plugins | Purpose |
|-------|---------|---------|
| Meta | abstract | Plugin infrastructure |
| Foundation | imbue, sanctum, leyline | Core workflows |
| Utility | conserve, conjure | Resource optimization |
| Domain | archetypes, pensive, parseltongue, memory-palace, spec-kit, minister, attune | Specialized tasks |

Step 3: Install Individual Plugins

Install plugins based on your needs:

# Git and workspace operations
/plugin install sanctum@claude-night-market

# Specification-driven development
/plugin install spec-kit@claude-night-market

# Code review toolkit
/plugin install pensive@claude-night-market

# Python development
/plugin install parseltongue@claude-night-market

Step 4: Verify Installation

Check that plugins loaded correctly:

/plugin list

Installed plugins appear with their available skills and commands.

Optional: Install Superpowers

For enhanced methodology integration:

# Add superpowers marketplace
/plugin marketplace add obra/superpowers

# Install superpowers
/plugin install superpowers@superpowers-marketplace

Superpowers provides TDD, debugging, and review patterns that enhance Night Market plugins.

Alternative: opkg (OpenPackage)

Each plugin ships an openpackage.yml manifest for installation via opkg:

opkg i gh@athola/claude-night-market --plugins sanctum
opkg i gh@athola/claude-night-market --plugins pensive,spec-kit

Plugins that depend on shared runtime skills (attune, conjure, imbue, memory-palace, parseltongue, sanctum) automatically pull packages/core as a dependency.

Minimal Setup

For basic git workflows:

/plugin install sanctum@claude-night-market

Development Setup

For active feature development:

/plugin install sanctum@claude-night-market
/plugin install imbue@claude-night-market
/plugin install spec-kit@claude-night-market

Full Setup

For detailed workflow coverage:

/plugin install abstract@claude-night-market
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market
/plugin install conserve@claude-night-market
/plugin install pensive@claude-night-market
/plugin install spec-kit@claude-night-market

Post-Installation Setup

Several plugins register Setup hooks that run one-time initialization (directory creation, index building, configuration). Trigger them after installing:

# One-time initialization
claude --init

# Periodic maintenance (weekly or monthly)
claude --maintenance

--init runs setup tasks like creating knowledge garden directories (memory-palace) and initializing caches (conserve). --maintenance handles heavier operations like rebuilding indexes, cleaning stale captures, and rotating logs. Neither runs automatically on every session.
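As a rough illustration of what a one-time Setup hook does, the sketch below creates a plugin's working directories idempotently. The directory names (garden, cache, logs) and the base path are hypothetical — each plugin defines its own initialization.

```python
import json
from pathlib import Path

def run_setup(base: Path) -> list:
    """Idempotently create the directories a plugin needs; return what was created."""
    created = []
    for sub in ("garden", "cache", "logs"):  # hypothetical plugin directories
        target = base / sub
        if not target.exists():
            target.mkdir(parents=True)
            created.append(str(target))
    return created

if __name__ == "__main__":
    # Re-running is safe: existing directories are skipped.
    created = run_setup(Path.home() / ".claude" / "my-plugin")
    print(json.dumps({"created": created}))
```

Running the hook a second time creates nothing, which is why `--init` can be invoked safely after every plugin install.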

Troubleshooting

Plugin not loading?

  1. Verify marketplace was added: /plugin marketplace list
  2. Check for typos in plugin name
  3. Restart Claude Code session

Conflicts between plugins?

Plugins are composable. If you experience issues:

  1. Check the plugin’s README for dependency requirements
  2. Validate foundation plugins (imbue, leyline) are installed if using domain plugins

Next Steps

Continue to Your First Plugin for a hands-on tutorial.

Your First Plugin: sanctum

This hands-on tutorial walks you through using the sanctum plugin for git and workspace operations.

What You’ll Build

By the end of this tutorial, you’ll:

  • Review your git workspace state
  • Generate a conventional commit message
  • Prepare a pull request description

Prerequisites

  • sanctum plugin installed: /plugin install sanctum@claude-night-market
  • A git repository with some uncommitted changes

Part 1: Workspace Review

Before any git operation, understand your current state.

Invoke the Skill

Skill(sanctum:git-workspace-review)

This skill runs a preflight checklist:

  • Current branch and remote tracking
  • Staged vs unstaged changes
  • Recent commit history
  • Untracked files

What to Expect

Claude will analyze your repository and report:

Repository: my-project
Branch: feature/add-login
Tracking: origin/feature/add-login (up to date)

Staged Changes:
  M src/auth/login.ts
  A src/auth/types.ts

Unstaged Changes:
  M README.md

Untracked:
  src/auth/tests/login.test.ts

Achievement Unlocked: Skill Apprentice

Part 2: Commit Message Generation

Now generate a conventional commit message for your staged changes.

Using the Command

/commit-msg

Or invoke the skills directly:

Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)

Understanding the Output

Claude analyzes staged changes and generates:

feat(auth): add login form with validation

- Implement LoginForm component with email/password fields
- Add form validation using zod schema
- Create auth types for login request/response

Closes #42

The commit follows Conventional Commits format:

  • Type: feat, fix, docs, style, refactor, test, chore
  • Scope: Optional context (auth, api, ui)
  • Description: Imperative mood, present tense
  • Body: Bullet points explaining what changed
  • Footer: Issue references
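The header format above can be checked mechanically. A minimal sketch — the regex mirrors the type list in this section and is illustrative, not sanctum's actual validation logic:

```python
import re

# Conventional Commits header: type(scope)?: description
HEADER_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)"  # type
    r"(\([a-z0-9-]+\))?"                           # optional scope
    r": \S.*$"                                     # description
)

def is_conventional(header: str) -> bool:
    """Return True if the first commit line follows Conventional Commits."""
    return bool(HEADER_RE.match(header))

print(is_conventional("feat(auth): add login form with validation"))  # True
print(is_conventional("added stuff"))                                 # False
```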

Part 3: PR Preparation

Finally, prepare a pull request description.

Using the Command

/pr

This runs the full PR preparation workflow:

  1. Workspace review
  2. Quality gates check
  3. Change summarization
  4. PR description generation

Quality Gates

Before generating the PR, Claude checks:

Quality Gates:
  [x] Code compiles
  [x] Tests pass
  [x] Linting clean
  [x] No console.log statements
  [x] Documentation updated

Generated PR Description

## Summary

Add user authentication with login form validation.

## Changes

- **New Feature**: Login form component with email/password validation
- **Types**: Auth request/response type definitions
- **Tests**: Unit tests for login validation logic

## Testing

- [x] Manual testing of form submission
- [x] Unit tests pass (15 new tests)
- [x] Integration tests pass

## Screenshots

[Add screenshots if UI changes]

## Checklist

- [x] Tests added
- [x] Documentation updated
- [x] No breaking changes

Achievement Unlocked: PR Pioneer

Workflow Chaining

These skills work together. The recommended flow:

git-workspace-review (foundation)
├── commit-messages (depends on workspace state)
├── pr-prep (depends on workspace state)
├── doc-updates (depends on workspace state)
└── version-updates (depends on workspace state)

Always run git-workspace-review first to establish context.

Common Patterns

Pre-Commit Workflow

# Stage your changes
git add -p

# Review and commit
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)

# Apply the message
git commit -m "<generated message>"

Pre-PR Workflow

# Run quality checks
make fmt && make lint && make test

# Prepare PR
/pr

# Create on GitHub
gh pr create --title "<title>" --body "<generated body>"

Next Steps

Achievements Earned

  • Skill Apprentice: Used your first skill
  • PR Pioneer: Prepared your first PR

Section Progress: 3/3 complete

Quick Start Guide

Common workflows and patterns for Claude Night Market plugins.

Workflow Recipes

Feature Development

Start features with a specification:

# (Optional) Resume persistent speckit context for this repo/session
/speckit-startup

# Create specification from idea
/speckit-specify Add user authentication with OAuth2

# Generate implementation plan
/speckit-plan

# Create ordered tasks
/speckit-tasks

# Execute tasks
/speckit-implement

# Verify artifacts stay consistent
/speckit-analyze

Code Review

Run a detailed code review:

# Full review with intelligent skill selection
/full-review

# Or specific review types
/architecture-review    # Architecture assessment
/api-review            # API surface evaluation
/bug-review            # Bug hunting
/test-review           # Test quality
/rust-review           # Rust-specific (if applicable)

Context Recovery

Get up to speed on changes:

# Quick catchup on recent changes
/catchup

# Or with sanctum's git-specific variant
/git-catchup

Context Optimization

Monitor and optimize context usage:

# Analyze context window usage
/optimize-context

# Check skill growth patterns (consolidated into bloat-scan)
/bloat-scan

Skill Invocation Patterns

Basic Skill Usage

# Standard format
Skill(plugin:skill-name)

# Examples
Skill(sanctum:git-workspace-review)
Skill(imbue:diff-analysis)
Skill(conserve:context-optimization)

Skill Chaining

Some skills depend on others:

# Pensive depends on imbue and sanctum
Skill(sanctum:git-workspace-review)
Skill(imbue:review-core)
Skill(pensive:architecture-review)

Skill with Dependencies

Check a plugin’s README for dependency chains:

spec-kit depends on imbue
pensive depends on imbue + sanctum
sanctum depends on imbue (for some skills)

Command Quick Reference

Git Operations (sanctum)

| Command | Purpose |
|---------|---------|
| /commit-msg | Generate commit message |
| /pr | Prepare pull request |
| /fix-pr | Address PR review comments |
| /do-issue | Fix GitHub issues |
| /update-docs | Update documentation (includes README) |
| /update-tests | Maintain tests |
| /update-version | Bump versions |

Specification (spec-kit)

| Command | Purpose |
|---------|---------|
| /speckit-specify | Create specification |
| /speckit-plan | Generate plan |
| /speckit-tasks | Create tasks |
| /speckit-implement | Execute tasks |
| /speckit-analyze | Check consistency |
| /speckit-clarify | Ask clarifying questions |

Review (pensive)

| Command | Purpose |
|---------|---------|
| /full-review | Unified review |
| /architecture-review | Architecture check |
| /api-review | API surface review |
| /bug-review | Bug hunting |
| /test-review | Test quality |

Analysis (imbue)

| Command | Purpose |
|---------|---------|
| /catchup | Quick context recovery |
| /structured-review | Structured review with evidence |
| Skill(imbue:scope-guard) | Feature prioritization (consolidated into scope-guard) |

Plugin Management (leyline)

| Command | Purpose |
|---------|---------|
| /reinstall-all-plugins | Refresh all plugins |
| /update-all-plugins | Update all plugins |

Environment Variables

Some plugins support configuration via environment variables:

Conservation

# Skip optimization guidance for fast processing
CONSERVATION_MODE=quick claude

# Full guidance with extended allowance
CONSERVATION_MODE=deep claude

Memory Palace

# Set embedding provider
MEMORY_PALACE_EMBEDDINGS_PROVIDER=hash  # or local

Tips

1. Start with Foundation

Install foundation plugins first:

/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market

Then add domain specialists as needed.

2. Use TodoWrite Integration

Most skills output TodoWrite items for tracking:

git-review:repo-confirmed
git-review:status-overview
pr-prep:quality-gates

Monitor these for workflow progress.

3. Chain Skills Intentionally

Don’t invoke all skills at once. Build understanding incrementally:

# First: understand state
Skill(sanctum:git-workspace-review)

# Then: perform action
Skill(sanctum:commit-messages)

4. Use Superpowers

If superpowers is installed, commands gain enhanced capabilities:

  • /create-skill uses brainstorming
  • /test-skill uses TDD methodology
  • /pr uses code review patterns

Next Steps

Common Workflows Guide

When and how to use commands, skills, and subagents for typical development tasks.

Quick Reference

| Task | Primary Tool | Plugin |
|------|--------------|--------|
| Initialize a project | /attune:arch-init | attune |
| Review a PR | /full-review | pensive |
| Fix PR feedback | /fix-pr | sanctum |
| Prepare a PR | /pr | sanctum |
| Catch up on changes | /catchup | imbue |
| Write specifications | /speckit-specify | spec-kit |
| Improve system | /speckit-analyze | spec-kit |
| Debug an issue | Skill(superpowers:systematic-debugging) | superpowers |
| Manage knowledge | /palace | memory-palace |

Initializing a New Project

When: Starting a new project from scratch or setting up a new codebase.

Step 1: Architecture-Aware Initialization

Start with an architecture-aware initialization to select the right project structure based on team size and domain complexity. This process guides you through project type selection, online research into best practices, and template customization.

# Interactive architecture selection with research
/attune:arch-init --name my-project

Output: Complete project structure with ARCHITECTURE.md, ADR, and paradigm-specific directories.

Step 2: Standard Initialization

If the architecture is decided, use standard initialization to generate language-specific boilerplate including Makefiles, CI/CD pipelines, and pre-commit hooks.

# Quick initialization when you know the architecture
/attune:init --lang python --name my-project

Step 3: Establish Persistent State

Establish a persistent state to manage artifacts and constraints across sessions. This maintains non-negotiable principles and supports consistent progress tracking.

# (Once) Define non-negotiable principles for the project
/speckit-constitution

# (Each Claude session) Load speckit context + progress tracking
/speckit-startup

Optional enhancements:

  • Install spec-kit for spec-driven artifacts: /plugin install spec-kit@claude-night-market
  • Install superpowers for rigorous methodology loops:
/plugin marketplace add obra/superpowers
/plugin install superpowers@superpowers-marketplace

Alternative: Brainstorming Workflow

For complex projects requiring exploration, begin by brainstorming the problem space and creating a detailed specification before planning the architecture and tasks.

# 1. Brainstorm the problem space
/attune:brainstorm --domain "my problem area"

# 2. Create detailed specification
/attune:specify

# 3. Plan architecture and tasks
/attune:blueprint

# 4. Initialize with chosen architecture
/attune:arch-init --name my-project

# 5. Execute implementation
/attune:execute

What You Get

| Artifact | Description |
|----------|-------------|
| pyproject.toml / Cargo.toml / package.json | Build configuration |
| Makefile | Development targets (test, lint, format) |
| .pre-commit-config.yaml | Code quality hooks |
| .github/workflows/ | CI/CD pipelines |
| ARCHITECTURE.md | Architecture overview |
| docs/adr/ | Architecture decision records |

Reviewing a Pull Request

When: Reviewing code changes in a PR or before merging.

Full Multi-Discipline Review

# Full review with skill selection
/full-review

This orchestrates multiple specialized reviews:

  • Architecture assessment
  • API surface evaluation
  • Bug hunting
  • Test quality analysis

Specific Review Types

# Architecture-focused review
/architecture-review

# API surface evaluation
/api-review

# Bug hunting
/bug-review

# Test quality assessment
/test-review

# Rust-specific review (for Rust projects)
/rust-review

Using Skills Directly

For more control, invoke skills:

# First: understand the workspace state
Skill(sanctum:git-workspace-review)

# Then: run specific review
Skill(pensive:architecture-review)
Skill(pensive:api-review)
Skill(pensive:bug-review)

External PR Review

# Review a GitHub PR by URL
/pr-review https://github.com/org/repo/pull/123

# Or just the PR number in current repo
/pr-review 123

Fixing PR Feedback

When: Addressing review comments on your PR.

Quick Fix

# Address PR review comments
/fix-pr

# Or with specific PR reference
/fix-pr 123

This:

  1. Reads PR review comments
  2. Identifies actionable feedback
  3. Applies fixes systematically
  4. Prepares follow-up commit

Manual Workflow

# 1. Review the feedback
Skill(sanctum:git-workspace-review)

# 2. Apply fixes
# (make your changes)

# 3. Prepare commit message
/commit-msg

# 4. Update PR
git push

Preparing a Pull Request

When: Code is complete and ready for review.

Pre-PR Checklist

Run these commands before creating a PR:

# 1. Update documentation, including the README (README updates are
#    consolidated into update-docs)
/sanctum:update-docs

# 2. Review and update tests
/sanctum:update-tests

# 3. Update Makefile demo targets (for plugins)
/abstract:make-dogfood

# 4. Final quality check
make lint && make test

Create the PR

# Full PR preparation
/pr

# This handles:
# - Branch status check
# - Commit message quality
# - Documentation updates
# - PR description generation

Using Skills for PR Prep

# Review workspace before PR
Skill(sanctum:git-workspace-review)

# Generate quality commit message
Skill(sanctum:commit-messages)

# Check PR readiness
Skill(sanctum:pr-preparation)

Catching Up on Changes

When: Returning to a project after time away, or joining an ongoing project.

Quick Catchup

# Standard catchup on recent changes
/catchup

# Git-specific catchup
/git-catchup

Detailed Understanding

# 1. Review workspace state
Skill(sanctum:git-workspace-review)

# 2. Analyze recent diffs
Skill(imbue:diff-analysis)

# 3. Understand branch context
Skill(sanctum:branch-comparison)

Session Recovery

# Resume a previous Claude session
claude --resume

# Or continue with context
claude --continue

Writing Specifications

When: Planning a feature before implementation.

Spec-Driven Development Workflow

# 1. Create specification from idea
/speckit-specify Add user authentication with OAuth2

# 2. Generate implementation plan
/speckit-plan

# 3. Create ordered tasks
/speckit-tasks

# 4. Execute tasks with tracking
/speckit-implement

Persistent Presence Loop (World Model + Agent Model)

Treat SDD artifacts as a self-modeling architecture where the repo state serves as the world model and the loaded skills as the agent model. Experiments are run with small diffs and verified through rigorous loops (tests, linters, repro scripts), while model updates refine both the code artifacts and the orchestration methodology to optimize future loops.

Curriculum generation via /speckit-tasks keeps actions grounded and dependency-ordered, while the skill library and iterative refinement ensure the plan adapts to reality. The cycle moves from planning to action to reflection via /speckit-plan, /speckit-implement, and /speckit-analyze.

Background reading:

  • MineDojo: https://minedojo.org/ (internet-scale knowledge + benchmarks)
  • Voyager: https://voyager.minedojo.org/ (arXiv: https://arxiv.org/abs/2305.16291) (automatic curriculum + skill library)
  • GTNH_Agent: https://github.com/sefiratech/GTNH_Agent (persistent, modular Minecraft automation)

Clarification and Analysis

# Ask clarifying questions about requirements
/speckit-clarify

# Analyze specification consistency
/speckit-analyze

Using Skills

# Invoke spec writing skill directly
Skill(spec-kit:spec-writing)

# Task planning skill
Skill(spec-kit:task-planning)

Meta-Development

When: Improving claude-night-market itself (skills, commands, templates, orchestration).

When improving the system itself, treat the repo as the world model and available tools as the agent model. Run experiments with minimal diffs behind verification, evaluate them with evidence-first methods like /speckit-analyze and Skill(superpowers:verification-before-completion), and update both the artifacts and the methodology so the next loop is cheaper.

Optional pattern: split roles (planner/critic/executor) for long-horizon work, similar to multi-role agent stacks used in open-ended Minecraft agents.

Useful tools:

# Use speckit to keep artifacts + principles explicit
/speckit-constitution
/speckit-analyze

# Use superpowers to enforce evidence
Skill(superpowers:systematic-debugging)
Skill(superpowers:verification-before-completion)

Debugging Issues

When: Investigating bugs or unexpected behavior.

With Superpowers Integration

# Systematic debugging methodology
Skill(superpowers:systematic-debugging)

# This provides:
# - Hypothesis formation
# - Evidence gathering
# - Root cause analysis
# - Fix validation

GitHub Issue Resolution

# Fix a GitHub issue
/do-issue 42

# Or with URL
/do-issue https://github.com/org/repo/issues/42

Analysis Tools

# Test analysis (parseltongue)
/analyze-tests

# Performance profiling
/run-profiler

# Context optimization
/optimize-context

Managing Knowledge

When: Capturing insights, decisions, or learnings.

Memory Palace

# Open knowledge management
/palace

# Access digital garden
/garden

Knowledge Capture

# Capture insight during work
Skill(memory-palace:knowledge-capture)

# Link related concepts
Skill(memory-palace:concept-linking)

Plugin Development

When: Creating or maintaining Night Market plugins.

Create a New Plugin

# Scaffold new plugin
make create-plugin NAME=my-plugin

# Or using attune for plugins
/attune:init --type plugin --name my-plugin

Validate Plugin Structure

# Check plugin structure
/abstract:validate-plugin

# Audit skill quality
/abstract:skill-audit

Update Plugin Documentation

# Update all documentation
/sanctum:update-docs

# Update Makefile demo targets
/abstract:make-dogfood

# Sync templates with reference projects
/attune:sync-templates

Testing

# Run plugin tests
make test

# Validate structure
make validate

# Full quality check
make lint && make test && make build

Context Management

When: Managing token usage or context window.

Monitor Usage

# Check context window usage
/context

# Analyze context optimization
/optimize-context

Reduce Context

# Clear context for fresh start
/clear

# Then catch up
/catchup

# Or scan for bloat
/bloat-scan

Optimization Skills

# Context optimization skill
Skill(conserve:context-optimization)

# Growth analysis (consolidated into bloat-scan)
/bloat-scan

Subagent Delegation

When: Delegating specialized work to focused agents.

Available Subagents

| Subagent | Purpose | When to Use |
|----------|---------|-------------|
| abstract:plugin-validator | Validate plugin structure | Before publishing plugins |
| abstract:skill-auditor | Audit skill quality | During skill development |
| pensive:code-reviewer | Focused code review | Reviewing specific files |
| attune:project-architect | Architecture design | Planning new features |
| attune:project-implementer | Task execution | Systematic implementation |

Example: Code Review Delegation

# Delegate to specialized reviewer
Agent(pensive:code-reviewer) Review src/auth/ for security issues

Example: Plugin Validation

# Delegate validation to subagent
Agent(abstract:plugin-validator) Check plugins/my-plugin

End-to-End Example: New Feature

Here’s a complete workflow for adding a new feature:

# 1. PLANNING PHASE
/speckit-specify Add caching layer for API responses
/speckit-plan
/speckit-tasks

# 2. IMPLEMENTATION PHASE
# Create branch
git checkout -b feature/add-caching

# Implement with Iron Law TDD
Skill(imbue:proof-of-work)  # Enforces: NO IMPLEMENTATION WITHOUT FAILING TEST FIRST

# Or with superpowers TDD
Skill(superpowers:tdd)

# Execute planned tasks
/speckit-implement

# 3. QUALITY PHASE
# Run reviews
/architecture-review
/test-review

# Fix any issues
# (make changes)

# 4. PR PREPARATION PHASE
/sanctum:update-docs
/sanctum:update-tests
make lint && make test

# 5. CREATE PR
/pr

Command vs Skill vs Agent

| Type | Syntax | When to Use |
|------|--------|-------------|
| Command | /command-name | Quick actions, one-off tasks |
| Skill | Skill(plugin:skill-name) | Methodologies, detailed workflows |
| Agent | Agent(plugin:agent-name) | Delegated work, specialized focus |

Examples

# Command: Quick action
/pr

# Skill: Detailed methodology
Skill(sanctum:pr-preparation)

# Agent: Delegated specialized work
Agent(pensive:code-reviewer) Review authentication module

Skill Invocation: Secondary Strategy

The Skill tool is a Claude Code feature that may not be available in all environments. When the Skill tool is unavailable:

Secondary Pattern:

# 1. If Skill tool fails or is unavailable, read the skill file directly:
Read plugins/{plugin}/skills/{skill-name}/SKILL.md

# 2. Follow the skill content as instructions
# The skill file contains the complete methodology to execute

Example:

# Instead of: Skill(sanctum:commit-messages)
# Secondary:  Read plugins/sanctum/skills/commit-messages/SKILL.md
#             Then follow the instructions in that file

Skill file locations:

  • Plugin skills: plugins/{plugin}/skills/{skill-name}/SKILL.md
  • User skills: ~/.claude/skills/{skill-name}/SKILL.md

This allows workflows to function across different environments.
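The fallback locations above can be resolved programmatically. A sketch that checks the plugin directory first, then the user skills directory — the path layout follows the list above; the function name and error handling are illustrative:

```python
from pathlib import Path
from typing import Optional

def resolve_skill(name: str, plugin: Optional[str] = None,
                  repo_root: Path = Path("."),
                  home: Path = Path.home()) -> Optional[Path]:
    """Return the first existing SKILL.md for a skill, or None if not found."""
    candidates = []
    if plugin:
        # Plugin skills: plugins/{plugin}/skills/{skill-name}/SKILL.md
        candidates.append(repo_root / "plugins" / plugin / "skills" / name / "SKILL.md")
    # User skills: ~/.claude/skills/{skill-name}/SKILL.md
    candidates.append(home / ".claude" / "skills" / name / "SKILL.md")
    for path in candidates:
        if path.exists():
            return path
    return None
```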


Claude Code 2.1.0 Features

New Capabilities

| Feature | Description | Usage |
|---------|-------------|-------|
| Skill Hot-Reload | Skills auto-reload without restart | Edit SKILL.md, immediately available |
| Plan Mode Shortcut | Enter plan mode directly | /plan |
| Forked Context | Run skills in isolated context | context: fork in frontmatter |
| Agent Field | Specify agent for skill execution | agent: agent-name in frontmatter |
| Frontmatter Hooks | Lifecycle hooks in skills/agents | hooks: section in frontmatter |
| Wildcard Permissions | Flexible Bash patterns | Bash(npm *), Bash(* install) |
| Skill Visibility | Control slash menu visibility | user-invocable: false |

Skill Development Workflow (Hot-Reload)

With Claude Code 2.1.0, skill development is faster:

# 1. Create/edit skill
vim ~/.claude/skills/my-skill/SKILL.md

# 2. Save changes (no restart needed!)

# 3. Skill is immediately available
Skill(my-skill)

# 4. Iterate rapidly

Using Forked Context

For isolated operations that shouldn’t pollute main context:

# In skill frontmatter
---
name: isolated-analysis
context: fork  # Runs in separate context
---

Use cases:

  • Heavy file analysis that would bloat context
  • Experimental operations that might fail
  • Parallel workflows

Frontmatter Hooks

Define hooks scoped to skill/agent/command lifecycle:

---
name: validated-workflow
hooks:
  PreToolUse:
    - matcher: "Bash"
      command: "./validate.sh"
      once: true  # Run only once per session
  PostToolUse:
    - matcher: "Write|Edit"
      command: "./format.sh"
  Stop:
    - command: "./teardown.sh"
---

Permission Wildcards

New wildcard patterns for flexible permissions:

allowed-tools:
  - Bash(npm *)      # All npm commands
  - Bash(* install)  # Any install command
  - Bash(git * main) # Git with main branch

Note (2.1.20+): Bash(*) is now treated as equivalent to plain Bash. Use scoped wildcards like Bash(npm *) for targeted permissions, or plain Bash for unrestricted access.
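To build intuition for how these wildcard patterns behave, the sketch below matches shell commands against them with Python's fnmatch. This is purely illustrative of the glob semantics — it is not Claude Code's actual permission matcher:

```python
from fnmatch import fnmatch

def allowed(command: str, patterns: list) -> bool:
    """Check a shell command against Bash(...)-style wildcard patterns."""
    return any(fnmatch(command, pat) for pat in patterns)

patterns = ["npm *", "* install", "git * main"]
print(allowed("npm run build", patterns))   # matches "npm *"
print(allowed("rm -rf /", patterns))        # matches nothing
```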

Disabling Specific Agents

Control which agents can be invoked:

# Via CLI
claude --disallowedTools "Task(expensive-agent)"

# Via settings.json
{
  "permissions": {
    "deny": ["Task(expensive-agent)"]
  }
}

Subagent Resilience

Subagents are designed to continue operating after a permission denial by attempting alternative approaches instead of failing immediately. This behavior makes agent workflows more reliable in restrictive environments.

Agent-Aware Hooks (2.1.2+)

SessionStart hooks receive agent_type field when launched with --agent:

import json, sys
input_data = json.loads(sys.stdin.read())
agent_type = input_data.get("agent_type", "")

if agent_type in ["code-reviewer", "quick-query"]:
    context = "Minimal context"  # Skip heavy context
else:
    context = full_context  # full context assembled earlier in the hook

print(json.dumps({"hookSpecificOutput": {"additionalContext": context}}))

This reduces context overhead by 200-800 tokens for lightweight agents.



Technical Debt Migration Guide

Last Updated: 2025-12-06

Overview

Use this guide to migrate plugin code to shared constants and follow function extraction guidelines.

Quick Start

1. Update Your Plugin to Use Shared Constants

Replace scattered magic numbers with centralized constants:

# BEFORE
def check_file_size(content):
    if len(content) > 15000:  # Magic number!
        return "File too large"
    if len(content) > 5000:   # Another magic number!
        return "File is large"

# AFTER
from plugins.shared.constants import MAX_SKILL_FILE_SIZE, LARGE_SIZE_LIMIT

def check_file_size(content):
    if len(content) > MAX_SKILL_FILE_SIZE:
        return "File too large"
    if len(content) > LARGE_SIZE_LIMIT:
        return "File is large"

2. Apply Function Extraction Guidelines

Use the patterns from the guidelines to refactor complex functions:

# BEFORE - Complex function with multiple responsibilities
def analyze_and_optimize_skill(content, strategy):
    # Validation
    if not content:
        raise ValueError("Content cannot be empty")

    # Analysis
    tokens = estimate_tokens(content)
    complexity = calculate_complexity(content)

    # Optimization
    if strategy == "aggressive":
        # 20 lines of optimization logic
        pass
    elif strategy == "moderate":
        # 20 lines of optimization logic
        pass

    return optimized_content, tokens, complexity

# AFTER - Extracted and organized
def analyze_and_optimize_skill(content: str, strategy: str) -> OptimizationResult:
    """Analyze and optimize skill content."""
    _validate_content(content)

    analysis = _analyze_content(content)
    optimized = _optimize_content(content, strategy)

    return OptimizationResult(optimized, analysis)

def _validate_content(content: str) -> None:
    """Validate input content."""
    if not content:
        raise ValueError("Content cannot be empty")

def _analyze_content(content: str) -> ContentAnalysis:
    """Analyze content properties."""
    tokens = estimate_tokens(content)
    complexity = calculate_complexity(content)
    return ContentAnalysis(tokens, complexity)

def _optimize_content(content: str, strategy: str) -> str:
    """Optimize content using specified strategy."""
    optimizer = get_strategy_optimizer(strategy)
    return optimizer.optimize(content)

Detailed Migration Steps

1. Audit Plugin

Find all magic numbers and complex functions:

# Find magic numbers (numeric literals in conditions)
grep -rn -E "(if|while).*[0-9]+" --include="*.py" your_plugin

# Find long files (likely to contain functions over 30 lines)
find your_plugin -name "*.py" -exec wc -l {} + | awk '$1 > 30 {print}'

# Find def lines with many parameters (4+ commas, i.e. 5+ parameters)
grep -rn "def " --include="*.py" your_plugin | awk -F',' 'NF > 4 {print}'

2. Plan Migration

Create a migration plan for your plugin:

  1. Identify Constants

    • List all magic numbers
    • Categorize by purpose (timeouts, sizes, thresholds)
    • Check if they exist in shared constants
  2. Identify Functions to Refactor

    • Functions > 30 lines
    • Functions with > 4 parameters
    • Functions with multiple responsibilities
  3. Create Migration Tasks

    • Update constants first (lowest risk)
    • Refactor simple functions next
    • Tackle complex functions last

3. Replace Magic Numbers

File Size Constants

# Replace these patterns:
if len(content) > 15000:
if file_size > 100000:
if line_count > 200:

# With:
from plugins.shared.constants import (
    MAX_SKILL_FILE_SIZE,
    MAX_TOTAL_SKILL_SIZE,
    LARGE_FILE_LINES
)

Timeout Constants

# Replace these patterns:
timeout=10
timeout=300
time.sleep(30)

# With:
from plugins.shared.constants import (
    DEFAULT_SERVICE_CHECK_TIMEOUT,
    DEFAULT_EXECUTION_TIMEOUT,
    MEDIUM_TIMEOUT
)

Quality Thresholds

# Replace these patterns:
if quality_score > 70.0:
if quality_score > 80.0:
if quality_score > 90.0:

# With:
from plugins.shared.constants import (
    MINIMUM_QUALITY_THRESHOLD,
    HIGH_QUALITY_THRESHOLD,
    EXCELLENT_QUALITY_THRESHOLD
)
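
Taken together, the constants referenced above might live in a module like the following. This is an illustrative sketch: the names and values are inferred from the before/after snippets in this guide, not taken from the actual shared module.

```python
# plugins/shared/constants.py (illustrative sketch; values inferred from
# the before/after examples in this guide)

# File size limits
MAX_SKILL_FILE_SIZE = 15_000     # bytes in a single skill file
MAX_TOTAL_SKILL_SIZE = 100_000   # bytes across a whole skill
LARGE_SIZE_LIMIT = 5_000         # warn threshold for a single file
LARGE_FILE_LINES = 200           # lines before a file counts as large

# Timeouts (seconds)
DEFAULT_SERVICE_CHECK_TIMEOUT = 10
DEFAULT_EXECUTION_TIMEOUT = 300
MEDIUM_TIMEOUT = 30

# Quality thresholds (score out of 100)
MINIMUM_QUALITY_THRESHOLD = 70.0
HIGH_QUALITY_THRESHOLD = 80.0
EXCELLENT_QUALITY_THRESHOLD = 90.0
```

Centralizing the values this way means a threshold change touches one file instead of every call site.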

4. Refactor Complex Functions

Follow this iterative approach:

4.1 Write Tests First

# Test the current behavior
def test_function_to_refactor():
    result = your_complex_function(input_data)
    assert result.expected_field == expected_value
    # Add more assertions based on current behavior

4.2 Extract Small Helper Functions

# Start with small, obvious extractions
def _calculate_value(item):
    """Extract value calculation from complex function."""
    return item.base * item.multiplier + item.offset

def _validate_input(data):
    """Extract input validation."""
    if not data:
        raise ValueError("Data required")
    return True

4.3 Extract Strategy Classes

For functions with conditional logic:

# Before: Complex conditional function
def process_item(item, mode):
    if mode == "fast":
        # Fast processing logic
        pass
    elif mode == "thorough":
        # Thorough processing logic
        pass
    elif mode == "minimal":
        # Minimal processing logic
        pass

# After: Strategy pattern
from abc import ABC, abstractmethod

class ItemProcessor(ABC):
    @abstractmethod
    def process(self, item):
        ...

class FastProcessor(ItemProcessor):
    def process(self, item):
        # Fast processing implementation
        pass

class ThoroughProcessor(ItemProcessor):
    def process(self, item):
        # Thorough processing implementation
        pass

class MinimalProcessor(ItemProcessor):
    def process(self, item):
        # Minimal processing implementation
        pass

# Registry
PROCESSORS = {
    "fast": FastProcessor(),
    "thorough": ThoroughProcessor(),
    "minimal": MinimalProcessor()
}

def process_item(item, mode):
    processor = PROCESSORS.get(mode)
    if not processor:
        raise ValueError(f"Unknown mode: {mode}")
    return processor.process(item)

5. Update Configuration

If your plugin has configuration files:

# config.yaml - Use shared defaults
plugin_name: your_plugin

# Import shared defaults and override only what's needed
shared_constants:
  import: file_limits, timeouts, quality

# Plugin-specific settings
specific_settings:
  custom_threshold: 42
  feature_enabled: true
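
A loader for this layout might merge the imported shared groups first, then overlay plugin-specific settings. This is a hypothetical sketch: the group names, default values, and the list form of the import key are assumptions.

```python
# Hypothetical loader for the config layout above: merge shared defaults,
# then overlay plugin-specific settings. Group names and values here are
# illustrative assumptions, not the real shared-constants API.
SHARED_DEFAULTS = {
    "file_limits": {"max_skill_file_size": 15000},
    "timeouts": {"default_execution_timeout": 300},
    "quality": {"minimum_quality_threshold": 70.0},
}

def load_plugin_config(raw: dict) -> dict:
    config = {}
    # Pull in each requested shared group (lowest precedence)
    for group in raw.get("shared_constants", {}).get("import", []):
        config.update(SHARED_DEFAULTS.get(group, {}))
    # Plugin-specific settings win over shared defaults
    config.update(raw.get("specific_settings", {}))
    return config

config = load_plugin_config({
    "plugin_name": "your_plugin",
    "shared_constants": {"import": ["timeouts", "quality"]},
    "specific_settings": {"custom_threshold": 42},
})
```

Only the groups a plugin actually imports enter its config, so unrelated defaults never leak in.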

Migration Checklist

Pre-Migration

  • Run existing tests to establish baseline
  • Create backup of current code
  • Document current behavior
  • Identify all dependencies

Constants Migration

  • List all magic numbers in your plugin
  • Map to appropriate shared constants
  • Update imports
  • Replace magic numbers
  • Run tests to verify no breaking changes

Function Refactoring

  • Identify functions > 30 lines
  • Write tests for each function
  • Extract small helper functions first
  • Apply strategy pattern where appropriate
  • Keep public APIs stable
  • Update documentation

Post-Migration

  • Run full test suite
  • Update documentation
  • Verify performance
  • Update CHANGELOG
  • Create migration notes for users

Common Migration Patterns

1. Gradual Migration

Don’t refactor everything at once. Use feature flags:

# Gradually migrate to new implementation
import os

def legacy_function(data):
    if USE_NEW_IMPLEMENTATION:
        return new_refactored_function(data)
    else:
        return old_implementation(data)

# Set this in config when ready
USE_NEW_IMPLEMENTATION = os.getenv("USE_NEW_IMPLEMENTATION", "false").lower() == "true"

2. Adapter Pattern

Keep old API while using new implementation:

def old_api_function(param1, param2, param3):
    """Legacy API - delegates to new implementation."""
    config = LegacyConfig(param1, param2, param3)
    return new_refactored_function(config)

# New, cleaner API
def new_refactored_function(config: LegacyConfig):
    """New, improved implementation."""
    pass

3. Parallel Implementation

Run both old and new implementations in parallel to verify:

def process_with_validation(data):
    """Run both implementations and compare."""
    old_result = old_implementation(data)
    new_result = new_implementation(data)

    if not results_equivalent(old_result, new_result):
        log_discrepancy(old_result, new_result)
        # Return old result for safety
        return old_result

    return new_result

Testing Your Migration

1. Property-Based Testing

Use hypothesis to test refactored functions:

from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sort_refactor(data):
    """Test that refactored sort produces same result."""
    old_result = old_sort_function(data.copy())
    new_result = new_sort_function(data.copy())
    assert old_result == new_result

2. Integration Tests

Verify the whole workflow still works:

def test_complete_workflow():
    """Test that refactoring didn't break the workflow."""
    input_data = create_test_data()

    # Run through entire process
    result = your_plugin_workflow(input_data)

    # Verify key properties
    assert result is not None
    assert result.quality_score >= 70
    assert len(result.processed_data) > 0

3. Performance Tests

Verify refactoring didn’t hurt performance:

import time

def test_performance():
    """Verify refactoring didn't degrade performance."""
    data = create_large_dataset()

    start = time.time()
    old_result = old_implementation(data)
    old_time = time.time() - start

    start = time.time()
    new_result = new_implementation(data)
    new_time = time.time() - start

    # New implementation shouldn't be more than 10% slower
    assert new_time < old_time * 1.1

Rollback Plan

If Migration Fails

  1. Immediate Rollback

    git revert <migration-commit>
    
  2. Partial Rollback

    • Keep constants migration
    • Revert function refactoring
    • Fix issues and retry
  3. Feature Flag Rollback

    # Disable new implementation
    os.environ["USE_NEW_IMPLEMENTATION"] = "false"
    

Documenting Issues

If you encounter problems:

  1. Document the specific issue
  2. Note the affected functionality
  3. Create a bug report with:
    • Migration step that failed
    • Error messages
    • Minimal reproduction case
    • Expected vs actual behavior

Getting Help

Resources

Support

  • Create an issue for migration problems
  • Join the #migration Slack channel
  • Review example migrations in other plugins

Contributing

  • Share your migration experience
  • Suggest improvements to guidelines
  • Add new shared constants as needed

Migration Examples

Example: Memory Palace Plugin

Challenges:

  • 15 magic numbers scattered across files
  • Functions averaging 45 lines
  • Complex conditional logic

Solution:

  • Replaced all magic numbers with shared constants
  • Refactored 8 functions using extraction patterns
  • Introduced strategy pattern for content processing

Results:

  • 40% reduction in code complexity
  • Improved test coverage from 60% to 85%
  • Easier to add new content types

Example: Parseltongue Plugin

Challenges:

  • Complex analysis functions with 8+ parameters
  • Duplicated logic across multiple analyzers
  • Hard to test individual components

Solution:

  • Extracted configuration objects for parameters
  • Created shared analysis utilities
  • Applied builder pattern for complex objects

Results:

  • Functions reduced to average 15 lines
  • Parameter count reduced to 3-4 per function
  • 100% test coverage for core logic

Conclusion

Migrating to shared constants and following function extraction guidelines improves code quality and maintainability.

Key Steps:

  • Migrate incrementally: Don’t try to do everything at once.
  • Test thoroughly: Verify behavior doesn’t change.
  • Document changes: Help others understand the migration.
  • Ask for help: Use the community’s experience.

Plugin Overview

The Claude Night Market organizes plugins into four layers, each building on the foundations below.

Architecture

graph TB
    subgraph Meta[Meta Layer]
        abstract[abstract<br/>Plugin infrastructure]
    end

    subgraph Foundation[Foundation Layer]
        imbue[imbue<br/>Intelligent workflows]
        sanctum[sanctum<br/>Git & workspace ops]
        leyline[leyline<br/>Pipeline building blocks]
    end

    subgraph Utility[Utility Layer]
        conserve[conserve<br/>Resource optimization]
        conjure[conjure<br/>External delegation]
    end

    subgraph Domain[Domain Specialists]
        archetypes[archetypes<br/>Architecture patterns]
        pensive[pensive<br/>Code review toolkit]
        parseltongue[parseltongue<br/>Python development]
        memory_palace[memory-palace<br/>Spatial memory]
        spec_kit[spec-kit<br/>Spec-driven dev]
        minister[minister<br/>Release management]
        attune[attune<br/>Full-cycle development]
        scribe[scribe<br/>Documentation review]
        cartograph[cartograph<br/>Codebase visualization]
    end

    abstract --> leyline
    pensive --> imbue
    pensive --> sanctum
    sanctum --> imbue
    conjure --> leyline
    spec_kit --> imbue
    scribe --> imbue
    scribe --> conserve

    style Meta fill:#fff3e0,stroke:#e65100
    style Foundation fill:#e1f5fe,stroke:#01579b
    style Utility fill:#f3e5f5,stroke:#4a148c
    style Domain fill:#e8f5e8,stroke:#1b5e20

Layer Summary

Layer | Purpose | Plugins
Meta | Plugin infrastructure and evaluation | abstract
Foundation | Core workflow methodologies | imbue, sanctum, leyline
Utility | Resource optimization and delegation | conserve, conjure
Domain | Specialized task execution | archetypes, pensive, parseltongue, memory-palace, spec-kit, minister, attune, scribe, cartograph

Dependency Rules

  1. Downward Only: Plugins depend on lower layers, never upward
  2. Foundation First: Most domain plugins work better with foundation plugins installed
  3. Graceful Degradation: Plugins function standalone but gain capabilities with dependencies
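
Rule 3 typically amounts to probing for an optional dependency and falling back when it is absent. A minimal sketch of the pattern, with hypothetical module and function names:

```python
# Illustrative graceful-degradation pattern: probe for an optional
# foundation plugin and fall back when it is missing. The module name
# "imbue" stands in for whatever the enhanced path actually imports.
import importlib.util

def has_plugin(module_name: str) -> bool:
    """Return True if an optional dependency is importable."""
    return importlib.util.find_spec(module_name) is not None

def review_changes(diff: str) -> str:
    if has_plugin("imbue"):
        # Enhanced path: delegate to the foundation plugin's review patterns
        return f"structured review of {len(diff)} chars"
    # Standalone fallback: still functional, just less capable
    return f"basic review of {len(diff)} chars"
```

The plugin stays usable either way; installing the foundation layer simply switches it to the richer path.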

Quick Installation

Minimal (Git Workflows)

/plugin install sanctum@claude-night-market

Standard (Development)

/plugin install sanctum@claude-night-market
/plugin install imbue@claude-night-market
/plugin install spec-kit@claude-night-market

Full (All Capabilities)

/plugin install abstract@claude-night-market
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market
/plugin install conserve@claude-night-market
/plugin install conjure@claude-night-market
/plugin install archetypes@claude-night-market
/plugin install pensive@claude-night-market
/plugin install parseltongue@claude-night-market
/plugin install memory-palace@claude-night-market
/plugin install spec-kit@claude-night-market
/plugin install minister@claude-night-market
/plugin install attune@claude-night-market
/plugin install scribe@claude-night-market

Browse by Layer

Browse by Plugin

Plugin | Description
abstract | Meta-skills for plugin development
imbue | Analysis and evidence gathering
sanctum | Git and workspace operations
leyline | Infrastructure building blocks
conserve | Context and resource optimization
conjure | External LLM delegation
archetypes | Architecture paradigms
pensive | Code review toolkit
parseltongue | Python development
memory-palace | Knowledge organization
spec-kit | Specification-driven development
minister | Release management
attune | Full-cycle project development
scribe | Documentation review and AI slop detection

Meta Layer

The meta layer provides infrastructure for building, evaluating, and maintaining plugins themselves.

Purpose

While other layers focus on user-facing workflows, the meta layer focuses on:

  • Plugin Development: Tools for creating new skills, commands, and hooks
  • Quality Assurance: Evaluation frameworks for plugin quality
  • Architecture Guidance: Patterns for modular, maintainable plugins

Plugins

Plugin | Description
abstract | Meta-skills infrastructure for plugin development

When to Use

Use meta layer plugins when:

  • Creating a new plugin for the marketplace
  • Evaluating existing skill quality
  • Refactoring large skills into modules
  • Validating plugin structure before publishing

Key Capabilities

Plugin Validation

/validate-plugin [path]

Checks plugin structure against official requirements.

Skill Creation

/create-skill

Scaffolds new skills using best practices and TDD methodology.

Quality Assessment

/skills-eval

Scores skill quality and suggests improvements.

Architecture Position

Meta Layer
    |
    v
Foundation Layer (imbue, sanctum, leyline)
    |
    v
Utility Layer (conserve, conjure)
    |
    v
Domain Specialists

The meta layer sits above all others, providing tools to build and maintain the entire ecosystem.

abstract

Meta-skills infrastructure for the plugin ecosystem - skill authoring, hook development, and quality evaluation.

Overview

The abstract plugin provides tools for building, evaluating, and maintaining Claude Code plugins. It’s the toolkit for plugin developers.

Installation

/plugin install abstract@claude-night-market

Skills

Skill | Description | When to Use
skill-authoring | TDD methodology with Iron Law enforcement | Creating new skills with quality standards
hook-authoring | Security-first hook development | Building safe, effective hooks
modular-skills | Modular design patterns | Breaking large skills into modules
rules-eval | Claude Code rules validation | Auditing .claude/rules/ for frontmatter, glob patterns, and content quality
skills-eval | Skill quality assessment | Auditing skills for token efficiency
hooks-eval | Hook security scanning | Verifying hook safety
escalation-governance | Model escalation decisions | Deciding when to escalate models
methodology-curator | Expert framework curation | Grounding skills in proven methodologies
shared-patterns | Plugin development patterns | Reusable templates
subagent-testing | Subagent test patterns | Testing subagent interactions

Commands

Command | Description
/validate-plugin [path] | Check plugin structure against requirements
/create-skill | Scaffold new skill with best practices
/create-command | Scaffold new command
/create-hook | Scaffold hook with security-first design
/analyze-skill | Get modularization recommendations
/bulletproof-skill | Anti-rationalization workflow for hardening
/context-report | Context optimization report
/hooks-eval | Detailed hook evaluation
/make-dogfood | Analyze and enhance Makefiles
/rules-eval | Evaluate Claude Code rules quality
/skills-eval | Run skill quality assessment
/test-skill | Skill testing with TDD methodology
/validate-hook | Validate hook compliance

Agents

Agent | Description
meta-architect | Designs plugin ecosystem architectures
plugin-validator | Validates plugin structure
skill-auditor | Audits skills for quality and compliance

Hooks

Hook | Type | Description
homeostatic_monitor.py | PostToolUse | Reads stability gap metrics, queues degrading skills for auto-improvement
aggregate_learnings_daily.py | UserPromptSubmit | Daily learning aggregation with severity-based issue creation
pre_skill_execution.py | PreToolUse | Skill execution tracking
skill_execution_logger.py | PostToolUse | Skill metrics logging
post-evaluation.json | Config | Quality scoring and improvement tracking
pre-skill-load.json | Config | Pre-load validation for dependencies

Insight Engine

The insight engine transforms raw skill execution metrics into diverse findings posted to GitHub Discussions. Four trigger points feed a pluggable lens architecture through a deduplication registry.

Architecture

Stop Hook (lightweight) ──┐
Scheduled agent (deep) ───┤
/pr-review ───────────────┤
/code-refinement ─────────┘
        │
        v
  insight_analyzer.py
  (loads lenses, runs analysis)
        │
        v
  InsightRegistry
  (content-hash dedup, 30-day expiry)
        │
        v
  post_insights_to_discussions.py
  (posts to "Insights" category)

Lenses

Four built-in lightweight lenses run on every Stop hook:

Lens | What it detects
TrendLens | Degradation or improvement over time
PatternLens | Shared failure modes across skills
HealthLens | Unused skills, orphaned hooks, config drift
DeltaLens | Changes since the last posted snapshot

LLM-augmented lenses (BugLens, OptimizationLens, ImprovementLens) run in the scheduled agent only.

Custom lenses drop into scripts/lenses/ and auto-discover via the LENS_META + analyze() convention.
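
A custom lens following that convention might look like the sketch below. The metadata keys, metrics shape, and finding fields are assumptions; check a built-in lens for the real schema.

```python
# scripts/lenses/latency_lens.py - hypothetical custom lens following the
# LENS_META + analyze() convention. Field names are assumptions.
LENS_META = {
    "name": "LatencyLens",
    "description": "Flags skills whose average runtime looks too high",
    "weight": "lightweight",
}

def analyze(metrics: dict) -> list[dict]:
    """Return one finding per skill whose average runtime exceeds 5s."""
    findings = []
    for skill, data in metrics.items():
        if data.get("avg_runtime_s", 0) > 5.0:
            findings.append({
                "type": "Trend",
                "skill": skill,
                "summary": f"{skill} averages {data['avg_runtime_s']:.1f}s per run",
            })
    return findings

findings = analyze({
    "slow-skill": {"avg_runtime_s": 7.2},
    "fast-skill": {"avg_runtime_s": 0.3},
})
```

Because discovery is convention-based, dropping the file into scripts/lenses/ is the whole installation step.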

Deduplication

Findings pass through four layers before posting:

  1. Content hash: deterministic SHA-256 from type, skill, and summary prevents re-posting identical findings.
  2. Snapshot diff: DeltaLens compares current metrics to the last snapshot and only surfaces changes.
  3. Staleness expiry: hashes expire after 30 days so persistent problems resurface with fresh data.
  4. Semantic dedup: Jaccard similarity against existing Discussions links related findings or skips near-duplicates.
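
The first and fourth layers can be sketched in a few lines. This is an illustrative reconstruction from the description above; the hashed fields and the similarity threshold are assumptions.

```python
# Sketch of dedup layers 1 and 4: a deterministic SHA-256 over
# (type, skill, summary), plus Jaccard similarity on word sets.
# The 0.8 threshold is an assumption.
import hashlib

def content_hash(finding: dict) -> str:
    key = "|".join([finding["type"], finding["skill"], finding["summary"]])
    return hashlib.sha256(key.encode()).hexdigest()

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def is_duplicate(finding, seen_hashes, existing_summaries, threshold=0.8):
    if content_hash(finding) in seen_hashes:  # layer 1: exact repeat
        return True
    return any(jaccard(finding["summary"], s) >= threshold  # layer 4: near-dup
               for s in existing_summaries)
```

A deterministic hash catches exact repeats cheaply, while the Jaccard pass catches reworded versions of the same finding.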

Insight Types

Type | Prefix | Source
Trend | [Trend] | Script
Pattern | [Pattern] | Script
Bug Alert | [Bug Alert] | Agent
Optimization | [Optimization] | Agent
Improvement | [Improvement] | Agent
PR Finding | [PR Finding] | PR review
Health Check | [Health Check] | Script

See ADR 0007 for the GitHub Discussions integration design and the palace bridge for cross-plugin knowledge flow.

Self-Adapting System

A closed-loop system that monitors skill health and auto-triggers improvements:

  1. homeostatic_monitor.py checks stability gap after each Skill invocation
  2. Skills with gap > 0.3 are queued in improvement_queue.py
  3. After 3+ flags, the skill-improver agent runs automatically
  4. skill_versioning.py tracks changes via YAML frontmatter
  5. rollback_reviewer.py creates GitHub issues if regressions are detected
  6. experience_library.py stores successful trajectories for future context
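
Steps 1 through 3 of the loop reduce to a thresholded counter. A minimal sketch, with the caveat that the real improvement_queue.py presumably persists its state rather than keeping it in memory:

```python
# Minimal sketch of the flag-and-trigger logic (steps 1-3 above).
# Thresholds come from the description; the in-memory dict is a
# simplification of the real persistent queue.
GAP_THRESHOLD = 0.3
FLAGS_BEFORE_IMPROVEMENT = 3

flag_counts: dict[str, int] = {}

def record_invocation(skill: str, stability_gap: float) -> bool:
    """Return True when the skill should be handed to the improver agent."""
    if stability_gap > GAP_THRESHOLD:
        flag_counts[skill] = flag_counts.get(skill, 0) + 1
    return flag_counts.get(skill, 0) >= FLAGS_BEFORE_IMPROVEMENT
```

Requiring repeated flags before acting keeps one noisy invocation from triggering an unnecessary rewrite.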

Cross-plugin dependency: reads stability metrics from memory-palace’s .history.json.

Usage Examples

Create a New Skill

/create-skill

# Claude will:
# 1. Use brainstorming for idea refinement
# 2. Apply TDD methodology
# 3. Generate skill scaffold
# 4. Create tests

Evaluate Skill Quality

Skill(abstract:skills-eval)

# Scores skills on:
# - Token efficiency
# - Documentation quality
# - Trigger clarity
# - Modular structure

Validate Plugin Structure

/validate-plugin /path/to/my-plugin

# Checks:
# - plugin.json structure
# - Required files present
# - Skill format compliance
# - Command syntax

Best Practices

Skill Design

  1. Single Responsibility: Each skill does one thing well
  2. Clear Triggers: Include “Use when…” in descriptions
  3. Token Efficiency: Keep skills under 2000 tokens
  4. TodoWrite Integration: Output actionable items

Hook Security

  1. No Secrets: Never log sensitive data
  2. Fail Safe: Default to allowing operations
  3. Minimal Scope: Request only needed permissions
  4. Audit Trail: Log decisions for review
  5. Agent-Aware (2.1.2+): SessionStart hooks receive agent_type to customize context

Superpowers Integration

When superpowers is installed:

Command | Enhancement
/create-skill | Uses brainstorming for idea refinement
/create-command | Uses brainstorming for concept development
/create-hook | Uses brainstorming for security design
/test-skill | Uses test-driven-development for TDD cycles

Related plugins:

  • leyline: Infrastructure patterns abstract builds on
  • imbue: Review patterns for skill evaluation

Foundation Layer

The foundation layer provides core workflow methodologies that other plugins build upon.

Purpose

Foundation plugins establish:

  • Analysis Patterns: How to approach investigation and review tasks
  • Workspace Operations: Git and file system interactions
  • Infrastructure Utilities: Reusable patterns for building plugins

Plugins

Plugin | Description | Key Use Case
imbue | Workflow methodologies | Analysis, evidence gathering
sanctum | Git operations | Commits, PRs, documentation
leyline | Building blocks | Error handling, authentication

Dependency Flow

imbue (standalone)
  |
sanctum --> imbue
  |
leyline (standalone)
  • imbue: No dependencies, purely methodology
  • sanctum: Uses imbue for review patterns
  • leyline: No dependencies, infrastructure patterns

When to Use

imbue

Use when you need to:

  • Structure a detailed review
  • Analyze changes systematically
  • Capture evidence for decisions
  • Prevent overengineering (scope-guard)

sanctum

Use when you need to:

  • Understand repository state
  • Generate commit messages
  • Prepare pull requests
  • Update documentation

leyline

Use when you need to:

  • Implement error handling patterns
  • Add authentication flows
  • Build plugin infrastructure
  • Standardize testing approaches

Key Workflows

Pre-Commit Flow

Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)

Review Flow

Skill(imbue:review-core)
Skill(imbue:proof-of-work)
Skill(imbue:structured-output)

PR Preparation

Skill(sanctum:git-workspace-review)
Skill(sanctum:pr-prep)

Installation

# Minimal foundation
/plugin install imbue@claude-night-market

# Full foundation
/plugin install imbue@claude-night-market
/plugin install sanctum@claude-night-market
/plugin install leyline@claude-night-market

imbue

Workflow methodologies for analysis, evidence gathering, and structured output.

Overview

Imbue provides reusable patterns for approaching analysis tasks. It’s a methodology plugin - the patterns apply to various inputs (git diffs, specs, logs) and chain together for complex workflows.

Core Philosophy: “NO IMPLEMENTATION WITHOUT A FAILING TEST FIRST” - The Iron Law enforced through proof-of-work validation.

Installation

/plugin install imbue@claude-night-market

Principles

  • Generalizable: Patterns work across different input types
  • Composable: Skills chain together naturally
  • Evidence-based: Emphasizes capturing proof for reproducibility
  • TDD-First: Iron Law enforcement prevents cargo cult testing

Skills

Review Patterns

Skill | Description | When to Use
review-core | Scaffolding for detailed reviews | Starting architecture, security, or code quality reviews
structured-output | Output formatting patterns | Preparing final reports

Analysis Methods

Skill | Description | When to Use
diff-analysis | Semantic changeset analysis | Understanding impact of changes
catchup | Context recovery | Getting up to speed after time away

Workflow Guards

Skill | Description | When to Use
scope-guard | Anti-overengineering with RICE+WSJF scoring | Evaluating features, sprint planning, roadmap reviews
proof-of-work | Evidence-based validation with output-contracts and retry-protocol modules | Enforcing Iron Law TDD discipline
rigorous-reasoning | Anti-sycophancy guardrails | Analyzing conflicts, evaluating contested claims

Workflow Automation

Skill | Description | When to Use
workflow-monitor | Execution monitoring and issue creation | After workflow failures or inefficiencies

Commands

Command | Description
/catchup | Quick context recovery from recent changes
/structured-review | Start structured review workflow with evidence logging

Agents

Agent | Description
review-analyst | Autonomous structured reviews with evidence gathering

Hooks

Hook | Type | Description
session-start.sh | SessionStart | Initializes scope-guard, Iron Law, and learning mode
user-prompt-submit.sh | UserPromptSubmit | Validates prompts against scope thresholds
tdd_bdd_gate.py | PreToolUse | Enforces Iron Law at write-time
pre-pr-scope-check.sh | Manual | Checks scope before PR creation
proof-enforcement.md | Design | Iron Law TDD compliance enforcement

Usage Examples

Structured Review

Skill(imbue:review-core)

# Required TodoWrite items:
# 1. review-core:context-established
# 2. review-core:scope-inventoried
# 3. review-core:evidence-captured
# 4. review-core:deliverables-structured
# 5. review-core:contingencies-documented

Diff Analysis

Skill(imbue:diff-analysis)

# Answers: "What changed and why does it matter?"
# - Categorizes changes by function
# - Assesses risks
# - Summarizes implications

Quick Catchup

/catchup

# Summarizes:
# - Recent commits
# - Changed files
# - Key decisions
# - Action items

Scope Guard

The scope-guard skill prevents overengineering via five components:

Component | Purpose
decision-framework | Worthiness formula and scoring
anti-overengineering | Rules to prevent scope creep
branch-management | Threshold monitoring (lines, commits, days)
github-integration | Issue creation and optional Discussion linking for deferrals
baseline-scenarios | Validated test scenarios

Iron Law TDD Enforcement

The proof-of-work skill enforces the Iron Law:

NO IMPLEMENTATION WITHOUT A FAILING TEST FIRST

This prevents “Cargo Cult TDD” where tests validate pre-conceived implementations.

Self-Check Protocol

Thought Pattern | Violation | Action
“Let me plan the implementation first” | Skipping RED | Write failing test FIRST
“I know what tests we need” | Pre-conceived impl | Document failure, THEN design
“The design is straightforward” | Skipping uncertainty | Let design EMERGE from tests

TodoWrite Items

proof:iron-law-red     - Failing test documented
proof:iron-law-green   - Minimal code to pass
proof:iron-law-refactor - Code improved, tests green
proof:iron-law-coverage - Coverage gates verified

See iron-law-enforcement.md module for full enforcement patterns.

Rigorous Reasoning

The rigorous-reasoning skill prevents sycophantic patterns through structured analysis:

Component | Purpose
priority-signals | Override principles (no courtesy agreement, checklist over intuition)
conflict-analysis | Harm/rights checklist for interpersonal conflicts
debate-methodology | Truth claims and contested territory handling
red-flag-monitoring | Detect sycophantic thought patterns

Red Flag Self-Check

Thought Pattern | Reality Check | Action
“I agree that…” | Did you validate? | Apply harm/rights checklist
“You’re right that…” | Is this proven? | Check for evidence
“That’s a fair point” | Fair by what standard? | Specify the standard

TodoWrite Integration

All skills output TodoWrite items for progress tracking:

review-core:context-established
review-core:scope-inventoried
diff-analysis:baseline-established
diff-analysis:changes-categorized
catchup:context-confirmed
catchup:delta-captured

Integration Pattern

Imbue is foundational - other plugins build on it:

# Sanctum uses imbue for review patterns
Skill(imbue:review-core)
Skill(sanctum:git-workspace-review)

# Pensive uses imbue for evidence gathering
Skill(imbue:proof-of-work)
Skill(pensive:architecture-review)

Superpowers Integration

Skill | Enhancement
scope-guard | Uses brainstorming, writing-plans, execute-plan

Related plugins:

  • sanctum: Uses imbue for review scaffolding
  • pensive: Uses imbue for evidence gathering
  • spec-kit: Uses imbue for analysis patterns

sanctum

Git and workspace operations for active development workflows.

Overview

Sanctum handles the practical side of development: commits, PRs, documentation updates, and version management. It’s the plugin you’ll use most during active coding.

Installation

/plugin install sanctum@claude-night-market

Skills

Skill | Description | When to Use
git-workspace-review | Preflight repo state analysis | Before any git operation
file-analysis | Codebase structure mapping | Understanding project layout
commit-messages | Conventional commit generation | After staging changes
pr-prep | PR preparation with quality gates | Before creating PRs
pr-review | PR analysis and feedback; supports --local for file output | Reviewing others’ PRs
doc-consolidation | Merge ephemeral docs | Consolidating LLM-generated docs
doc-updates | Documentation maintenance | Syncing docs with code
test-updates | Test generation and enhancement | Maintaining test suites
version-updates | Version bumping | Managing semantic versions
workflow-improvement | Workflow retrospectives | Improving development processes
tutorial-updates | Tutorial maintenance | Keeping tutorials current

Commands

Command | Description
/git-catchup | Git repository catchup
/commit-msg | Draft conventional commit message
/pr | Prepare PR with quality gates
/pr-review | Enhanced PR review
/fix-pr | Address PR review comments
/do-issue | Fix GitHub issues systematically
/fix-workflow | Improve recent workflow
/merge-docs | Consolidate ephemeral docs
/update-docs | Update documentation
/update-plugins | Audit and sync plugin.json registrations
/update-tests | Maintain tests
/update-tutorial | Update tutorial content
/update-version | Bump versions
/update-dependencies | Update project dependencies
/create-tag | Create git tags for releases
/resolve-threads | Resolve PR review threads

Agents

Agent | Description
git-workspace-agent | Repository state analysis
commit-agent | Commit message generation
pr-agent | PR preparation specialist
workflow-recreate-agent | Workflow slice reconstruction
workflow-improvement-* | Workflow improvement pipeline
dependency-updater | Dependency version management

Hooks

Hook | Type | Description
post_implementation_policy.py | SessionStart | Requires docs/tests/readme updates
security_pattern_check.py | PreToolUse | Security anti-pattern detection on Write/Edit
deferred_item_watcher.py | PostToolUse | Detect deferred items in Skill output
config_change_audit.py | ConfigChange | Audit configuration changes
verify_workflow_complete.py | Stop | Verifies workflow completion
session_complete_notify.py | Stop | Toast notification when awaiting input
deferred_item_sweep.py | Stop | Sweep session ledger and file GitHub issues

Usage Examples

Pre-Commit Workflow

# Stage changes
git add -p

# Review workspace
Skill(sanctum:git-workspace-review)

# Generate commit message
Skill(sanctum:commit-messages)

# Apply
git commit -m "<generated message>"

PR Preparation

# Run quality checks first
make fmt && make lint && make test

# Prepare PR
/pr

# Creates:
# - Summary
# - Change list
# - Testing checklist
# - Quality gate results

Fix PR Review Comments

/fix-pr

# Claude will:
# 1. Read PR comments
# 2. Triage by priority
# 3. Implement fixes
# 4. Resolve threads on GitHub

Fix GitHub Issue

/do-issue 42

# Uses subagent-driven-development:
# 1. Analyze issue
# 2. Create plan
# 3. Implement fix
# 4. Test
# 5. Prepare PR

Shared Modules

Sanctum uses shared modules under commands/shared/ to deduplicate logic across commands.

Module | Used By | Purpose
test-plan-injection | /fix-pr, /pr-review | Detect, generate, and inject test plans into PR descriptions

The test plan injection module checks whether a PR description already contains a test plan section (recognized heading + 3 or more checkbox items). When missing, it generates one from triage data and injects it before the review summary or appends it to the body.
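
The detection step reduces to "recognized heading followed by at least three checkbox items". A sketch of that check, where the accepted heading names and the regexes are assumptions (the threshold comes from the text):

```python
# Sketch of test-plan detection: a PR body "has a test plan" when a
# recognized heading is followed by 3+ checkbox items. Heading names
# and regexes are assumptions; the threshold comes from the docs.
import re

HEADING = re.compile(r"^#+\s*(test plan|testing)\s*$", re.I | re.M)
CHECKBOX = re.compile(r"^\s*[-*]\s*\[[ x]\]", re.I | re.M)

def has_test_plan(body: str) -> bool:
    match = HEADING.search(body)
    if not match:
        return False
    section = body[match.end():]
    # Only count checkboxes up to the next heading, if any
    nxt = re.search(r"^#+\s", section, re.M)
    if nxt:
        section = section[:nxt.start()]
    return len(CHECKBOX.findall(section)) >= 3
```

When this returns False, the injection step generates a section from triage data and places it before the review summary (or appends it).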

Skill Dependencies

Most sanctum skills depend on git-workspace-review:

git-workspace-review (foundation)
├── commit-messages
├── pr-prep
├── doc-updates
└── version-updates

file-analysis (standalone)

Always run git-workspace-review first to establish context.

TodoWrite Integration

git-review:repo-confirmed
git-review:status-overview
git-review:diff-stat
git-review:diff-details
pr-prep:workspace-reviewed
pr-prep:quality-gates
pr-prep:changes-summarized
pr-prep:testing-documented
pr-prep:pr-drafted

Workflow Patterns

Pre-Commit

git add -p
Skill(sanctum:git-workspace-review)
Skill(sanctum:commit-messages)

Pre-PR

make fmt && make lint && make test
Skill(sanctum:git-workspace-review)
Skill(sanctum:pr-prep)

Post-Review

/fix-pr
# Implements fixes, resolves threads

Release

Skill(sanctum:git-workspace-review)
Skill(sanctum:version-updates)
Skill(sanctum:doc-updates)
git commit && git tag

Superpowers Integration

  • /pr: Uses receiving-code-review for validation
  • /pr-review: Uses receiving-code-review for analysis
  • /fix-pr: Uses receiving-code-review for resolution
  • /do-issue: Uses multiple superpowers for the full workflow
  • imbue: Provides the review scaffolding that sanctum uses
  • pensive: Code review complements sanctum’s git operations

leyline

Infrastructure and pipeline building blocks for plugins.

Overview

Leyline provides reusable infrastructure patterns that other plugins build on. Think of it as a standard library for plugin development: error handling, authentication, storage, and testing patterns.

Installation

/plugin install leyline@claude-night-market

Skills

  • quota-management: Rate limiting and quotas. When to use: building services that consume APIs.
  • usage-logging: Telemetry tracking. When to use: logging tool usage for analytics.
  • service-registry: Service discovery patterns. When to use: managing external tool connections.
  • error-patterns: Standardized error handling patterns. When to use: production-grade error recovery.
  • damage-control: Recovery protocols for broken agent state. When to use: crash recovery, context overflow, merge conflicts.
  • content-sanitization: Sanitization for external content. When to use: loading Issues, PRs, Discussions, or WebFetch results.
  • markdown-formatting: Line wrapping and style conventions. When to use: generating or editing markdown prose.
  • authentication-patterns: Auth flow patterns. When to use: handling API keys and OAuth.
  • evaluation-framework: Decision thresholds. When to use: building evaluation criteria.
  • progressive-loading: Dynamic content loading. When to use: lazy loading strategies.
  • risk-classification: Inline 4-tier risk classification for agent tasks. When to use: risk-based task routing with war-room escalation.
  • pytest-config: Pytest configuration. When to use: standardized test configuration.
  • storage-templates: Storage abstraction. When to use: file and database patterns.
  • stewardship: Cross-cutting stewardship principles with five virtues (Care, Curiosity, Humility, Diligence, Foresight). When to use: working with project health, codebase improvement, or virtue-aligned development.
  • testing-quality-standards: Test quality guidelines. When to use: ensuring high-quality tests.
  • deferred-capture: Contract for unified deferred-item capture across plugins. When to use: implementing or testing deferred-capture wrappers.
  • git-platform: Git platform detection and cross-platform commands. When to use: abstracting GitHub/GitLab/Bitbucket differences.
  • supply-chain-advisory: Known-bad version detection, lockfile auditing, incident response. When to use: after supply chain advisories, dependency audits, or suspected compromise.
  • sem-integration: sem CLI detection, install-on-first-use, fallback patterns. When to use: skills consuming git diff output that benefit from entity-level diffs.

Commands

  • /reinstall-all-plugins: Uninstall and reinstall all plugins to refresh cache
  • /update-all-plugins: Update all installed plugins from marketplaces
  • /verify-plugin: Verify plugin trust via ERC-8004 Reputation Registry

Usage Examples

Plugin Management

# Refresh all plugins (fixes version mismatches)
/reinstall-all-plugins

# Update to latest versions
/update-all-plugins

Using as Dependencies

Leyline skills are typically used as dependencies in other plugins:

# In your skill's SKILL.md frontmatter
dependencies:
  - leyline:error-patterns
  - leyline:quota-management

Error Handling Pattern

Skill(leyline:error-patterns)

# Provides:
# - Structured error types
# - Recovery strategies
# - Logging standards
# - User-friendly messages

Authentication Pattern

Skill(leyline:authentication-patterns)

# Covers:
# - API key management
# - OAuth flows
# - Token refresh
# - Secret storage

Testing Standards

Skill(leyline:testing-quality-standards)

# Enforces:
# - Test naming conventions
# - Coverage requirements
# - Mocking guidelines
# - Fixture patterns

Modules

frontmatter

Canonical YAML frontmatter parser shared across plugins.

from leyline.frontmatter import parse_frontmatter

content = """---
name: my-skill
category: testing
---

# My Skill
"""
meta = parse_frontmatter(content)
# {'name': 'my-skill', 'category': 'testing'}

When PyYAML is installed, it uses yaml.safe_load. When unavailable, it falls back to a minimal key-value parser that handles simple key: value pairs (no nested structures). Returns None for content without frontmatter.

Other plugins should import this instead of reimplementing frontmatter parsing.
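The fallback behavior can be approximated like this (a simplified sketch of the described key-value parser, not leyline's actual code):

```python
def parse_frontmatter_fallback(content: str):
    """Minimal key-value frontmatter parser for when PyYAML is unavailable.

    Handles only flat `key: value` pairs (no nested structures) and returns
    None when the content has no frontmatter block.
    """
    if not content.startswith("---"):
        return None
    meta = {}
    for line in content.split("\n")[1:]:
        if line.strip() == "---":
            return meta  # closing delimiter ends the block
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return None  # no closing delimiter found
```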

Pattern Categories

Rate Limiting

# quota-management pattern
from leyline import QuotaManager

manager = QuotaManager(
    daily_limit=1000,
    hourly_limit=100,
    burst_limit=10
)

if manager.can_proceed():
    # Make API call
    manager.record_usage()

Telemetry

# usage-logging pattern
from leyline import UsageLogger

logger = UsageLogger(output="telemetry.csv")
logger.log_tool_use("WebFetch", tokens=500, latency_ms=1200)

Storage Abstraction

# storage-templates pattern
from leyline import Storage

storage = Storage.from_config()
storage.save("key", data)
data = storage.load("key")

Discussion Operations (GitHub Only)

The git-platform skill’s command-mapping module provides GraphQL templates for GitHub Discussions. These templates are consumed by attune (war room publishing), imbue (scope-guard linking), memory-palace (knowledge promotion), and minister (playbook rituals).

Supported operations: create, comment, threaded reply, mark-as-answer, search, get-by-number, update, and list-by-category. Category resolution from slug to nodeId is included as a prerequisite step.

On non-GitHub platforms (GitLab, Bitbucket), all Discussion operations are skipped with a warning.

A fetch-recent-discussions.sh SessionStart hook queries the 5 most recent “Decisions” discussions at session start and injects a summary (<600 tokens) so that new sessions can discover prior deliberations.

An auto-star-repo.sh SessionStart hook stars the repository if not already starred. The hook is idempotent (checks status before acting), never unstars, and fails silently if no auth method is available.

Integration

Leyline is used by:

  • abstract: Plugin validation uses error patterns
  • conjure: Delegation uses quota management
  • conservation: Context optimization uses MECW patterns

Best Practices

  1. Don’t Duplicate: Use leyline patterns instead of reimplementing
  2. Compose Patterns: Combine multiple patterns for complex needs
  3. Test with Standards: Use pytest-config for consistent testing
  4. Log Everything: Use usage-logging for debugging and analytics

Utility Layer

The utility layer provides resource optimization and external integration capabilities.

Purpose

Utility plugins handle:

  • Resource Management: Context window optimization, token conservation
  • External Delegation: Offloading tasks to external LLM services
  • Performance Monitoring: CPU/GPU and memory tracking

Plugins

  • conserve: Resource optimization. Key use case: context management.
  • conjure: External delegation. Key use case: long-context tasks.
  • hookify: Behavioral rules. Key use case: preventing unwanted actions.

When to Use

conserve

Use when you need to:

  • Monitor context window usage
  • Optimize token consumption
  • Handle large codebases efficiently
  • Track resource usage patterns

conjure

Use when you need to:

  • Process files too large for Claude’s context
  • Delegate bulk processing tasks
  • Use specialized external models
  • Manage API quotas across services

hookify

Use when you need to:

  • Prevent accidental destructive actions (force push, etc.)
  • Enforce coding standards via pattern matching
  • Create project-specific behavioral constraints
  • Add safety guardrails for automated workflows

Key Capabilities

Context Optimization

/optimize-context

Analyzes current context usage and suggests MECW (Minimum Effective Context Window) strategies.

Growth Analysis

/bloat-scan

Predicts context budget impact of skill growth patterns. (Growth analysis has been consolidated into /bloat-scan.)

External Delegation

make delegate-auto PROMPT="Summarize" FILES="src/"

Auto-selects the best external service for a task.

Conserve Modes

The conserve plugin supports different modes via environment variables:

  • Normal (claude): Full conservation guidance
  • Quick (CONSERVE_MODE=quick claude): Skip guidance for fast tasks
  • Deep (CONSERVE_MODE=deep claude): Extended resource allowance

Key Thresholds

Context Usage

  • < 30%: LOW - Normal operation
  • 30-50%: MODERATE - Consider optimization
  • > 50%: CRITICAL - Optimize immediately

Token Quotas

  • 5-hour rolling cap
  • Weekly cap
  • Check with /status

Installation

# Resource optimization
/plugin install conserve@claude-night-market

# External delegation
/plugin install conjure@claude-night-market

Integration with Other Layers

Utility plugins enhance all other layers:

Domain Specialists
       |
       v
   Utility Layer (optimization, delegation)
       |
       v
 Foundation Layer

For example, conjure can delegate large file processing before sanctum analyzes the results.

conserve

Resource optimization and performance monitoring for context window management.

Overview

Conserve helps you work efficiently within Claude’s context limits. It automatically loads optimization guidance at session start and provides tools for monitoring and reducing context usage.

Installation

/plugin install conserve@claude-night-market

Skills

  • context-optimization: MECW principles, 50% context rule, findings-format, memory-tiers, session-routing modules. When to use: context usage > 30%.
  • token-conservation: Token usage strategies and quota tracking. When to use: session start, before heavy loads.
  • cpu-gpu-performance: Resource monitoring and selective testing. When to use: before builds, tests, or training.
  • mcp-code-execution: MCP patterns for data pipelines. When to use: processing data outside context.
  • bloat-detector: Detect bloated documentation, dead code, dead wrappers. When to use: during documentation reviews, code cleanup.
  • clear-context: Context window management strategies. When to use: approaching context limits.

Commands

  • /bloat-scan: Detect code bloat, dead code, and dead wrapper scripts
  • /unbloat: Remove detected bloat with progressive analysis
  • /optimize-context: Analyze and optimize context window usage

Agents

  • context-optimizer: Autonomous context optimization and MECW compliance

Hooks

  • session-start.sh (SessionStart): Loads conservation guidance at startup

Usage Examples

Context Optimization

/optimize-context

# Analyzes:
# - Current context usage
# - Token distribution
# - Compression opportunities
# - MECW compliance

Manual Skill Invocation

Skill(conservation:context-optimization)

# Provides:
# - MECW principles
# - 50% context rule
# - Compression strategies
# - Eviction priorities

Bypass Modes

Control conservation behavior via environment variables:

  • Normal (claude): Full conservation guidance
  • Quick (CONSERVATION_MODE=quick claude): Skip guidance for fast processing
  • Deep (CONSERVATION_MODE=deep claude): Extended resource allowance

Examples

# Quick mode for simple tasks
CONSERVATION_MODE=quick claude

# Deep mode for complex analysis
CONSERVATION_MODE=deep claude

Key Thresholds

Context Usage

  • LOW (< 30%): Normal operation
  • MODERATE (30-50%): Consider optimization
  • CRITICAL (> 50%): Optimize immediately
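As a minimal illustration, these thresholds map to a simple classifier (the function is hypothetical, not part of conserve's API):

```python
def classify_context_usage(percent: float) -> str:
    """Map a context usage percentage to conserve's threshold levels."""
    if percent < 30:
        return "LOW"        # normal operation
    if percent <= 50:
        return "MODERATE"   # consider optimization
    return "CRITICAL"       # optimize immediately
```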

Token Quotas

  • 5-hour rolling cap: Prevents burst usage
  • Weekly cap: Enforces sustainable usage
  • Check status: Use /status to see current usage

MECW Principles

Minimum Effective Context Window strategies:

  1. Summarize Early: Compress large outputs before they accumulate
  2. Load on Demand: Fetch file contents only when needed
  3. Evict Stale: Remove information no longer relevant
  4. Prioritize Recent: Weight recent context higher than old
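To make the principles concrete, here is a toy sketch of a context ledger that applies them; the class, thresholds, and summarization rule are invented for illustration and are not conserve's implementation:

```python
from collections import OrderedDict

class ContextLedger:
    """Toy MECW ledger: summarize early, prioritize recent, evict stale."""

    def __init__(self, budget_chars: int = 1000):
        self.budget = budget_chars
        self.entries: OrderedDict[str, str] = OrderedDict()

    def add(self, key: str, text: str, summarize_over: int = 200):
        if len(text) > summarize_over:
            # Summarize Early: compress large outputs before they accumulate
            text = text[:summarize_over] + "…[summarized]"
        self.entries[key] = text
        self.entries.move_to_end(key)  # Prioritize Recent
        while sum(len(v) for v in self.entries.values()) > self.budget:
            self.entries.popitem(last=False)  # Evict Stale: oldest first
```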

Optimization Strategies

For Large Files

# Don't load entire file
# Instead, use targeted reads
Read file.py --offset 100 --limit 50

For Search Results

# Limit search output
Grep --head_limit 20

For Git Operations

# Use stats instead of full diffs
git diff --stat
git log --oneline -10

CPU/GPU Performance

The cpu-gpu-performance skill monitors resource usage:

Skill(conservation:cpu-gpu-performance)

# Provides:
# - Baseline establishment
# - Resource monitoring
# - Selective test execution
# - Build optimization

MCP Code Execution

For processing data too large for context:

Skill(conservation:mcp-code-execution)

# Patterns for:
# - External data processing
# - Pipeline optimization
# - Result summarization

Superpowers Integration

  • /optimize-context: Uses condition-based-waiting for smart optimization
  • leyline: Provides MECW pattern implementations
  • abstract: Uses conservation for skill optimization
  • conjure: Delegates to external services when context limited

conjure

Delegation to external LLM services for long-context or bulk tasks.

Overview

Conjure provides a framework for delegating tasks to external LLM services (Gemini, Qwen) when Claude’s context window is insufficient or when specialized models are better suited.

Installation

/plugin install conjure@claude-night-market

Skills

  • delegation-core: Framework for delegation decisions. When to use: assessing whether tasks should be offloaded.
  • gemini-delegation: Gemini CLI integration. When to use: processing massive context windows.
  • qwen-delegation: Qwen MCP integration. When to use: tasks with specific privacy needs.

Commands (Makefile)

  • make delegate-auto: Auto-select best service. Example: make delegate-auto PROMPT="Summarize" FILES="src/"
  • make quota-status: Show current quota usage. Example: make quota-status
  • make usage-report: Summarize token usage and costs. Example: make usage-report

Hooks

  • bridge.on_tool_start (PreToolUse): Suggests delegation when files exceed thresholds
  • bridge.after_tool_use (PostToolUse): Suggests delegation if output is truncated

Usage Examples

Auto-Delegation

make delegate-auto PROMPT="Summarize all files" FILES="src/"

# Conjure will:
# 1. Assess file sizes
# 2. Check quota availability
# 3. Select optimal service
# 4. Execute delegation
# 5. Return results

Check Quota Status

make quota-status

# Output:
# Gemini: 450/1000 tokens used (5h rolling)
# Qwen: 200/500 tokens used (5h rolling)

Usage Report

make usage-report

# Output:
# This week:
#   Gemini: 2,500 tokens, $0.05
#   Qwen: 800 tokens, $0.02
# Total: 3,300 tokens, $0.07

Manual Service Selection

# Force Gemini for large context
Skill(conjure:gemini-delegation)

# Force Qwen for privacy-sensitive tasks
Skill(conjure:qwen-delegation)

Delegation Decision Framework

The delegation-core skill evaluates:

  • Context Size (high): Does the input exceed Claude’s context?
  • Task Type (medium): Is the task better suited to another model?
  • Privacy Needs (high): Are there data residency requirements?
  • Quota Available (high): Is there capacity on the target service?
  • Cost (low): Is delegation cost-effective?
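A rough sketch of how such weighted factors could combine into a go/no-go decision; the weights, threshold, and function name are illustrative assumptions, not conjure's actual logic:

```python
# Illustrative weights mirroring the high/medium/low factors above.
WEIGHTS = {"context_size": 3, "task_type": 2, "privacy": 3, "quota": 3, "cost": 1}

def should_delegate(signals: dict, threshold: int = 6) -> bool:
    """signals maps each factor to True when it favors delegation."""
    if not signals.get("quota", False):
        return False  # no capacity on the target service is a hard stop
    score = sum(WEIGHTS[k] for k, v in signals.items() if v)
    return score >= threshold
```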

Service Comparison

  • Gemini: Large context (1M+ tokens). Best for bulk file processing and long documents.
  • Qwen: Local/private inference. Best for sensitive data and offline work.

Hook Behavior

Pre-Tool Use Hook

When reading large files:

[Conjure Bridge] File exceeds context threshold
Suggested action: Delegate to Gemini
Estimated tokens: 125,000
Quota available: Yes

Post-Tool Use Hook

When output is truncated:

[Conjure Bridge] Output truncated at 50,000 chars
Suggested action: Re-run with delegation
Recommended service: Gemini

Configuration

Environment Variables

# Gemini API key
export GEMINI_API_KEY=your-key

# Qwen MCP endpoint
export QWEN_MCP_ENDPOINT=http://localhost:8080

Quota Configuration

Edit conjure/config/quotas.yaml:

gemini:
  hourly_limit: 1000
  daily_limit: 10000

qwen:
  hourly_limit: 500
  daily_limit: 5000

Integration Patterns

With Conservation

# Conservation detects high context usage
# Suggests delegation via conjure
Skill(conservation:context-optimization)
# -> Recommends: Skill(conjure:delegation-core)

With Sanctum

# Large repo analysis
Skill(sanctum:git-workspace-review)
# If repo too large:
# -> Suggests: make delegate-auto FILES="."

Dependencies

Conjure uses leyline for infrastructure:

conjure
    |
    v
leyline (quota-management, service-registry)

Best Practices

  1. Check Quota First: Run make quota-status before large delegations
  2. Use Auto Mode: Let conjure select the optimal service
  3. Monitor Costs: Review make usage-report weekly
  4. Cache Results: Store delegation results locally to avoid repeat calls
  • leyline: Provides quota management and service registry
  • conservation: Detects when delegation is beneficial

hookify

Create custom behavioral rules through markdown configuration files.

Overview

Hookify provides a framework for defining behavioral rules that prevent unwanted actions through pattern matching. Rules are defined in markdown files and can be enabled, disabled, or customized per project.

Installation

/plugin install hookify@claude-night-market

Skills

  • writing-rules: Guide for authoring behavioral rules. When to use: creating new rules.
  • rule-catalog: Pre-built behavioral rule templates. When to use: installing common rules.

Commands

  • /hookify: Create behavioral rules to prevent unwanted actions
  • /hookify:install: Install hookify rule from catalog
  • /hookify:list: List all hookify rules with status
  • /hookify:configure: Interactive rule enable/disable interface
  • /hookify:help: Display hookify help and documentation

Usage Examples

Install a Rule

# Install from catalog
/hookify:install no-force-push

# List installed rules
/hookify:list --status

Create Custom Rule

# Create a new rule interactively
/hookify

# Configure existing rule
/hookify:configure no-force-push --disable

Rule Structure

Rules are markdown files with frontmatter:

---
name: no-force-push
trigger: PreToolUse
matcher: Bash
pattern: "git push.*--force"
action: block
message: "Force push blocked. Use --force-with-lease instead."
---

# No Force Push Rule

Prevents accidental force pushes that could overwrite remote history.
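To illustrate, a rule like this could be evaluated roughly as follows (a sketch assuming regex matching against the tool command; not hookify's actual implementation):

```python
import re

# Fields mirror the frontmatter above; the evaluation logic is illustrative.
rule = {
    "trigger": "PreToolUse",
    "matcher": "Bash",
    "pattern": r"git push.*--force",
    "action": "block",
    "message": "Force push blocked. Use --force-with-lease instead.",
}

def evaluate(rule: dict, event: str, tool: str, command: str):
    """Return (action, message) when the rule fires, else None."""
    if event != rule["trigger"] or tool != rule["matcher"]:
        return None  # rule does not apply to this event or tool
    if re.search(rule["pattern"], command):
        return (rule["action"], rule["message"])
    return None
```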

Integration

Hookify integrates with:

  • abstract: Rule validation and testing
  • imbue: Scope guard integration
  • sanctum: Git workflow protection

egregore

Autonomous agent orchestrator for full development lifecycles with session budget management and crash recovery.

Overview

Egregore spawns autonomous Claude Code sessions that execute multi-step development tasks without human input. It manages session budgets, provides crash recovery via a watchdog daemon, and validates output quality before merging.

Installation

/plugin install egregore@claude-night-market

Skills

  • summon: Spawn autonomous session with budget. When to use: delegating full tasks.
  • quality-gate: Pre-merge quality validation. When to use: before merging autonomous work.
  • install-watchdog: Install crash-recovery watchdog. When to use: setting up monitoring.
  • uninstall-watchdog: Remove watchdog. When to use: cleaning up monitoring.

Commands

  • /summon: Spawn autonomous agent session
  • /dismiss: Terminate autonomous session
  • /status: Check session status
  • /install-watchdog: Install crash-recovery daemon
  • /uninstall-watchdog: Remove watchdog daemon

Agents

  • orchestrator: Manages autonomous development lifecycle
  • sentinel: Watchdog agent for crash recovery

Usage Examples

Spawn an Autonomous Session

# Summon with default budget
/summon "Implement feature X"

# Check status
/status

# Dismiss when done
/dismiss

Install Watchdog

# Set up crash recovery monitoring
/install-watchdog

# Remove when no longer needed
/uninstall-watchdog

Hooks

  • session_start_hook.py (SessionStart): Injects manifest context into new sessions
  • user_prompt_hook.py (UserPromptSubmit): Reminds orchestrator to resume after user interrupts
  • stop_hook.py (Stop): Prevents early exit while work items remain

The UserPromptSubmit hook lets users interact with a running egregore session without breaking the orchestration loop. After handling the user’s request, the orchestrator re-reads the manifest and resumes where it left off.

Self-Healing Heartbeat

A recurring cron (*/5 * * * *) detects stalled pipelines and re-enters the orchestration loop automatically. This catches edge cases where context compaction or unexpected errors break the loop despite the hooks.

Architecture

Egregore uses a convention-based approach where autonomous sessions follow project conventions stored in conventions/. The orchestrator agent manages the session lifecycle, while the sentinel agent monitors for crashes and restarts sessions as needed.

Parallel Execution

Independent work items run concurrently via git worktrees (up to 3 by default). Within the quality stage, independent steps execute in parallel waves using dependency-graph scheduling from stage_parallel.py.
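Dependency-graph wave scheduling of this kind can be sketched as follows (an illustration in the spirit of stage_parallel.py, not its actual API):

```python
def parallel_waves(deps: dict) -> list:
    """Group steps into waves; each wave depends only on earlier waves.

    deps maps each step name to the list of steps it depends on.
    """
    remaining = dict(deps)
    done = set()
    waves = []
    while remaining:
        # All steps whose dependencies are satisfied run in the same wave.
        wave = sorted(s for s, d in remaining.items() if set(d) <= done)
        if not wave:
            raise ValueError("dependency cycle detected")
        for step in wave:
            del remaining[step]
        done.update(wave)
        waves.append(wave)
    return waves
```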

Agent Specialization

Specialist agents (reviewer, documenter, tester) handle specific pipeline steps and accumulate context across sessions. Profiles persist in .egregore/specialists/.

Cross-Item Learning

The learning module extracts patterns from decision logs (tech stack choices, failure modes, architecture decisions) and generates briefings for new work items based on historical success rates.

Multi-Repository Support

RepoRegistry manages work across multiple repositories, routing items by labels and tracking per-repo configuration in .egregore/repos.json.

GitHub Discussions Publishing

Discoveries, insights, and retrospectives from autonomous sessions are published to GitHub Discussions with rate limiting and deduplication.

herald

Shared notification library for Claude Code plugins.

Overview

Herald was extracted from egregore to provide independent notification capabilities. Any plugin can send alerts through herald without depending on the full egregore orchestrator.

Herald is a pure library plugin: it declares no skills, commands, agents, or hooks. Plugins import its Python API directly (guarded with try/except per ADR-0001).

Installation

/plugin install herald@claude-night-market

Features

  • GitHub issue creation via gh CLI
  • Webhook delivery to Slack, Discord, or generic endpoints
  • SSRF protection with URL validation
  • Configurable source labels for multi-plugin use

Alert Events

  • CRASH (crash): Process or agent crash
  • RATE_LIMIT (rate_limit): API quota exceeded
  • PIPELINE_FAILURE (pipeline_failure): Build or deploy failure
  • COMPLETION (completion): Task finished
  • WATCHDOG_RELAUNCH (watchdog_relaunch): Watchdog restarted agent

Usage

from notify import AlertEvent, alert

alert(
    event=AlertEvent.CRASH,
    detail="Worker process crashed",
    source="my-plugin",
)

See the herald README for webhook examples.

oracle

Local ONNX Runtime inference daemon for ML-enhanced plugin capabilities.

Overview

Oracle runs a sidecar HTTP daemon that serves ONNX model inference on localhost. Plugins opt in explicitly by writing a sentinel file. It uses a dedicated Python 3.11+ venv managed by uv and does not touch the system Python environment.

Installation

/plugin install oracle@claude-night-market

Skills

  • setup - Install and configure the oracle ONNX inference daemon

Commands

  • /oracle-setup - Install and configure the oracle ONNX inference daemon, including venv creation and model placement

Domain Specialists

Domain specialist plugins provide deep expertise in specific areas of software development.

Purpose

Domain plugins offer:

  • Deep Expertise: Specialized knowledge for specific domains
  • Workflow Automation: End-to-end processes for common tasks
  • Best Practices: Curated patterns and anti-patterns

Plugins

  • cartograph (Visualization): Codebase diagrams via Mermaid
  • archetypes (Architecture): Paradigm selection
  • pensive (Code Review): Multi-faceted reviews
  • parseltongue (Python): Modern Python development
  • phantom (Desktop): Computer use automation
  • memory-palace (Knowledge): Spatial memory organization
  • spec-kit (Specifications): Spec-driven development
  • minister (Releases): Initiative tracking
  • attune (Projects): Full-cycle project development
  • scry (Media): Documentation recordings
  • scribe (Documentation): AI slop detection and cleanup

When to Use

archetypes

Use when you need to:

  • Choose an architecture for a new system
  • Evaluate trade-offs between patterns
  • Get implementation guidance for a paradigm

pensive

Use when you need to:

  • Conduct thorough code reviews
  • Audit security and architecture
  • Review APIs, tests, or Makefiles

parseltongue

Use when you need to:

  • Write modern Python (3.12+)
  • Implement async patterns
  • Package projects with uv
  • Profile and optimize performance

phantom

Use when you need to:

  • Drive desktop environments through vision and action
  • Automate GUI interactions with screenshot capture
  • Control mouse and keyboard programmatically
  • Run autonomous desktop agent loops

memory-palace

Use when you need to:

  • Organize complex knowledge
  • Build spatial memory structures
  • Maintain digital gardens
  • Cache research efficiently

spec-kit

Use when you need to:

  • Define features before implementation
  • Generate structured task lists
  • Maintain specification consistency
  • Track implementation progress

minister

Use when you need to:

  • Track GitHub initiatives
  • Monitor release readiness
  • Generate stakeholder reports

attune

Use when you need to:

  • Brainstorm project ideas
  • Create specifications from concepts
  • Plan architecture and tasks
  • Initialize projects with tooling
  • Execute systematic implementation

scry

Use when you need to:

  • Record terminal demos with VHS
  • Capture browser sessions with Playwright
  • Generate GIFs for documentation
  • Compose multi-source tutorials

scribe

Use when you need to:

  • Detect AI-generated content markers
  • Clean up documentation slop
  • Learn and apply writing styles
  • Verify documentation accuracy

Dependencies

Most domain plugins depend on foundation layers:

archetypes (standalone)
pensive --> imbue, sanctum
parseltongue (standalone)
phantom (standalone)
memory-palace (standalone)
spec-kit --> imbue
minister (standalone)
attune --> spec-kit, imbue
scry (standalone)
scribe --> imbue, conserve

Example Workflows

Architecture Decision

Skill(archetypes:architecture-paradigms)
# Interactive paradigm selection
# Returns: Detailed implementation guide

Full Code Review

/full-review
# Runs multiple review types:
# - architecture-review
# - api-review
# - bug-review
# - test-review

Python Project Setup

Skill(parseltongue:python-packaging)
Skill(parseltongue:python-testing)

Feature Development

/speckit-specify Add user authentication
/speckit-plan
/speckit-tasks
/speckit-implement

Full Project Lifecycle

/attune:brainstorm
# Socratic questioning to explore project idea

/attune:specify
# Create specification from brainstorm

/attune:blueprint
# Design architecture and break down tasks

/attune:init
# Initialize project with tooling

/attune:execute
# Execute implementation with TDD

Media Recording

/record-terminal
# Creates VHS tape script and records terminal to GIF

/record-browser
# Records browser session with Playwright

Documentation Cleanup

Skill(scribe:slop-detector)
# Scans for AI-generated content markers

/doc-polish README.md
# Interactive cleanup of AI slop

Agent(scribe:doc-verifier)
# Validates documentation claims

Installation

Install based on your needs:

# Architecture work
/plugin install archetypes@claude-night-market

# Code review
/plugin install pensive@claude-night-market

# Python development
/plugin install parseltongue@claude-night-market

# Desktop automation
/plugin install phantom@claude-night-market

# Knowledge management
/plugin install memory-palace@claude-night-market

# Specification-driven development
/plugin install spec-kit@claude-night-market

# Release management
/plugin install minister@claude-night-market

# Full-cycle project development
/plugin install attune@claude-night-market

# Media recording
/plugin install scry@claude-night-market

# Documentation review
/plugin install scribe@claude-night-market

Use all domain specialist plugins to unlock: Domain Master

cartograph

Codebase visualization through architecture, data flow, dependency, workflow, and class diagrams rendered via Mermaid Chart MCP.

Overview

Cartograph analyzes code structure and generates Mermaid diagrams. A codebase explorer agent extracts modules, imports, and relationships, then diagram-specific skills convert the structural model into rendered visuals.

Installation

/plugin install cartograph@claude-night-market

Skills

  • architecture-diagram: Component relationship diagrams. When to use: system structure, plugin architecture.
  • data-flow: Data movement between components. When to use: request paths, API flows.
  • dependency-graph: Import and dependency relationships. When to use: coupling analysis, circular deps.
  • workflow-diagram: Process steps and state transitions. When to use: CI/CD pipelines, dev workflows.
  • class-diagram: Classes, interfaces, inheritance. When to use: OOP structure, type hierarchies.

Commands

  • /visualize: Generate a codebase diagram

Agents

  • codebase-explorer: Analyzes modules, imports, and relationships

Usage Examples

Architecture Diagram

/visualize architecture plugins/sanctum

Generates a flowchart showing component relationships within the specified scope.

Dependency Graph

/visualize dependency plugins/

Shows import relationships between modules. Useful for spotting circular dependencies or tight coupling.

Data Flow

/visualize data-flow plugins/conserve

Produces a sequence diagram tracing data movement through the system.

Workflow Diagram

/visualize workflow

Maps process steps, decision points, and state transitions for development workflows or CI/CD pipelines.

Class Diagram

/visualize class plugins/gauntlet

Shows classes, interfaces, inheritance, and composition within a module.

How It Works

  1. The /visualize command routes to a diagram skill
  2. The skill dispatches the codebase-explorer agent
  3. The agent analyzes code structure and produces a JSON structural model
  4. The skill generates Mermaid syntax from the model
  5. The Mermaid Chart MCP server renders the diagram
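Step 4 can be sketched as a small transformation from the structural model to Mermaid syntax; the JSON shape used here is an assumed example, not cartograph's actual schema:

```python
def to_mermaid(model: dict) -> str:
    """Render an assumed {nodes, edges} structural model as a Mermaid flowchart."""
    lines = ["flowchart TD"]
    for node in model["nodes"]:
        lines.append(f'    {node["id"]}["{node["label"]}"]')  # declare nodes
    for edge in model["edges"]:
        lines.append(f'    {edge["from"]} --> {edge["to"]}')  # declare edges
    return "\n".join(lines)
```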

Requirements

  • Mermaid Chart MCP server (included with Claude Code)
  • scry: Terminal and browser recordings for demos
  • pensive: Architecture review complements visual diagrams with written assessments
  • archetypes: Architecture paradigm selection pairs with architectural visualization

archetypes

Architecture paradigm selection and implementation planning.

Overview

Archetypes helps you choose the right architecture for your system. It provides an interactive paradigm selector and detailed implementation guides for 13 architectural patterns.

Installation

/plugin install archetypes@claude-night-market

Skills

Orchestrator

  • architecture-paradigms: Interactive paradigm selector. When to use: choosing architecture for new systems.

Paradigm Guides

  • architecture-paradigm-layered (N-tier): Simple web apps, internal tools
  • architecture-paradigm-hexagonal (Ports & Adapters): Infrastructure independence
  • architecture-paradigm-microservices (Distributed services): Large-scale enterprise
  • architecture-paradigm-event-driven (Async communication): Real-time processing
  • architecture-paradigm-serverless (Function-as-a-Service): Event-driven with minimal infra
  • architecture-paradigm-pipeline (Pipes-and-filters): ETL, media processing
  • architecture-paradigm-cqrs-es (CQRS + Event Sourcing): Audit trails, event replay
  • architecture-paradigm-microkernel (Plugin-based): Minimal core with extensions
  • architecture-paradigm-modular-monolith (Internal boundaries): Module separation without distribution
  • architecture-paradigm-space-based (Data-grid): High-scale stateful workloads
  • architecture-paradigm-service-based (Coarse-grained SOA): Modular without microservices
  • architecture-paradigm-functional-core (Functional Core, Imperative Shell): Superior testability
  • architecture-paradigm-client-server (Client-server): Clear client/server responsibilities

Usage Examples

Interactive Selection

Skill(archetypes:architecture-paradigms)

# Claude will:
# 1. Ask about your requirements
# 2. Evaluate trade-offs
# 3. Recommend paradigms
# 4. Provide implementation guidance

Direct Paradigm Access

# Get specific paradigm details
Skill(archetypes:architecture-paradigm-hexagonal)

# Returns:
# - Core concepts
# - When to use
# - Implementation patterns
# - Example code
# - Trade-offs

Paradigm Comparison

By Complexity

| Level | Paradigms |
|---|---|
| Low | Layered, Client-Server |
| Medium | Modular Monolith, Service-Based, Functional Core |
| High | Microservices, Event-Driven, CQRS-ES, Space-Based |

By Team Size

| Team | Recommended |
|---|---|
| 1-3 | Layered, Functional Core, Modular Monolith |
| 4-10 | Hexagonal, Service-Based, Pipeline |
| 10+ | Microservices, Event-Driven |

By Scalability Need

| Need | Paradigms |
|---|---|
| Single server | Layered, Modular Monolith |
| Horizontal | Microservices, Serverless |
| Extreme | Space-Based, Event-Driven |

Selection Criteria

The paradigm selector evaluates:

  1. Team size and structure
  2. Scalability requirements
  3. Deployment constraints
  4. Data consistency needs
  5. Development velocity priorities
  6. Operational maturity

Example Output

Hexagonal Architecture

## Hexagonal Architecture (Ports & Adapters)

### Core Concepts
- Domain logic at center
- Ports define interfaces
- Adapters implement ports
- Infrastructure is pluggable

### When to Use
- Need to swap databases/frameworks
- Test-driven development focus
- Long-lived applications
- Multiple integration points

### Structure
src/
├── domain/           # Pure business logic
│   ├── models/
│   └── services/
├── ports/            # Interface definitions
│   ├── inbound/
│   └── outbound/
└── adapters/         # Implementations
    ├── web/
    ├── persistence/
    └── external/

### Trade-offs
+ Easy testing via port mocking
+ Framework-agnostic domain
+ Clear dependency direction
- More initial structure
- Learning curve

Best Practices

  1. Start Simple: Begin with layered, evolve as needed
  2. Match Team: Don’t use microservices with a small team
  3. Consider Ops: Complex architectures need operational maturity
  4. Plan Evolution: Design for change, not perfection

Decision Tree

Start
  |
  v
Simple CRUD? --> Yes --> Layered
  |
  No
  |
  v
Need testability? --> Yes --> Functional Core or Hexagonal
  |
  No
  |
  v
High scale? --> Yes --> Event-Driven or Space-Based
  |
  No
  |
  v
Multiple teams? --> Yes --> Microservices or Service-Based
  |
  No
  |
  v
Modular Monolith
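The tree above maps directly to a function. This is a sketch of the diagram's logic only, not shipped selector code; the boolean parameters are assumed names:

```python
# Decision tree from the diagram, as a function. Each question is asked
# in order; the first "yes" wins, and Modular Monolith is the fallback.
def pick_paradigm(simple_crud: bool, needs_testability: bool,
                  high_scale: bool, multiple_teams: bool) -> str:
    if simple_crud:
        return "Layered"
    if needs_testability:
        return "Functional Core or Hexagonal"
    if high_scale:
        return "Event-Driven or Space-Based"
    if multiple_teams:
        return "Microservices or Service-Based"
    return "Modular Monolith"
```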
  • pensive: Architecture review complements paradigm selection
  • spec-kit: Use after paradigm selection for implementation planning

pensive

Code review and analysis toolkit with specialized review skills.

Overview

Pensive provides deep code review capabilities across multiple dimensions: architecture, APIs, bugs, tests, and more. It orchestrates reviews intelligently, selecting the right skills for each codebase.

Installation

/plugin install pensive@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| unified-review | Review orchestration | Starting reviews (Claude picks tools) |
| api-review | API surface evaluation | Reviewing OpenAPI specs, library exports |
| architecture-review | Architecture assessment | Checking ADR alignment, design principles |
| bug-review | Bug hunting | Systematic search for logic errors |
| rust-review | Rust-specific checking | Auditing unsafe code, borrow patterns |
| test-review | Test quality review | Ensuring tests verify behavior |
| makefile-review | Makefile best practices | Reviewing Makefile quality |
| math-review | Mathematical correctness | Reviewing mathematical logic |
| shell-review | Shell script auditing | Exit codes, portability, safety patterns |
| safety-critical-patterns | NASA Power of 10 rules | Robust, verifiable code with context-appropriate rigor |
| code-refinement | Code quality analysis | Duplication, efficiency, clean code violations |
| tiered-audit | Three-tier escalation audit | Codebase audits starting from git history |

Commands

| Command | Description |
|---|---|
| /full-review | Unified review with intelligent skill selection |
| /api-review | Run API surface review |
| /architecture-review | Run architecture assessment |
| /bug-review | Run bug hunting |
| /rust-review | Run Rust-specific review |
| /test-review | Run test quality review |
| /makefile-review | Run Makefile review |
| /math-review | Run mathematical review |
| /shell-review | Run shell script safety review |
| /skill-review | Analyze skill runtime metrics and stability gaps (canonical) |
| /skill-history | View recent skill executions |

Note: For static skill quality analysis (frontmatter, structure), use abstract:skill-auditor instead.

Agents

| Agent | Description |
|---|---|
| code-reviewer | Expert code review for bugs, security, quality |
| architecture-reviewer | Principal-level architecture specialist |
| rust-auditor | Expert Rust security and safety auditor |

Usage Examples

Full Review

/full-review

# Claude will:
# 1. Analyze codebase structure
# 2. Select relevant review skills
# 3. Execute reviews in priority order
# 4. Synthesize findings
# 5. Provide actionable recommendations

Specific Reviews

# Architecture review
/architecture-review

# API review
/api-review

# Bug hunting
/bug-review

# Test quality
/test-review

Manual Skill Invocation

Skill(pensive:architecture-review)

# Checks:
# - ADR compliance
# - Dependency direction
# - Layer violations
# - Design pattern usage

Review Depth

Each review skill operates at multiple levels:

| Level | Description | Time |
|---|---|---|
| Quick | High-level scan | 1-2 min |
| Standard | Thorough review | 5-10 min |
| Deep | Exhaustive analysis | 15+ min |

Specify depth when invoking:

/architecture-review --depth deep

Review Categories

Architecture Review

  • ADR alignment
  • Dependency analysis
  • Layer boundary violations
  • Pattern consistency
  • Coupling metrics

API Review

  • Endpoint consistency
  • Error response patterns
  • Authentication/authorization
  • Versioning strategy
  • Documentation completeness

Bug Review

  • Logic errors
  • Edge cases
  • Race conditions
  • Resource leaks
  • Error handling gaps

Test Review

  • Coverage gaps
  • Test isolation
  • Assertion quality
  • Mocking patterns
  • Edge case coverage

Rust Review

  • Unsafe code audit
  • Borrow checker patterns
  • Memory safety
  • Concurrency safety
  • Idiomatic usage
  • Silent return value checks
  • Collection type selection
  • SQL injection detection
  • #[cfg(test)] misuse patterns
  • Error message quality
  • Duplicate validator detection
  • Builtin preference (From/Into/TryFrom/Default over helpers)

Dependencies

Pensive builds on foundation plugins:

pensive
    |
    +--> imbue (review-core, proof-of-work)
    |
    +--> sanctum (git-workspace-review)

Workflow Integration

Pre-PR Review

# Before opening PR
Skill(sanctum:git-workspace-review)
/full-review

# Address findings
# Then create PR

Post-Merge Review

# After merge, deep review
/architecture-review --depth deep

Targeted Review

# Review specific area
/api-review src/api/

Superpowers Integration

| Command | Enhancement |
|---|---|
| /full-review | Uses systematic-debugging for four-phase analysis |
| /full-review | Uses verification-before-completion for evidence |

Output Format

Reviews produce structured output:

## Review Summary

### Critical Issues
1. [BUG] Race condition in UserService.update()
   - Location: src/services/user.ts:45
   - Impact: Data corruption under load
   - Recommendation: Add mutex lock

### Warnings
1. [ARCH] Layer violation detected
   - Controllers importing from repositories
   - Recommendation: Add service layer

### Suggestions
1. [TEST] Missing edge case coverage
   - UserService.delete() lacks null check test
  • imbue: Provides review scaffolding
  • sanctum: Provides workspace context
  • archetypes: Paradigm context for architecture review

phantom

Computer use toolkit for desktop automation via Claude’s vision and action API.

Overview

Phantom enables Claude to interact with desktop environments through screenshot capture, mouse/keyboard control, and an autonomous agent loop. It wraps Claude’s Computer Use API for sandboxed GUI automation workflows.

Security Precautions

Computer use grants Claude direct control over mouse, keyboard, and screen reading. Follow these precautions:

  • Run in a sandboxed environment (VM, container, or dedicated machine). Never run on a machine with access to production systems or sensitive credentials.
  • Review tasks before execution. The /control-desktop command displays the planned actions. Confirm before allowing execution.
  • Limit network access. Restrict outbound connections from the sandbox to prevent data exfiltration if the agent navigates to an unintended URL.
  • Do not store credentials in the sandbox environment. If a workflow requires login, use temporary tokens with narrow scope.
  • Monitor active sessions. The desktop-pilot agent runs autonomously. Watch for unexpected navigation or input actions and terminate if behavior deviates.

Installation

/plugin install phantom@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| computer-control | Desktop automation via screenshot capture, mouse/keyboard control | Automating GUI tasks in sandboxed environments |

Commands

| Command | Description |
|---|---|
| /control-desktop | Run a computer use task on the desktop |

Agents

| Agent | Description |
|---|---|
| desktop-pilot | Autonomous desktop control with multi-step GUI workflows |

Usage Examples

Control a Desktop

# Run a GUI automation task
/control-desktop "Open the browser and navigate to example.com"

# Use the agent for multi-step workflows
Agent(phantom:desktop-pilot)

parseltongue

Modern Python development suite for testing, performance, async patterns, and packaging.

Overview

Parseltongue brings Python 3.12+ best practices to your workflow. It covers the full development lifecycle: testing with pytest, performance optimization, async patterns, and modern packaging with uv.

Installation

/plugin install parseltongue@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| python-testing | Pytest and TDD workflows | Writing and running tests |
| python-performance | Profiling and optimization | Debugging slow code |
| python-async | Async programming patterns | Implementing asyncio |
| python-packaging | Modern packaging with uv | Managing pyproject.toml |

Commands

| Command | Description |
|---|---|
| /analyze-tests | Report on test suite health |
| /run-profiler | Profile code execution |
| /check-async | Validate async patterns |

Agents

| Agent | Description |
|---|---|
| python-pro | Master Python 3.12+ with modern features |
| python-tester | Expert testing for pytest, TDD, mocking |
| python-optimizer | Expert performance optimization |

Usage Examples

Test Analysis

/analyze-tests

# Reports:
# - Coverage metrics
# - Test distribution
# - Slow tests
# - Missing coverage areas
# - Anti-patterns detected

Profiling

/run-profiler src/heavy_function.py

# Outputs:
# - CPU time breakdown
# - Memory usage
# - Hot paths
# - Optimization suggestions

Async Validation

/check-async src/async_module.py

# Checks:
# - Proper await usage
# - Event loop handling
# - Async context managers
# - Concurrency patterns

Skill Invocation

Skill(parseltongue:python-testing)

# Provides:
# - Pytest configuration patterns
# - TDD workflow guidance
# - Mocking strategies
# - Fixture patterns

Python 3.12+ Features

Parseltongue emphasizes modern Python:

Type Hints

# Modern syntax (3.10+)
def process(data: list[str] | None) -> dict[str, int]:
    ...

Pattern Matching

# Structural pattern matching (3.10+)
match response:
    case {"status": "ok", "data": data}:
        return data
    case {"status": "error", "message": msg}:
        raise ValueError(msg)

Exception Groups

# Exception groups (3.11+)
try:
    async with asyncio.TaskGroup() as tg:
        tg.create_task(task1())
        tg.create_task(task2())
except* ValueError as eg:
    for exc in eg.exceptions:
        handle(exc)

Testing Patterns

TDD Workflow

Skill(parseltongue:python-testing)

# RED-GREEN-REFACTOR:
# 1. Write failing test
# 2. Implement minimal code
# 3. Refactor with tests green

Fixture Patterns

# Recommended patterns
@pytest.fixture
def db_session(tmp_path):
    """Session-scoped database fixture."""
    db = Database(tmp_path / "test.db")
    yield db
    db.close()

@pytest.fixture
def user(db_session):
    """User fixture depending on db."""
    return db_session.create_user("test")

Mocking Strategies

# Strategic mocking
def test_api_call(mocker):
    mock_response = mocker.patch("requests.get")
    mock_response.return_value.json.return_value = {"status": "ok"}

    result = fetch_data()

    assert result["status"] == "ok"

Performance Optimization

Profiling Tools

# cProfile integration
python -m cProfile -s cumtime script.py

# Memory profiling
from memory_profiler import profile

@profile
def memory_heavy():
    ...

Optimization Patterns

  • Generators over lists: Save memory
  • Local variables: Faster lookup
  • Built-in functions: C-optimized
  • Lazy evaluation: Defer computation
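The first pattern is easy to observe directly: a generator keeps only iteration state, while a list comprehension materializes every element up front.

```python
import sys

# A list holds all 100,000 results; the generator holds only its state.
squares_list = [n * n for n in range(100_000)]
squares_gen = (n * n for n in range(100_000))

# The list's size grows with element count; the generator's does not.
print(sys.getsizeof(squares_list), sys.getsizeof(squares_gen))

# Both yield the same values when consumed lazily.
assert sum(n * n for n in range(100)) == 328350
```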

Async Patterns

import asyncio

import aiohttp

async def fetch(session, url):
    async with session.get(url) as resp:
        return await resp.text()

async def main(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url) for url in urls]
        return await asyncio.gather(*tasks)

if __name__ == "__main__":
    asyncio.run(main(["https://example.com"]))

Anti-Patterns to Avoid

  • Blocking calls in async functions
  • Creating event loops inside coroutines
  • Ignoring exceptions in fire-and-forget tasks

Packaging with uv

pyproject.toml

[project]
name = "my-package"
version = "1.0.0"
dependencies = ["requests>=2.28"]

[project.optional-dependencies]
dev = ["pytest", "ruff", "mypy"]

[tool.uv]
index-url = "https://pypi.org/simple"

Commands

# Install with uv
uv pip install -e ".[dev]"

# Lock dependencies
uv pip compile pyproject.toml -o requirements.lock

# Sync environment
uv pip sync requirements.lock

Superpowers Integration

| Skill | Enhancement |
|---|---|
| python-testing | Uses test-driven-development for TDD cycles |
| python-testing | Uses testing-anti-patterns for detection |
  • leyline: Provides pytest-config patterns
  • sanctum: Test updates integrate with test-updates skill

memory-palace

Knowledge organization using spatial memory techniques.

Overview

Memory Palace applies the ancient method of loci to digital knowledge management. It helps you build “palaces” - structured knowledge repositories that use spatial metaphors for organization and retrieval.

Installation

/plugin install memory-palace@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| memory-palace-architect | Building virtual palaces | Organizing complex concepts |
| knowledge-locator | Spatial search | Finding stored information |
| knowledge-intake | Intake and curation | Processing new information |
| digital-garden-cultivator | Digital garden maintenance | Long-term knowledge base care |
| session-palace-builder | Session-specific palaces | Temporary working knowledge |

Commands

| Command | Description |
|---|---|
| /palace | Manage memory palaces |
| /garden | Manage digital gardens |
| /navigate | Search and traverse palaces |

Agents

| Agent | Description |
|---|---|
| palace-architect | Designs memory palace architectures |
| knowledge-navigator | Searches and retrieves from palaces |
| knowledge-librarian | Evaluates and routes knowledge |
| garden-curator | Maintains digital gardens |

Hooks

| Hook | Type | Description |
|---|---|---|
| research_interceptor.py | PreToolUse | Checks local knowledge before web searches |
| url_detector.py | UserPromptSubmit | Detects URLs for intake |
| local_doc_processor.py | PostToolUse | Processes local docs after reads |
| web_research_handler.py | PostToolUse | Processes web content and prompts for knowledge storage |

Usage Examples

Create a Palace

/palace create "Python Async Patterns"

# Creates:
# - Palace structure
# - Entry rooms
# - Navigation paths

Add Knowledge

Skill(memory-palace:knowledge-intake)

# Processes:
# - New information
# - Categorization
# - Spatial placement
# - Cross-references

Navigate a Palace

/navigate "async context managers"

# Returns:
# - Matching rooms
# - Related concepts
# - Cross-references
# - Source citations

Maintain Garden

/garden cultivate

# Performs:
# - Pruning outdated content
# - Strengthening connections
# - Identifying gaps
# - Suggesting additions

Cache Modes

The research interceptor supports four modes:

| Mode | Behavior | Use Case |
|---|---|---|
| cache_only | Deny web when no cache match | Offline work, audits |
| cache_first | Check cache, fall back to web | Default research |
| augment | Blend cache with live results | When freshness matters |
| web_only | Bypass Memory Palace | Incident response |

Set mode in hooks/memory-palace-config.yaml:

research_mode: cache_first
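The four modes reduce to a small dispatch decision. This is an illustrative sketch of the routing each mode implies; the function name and return values are hypothetical, not the interceptor's real API:

```python
# Hypothetical routing for the research interceptor's four modes.
def route_query(mode: str, cache_hit: bool) -> str:
    if mode == "web_only":
        return "web"                      # bypass the palace entirely
    if mode == "cache_only":
        return "cache" if cache_hit else "deny"   # never touch the web
    if mode == "cache_first":
        return "cache" if cache_hit else "web"    # fall back to web
    if mode == "augment":
        return "cache+web" if cache_hit else "web"  # blend when possible
    raise ValueError(f"unknown research_mode: {mode}")
```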

Palace Architecture

Palaces use spatial metaphors:

Palace: "Python Async"
├── Entry Hall
│   └── Overview concepts
├── Library Wing
│   ├── asyncio basics
│   ├── coroutines
│   └── event loops
├── Practice Room
│   ├── code examples
│   └── exercises
└── Reference Archive
    ├── official docs
    └── external sources

Knowledge Intake Flow

New Information
      |
      v
[Semantic Dedup] --> Near-duplicate? --> Increment counter, skip
      |
      No
      v
[Domain Alignment] --> Matches interests? --> Flag for intake
      |
      Yes
      v
[Palace Placement] --> Store in appropriate room
      |
      v
[Cross-Reference] --> Link to related concepts

The SemanticDeduplicator uses FAISS cosine similarity (threshold: 0.8) to detect near-duplicate content before storage. When FAISS is unavailable, it falls back to Jaccard word-set similarity. Suppressed duplicates increment a counter rather than being stored, keeping the corpus dense.

Semantic Deduplication

FAISS-based duplicate detection is included as a mandatory dependency. The SemanticDeduplicator.should_store() API uses cosine similarity on L2-normalized vectors to detect near-duplicates before storage.
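The fallback path described above can be sketched with word-set Jaccard similarity and the documented 0.8 threshold. The real SemanticDeduplicator.should_store() uses FAISS cosine similarity on L2-normalized vectors when available; this mirrors only the Jaccard fallback, and the function signatures are illustrative:

```python
# Jaccard word-set similarity: |A ∩ B| / |A ∪ B| over lowercase tokens.
def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

# Store only if no existing document clears the near-duplicate threshold.
def should_store(candidate: str, corpus: list[str], threshold: float = 0.8) -> bool:
    return all(jaccard(candidate, doc) < threshold for doc in corpus)
```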

Embedding Support

Optional semantic search via embeddings:

# Build embeddings
cd plugins/memory-palace
uv run python scripts/build_embeddings.py --provider local

# Toggle at runtime
export MEMORY_PALACE_EMBEDDINGS_PROVIDER=local

Telemetry

Track research decisions:

# data/telemetry/memory-palace.csv
timestamp,query,decision,novelty_score,domains,duplicates
2025-01-15,async patterns,cache_hit,0.2,python,entry-123

Curation Workflow

Regular maintenance keeps palaces useful:

  1. Review intake queue: data/intake_queue.jsonl
  2. Approve/reject items: Based on value and fit
  3. Update vitality scores: Mark evergreen vs. probationary
  4. Prune stale content: Archive outdated information
  5. Document in curation log: docs/curation-log.md

Digital Gardens

Unlike palaces (structured), gardens are organic:

/garden status

# Shows:
# - Growth rate
# - Connection density
# - Orphan nodes
# - Suggested links

Knowledge Promotion to Discussions

Evergreen corpus entries can be promoted to a GitHub Discussion in the “Knowledge” (Q&A) category. The discussion-promotion module in knowledge-intake checks entry maturity — only entries at the evergreen lifecycle stage are eligible. Promotion creates a structured Discussion with title, summary, key findings, and source references. Entries that already have a discussion_url field are updated rather than duplicated.

  • conserve: Memory Palace helps reduce redundant web fetches
  • imbue: Evidence logging integrates with knowledge intake

spec-kit

Specification-Driven Development (SDD) toolkit for structured feature development.

Overview

Spec-Kit enforces “define before implement” - you write specifications first, generate plans, create tasks, then execute. This reduces wasted effort and validates features match requirements.

Installation

/plugin install spec-kit@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| spec-writing | Specification authoring | Writing requirements from ideas |
| task-planning | Task generation | Breaking specs into tasks |
| speckit-orchestrator | Workflow coordination | Managing spec-to-code lifecycle |

Commands

| Command | Description |
|---|---|
| /speckit-specify | Create a new specification |
| /speckit-plan | Generate implementation plan |
| /speckit-tasks | Generate ordered tasks |
| /speckit-implement | Execute tasks |
| /speckit-analyze | Check artifact consistency |
| /speckit-checklist | Generate custom checklist |
| /speckit-clarify | Ask clarifying questions |
| /speckit-constitution | Create project constitution |
| /speckit-startup | Bootstrap workflow at session start |

Agents

| Agent | Description |
|---|---|
| spec-analyzer | Validates artifact consistency |
| task-generator | Creates implementation tasks |
| implementation-executor | Executes tasks and writes code |

Usage Examples

Full SDD Workflow

# 1. Create specification
/speckit-specify Add user authentication with OAuth2

# 2. Clarify requirements
/speckit-clarify

# 3. Generate plan
/speckit-plan

# 4. Create tasks
/speckit-tasks

# 5. Execute implementation
/speckit-implement

# 6. Verify consistency
/speckit-analyze

Quick Specification

/speckit-specify Add dark mode toggle

# Claude will:
# 1. Ask clarifying questions
# 2. Generate spec.md
# 3. Identify dependencies
# 4. Suggest next steps

Session Startup

/speckit-startup

# Loads:
# - Existing spec.md
# - Current plan.md
# - Outstanding tasks
# - Progress status
# - Constitution (principles/constraints)

Artifact Structure

Spec-Kit creates three main artifacts:

spec.md

# Feature: User Authentication

## Overview
OAuth2-based authentication for web application.

## Requirements
- [ ] Google OAuth integration
- [ ] Session management
- [ ] Token refresh

## Acceptance Criteria
1. Users can sign in with Google
2. Sessions persist for 7 days
3. Tokens refresh automatically

## Non-Functional Requirements
- Login latency < 2s
- 99.9% availability

plan.md

# Implementation Plan

## Phase 1: OAuth Setup
- Configure Google OAuth credentials
- Implement OAuth callback handler

## Phase 2: Session Management
- Design session schema
- Implement token storage

## Phase 3: Integration
- Connect to frontend
- Add logout functionality

tasks.md

# Tasks

## Phase 1 Tasks
- [ ] Create OAuth config module
- [ ] Implement /auth/login endpoint
- [ ] Implement /auth/callback endpoint

## Phase 2 Tasks
- [ ] Design session table schema
- [ ] Create session service
- [ ] Implement token refresh logic

Constitution

Project constitution defines principles:

/speckit-constitution

# Creates:
# - Coding standards
# - Architecture principles
# - Testing requirements
# - Documentation standards

Consistency Analysis

/speckit-analyze

# Checks:
# - spec.md requirements map to plan.md
# - plan.md phases map to tasks.md
# - No orphan tasks
# - No missing implementations
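A simplified sketch of one such check: finding task phases with no matching plan phase. Heading formats follow the artifact examples above; the parsing details are assumptions, not the spec-analyzer's actual implementation:

```python
import re

# Phase headings in plan.md look like "## Phase 1: OAuth Setup".
def phases(plan_md: str) -> set[str]:
    return set(re.findall(r"^## (Phase \d+)", plan_md, re.M))

# Task sections in tasks.md look like "## Phase 1 Tasks".
def task_phases(tasks_md: str) -> set[str]:
    return set(re.findall(r"^## (Phase \d+) Tasks", tasks_md, re.M))

# Task phases that never appear in the plan are orphans.
def orphan_phases(plan_md: str, tasks_md: str) -> set[str]:
    return task_phases(tasks_md) - phases(plan_md)
```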

Checklist Generation

/speckit-checklist

# Generates custom checklist:
# - [ ] All acceptance criteria met
# - [ ] Tests written
# - [ ] Documentation updated
# - [ ] Security reviewed

Dependencies

Spec-Kit uses imbue for analysis:

spec-kit
    |
    v
imbue (diff-analysis, proof-of-work)

Superpowers Integration

| Command | Enhancement |
|---|---|
| /speckit-clarify | Uses brainstorming for questions |
| /speckit-plan | Uses writing-plans for structure |
| /speckit-tasks | Uses executing-plans, systematic-debugging |
| /speckit-implement | Uses executing-plans, systematic-debugging |
| /speckit-analyze | Uses systematic-debugging, verification-before-completion |
| /speckit-checklist | Uses verification-before-completion |

Best Practices

  1. Specify First: Never skip the specification phase
  2. Clarify Ambiguity: Use /speckit-clarify liberally
  3. Small Tasks: Break into 1-2 hour chunks
  4. Verify Often: Run /speckit-analyze after changes
  5. Update Artifacts: Keep spec/plan/tasks in sync

Workflow Tips

Starting New Feature

/speckit-specify [feature description]
/speckit-clarify
/speckit-plan

Resuming Work

/speckit-startup
# Review current state
/speckit-implement

Before PR

/speckit-analyze
/speckit-checklist
  • imbue: Provides analysis patterns
  • sanctum: Integrates for PR preparation after implementation

minister

GitHub initiative tracking and release management.

Overview

Minister helps you track project initiatives, monitor release readiness, and generate stakeholder reports. It bridges the gap between development work and project management.

Installation

/plugin install minister@claude-night-market

Skills

| Skill | Description | When to Use |
|---|---|---|
| github-initiative-pulse | Initiative progress tracking | Weekly status reports |
| release-health-gates | Release readiness checks | Before releasing |

Scripts

| Script | Description |
|---|---|
| tracker.py | CLI for initiative database and reporting |

Usage Examples

Initiative Tracking

Skill(minister:github-initiative-pulse)

# Generates:
# - Issue completion rates
# - Milestone progress
# - Velocity trends
# - Risk flags

Release Readiness

Skill(minister:release-health-gates)

# Checks:
# - CI status
# - Documentation completeness
# - Breaking change inventory
# - Risk assessment

CLI Usage

# List initiatives
python tracker.py list

# Show initiative details
python tracker.py show auth-v2

# Generate weekly report
python tracker.py report --week

# Update status
python tracker.py update auth-v2 --status in-progress

Initiative Structure

Initiatives track work across issues and PRs:

initiative:
  id: auth-v2
  title: "Authentication v2"
  status: in-progress
  milestones:
    - name: "OAuth Setup"
      due: 2025-01-30
      issues: ["#42", "#43", "#44"]
    - name: "Session Management"
      due: 2025-02-15
      issues: ["#45", "#46"]
  metrics:
    velocity: 3.5 issues/week
    completion: 65%
    risk: low

Health Gates

Release health gates verify readiness:

| Gate | Checks |
|---|---|
| CI | All checks passing, no flaky tests |
| Docs | README updated, CHANGELOG complete |
| Breaking | Breaking changes documented |
| Security | No critical vulnerabilities |
| Coverage | Test coverage above threshold |
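One way gate results could roll up into a recommendation, matching the CONDITIONAL RELEASE outcome shown below. The gate names follow the table, but the pass/hold logic is an illustrative assumption, not minister's actual implementation:

```python
# Hypothetical roll-up of gate results to a release recommendation.
def evaluate_gates(results: dict[str, bool]) -> str:
    required = ["ci", "docs", "breaking", "security", "coverage"]
    failures = [gate for gate in required if not results.get(gate, False)]
    if not failures:
        return "RELEASE"
    if failures == ["breaking"]:
        # Breaking-change docs alone can be fixed before tagging.
        return "CONDITIONAL RELEASE"
    return "HOLD"
```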

Gate Output

## Release Health: v2.0.0

### CI Status: PASS
- All 156 tests passing
- Build time: 3m 42s
- No flaky tests detected

### Documentation: PASS
- README updated
- CHANGELOG has v2.0.0 section
- API docs generated

### Breaking Changes: WARN
- 2 breaking changes identified
- Migration guide needed for UserService API

### Security: PASS
- No critical/high vulnerabilities
- Dependencies up to date

### Coverage: PASS
- 87% coverage (threshold: 80%)

## Recommendation: CONDITIONAL RELEASE
Address breaking change documentation before release.

Reporting

Weekly Report

python tracker.py report --week

# Outputs:
# - Initiatives summary
# - This week's completions
# - Next week's focus
# - Blockers and risks

Stakeholder Summary

python tracker.py report --stakeholder

# Generates executive summary:
# - High-level progress
# - Key achievements
# - Timeline updates
# - Resource needs

Integration with GitHub

Minister reads from GitHub:

# Sync initiative from GitHub milestone
python tracker.py sync --milestone "v2.0"

# Pull issue status
python tracker.py refresh auth-v2

Superpowers Integration

| Skill | Enhancement |
|---|---|
| issue-management | Uses systematic-debugging for investigation |

Configuration

tracker.yaml

github:
  repo: athola/my-project
  token_env: GITHUB_TOKEN

initiatives_dir: .minister/initiatives
reports_dir: .minister/reports

health_gates:
  coverage_threshold: 80
  max_critical_vulns: 0
  require_changelog: true

Workflow Examples

Sprint Planning

# Check initiative status
python tracker.py list

# Update priorities
python tracker.py update auth-v2 --priority high

# Generate planning report
python tracker.py report --planning

Release Preparation

# Run health gates
Skill(minister:release-health-gates)

# Address any failures
# Then re-run until all pass

# Tag release
git tag v2.0.0

Weekly Standup

# Generate pulse report
Skill(minister:github-initiative-pulse)

# Share with team
# Update tracker based on discussion

Playbooks

Minister includes operational playbooks in docs/playbooks/:

| Playbook | Purpose |
|---|---|
| github-program-rituals.md | Weekly cadences: Risk Radar, Velocity Digest, Executive Packet |
| release-train-health.md | Release gate checklists for CI, docs, and support signals |

These playbooks use GitHub Discussions via GraphQL mutations (not the non-existent gh discussion CLI subcommand). Discussion creation and commenting follow the templates in leyline:git-platform’s command-mapping module.
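A hedged sketch of issuing such a mutation through `gh api graphql`. The repository and category node IDs must be fetched separately (for example via a repository query); the helper function and its parameters are illustrative:

```python
import subprocess

# GitHub's GraphQL createDiscussion mutation, invoked via the gh CLI
# because gh has no native `discussion` subcommand.
MUTATION = """
mutation($repo: ID!, $cat: ID!, $title: String!, $body: String!) {
  createDiscussion(input: {repositoryId: $repo, categoryId: $cat,
                           title: $title, body: $body}) {
    discussion { url }
  }
}
"""

def create_discussion(repo_id: str, category_id: str, title: str, body: str) -> str:
    # Requires an authenticated gh session; returns the raw JSON response.
    return subprocess.run(
        ["gh", "api", "graphql",
         "-f", f"query={MUTATION}",
         "-f", f"repo={repo_id}", "-f", f"cat={category_id}",
         "-f", f"title={title}", "-f", f"body={body}"],
        capture_output=True, text=True, check=True,
    ).stdout
```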

  • sanctum: PR preparation integrates with release workflow
  • imbue: Feature review complements initiative tracking

Attune

Full-cycle project development from ideation to implementation.

Overview

Attune integrates the brainstorm-plan-execute workflow from superpowers with spec-driven development from spec-kit to provide a complete project lifecycle.

Workflow

graph LR
    A[Brainstorm] --> B[War Room]
    B --> C[Specify]
    C --> D[Plan]
    D --> E[Initialize]
    E --> F[Execute]

    style A fill:#e1f5fe
    style B fill:#fff9c4
    style C fill:#f3e5f5
    style D fill:#fff3e0
    style E fill:#e8f5e8
    style F fill:#fce4ec

Commands

| Command | Phase | Description |
|---|---|---|
| /attune:brainstorm | 1. Ideation | Socratic questioning to explore problem space |
| /attune:war-room | 2. Deliberation | Multi-LLM expert deliberation with reversibility-based routing |
| /attune:specify | 3. Specification | Create detailed specs from war-room decision |
| /attune:blueprint | 4. Planning | Design architecture and break down tasks |
| /attune:init | 5. Initialization | Generate or update project structure with tooling |
| /attune:execute | 6. Implementation | Execute tasks with TDD discipline |
| /attune:upgrade-project | Maintenance | Add configs to existing projects |
| /attune:mission | Full Cycle | Run entire lifecycle as a single mission with state detection |
| /attune:validate | Quality | Validate project structure |

Supported Languages

  • Python: uv, pytest, ruff, mypy, pre-commit
  • Rust: cargo, clippy, rustfmt, CI workflows
  • TypeScript/React: npm/pnpm/yarn, vite, jest, eslint, prettier

What Gets Configured

  • ✅ Git initialization with detailed .gitignore
  • ✅ GitHub Actions workflows (test, lint, typecheck, publish)
  • ✅ Pre-commit hooks (formatting, linting, security)
  • ✅ Makefile with standard development targets
  • ✅ Dependency management (uv/cargo/package managers)
  • ✅ Project structure (src/, tests/, README.md)

Quick Start

New Python Project

# Interactive mode
/attune:init

# Non-interactive
/attune:init --lang python --name my-project --author "Your Name"

Full Cycle Workflow

# 1. Brainstorm the idea
/attune:brainstorm

# 2. War room deliberation (auto-routes by complexity)
/attune:war-room --from-brainstorm

# 3. Create specification
/attune:specify

# 4. Plan architecture
/attune:blueprint

# 5. Initialize project
/attune:init

# 6. Execute implementation
/attune:execute

Skills

| Skill | Purpose |
|---|---|
| project-brainstorming | Socratic ideation workflow |
| war-room | Multi-LLM expert council with Type 1/2 decision routing |
| war-room-checkpoint | Inline RS assessment for embedded escalation during workflow |
| project-specification | Spec creation from war-room decision |
| project-planning | Architecture and task breakdown |
| project-init | Interactive project initialization |
| project-execution | Systematic implementation |
| makefile-generation | Generate language-specific Makefiles |
| mission-orchestrator | Unified brainstorm-specify-plan-execute lifecycle orchestrator |
| workflow-setup | Configure CI/CD pipelines |
| precommit-setup | Set up code quality hooks |

Agents

| Agent | Role |
|---|---|
| project-architect | Guides full-cycle workflow (brainstorm → plan) |
| project-implementer | Executes implementation with TDD |

Integration

Attune combines capabilities from:

  • superpowers: Brainstorming, planning, execution workflows
  • spec-kit: Specification-driven development
  • abstract: Plugin and skill authoring for plugin projects

War Room Integration

The war room is a mandatory phase after brainstorming. It automatically routes to the appropriate deliberation intensity based on Reversibility Score (RS):

Mode | RS Range | Duration | Description
Express | ≤ 0.40 | < 2 min | Quick decision by Chief Strategist
Lightweight | 0.41-0.60 | 5-10 min | 3-expert panel
Full Council | 0.61-0.80 | 15-30 min | 7-expert deliberation
Delphi | > 0.80 | 30-60 min | Iterative consensus for critical decisions
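The RS thresholds can be sketched as a simple selection function. This is an illustration of the routing behavior only; the function name and signature are assumptions, not the war-room skill's real API.

```python
def route_deliberation(rs: float) -> str:
    """Map a Reversibility Score (RS) to a war-room deliberation mode.

    Thresholds mirror the routing table above; the actual routing
    logic lives inside the war-room skill.
    """
    if rs <= 0.40:
        return "Express"
    if rs <= 0.60:
        return "Lightweight"
    if rs <= 0.80:
        return "Full Council"
    return "Delphi"
```

A highly reversible decision (say, RS 0.3) gets an Express ruling; an irreversible one (RS 0.9) escalates to Delphi consensus.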

The war-room-checkpoint skill can also trigger additional deliberation during planning or execution when high-stakes decisions arise.

Discussion Publishing

After the Supreme Commander synthesis (Phase 7), the war room offers to publish the decision to a GitHub Discussion in the “Decisions” category. This requires user approval and checks for prior decisions on the same topic to avoid duplicates. The published Discussion includes the full decision record with alternatives considered, scoring breakdown, and implementation guidance. Local strategeion files remain the primary record; the Discussion is an additional cross-session discovery channel.

Examples

Initialize Python CLI Project

/attune:init --lang python --type cli

Creates:

  • pyproject.toml with uv configuration
  • Makefile with test/lint/format targets
  • GitHub Actions workflows
  • Pre-commit hooks for ruff and mypy
  • Basic CLI structure

Upgrade Existing Project

# Add missing configs
/attune:upgrade-project

# Validate structure
/attune:validate

Configuration

Custom Templates

Place custom templates in:

  • ~/.claude/attune/templates/ (user-level)
  • .attune/templates/ (project-level)
  • $ATTUNE_TEMPLATES_PATH (environment variable)

Reference Projects

Templates sync from reference projects:

  • simple-resume (Python)
  • skrills (multi-language)
  • importobot (automation)

scribe

Documentation review, cleanup, and generation with AI slop detection.

Overview

Scribe helps maintain high-quality documentation by detecting AI-generated content patterns (“slop”), learning writing styles from exemplars, and generating or remediating documentation. It integrates with sanctum’s documentation workflows.

Installation

/plugin install scribe@claude-night-market

Skills

Skill | Description | When to Use
slop-detector | Detect AI-generated content markers | Scanning docs for AI tells
style-learner | Extract writing style from exemplar text | Creating style profiles
doc-generator | Generate/remediate documentation | Writing or fixing docs
doc-importer | Import external documents (PDF, DOCX, PPTX) to markdown | Converting non-markdown files for editing
tech-tutorial | Plan, draft, and refine technical tutorials | Writing step-by-step developer guides
session-to-post | Convert sessions into blog posts or case studies | Sharing session outcomes
session-replay | Convert session JSONL into GIF/MP4/WebM replays | Creating animated session recordings

Commands

Command | Description
/style-learn | Create style profile from examples
/doc-polish | Clean up AI-generated content
/doc-generate | Generate new documentation
/session-to-post | Convert current session into a blog post or case study
/session-replay | Generate GIF/MP4/WebM replay from session JSONL

Agents

Agent | Description
doc-editor | Interactive documentation editing
slop-hunter | Full-document slop detection
doc-verifier | QA validation using proof-of-work methodology

Usage Examples

Detect AI Slop

# Scan using the slop-detector skill
Skill(scribe:slop-detector)

# Or use the slop-hunter agent for thorough detection
Agent(scribe:slop-hunter)

Clean Up Content

# Interactive polish
/doc-polish docs/guide.md

# Polish all markdown files
/doc-polish **/*.md

Learn a Style

# Create style profile from examples
/style-learn good-examples/*.md --name house-style

# Generate with learned style
/doc-generate readme --style house-style

Replay a Session

# Generate a GIF replay from a Claude Code session
/session-replay ~/.claude/projects/myproject/sessions/abc123.jsonl

# Codex sessions are auto-detected
/session-replay codex-session.jsonl --format mp4

Verify Documentation

# Verify README claims and commands (now agent-only)
Agent(scribe:doc-verifier)

# For targeted verification, use the doc-generator skill
Skill(scribe:doc-generator)

AI Slop Detection

Scribe detects patterns that reveal AI-generated content:

Tier 1 Words (Highest Confidence)

Words that appear far more often in AI text than human text. See Skill(scribe:slop-detector) for the full word list and scoring weights.

Phrase Patterns

Formulaic constructions: vapid openers, empty emphasis, and attribution clichés. The detector scores these at 2-4 points each.

Structural Markers

Overuse of em dashes, excessive bullet points, uniform sentence length, perfect grammar without contractions.
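Combining these tiers, a detector can accumulate a score per passage. The sketch below is illustrative only: the word list, weights, and phrase patterns here are placeholders, and the real list and scoring weights live in Skill(scribe:slop-detector).

```python
import re

# Placeholder tier-1 words and weights for illustration; see the
# slop-detector skill for the real list.
TIER1_WEIGHTS = {"leverage": 3, "delve": 3, "robust": 2}
# Phrase patterns score 2-4 points each per the detector's rules;
# these two patterns are assumed examples.
PHRASE_PATTERNS = [r"it is important to note", r"in today's fast-paced world"]

def slop_score(text: str) -> int:
    """Accumulate points from vocabulary, phrases, and structure."""
    t = text.lower()
    score = 0
    for word, weight in TIER1_WEIGHTS.items():
        score += weight * len(re.findall(rf"\b{word}\b", t))
    for pattern in PHRASE_PATTERNS:
        score += 3 * len(re.findall(pattern, t))
    # Structural marker: penalize more than two em dashes in one passage.
    score += 2 * max(0, t.count("\u2014") - 2)
    return score
```

A sentence packed with tier-1 vocabulary scores high; plain factual prose scores zero.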

Writing Principles

Scribe enforces these principles:

  1. Ground every claim: Use specifics, not adjectives
  2. Trim crutches: No formulaic openers or closers
  3. Show perspective: Include reasoning and trade-offs
  4. Vary structure: Mix sentence lengths, balance bullets with prose
  5. Use active voice: Direct statements over passive constructions

Vocabulary Substitutions

Scribe suggests plain replacements for flagged words. See Skill(scribe:slop-detector) for the full substitution table with context-aware alternatives.

Examples

These examples show slop remediation in practice. Each pair includes a score reduction from the detector.

Example 1: Vocabulary Slop (8/10 to 1/10)

A sentence with five Tier 1 words was reduced to plain language. The fix replaced jargon verbs with “uses” and “check,” and removed unnecessary adjectives.

After:

“This solution uses modern tools to check documentation quality.”

Example 2: Structural Patterns (7/10 to 1/10)

Four em dashes in a single sentence were collapsed into one flowing statement using “and” and “to.”

After:

“The system processes requests and handles validation to ensure data integrity before returning results.”

Example 3: Phrase Patterns (9/10 to 1/10)

A vapid opener, a filler hedge, and an empty emphasis phrase were all removed. The rewrite states the tool’s purpose directly.

After:

“This tool improves documentation quality by detecting and flagging AI-generated patterns.”

Integration

Scribe integrates with sanctum documentation workflows:

Sanctum Command | Scribe Integration
/pr-review | Runs slop-detector on changed .md files
/update-docs | Runs slop-detector on edited docs
/update-docs --readme | Runs slop-detector on README
/prepare-pr | Verifies PR descriptions with slop-detector

Dependencies

Scribe uses skills from other plugins:

  • imbue:proof-of-work: Evidence-based verification (used by doc-verifier)
  • conserve:bloat-detector: Token optimization

scry

Media generation for terminal recordings, browser recordings, GIF processing, and media composition.

Overview

Scry creates documentation assets through terminal recordings (VHS), browser automation recordings (Playwright), GIF processing, and multi-source media composition. Use it to build tutorials, demos, and README assets.

Installation

/plugin install scry@claude-night-market

Skills

Skill | Description | When to Use
vhs-recording | Terminal recordings using VHS tape scripts | CLI demos, tool tutorials
browser-recording | Browser recordings using Playwright | Web UI walkthroughs
gif-generation | GIF processing and optimization | README assets, docs
media-composition | Combine multiple media sources | Full tutorials

Commands

Command | Description
/record-terminal | Create terminal recording with VHS
/record-browser | Record browser session with Playwright

Usage Examples

Terminal Recording

/record-terminal

# Or use the skill directly
Skill(scry:vhs-recording)

Creates a VHS tape script and records terminal output to GIF or video.

Browser Recording

/record-browser

# Or use the skill directly
Skill(scry:browser-recording)

Records browser sessions with Playwright for web UI documentation.

GIF Generation

Skill(scry:gif-generation)

# Optimizes recordings for documentation:
# - Resize for README display
# - Compress file size
# - Adjust frame rate

Media Composition

Skill(scry:media-composition)

# Combines assets:
# - Terminal + browser recordings
# - Multiple clips into tutorials
# - Add transitions and captions

VHS Tape Script Example

VHS uses tape scripts to define recordings:

# demo.tape
Output demo.gif

Set FontSize 16
Set Width 1200
Set Height 600

Type "echo 'Hello, World!'"
Sleep 500ms
Enter
Sleep 2s

Run with:

vhs demo.tape

Dependencies

VHS (Terminal Recording)

macOS:

brew install charmbracelet/tap/vhs
brew install ttyd ffmpeg

Linux (Debian/Ubuntu):

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg
echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list
sudo apt update && sudo apt install vhs
sudo apt install ffmpeg

Playwright (Browser Recording)

npm install -g playwright
npx playwright install

FFmpeg (Media Processing)

Required for GIF generation and media composition.

# macOS
brew install ffmpeg

# Linux
sudo apt install ffmpeg

Workflow Patterns

Tutorial Creation

  1. Record terminal demo with vhs-recording
  2. Record web UI walkthrough with browser-recording
  3. Combine with media-composition
  4. Optimize output with gif-generation

Quick Demo

/record-terminal
# Creates demo.gif ready for README

Documentation Assets

# Generate multiple GIFs for docs
Skill(scry:vhs-recording)
Skill(scry:gif-generation)
# Move outputs to docs/images/

Integration with sanctum

Scry integrates with sanctum for PR and documentation workflows:

# Generate demo for PR description
/record-terminal

# Include in PR body
/sanctum:pr

Related plugins:

  • sanctum: PR preparation uses scry for demo assets
  • memory-palace: Store and organize media assets

gauntlet

Codebase learning through knowledge extraction, challenges, and spaced repetition.

Overview

Gauntlet prevents knowledge atrophy for experienced developers and accelerates onboarding for new ones. It extracts knowledge from the codebase and tests understanding through adaptive challenges.

Installation

/plugin install gauntlet@claude-night-market

Skills

  • extract - Analyze codebase and build a knowledge base
  • challenge - Adaptive difficulty challenge session
  • onboard - Guided five-stage onboarding path
  • curate - Add or edit knowledge annotations

Commands

  • /gauntlet - Run an ad-hoc challenge session
  • /gauntlet-extract - Rebuild the knowledge base
  • /gauntlet-progress - Show accuracy stats and streak
  • /gauntlet-onboard - Start or resume onboarding
  • /gauntlet-curate - Add or edit a knowledge annotation

ML Scoring

Gauntlet uses a pluggable Scorer protocol to evaluate answers. Two implementations ship by default:

  • YamlScorer (default): heuristic scoring based on YAML rule files. Always available, no external dependencies.
  • OnnxSidecarScorer: upgrades scoring quality by calling the oracle sidecar daemon for ONNX model inference. Activates automatically when oracle is running.

The scorer selection is automatic. When oracle’s port file exists and the health check passes, gauntlet uses the sidecar scorer with configurable blend weights. When the sidecar is unavailable, it falls back to YamlScorer with no user intervention.

See oracle for daemon setup and ADR-0009 for the discovery pattern.
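The discovery-and-fallback behavior can be sketched as follows. This is a hypothetical illustration of the pattern, not gauntlet's actual internals; the function names, the port-file check, and the 0.7 default blend weight are all assumptions.

```python
from pathlib import Path

def select_scorer(port_file: Path, sidecar_healthy) -> str:
    """Return which scorer gauntlet would use.

    Uses the sidecar when oracle's port file exists and the health
    check passes; otherwise falls back to YAML heuristics.
    """
    if port_file.exists() and sidecar_healthy():
        return "OnnxSidecarScorer"
    return "YamlScorer"

def blend(yaml_score: float, onnx_score: float, weight: float = 0.7) -> float:
    """Combine heuristic and model scores. The source says blend
    weights are configurable; 0.7 here is an assumed default."""
    return weight * onnx_score + (1 - weight) * yaml_score
```

If the port file is missing or the health check fails, the fallback happens silently, matching the "no user intervention" behavior described above.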

Code Knowledge Graph

The graph module builds a SQLite-backed knowledge graph using Tree-sitter parsing. GraphStore supports context manager usage for safe resource cleanup. Community detection groups related nodes, and blast radius analysis scores the risk of code changes using security keywords from constants.py.
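The context-manager usage pattern looks roughly like this. A minimal sketch only: gauntlet's real GraphStore is Tree-sitter backed with a much richer API, and the method names below are assumptions.

```python
import sqlite3

class GraphStore:
    """Minimal sketch of a SQLite-backed graph store that cleans up
    its connection via the context-manager protocol."""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS nodes (id TEXT PRIMARY KEY, kind TEXT)"
        )

    def add_node(self, node_id: str, kind: str) -> None:
        self.conn.execute(
            "INSERT OR REPLACE INTO nodes VALUES (?, ?)", (node_id, kind)
        )

    def count_nodes(self) -> int:
        return self.conn.execute("SELECT COUNT(*) FROM nodes").fetchone()[0]

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Commit and close even if the with-block raised.
        self.conn.commit()
        self.conn.close()
        return False
```

Usage: `with GraphStore() as store: store.add_node("auth.login", "function")` guarantees the connection is closed when the block exits.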

Problem Bank

Curated algorithm problems in data/problems/*.yaml cover arrays, graphs, trees, dynamic programming, and 15 other categories. Each entry includes difficulty level and pattern metadata. The challenge engine draws from this bank for targeted practice sessions.

Agents

  • extractor - Autonomous knowledge extraction agent

tome

Multi-source research plugin for code archaeology, community discourse, academic literature, and TRIZ cross-domain analysis.

Overview

Tome orchestrates research across four channels: GitHub code search, community discourse (HN, Lobsters, Reddit), academic literature (arXiv, Semantic Scholar), and TRIZ analogical reasoning. It classifies domains and adapts search depth automatically.

Installation

/plugin install tome@claude-night-market

Commands

Command | Description
/tome:research | Run multi-source research session
/tome:dig | Refine results interactively
/tome:cite | Generate formatted bibliography
/tome:export | Export findings for knowledge-intake

Skills

  • research – orchestrate a full research session
  • code-search – search GitHub implementations
  • discourse – scan community discussions
  • papers – search academic literature
  • triz – cross-domain analogical reasoning
  • synthesize – merge and rank findings
  • dig – interactive refinement

Agents

  • code-searcher – GitHub code search
  • discourse-scanner – community discussion scanning
  • literature-reviewer – academic paper review
  • triz-analyst – cross-domain analysis

Tutorials

Workflow-driven tutorials for real developer scenarios. Each tutorial walks through an actual task using real commands.

Available Tutorials

Tutorial | Description | Level
Your First Session | Install, explore skills, run your first command | Beginner
Feature Development Lifecycle | Spec → implement → test → PR end-to-end | Intermediate
Code Review and PR Workflow | Review, commit, PR, and address feedback | Beginner
Debugging and Issue Resolution | Triage a GitHub issue, debug, fix, verify | Intermediate
Memory Palace: Knowledge Management | Build and maintain a persistent knowledge base | Intermediate

Suggested Path

New Users

  1. Your First Session - understand skills, commands, and plugins
  2. Code Review and PR Workflow - the most common daily workflow
  3. Feature Development Lifecycle - full feature development cycle

Experienced Users

  1. Debugging and Issue Resolution - issue triage and resolution
  2. Memory Palace: Knowledge Management - persistent knowledge base

Prerequisites

  • Claude Code installed
  • Night Market plugins installed (see Installation)
  • A git repository to work in

Your First Session

You’ve just installed Claude Night Market. This tutorial walks through your first real session: discovering what’s available, running your first skill, and seeing how plugins work together.


Scenario

You’ve followed the installation guide and have Night Market plugins installed. You open Claude Code in a project and want to explore what you can do.

Step 1: See What’s Available

Start by asking Claude Code what skills are available:

What skills do I have installed?

Claude reads the installed plugins and lists available skills. You’ll see entries like:

- sanctum:commit-msg - Draft a conventional commit message
- sanctum:prepare-pr - Complete PR preparation
- pensive:code-reviewer - Code review agent
- imbue:catchup - Quickly understand recent changes
- abstract:validate-plugin - Validate plugin structure

Each skill is identified by plugin:skill-name. The plugin tells you which domain it belongs to, and the skill name tells you what it does.

Step 2: Explore a Plugin

Pick a plugin to understand what it offers. For example, sanctum handles git workflows:

What commands does the sanctum plugin provide?

You’ll see commands like:

Command | What it does
/commit-msg | Generate a conventional commit message from staged changes
/prepare-pr | Run quality gates and prepare a PR description
/do-issue | Implement a GitHub issue end-to-end
/fix-pr | Address PR review feedback
/git-catchup | Catch up on repository changes

Commands (prefixed with /) are the main way you interact with skills. They’re shorthand: /commit-msg invokes the sanctum:commit-msg skill behind the scenes.

Step 3: Run Your First Skill

Let’s use /catchup to understand the current state of the repository:

/catchup

This invokes the imbue:catchup skill, which:

  1. Reads recent git history
  2. Analyzes what changed and why
  3. Summarizes the current state of the project

The output gives you a summary of recent commits, active branches, what areas of the code changed, and what work is in progress.

Step 4: Try a Review

If you have uncommitted changes or a branch with work on it, try a code review:

/code-review

This invokes the pensive plugin’s review system. It analyzes your changes and reports findings by category: bugs, style issues, architecture concerns, test coverage gaps.

For a more targeted review, you can use specific variants:

/bug-review          # Focus on potential bugs
/architecture-review # Focus on design patterns
/test-review         # Focus on test quality

Step 5: Understand How Skills Compose

Skills often work together. For example, preparing a PR typically involves:

  1. /commit-msg - generate a commit message for staged changes
  2. /prepare-pr - run quality gates and create the PR description

The PR preparation skill runs workspace analysis, checks for scope drift, and produces a PR description, all by composing underlying skills.

This composition happens on its own. You don’t need to orchestrate it. Just invoke the top-level command and the skill handles the rest.

What You’ve Learned

  • Skills are the building blocks. Each does one thing well.
  • Commands (/command) are the main interface for invoking skills.
  • Plugins group related skills by domain (git, review, analysis, etc.).
  • Composition lets skills chain together into workflows without manual orchestration.

Next Steps

Tutorial | When to read it
Feature Development Lifecycle | You want to build a feature from spec to PR
Code Review and PR Workflow | You’re ready to review code and submit PRs
Debugging and Issue Resolution | You need to triage and fix a bug
Memory Palace: Knowledge Management | You want to build a persistent knowledge base

Difficulty: Beginner
Prerequisites: Claude Code installed, Night Market plugins installed
Duration: 5 minutes

Feature Development Lifecycle

Walk through building a feature from specification to merged PR. This tutorial covers the full development cycle using real commands across multiple plugins.


Scenario

You’ve been asked to add a new capability to your project. You need to specify what you’re building, plan the implementation, write the code, and get it reviewed and merged.

Step 1: Start with a Specification

Don’t jump straight to code. Start by defining what you’re building:

/speckit-specify Add rate limiting to the API endpoints

This invokes the spec-kit plugin’s specification skill. It will:

  1. Ask clarifying questions about requirements (limits, scope, behavior)
  2. Create a spec.md with user stories, acceptance criteria, and constraints
  3. Identify edge cases you might not have considered

The spec becomes the source of truth for the feature.

Refine the Spec

If the spec needs clarification:

/speckit-clarify

This asks targeted questions to resolve ambiguities. “Should rate limits be per-user or per-IP?” “What HTTP status code for rate-limited requests?”

Step 2: Plan the Implementation

With a clear spec, generate an implementation plan:

/speckit-plan

This produces a phased plan showing:

  • Which files to create or modify
  • Dependencies between changes
  • Test strategy for each phase
  • Estimated scope per phase

Generate Tasks

Break the plan into ordered tasks:

/speckit-tasks

This creates a tasks.md with dependency-ordered implementation steps. Each task is specific enough to implement independently.
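"Dependency-ordered" means every prerequisite task appears before the tasks that need it. The sketch below illustrates that ordering property; it is not spec-kit's implementation, and the task names are invented.

```python
def order_tasks(deps):
    """Order tasks so every prerequisite precedes its dependents.

    deps maps each task to the list of tasks it depends on.
    Illustrative only; /speckit-tasks does this internally.
    """
    ordered = []
    done = set()
    pending = dict(deps)
    while pending:
        # Tasks whose prerequisites are all complete are ready to run.
        ready = sorted(t for t, pre in pending.items() if set(pre) <= done)
        if not ready:
            raise ValueError("dependency cycle in tasks")
        for task in ready:
            ordered.append(task)
            done.add(task)
            del pending[task]
    return ordered
```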

Step 3: Implement

Execute the tasks:

/speckit-implement

This processes tasks from tasks.md in dependency order. For each task, it:

  1. Reads the task requirements
  2. Writes a failing test (TDD approach)
  3. Implements the minimum code to pass
  4. Moves to the next task

You can also implement tasks selectively:

/speckit-implement --phase 1

Check Consistency

After implementing, verify the spec, plan, and code are aligned:

/speckit-analyze

This cross-checks all artifacts: spec requirements against tests, plan phases against implementation, task completion against acceptance criteria.

Step 4: Review Your Work

Before committing, review what you’ve built:

/code-review

This runs pensive’s review system against your changes. For a feature like this, you might also run:

/architecture-review

This checks whether your implementation fits the existing architecture. Are you adding rate limiting in the right layer? Does it follow existing patterns?

Step 5: Commit and Create a PR

Stage your changes and generate a commit message:

/commit-msg

This analyzes staged changes and drafts a conventional commit message. It classifies the change type (feat, fix, refactor) and summarizes the intent.

Then prepare the pull request:

/prepare-pr

This runs quality gates (tests, lint, scope check) and generates a PR description with:

  • Summary of changes
  • Test plan
  • Breaking changes (if any)

What You’ve Learned

  • spec-kit handles the specification → plan → tasks → implementation pipeline
  • pensive provides code review before you commit
  • sanctum handles git operations: commits, PRs, quality gates
  • Plugins collaborate through the workflow. You don’t orchestrate them manually.

Command Reference

Phase | Command | Plugin
Specify | /speckit-specify | spec-kit
Clarify | /speckit-clarify | spec-kit
Plan | /speckit-plan | spec-kit
Tasks | /speckit-tasks | spec-kit
Implement | /speckit-implement | spec-kit
Analyze | /speckit-analyze | spec-kit
Review | /code-review | pensive
Commit | /commit-msg | sanctum
PR | /prepare-pr | sanctum

Difficulty: Intermediate
Prerequisites: Your First Session
Duration: 15 minutes (following along with a real feature)

Code Review and PR Workflow

The most common daily workflow: review your changes, commit them cleanly, create a PR, and address reviewer feedback.


Scenario

You’ve finished working on a feature branch. You have uncommitted changes and need to get them reviewed, committed, and merged.

Step 1: Understand What Changed

Start by catching up on your own work:

/catchup

This summarizes recent changes: which files were modified, what the commit history looks like, and what’s currently unstaged. Useful even for your own branch, especially after stepping away.

Step 2: Self-Review Before Committing

Run a code review on your changes before anyone else sees them:

/code-review

This analyzes your uncommitted and staged changes. The review covers:

  • Bugs: Logic errors, off-by-one mistakes, null handling
  • Style: Naming, formatting, consistency with existing patterns
  • Architecture: Does the change fit the codebase design?
  • Tests: Are changes covered by tests?

Fix any issues found before proceeding.

Targeted Reviews

If your change is in a specific domain, use a focused review:

/bug-review           # Focus on defect detection
/test-review          # Evaluate test coverage and quality
/architecture-review  # Check design patterns and structure

Step 3: Commit with a Clean Message

Stage your changes and generate a commit message:

/commit-msg

This analyzes staged changes and produces a conventional commit message. It:

  1. Classifies the change type (feat, fix, refactor, docs, test)
  2. Identifies the appropriate scope
  3. Writes a concise description of the intent (why, not what)

Example output:

feat(api): add rate limiting to public endpoints

Implements per-user rate limiting with configurable thresholds.
Requests exceeding the limit receive 429 responses with retry-after headers.

You review the message and approve or edit it before the commit is created.

Step 4: Prepare the Pull Request

With your changes committed, prepare a PR:

/prepare-pr

This runs a multi-step workflow:

  1. Workspace analysis - reviews all commits on the branch
  2. Quality gates - runs tests and lint checks
  3. Scope check - flags if the branch has drifted beyond its original intent
  4. PR description - generates a description with summary, test plan, and checklist

The PR is created with a description that reviewers can actually use.

Step 5: Address Review Feedback

After reviewers comment on your PR, use:

/fix-pr

This reads the PR review comments and works through them:

  1. Fetches all unresolved review threads
  2. Groups feedback by type (required changes, suggestions, questions)
  3. Addresses each item: makes code changes, responds to questions
  4. Resolves threads as changes are made

Resolve Threads in Bulk

After addressing feedback, resolve all completed threads:

/resolve-threads

This batch-resolves review threads that have been addressed by code changes.

Step 6: Review a Teammate’s PR

You can also review PRs from others:

/pr-review 123

This reviews PR #123:

  1. Reads the PR description and all changed files
  2. Checks changes against the stated scope
  3. Identifies potential issues organized by severity
  4. Produces a review with specific feedback

What You’ve Learned

  • Self-review before committing catches issues early
  • Conventional commits via /commit-msg maintain a clean git history
  • PR preparation via /prepare-pr automates quality gates and descriptions
  • Feedback handling via /fix-pr works through review comments one by one
  • PR review via /pr-review gives you a thorough analysis of others’ work

Command Reference

Step | Command | Plugin
Catch up | /catchup | imbue
Self-review | /code-review | pensive
Commit | /commit-msg | sanctum
Create PR | /prepare-pr | sanctum
Fix feedback | /fix-pr | sanctum
Resolve threads | /resolve-threads | sanctum
Review others | /pr-review | sanctum

Difficulty: Beginner
Prerequisites: Your First Session
Duration: 10 minutes

Debugging and Issue Resolution

Walk through the process of triaging a GitHub issue, debugging the problem, implementing a fix, and verifying the solution.


Scenario

A user has filed GitHub issue #42: “API returns 500 when request body is empty.” You need to investigate, fix it, and close the issue.

Step 1: Understand the Context

Before diving in, catch up on recent changes that might be related:

/git-catchup

This shows recent commits, active branches, and areas of change. If someone recently modified the API layer, that context is immediately relevant.

Step 2: Implement the Issue End-to-End

For well-defined issues, use the issue resolution command:

/do-issue 42

This reads the GitHub issue and orchestrates the full fix:

  1. Reads the issue - title, description, labels, comments
  2. Plans the approach - identifies affected files and tests needed
  3. Creates a branch - based on the issue number
  4. Implements the fix - with tests written first (TDD approach)
  5. Prepares a PR - linking back to the issue

This is the fastest path from issue to PR. It handles the orchestration so you focus on reviewing the result.

Step 3: Manual Debugging (When Needed)

Sometimes issues need investigation before you can fix them. For complex bugs, work through the problem step by step.

Investigate the Problem

Start by reading the issue and understanding the reproduction steps. Then explore the relevant code:

Show me the API endpoint handlers that process request bodies

Claude will search the codebase, read the relevant files, and explain the code flow.

Find the Root Cause

Ask Claude to trace the execution path:

Trace what happens when an empty POST body hits the /api/data endpoint

Claude reads the handler code, middleware, and validation layers to identify where the 500 error originates.

Verify the Fix

After implementing a fix, verify it works:

Run the tests for the API endpoint module

Claude runs the relevant test suite and reports results. If tests fail, it analyzes the failure and suggests corrections.

Step 4: Create the Issue (When You Find Bugs)

If you discover a bug while working, create an issue to track it:

/create-issue

This creates a formatted GitHub issue with:

  • Clear title and description
  • Reproduction steps
  • Expected vs. actual behavior
  • Labels and assignees

Step 5: Close Resolved Issues

After your PR is merged, check if the issue can be closed:

/close-issue 42

This analyzes whether the issue’s requirements have been met by reviewing the linked PR and test evidence.

Debugging Tips

Use Catchup for Context

When you inherit a bug you didn’t create, /catchup gives you the recent history that led to the current state. This often reveals what change introduced the bug.

Use Targeted Reviews

If you suspect a specific type of issue:

/bug-review    # Systematic bug hunting in recent changes
/test-review   # Check if tests actually cover the bug scenario

Work Incrementally

For complex bugs:

  1. Reproduce the bug (confirm you can trigger it)
  2. Write a failing test that captures the bug
  3. Fix the code until the test passes
  4. Run the full test suite to check for regressions

What You’ve Learned

  • /do-issue handles the full lifecycle: read → plan → implement → PR
  • /create-issue formats new issues with proper structure
  • /close-issue verifies issues are resolved before closing
  • /git-catchup provides historical context for debugging
  • Targeted reviews (/bug-review, /test-review) focus analysis on specific concerns

Command Reference

Step | Command | Plugin
Context | /git-catchup | sanctum
Full fix | /do-issue 42 | sanctum
Create issue | /create-issue | minister
Close issue | /close-issue 42 | minister
Bug review | /bug-review | pensive
Test review | /test-review | pensive
Catchup | /catchup | imbue

Difficulty: Intermediate
Prerequisites: Your First Session
Duration: 10 minutes

Memory Palace: Knowledge Management

Build a persistent knowledge base that grows with your work. This tutorial covers the core Memory Palace workflows: capturing knowledge, organizing it in palaces, maintaining it over time, and finding what you need.


Scenario

You’re working on a project with technologies you’ll reference repeatedly: API patterns, architecture decisions, library quirks. Instead of re-researching every session, you want a knowledge base that remembers what you’ve learned.

Step 1: Create a Palace

A palace is a themed container for knowledge. Create one for your project’s domain:

/palace create "API Patterns" "rest-api" --metaphor library

This creates a palace named “API Patterns” in the rest-api domain using the library metaphor. Metaphors determine how knowledge is organized:

Metaphor | Best for
library | Research, documentation
workshop | Practical skills, tools
garden | Evolving knowledge
fortress | Security, production systems
building | General organization (default)

Check what you have:

/palace list

This shows all palaces with entry counts and last-modified dates.

Step 2: Capture Knowledge

Knowledge enters the palace through two paths.

Automatic Capture

When you research topics during a Claude Code session (web searches, reading docs, analyzing code), the Memory Palace hooks queue findings for later processing. This happens in the background. You don’t need to do anything special.

Check the queue:

/palace status

This shows total palaces, entry counts, and the intake queue size (how many items are waiting to be processed).

Manual Intake

To explicitly capture something you’ve learned:

/garden seed ~/my-garden.json "OAuth2 PKCE Flow" --section auth --links "Authentication,Security"

This adds a new entry with links to related concepts, which helps with navigation later.

Step 3: Process the Queue

Queued research needs to be synced into palaces. Preview first:

/palace sync --dry-run

This shows what would be processed: which items match existing palaces, which would create new entries, and which have no matching palace.

When it looks right:

/palace sync

Items are matched to palaces by domain and tags, then organized into districts within each palace.
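Matching by domain and tags can be sketched like this. The field names and the "domain first, then tag overlap" precedence are assumptions for illustration, not the plugin's actual data model.

```python
def match_palace(item, palaces):
    """Match a queued item to a palace: exact domain match wins,
    otherwise fall back to the palace with the most shared tags.
    Returns None when nothing matches."""
    for palace in palaces:
        if palace["domain"] == item.get("domain"):
            return palace["name"]
    best, best_overlap = None, 0
    for palace in palaces:
        overlap = len(set(palace.get("tags", [])) & set(item.get("tags", [])))
        if overlap > best_overlap:
            best, best_overlap = palace["name"], overlap
    return best
```

Items that match no palace stay in the queue, which is what the dry-run preview surfaces.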

Step 4: Find What You Know

Search across all your palaces:

/navigate search "rate limiting" --type semantic

This searches by meaning, not just keywords. It returns matches with:

  • Which palace and district contains the result
  • Relevance score
  • Related concepts nearby

For a specific concept:

/navigate locate "OAuth 2.0"

To explore connections between concepts:

/navigate path "OAuth" "JWT"

This shows the navigation path between two concepts, revealing how your knowledge connects.

Step 5: Maintain the Garden

Knowledge goes stale. Regularly check palace health:

/garden health ~/my-garden.json

This reports metrics like link density (are entries well-connected?) and freshness (when were entries last updated?).
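
The two metrics reduce to simple averages. A minimal sketch, assuming each entry records its outgoing links and days since last update (the plugin's real data model may differ):

```python
# Compute the two health metrics described above from a list of entries.
def link_density(entries):
    """Average outgoing links per entry (connectedness)."""
    return sum(len(e["links"]) for e in entries) / len(entries) if entries else 0.0

def avg_staleness(entries):
    """Average days since entries were last tended (freshness)."""
    return sum(e["age_days"] for e in entries) / len(entries) if entries else 0.0

entries = [
    {"links": ["OAuth", "JWT"], "age_days": 3},
    {"links": ["JWT"], "age_days": 5},
]
```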

Prune stale entries:

/palace prune --stale-days 90

This identifies entries older than 90 days, low-quality entries, and duplicates. It shows recommendations and asks for your approval before making any changes.

After reviewing:

/palace prune --apply

Garden Metrics

Track the health of your knowledge base over time:

/garden metrics ~/my-garden.json --format brief

Output: plots=42 link_density=3.2 avg_days_since_tend=4.5

Healthy gardens have link density above 2.0 and average staleness under 7 days.
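
The brief format is easy to check in scripts. A minimal parser, assuming the `key=value` layout shown above and applying the stated thresholds:

```python
# Parse the brief metrics line and apply the documented health
# thresholds: link density above 2.0, average staleness under 7 days.
def parse_metrics(line):
    """Split 'key=value' pairs into a dict of floats."""
    return {k: float(v) for k, v in (pair.split("=") for pair in line.split())}

def is_healthy(metrics):
    return metrics["link_density"] > 2.0 and metrics["avg_days_since_tend"] < 7.0

m = parse_metrics("plots=42 link_density=3.2 avg_days_since_tend=4.5")
```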

Step 6: Use Knowledge in Reviews

The Memory Palace integrates with PR reviews through the review chamber:

/review-room

This captures review patterns and decisions, building a knowledge base of your team’s code review preferences over time.

What You’ve Learned

  • Palaces organize knowledge by domain with architectural metaphors
  • Automatic capture queues findings from research sessions
  • Sync processes the queue into organized palace entries
  • Navigation finds knowledge using semantic, exact, or fuzzy search
  • Maintenance keeps the knowledge base healthy through pruning and metrics

Command Reference

| Task | Command | Description |
|------|---------|-------------|
| Create | `/palace create <name> <domain>` | Create a new palace |
| List | `/palace list` | See all palaces |
| Status | `/palace status` | Queue size and health |
| Sync | `/palace sync` | Process intake queue |
| Search | `/navigate search "<query>"` | Find across palaces |
| Locate | `/navigate locate "<concept>"` | Find specific concept |
| Path | `/navigate path "<from>" "<to>"` | Show concept connections |
| Health | `/garden health <path>` | Assess garden health |
| Prune | `/palace prune` | Clean stale entries |
| Metrics | `/garden metrics <path>` | Track garden health |

**Difficulty:** Intermediate
**Prerequisites:** Your First Session, Memory Palace plugin installed
**Duration:** 15 minutes

Capabilities Reference

Quick lookup table of all skills, commands, agents, and hooks in the Claude Night Market.

For full flag documentation and workflow examples: See Capabilities Reference Details.

Quick Reference Index

All Skills (Alphabetical)

| Skill | Plugin | Description |
|-------|--------|-------------|
| agent-expenditure | conserve | Per-agent token usage tracking |
| agent-teams | conjure | Coordinate Claude Code Agent Teams through filesystem-based protocol |
| api-review | pensive | API surface evaluation |
| architecture-aware-init | attune | Architecture-aware project initialization with research |
| architecture-diagram | cartograph | Component relationship diagrams |
| architecture-paradigm-client-server | archetypes | Client-server communication |
| architecture-paradigm-cqrs-es | archetypes | CQRS and Event Sourcing |
| architecture-paradigm-event-driven | archetypes | Asynchronous communication |
| architecture-paradigm-functional-core | archetypes | Functional Core, Imperative Shell |
| architecture-paradigm-hexagonal | archetypes | Ports & Adapters architecture |
| architecture-paradigm-layered | archetypes | Traditional N-tier architecture |
| architecture-paradigm-microkernel | archetypes | Plugin-based extensibility |
| architecture-paradigm-microservices | archetypes | Independent distributed services |
| architecture-paradigm-modular-monolith | archetypes | Single deployment with internal boundaries |
| architecture-paradigm-pipeline | archetypes | Pipes-and-filters model |
| architecture-paradigm-serverless | archetypes | Function-as-a-Service |
| architecture-paradigm-service-based | archetypes | Coarse-grained SOA |
| architecture-paradigm-space-based | archetypes | Data-grid architecture |
| architecture-paradigms | archetypes | Orchestrator for paradigm selection |
| architecture-review | pensive | Architecture assessment |
| authentication-patterns | leyline | Auth flow patterns |
| blast-radius | pensive | Code change blast radius analysis with risk scoring |
| bloat-detector | conserve | Detection algorithms for dead code, God classes, documentation duplication |
| browser-recording | scry | Playwright browser recordings |
| bug-review | pensive | Bug hunting |
| call-chain | cartograph | Trace execution paths through code knowledge graph |
| catchup | imbue | Context recovery |
| challenge | gauntlet | Adaptive difficulty challenge session for codebase knowledge testing |
| class-diagram | cartograph | Class and interface diagrams |
| clear-context | conserve | Auto-clear workflow with session state persistence |
| code-communities | cartograph | Detect architectural clusters via community detection |
| code-quality-principles | conserve | Core principles for AI-assisted code quality |
| code-refinement | pensive | Duplication, algorithms, and clean code analysis |
| code-search | tome | GitHub implementation search |
| commit-messages | sanctum | Conventional commits |
| compression-strategy | conserve | Context compression analysis and recommendations |
| computer-control | phantom | Desktop automation via Claude's vision and action API |
| content-sanitization | leyline | External content sanitization |
| context-map | conserve | Pre-scan project structure to reduce exploration token waste |
| context-optimization | conserve | MECW principles and 50% context rule |
| cpu-gpu-performance | conserve | Resource monitoring and selective testing |
| curate | gauntlet | Add or edit knowledge annotations with tribal context |
| damage-control | leyline | Agent crash recovery and state reconciliation |
| data-flow | cartograph | Data movement diagrams |
| decisive-action | conserve | Decisive action patterns for efficient workflows |
| deferred-capture | leyline | Contract for unified deferred-item capture across plugins |
| delegation-core | conjure | Framework for delegation decisions |
| dependency-graph | cartograph | Import and dependency diagrams |
| diff-analysis | imbue | Semantic changeset analysis |
| dig | tome | Interactive research refinement |
| digital-garden-cultivator | memory-palace | Digital garden maintenance |
| discourse | tome | Community discussion scanning |
| do-issue | sanctum | GitHub issue resolution workflow |
| doc-consolidation | sanctum | Document merging |
| doc-generator | scribe | Generate and remediate documentation |
| doc-importer | scribe | Import external documents to markdown |
| doc-updates | sanctum | Documentation maintenance |
| document-conversion | leyline | Universal document-to-markdown conversion |
| dorodango | attune | Iterative code polishing workflow |
| error-patterns | leyline | Standardized error handling |
| escalation-governance | abstract | Model escalation decisions |
| evaluation-framework | leyline | Decision thresholds |
| extract | gauntlet | Analyze codebase and build a knowledge base |
| feature-review | imbue | Feature prioritization with RICE/WSJF/Kano scoring and optional research enrichment via tome (--research) |
| file-analysis | sanctum | File structure analysis |
| gemini-delegation | conjure | Gemini CLI integration |
| gif-generation | scry | GIF processing and optimization |
| git-platform | leyline | Cross-platform git forge detection and command mapping |
| git-workspace-review | sanctum | Repo state analysis |
| github-initiative-pulse | minister | Initiative progress tracking |
| graph-build | gauntlet | Build or update the code knowledge graph |
| graph-search | gauntlet | FTS5 search of the code knowledge graph |
| hook-authoring | abstract | Security-first hook development |
| hooks-eval | abstract | Hook security scanning |
| install-watchdog | egregore | Install crash-recovery watchdog |
| justify | imbue | Anti-additive-bias change audit |
| knowledge-intake | memory-palace | Intake and curation |
| knowledge-locator | memory-palace | Spatial search |
| latent-space-engineering | imbue | Agent behavior shaping through instruction framing |
| makefile-generation | attune | Generate language-specific Makefiles |
| makefile-review | pensive | Makefile best practices |
| markdown-formatting | leyline | Line wrapping and style conventions |
| math-review | pensive | Mathematical correctness |
| mcp-code-execution | conserve | MCP patterns for data pipelines |
| media-composition | scry | Multi-source media stitching |
| memory-palace-architect | memory-palace | Building virtual palaces |
| metacognitive-self-mod | abstract | Hyperagents self-improvement analysis |
| methodology-curator | abstract | Surface expert frameworks for skill development |
| mission-orchestrator | attune | Unified lifecycle orchestrator for project development |
| modular-skills | abstract | Modular design patterns |
| onboard | gauntlet | Guided five-stage onboarding path through a codebase |
| palace-diagram | memory-palace | Visual palace structure diagrams |
| papers | tome | Academic literature search |
| plugin-review | abstract | Tiered plugin quality review with dependency-aware scoping |
| pr-prep | sanctum | PR preparation |
| pr-review | sanctum | PR review workflows |
| precommit-setup | attune | Set up pre-commit hooks |
| progressive-loading | leyline | Dynamic content loading |
| project-brainstorming | attune | Socratic ideation workflow |
| project-execution | attune | Systematic implementation |
| project-init | attune | Interactive project initialization |
| project-planning | attune | Architecture and task breakdown |
| project-specification | attune | Spec creation from brainstorm |
| proof-of-work | imbue | Evidence-based work validation |
| pytest-config | leyline | Pytest configuration patterns |
| python-async | parseltongue | Async patterns |
| python-packaging | parseltongue | Packaging with uv |
| python-performance | parseltongue | Profiling and optimization |
| python-testing | parseltongue | Pytest/TDD workflows |
| quality-gate | egregore | Pre-merge quality validation for autonomous sessions |
| quota-management | leyline | Rate limiting and quotas |
| qwen-delegation | conjure | Qwen MCP integration |
| release-health-gates | minister | Release readiness checks |
| research | tome | Multi-source research orchestration |
| response-compression | conserve | Response compression patterns |
| review-chamber | memory-palace | PR review knowledge capture and retrieval |
| review-core | imbue | Scaffolding for detailed reviews |
| rigorous-reasoning | imbue | Anti-sycophancy guardrails |
| risk-classification | leyline | Inline 4-tier risk classification for agent tasks |
| rule-catalog | hookify | Pre-built behavioral rule templates |
| rules-eval | abstract | Evaluate and validate Claude Code rules in .claude/rules/ directories |
| rust-review | pensive | Rust-specific checking |
| safety-critical-patterns | pensive | NASA Power of 10 rules for robust code |
| scope-guard | imbue | Anti-overengineering |
| sem-integration | leyline | Semantic diff CLI detection and fallback |
| service-registry | leyline | Service discovery patterns |
| session-management | sanctum | Session naming, checkpointing, and resume strategies |
| session-palace-builder | memory-palace | Session-specific palaces |
| session-replay | scribe | Convert session JSONL into GIF/MP4/WebM replays via VHS |
| session-to-post | scribe | Convert sessions into shareable blog posts or case studies |
| setup | oracle | Install and configure the oracle ONNX inference daemon |
| shared-patterns | abstract | Reusable plugin development patterns |
| shell-review | pensive | Shell script auditing for safety and portability |
| skill-authoring | abstract | TDD methodology for skill creation |
| skills-eval | abstract | Skill quality assessment |
| slop-detector | scribe | Detect AI-generated content markers |
| smart-sourcing | conserve | Balance accuracy with token efficiency |
| spec-writing | spec-kit | Specification authoring |
| speckit-orchestrator | spec-kit | Workflow coordination |
| stewardship | leyline | Cross-cutting stewardship principles with layer-specific guidance |
| storage-templates | leyline | Storage abstraction patterns |
| structured-output | imbue | Formatting patterns |
| style-learner | scribe | Extract writing style from exemplar text |
| subagent-testing | abstract | Testing patterns for subagent interactions |
| summon | egregore | Spawn autonomous agent session with budget |
| supply-chain-advisory | leyline | Known-bad version detection, lockfile auditing, incident response |
| synthesize | tome | Research findings synthesis |
| task-planning | spec-kit | Task generation |
| tech-tutorial | scribe | Plan, draft, and refine technical tutorials |
| test-review | pensive | Test quality review |
| test-updates | sanctum | Test maintenance |
| testing-quality-standards | leyline | Test quality guidelines |
| tiered-audit | pensive | Three-tier escalation audit (git history, targeted, full) |
| token-conservation | conserve | Token usage strategies |
| triz | tome | TRIZ cross-domain analogical reasoning |
| tutorial-updates | sanctum | Tutorial maintenance and updates |
| unified-review | pensive | Review orchestration |
| uninstall-watchdog | egregore | Remove crash-recovery watchdog |
| update-readme | sanctum | README maintenance and updates |
| usage-logging | leyline | Telemetry tracking |
| utility | leyline | Utility-guided action selection for orchestration |
| version-updates | sanctum | Version bumping |
| vhs-recording | scry | Terminal recordings with VHS |
| voice-extract | scribe | SICO comparative extraction from writing samples |
| voice-generate | scribe | Generate text in learned writing voice |
| voice-learn | scribe | Learning loop from manual edits |
| voice-review | scribe | Dual-gate review against voice profile |
| war-room | attune | Multi-LLM expert council with Type 1/2 reversibility routing |
| war-room-checkpoint | attune | Inline reversibility assessment for embedded escalation |
| workflow-diagram | cartograph | Process and state transition diagrams |
| workflow-improvement | sanctum | Workflow retrospectives |
| workflow-monitor | imbue | Workflow execution monitoring and issue creation |
| workflow-setup | attune | Configure CI/CD pipelines |
| writing-rules | hookify | Guide for authoring behavioral rules |

All Commands (Alphabetical)

| Command | Plugin | Description |
|---------|--------|-------------|
| `/acp` | sanctum | Add, commit, push to current branch |
| `/aggregate-logs` | abstract | Generate LEARNINGS.md from skill execution logs |
| `/ai-hygiene-audit` | conserve | Audit codebase for AI-generated code quality issues (vibe coding, Tab bloat, slop) |
| `/analyze-skill` | abstract | Skill complexity analysis |
| `/analyze-tests` | parseltongue | Test suite health report |
| `/api-review` | pensive | API surface review |
| `/architecture-review` | pensive | Architecture assessment |
| `/attune:arch-init` | attune | Initialize with architecture-aware templates |
| `/attune:blueprint` | attune | Plan architecture and break down tasks |
| `/attune:brainstorm` | attune | Brainstorm project ideas using Socratic questioning |
| `/attune:execute` | attune | Execute implementation tasks systematically |
| `/attune:mission` | attune | Run full project lifecycle as a single mission with state detection and recovery |
| `/attune:project-init` | attune | Initialize project with development infrastructure |
| `/attune:specify` | attune | Create detailed specifications from brainstorm |
| `/attune:upgrade-project` | attune | Add or update configurations in existing project |
| `/attune:validate` | attune | Validate project structure against best practices |
| `/attune:war-room` | attune | Multi-LLM expert deliberation with reversibility-based routing |
| `/bloat-scan` | conserve | Progressive bloat detection (3-tier scan) |
| `/bug-review` | pensive | Bug hunting review |
| `/bulletproof-skill` | abstract | Anti-rationalization workflow |
| `/catchup` | imbue | Quick context recovery |
| `/check-async` | parseltongue | Async pattern validation |
| `/close-issue` | minister | Analyze if GitHub issues can be closed based on commits |
| `/commit-msg` | sanctum | Generate commit message |
| `/context-report` | abstract | Context optimization report |
| `/control-desktop` | phantom | Run a computer use task on the desktop |
| `/create-command` | abstract | Scaffold new command |
| `/create-hook` | abstract | Scaffold new hook |
| `/create-issue` | minister | Create GitHub issue with labels and references |
| `/create-skill` | abstract | Scaffold new skill |
| `/create-tag` | sanctum | Create git tags for releases |
| `/dismiss` | egregore | Terminate autonomous agent session |
| `/do-issue` | sanctum | Fix GitHub issues |
| `/doc-generate` | scribe | Generate new documentation |
| `/doc-polish` | scribe | Clean up AI-generated content |
| `/evaluate-skill` | abstract | Evaluate skill execution quality |
| `/fix-pr` | sanctum | Address PR review comments |
| `/fix-workflow` | sanctum | Workflow retrospective with automatic improvement context gathering |
| `/full-review` | pensive | Unified code review |
| `/garden` | memory-palace | Manage digital gardens |
| `/gauntlet` | gauntlet | Run an ad-hoc challenge session (5 questions, random scope) |
| `/gauntlet-curate` | gauntlet | Add or edit a knowledge annotation |
| `/gauntlet-extract` | gauntlet | Rebuild the knowledge base from the current codebase |
| `/gauntlet-graph` | gauntlet | Build, search, and query the code knowledge graph |
| `/gauntlet-onboard` | gauntlet | Start or resume a guided onboarding path |
| `/gauntlet-progress` | gauntlet | Show challenge accuracy stats, weak areas, and streak |
| `/git-catchup` | sanctum | Git repository catchup |
| `/hookify` | hookify | Create behavioral rules to prevent unwanted actions |
| `/hookify:configure` | hookify | Interactive rule enable/disable interface |
| `/hookify:from-hook` | hookify | Convert Python SDK hooks to declarative rules |
| `/hookify:help` | hookify | Display hookify help and documentation |
| `/hookify:install` | hookify | Install hookify rule from catalog |
| `/hookify:list` | hookify | List all hookify rules with status |
| `/hooks-eval` | abstract | Hook evaluation |
| `/improve-skills` | abstract | Auto-improve skills from observability data |
| `/install-watchdog` | egregore | Install crash-recovery watchdog |
| `/justify` | imbue | Audit changes for additive bias |
| `/make-dogfood` | abstract | Makefile enhancement |
| `/makefile-review` | pensive | Makefile review |
| `/math-review` | pensive | Mathematical review |
| `/merge-docs` | sanctum | Consolidate ephemeral docs |
| `/navigate` | memory-palace | Search palaces |
| `/optimize-context` | conserve | Context optimization |
| `/oracle-setup` | oracle | Install and configure the oracle ONNX inference daemon |
| `/palace` | memory-palace | Manage palaces |
| `/plugin-review` | abstract | Tiered plugin quality review (branch/pr/release) |
| `/pr-review` | sanctum | Enhanced PR review |
| `/prepare-pr` | sanctum | Complete PR preparation with updates and validation |
| `/promote-discussions` | abstract | Promote highly-voted community learnings from Discussions to Issues |
| `/record-browser` | scry | Record browser session |
| `/record-terminal` | scry | Create terminal recording |
| `/refine-code` | pensive | Analyze and improve living code quality |
| `/reinstall-all-plugins` | leyline | Refresh all plugins |
| `/resolve-threads` | sanctum | Resolve PR review threads |
| `/review-room` | memory-palace | Manage PR review knowledge in palaces |
| `/rules-eval` | abstract | Evaluate Claude Code rules for frontmatter, glob patterns, and content quality |
| `/run-profiler` | parseltongue | Profile code execution |
| `/rust-review` | pensive | Rust-specific review |
| `/session-replay` | scribe | Generate GIF/MP4/WebM replay from session JSONL |
| `/session-to-post` | scribe | Convert session into blog post or case study |
| `/shell-review` | pensive | Shell script safety and portability review |
| `/skill-history` | pensive | View recent skill executions with context |
| `/skill-logs` | memory-palace | View skill execution logs |
| `/skill-review` | pensive | Analyze skill metrics and stability gaps |
| `/skills-eval` | abstract | Skill quality assessment |
| `/speckit-analyze` | spec-kit | Check artifact consistency |
| `/speckit-checklist` | spec-kit | Generate checklist |
| `/speckit-clarify` | spec-kit | Clarifying questions |
| `/speckit-constitution` | spec-kit | Project constitution |
| `/speckit-implement` | spec-kit | Execute tasks |
| `/speckit-plan` | spec-kit | Generate plan |
| `/speckit-specify` | spec-kit | Create specification |
| `/speckit-startup` | spec-kit | Bootstrap workflow |
| `/speckit-tasks` | spec-kit | Generate tasks |
| `/speckit-taskstoissues` | spec-kit | Convert tasks.md entries to GitHub Issues |
| `/status` | egregore | Check autonomous session status |
| `/stewardship-health` | imbue | Display stewardship health dimensions for plugins |
| `/structured-review` | imbue | Structured review workflow |
| `/style-learn` | scribe | Create style profile from examples |
| `/summon` | egregore | Spawn autonomous agent session with budget |
| `/sync-capabilities` | sanctum | Detect and fix drift between plugin.json and docs |
| `/test-review` | pensive | Test quality review |
| `/test-skill` | abstract | Skill testing workflow |
| `/tome:cite` | tome | Generate formatted bibliography |
| `/tome:dig` | tome | Refine research results interactively |
| `/tome:export` | tome | Export research findings |
| `/tome:research` | tome | Run multi-source research session |
| `/unbloat` | conserve | Safe bloat remediation with interactive approval |
| `/uninstall-watchdog` | egregore | Remove crash-recovery watchdog |
| `/update-all-plugins` | leyline | Update all plugins |
| `/update-ci` | sanctum | Update pre-commit hooks and CI/CD workflows |
| `/update-dependencies` | sanctum | Update project dependencies |
| `/update-docs` | sanctum | Update documentation |
| `/update-labels` | minister | Reorganize GitHub issue labels with professional taxonomy |
| `/update-plugins` | sanctum | Audit plugin registrations with automatic performance analysis and improvement recommendations |
| `/update-tests` | sanctum | Maintain tests |
| `/update-tutorial` | sanctum | Update tutorial content |
| `/update-version` | sanctum | Bump versions |
| `/validate-hook` | abstract | Validate hook compliance |
| `/validate-plugin` | abstract | Check plugin structure |
| `/verify-plugin` | leyline | Verify plugin behavioral contract history via GitHub Attestations |
| `/visualize` | cartograph | Generate codebase diagrams via Mermaid Chart MCP |
| `/voice-extract` | scribe | Extract writing voice from samples |
| `/voice-generate` | scribe | Generate text in trained voice |
| `/voice-learn` | scribe | Learn from manual edits |
| `/voice-review` | scribe | Review text against voice profile |

All Agents (Alphabetical)

| Agent | Plugin | Description |
|-------|--------|-------------|
| ai-hygiene-auditor | conserve | Audit codebases for AI-generation warning signs |
| architecture-reviewer | pensive | Principal-level architecture review |
| blast-radius-reviewer | pensive | Graph-aware code review using blast radius analysis |
| bloat-auditor | conserve | Orchestrates bloat detection scans |
| code-refiner | pensive | Code quality refinement orchestrator |
| code-reviewer | pensive | Expert code review |
| code-searcher | tome | GitHub code search |
| codebase-explorer | cartograph | Codebase structure analysis for diagrams |
| commit-agent | sanctum | Commit message generator |
| context-optimizer | conserve | Context optimization |
| continuation-agent | conserve | Continue work from session state checkpoint |
| craft-reviewer | scribe | Writing craft evaluation (naming, structure, anchoring) |
| dependency-updater | sanctum | Dependency version management |
| desktop-pilot | phantom | Autonomous desktop control via Computer Use API |
| discourse-scanner | tome | Community discourse scanning |
| doc-editor | scribe | Interactive documentation editing |
| doc-verifier | scribe | QA validation using proof-of-work methodology |
| extractor | gauntlet | Autonomous knowledge extraction agent for gauntlet knowledge base |
| garden-curator | memory-palace | Digital garden maintenance |
| git-workspace-agent | sanctum | Repository state analyzer |
| implementation-executor | spec-kit | Task executor |
| insight-engine | abstract | Deep analysis for bugs, optimizations, and improvements |
| knowledge-librarian | memory-palace | Knowledge routing |
| knowledge-navigator | memory-palace | Palace search |
| literature-reviewer | tome | Academic literature review |
| media-recorder | scry | Autonomous media generation for demos and GIFs |
| meta-architect | abstract | Plugin ecosystem design |
| orchestrator | egregore | Autonomous development lifecycle agent |
| palace-architect | memory-palace | Palace design |
| plugin-validator | abstract | Plugin validation |
| pr-agent | sanctum | PR preparation |
| project-architect | attune | Guides full-cycle workflow (brainstorm to plan) |
| project-implementer | attune | Executes implementation with TDD |
| prose-reviewer | scribe | AI patterns, banned phrases, voice drift detection |
| python-linter | parseltongue | Strict ruff linting without bypasses |
| python-optimizer | parseltongue | Performance optimization |
| python-pro | parseltongue | Python 3.9+ expertise |
| python-tester | parseltongue | Testing expertise |
| review-analyst | imbue | Structured reviews |
| rust-auditor | pensive | Rust security audit |
| sentinel | egregore | Watchdog agent for crash recovery |
| skill-auditor | abstract | Skill quality audit |
| skill-evaluator | abstract | Skill execution evaluator |
| skill-improver | abstract | Implements skill improvements from observability |
| slop-hunter | scribe | Full-document AI slop detection |
| spec-analyzer | spec-kit | Spec consistency |
| task-generator | spec-kit | Task creation |
| triz-analyst | tome | TRIZ cross-domain analysis |
| unbloat-remediator | conserve | Executes safe bloat remediation |
| workflow-improvement-analysis-agent | sanctum | Workflow improvement analysis |
| workflow-improvement-implementer-agent | sanctum | Workflow improvement implementation |
| workflow-improvement-planner-agent | sanctum | Workflow improvement planning |
| workflow-improvement-validator-agent | sanctum | Workflow improvement validation |
| workflow-recreate-agent | sanctum | Workflow reconstruction |

All Hooks (Alphabetical)

| Hook | Plugin | Type | Description |
|------|--------|------|-------------|
| `aggregate_learnings_daily.py` | abstract | UserPromptSubmit | Daily learning aggregation (24h cadence) with severity-based issue creation |
| `auto-star-repo.sh` | leyline | SessionStart | Auto-star the repo if not already starred |
| `config_change_audit.py` | sanctum | ConfigChange | Audit configuration changes |
| `context_warning.py` | conserve | PreToolUse | Context utilization monitoring |
| `daemon_lifecycle.py` | oracle | SessionStart, Stop | Oracle daemon lifecycle management |
| `deferred_item_sweep.py` | sanctum | Stop | Sweep session ledger and file deferred items as GitHub issues |
| `deferred_item_watcher.py` | sanctum | PostToolUse | Detect deferred items in Skill output and write to session ledger |
| `detect-git-platform.sh` | leyline | SessionStart | Detect git forge platform from remote URL |
| `fetch-recent-discussions.sh` | leyline | SessionStart | Fetch recent GitHub Discussions |
| `graph_auto_update.py` | gauntlet | PostToolUse | Auto-update code graph after git commits |
| `graph_community_refresh.py` | cartograph | PostToolUse | Refresh community detection after graph builds |
| `homeostatic_monitor.py` | abstract | PostToolUse | Stability gap monitoring, queues degrading skills for improvement |
| `local_doc_processor.py` | memory-palace | PostToolUse | Processes local docs |
| `noqa_guard.py` | leyline | PreToolUse | Block inline lint suppression directives |
| `permission_denied_logger.py` | conserve | PermissionDenied | Log auto-mode permission denials for observability |
| `permission_request.py` | conserve | PermissionRequest | Permission automation |
| `post-evaluation.json` | abstract | Config | Quality scoring config |
| `post_implementation_policy.py` | sanctum | SessionStart | Requires docs/tests updates |
| `post_learnings_stop.py` | abstract | Stop | Post learnings to GitHub Discussions on session stop |
| `pr_blast_radius.py` | pensive | PreToolUse | Surface blast radius context on PR creation |
| `pre-skill-load.json` | abstract | Config | Pre-load validation |
| `pre_compact.py` | tome | PreCompact | Checkpoint active research session |
| `pre_compact_preserve.py` | conserve | PreCompact | Preserve critical context before compression |
| `pre_skill_execution.py` | abstract | PreToolUse | Skill execution tracking |
| `precommit_gate.py` | gauntlet | PreToolUse | Pre-commit quality gate for gauntlet |
| `research_interceptor.py` | memory-palace | PreToolUse | Cache lookup before web |
| `sanitize_external_content.py` | leyline | PostToolUse | Sanitize external content against prompt injection |
| `security_pattern_check.py` | sanctum | PreToolUse | Security anti-pattern detection |
| `session-start.sh` | conserve, imbue | SessionStart | Session initialization |
| `session_complete_notify.py` | sanctum | Stop, UserPromptSubmit | Cross-platform toast notifications and state management |
| `session_lifecycle.py` | memory-palace | Stop | Session lifecycle management |
| `session_start.py` | tome | SessionStart | Check for active research sessions |
| `session_start_hook.py` | egregore | SessionStart | Inject manifest context into new sessions |
| `setup.sh` | conserve | Setup | Environment initialization |
| `setup.sh` | memory-palace | Setup | Palace directory initialization |
| `skill_execution_logger.py` | abstract | PostToolUse | Skill metrics logging |
| `stop_hook.py` | egregore | Stop | Prevent early exit while work items remain |
| `supply_chain_check.py` | leyline | SessionStart | Warn about known-compromised package versions in lockfiles |
| `task_created_tracker.py` | sanctum | TaskCreated | Track task creation for workflow completeness monitoring |
| `tdd_bdd_gate.py` | imbue | PreToolUse | Iron Law enforcement at write-time |
| `tool_output_summarizer.py` | conserve | PostToolUse | Monitor and warn about tool output bloat |
| `url_detector.py` | memory-palace | UserPromptSubmit | URL detection |
| `user-prompt-submit.sh` | imbue | UserPromptSubmit | Scope validation |
| `user_prompt_hook.py` | egregore | UserPromptSubmit | Resume orchestration after user interrupts |
| `verify_workflow_complete.py` | sanctum | Stop | End-of-session workflow verification |
| `web_research_handler.py` | memory-palace | PostToolUse | Web research processing and storage prompting |

Command Reference — Core Plugins

Flag and option documentation for core plugin commands (abstract, attune, conserve, imbue, sanctum).

Extended plugins: Memory Palace, Pensive, Parseltongue, Spec-Kit, Scribe, Scry, Hookify, Leyline

See also: Capabilities Reference | Skills | Agents | Hooks | Workflows


Command Syntax

/<plugin>:<command-name> [--flags] [positional-args]

Common Flag Patterns:

| Flag Pattern | Description | Example |
|--------------|-------------|---------|
| `--verbose` | Enable detailed output | `/bloat-scan --verbose` |
| `--dry-run` | Preview without executing | `/unbloat --dry-run` |
| `--force` | Skip confirmation prompts | `/attune:init --force` |
| `--report FILE` | Output to file | `/bloat-scan --report audit.md` |
| `--level N` | Set intensity/depth | `/bloat-scan --level 3` |
| `--skip-X` | Skip specific phase | `/prepare-pr --skip-updates` |

Abstract Plugin

/abstract:validate-plugin

Validate plugin structure against ecosystem conventions.

```shell
# Usage
/abstract:validate-plugin [plugin-name] [--strict] [--fix]

# Options
--strict       Fail on warnings (not just errors)
--fix          Auto-fix correctable issues
--report FILE  Output validation report

# Examples
/abstract:validate-plugin sanctum
/abstract:validate-plugin --strict conserve
/abstract:validate-plugin memory-palace --fix
```

/abstract:create-skill

Scaffold a new skill with proper frontmatter and structure.

```shell
# Usage
/abstract:create-skill <plugin>:<skill-name> [--template basic|modular] [--category]

# Options
--template     Skill template type (basic or modular with modules/)
--category     Skill category for classification
--interactive  Guided creation flow

# Examples
/abstract:create-skill pensive:shell-review --template modular
/abstract:create-skill imbue:new-methodology --category workflow-methodology
```

/abstract:create-command

Scaffold a new command with hooks and documentation.

```shell
# Usage
/abstract:create-command <plugin>:<command-name> [--hooks] [--extends]

# Options
--hooks        Include lifecycle hook templates
--extends      Base command or skill to extend
--aliases      Comma-separated command aliases

# Examples
/abstract:create-command sanctum:new-workflow --hooks
/abstract:create-command conserve:deep-clean --extends "conserve:bloat-scan"
```

/abstract:create-hook

Scaffold a new hook with security-first patterns.

```shell
# Usage
/abstract:create-hook <plugin>:<hook-name> [--type] [--lang]

# Options
--type     Hook event type (PreToolUse|PostToolUse|SessionStart|Stop|UserPromptSubmit)
--lang     Implementation language (bash|python)
--matcher  Tool matcher pattern

# Examples
/abstract:create-hook memory-palace:cache-check --type PreToolUse --lang python
/abstract:create-hook sanctum:commit-validator --type PreToolUse --matcher "Bash"
```

/abstract:analyze-skill

Analyze skill complexity and optimization opportunities.

```shell
# Usage
/abstract:analyze-skill <plugin>:<skill-name> [--metrics] [--suggest]

# Options
--metrics    Show detailed token/complexity metrics
--suggest    Generate optimization suggestions
--compare    Compare against skill baselines

# Examples
/abstract:analyze-skill imbue:proof-of-work --metrics
/abstract:analyze-skill sanctum:pr-prep --suggest
```

/abstract:make-dogfood

Update Makefile demonstration targets to reflect current features.

```shell
# Usage
/abstract:make-dogfood [--check] [--update]

# Options
--check     Verify Makefile is current (exit 1 if stale)
--update    Apply updates to Makefile
--dry-run   Show what would change

# Examples
/abstract:make-dogfood --check
/abstract:make-dogfood --update
```

/abstract:skills-eval

Evaluate skill quality across the ecosystem.

```shell
# Usage
/abstract:skills-eval [--plugin PLUGIN] [--threshold SCORE]

# Options
--plugin     Limit to specific plugin
--threshold  Minimum quality score (default: 70)
--output     Output format (table|json|markdown)

# Examples
/abstract:skills-eval --plugin sanctum
/abstract:skills-eval --threshold 80 --output markdown
```

/abstract:hooks-eval

Evaluate hook security and performance.

```shell
# Usage
/abstract:hooks-eval [--plugin PLUGIN] [--security]

# Options
--plugin    Limit to specific plugin
--security  Focus on security patterns
--perf      Focus on performance impact

# Examples
/abstract:hooks-eval --security
/abstract:hooks-eval --plugin memory-palace --perf
```

/abstract:evaluate-skill

Evaluate skill execution quality.

```shell
# Usage
/abstract:evaluate-skill <plugin>:<skill-name> [--metrics] [--suggestions]

# Options
--metrics      Show detailed execution metrics
--suggestions  Generate improvement suggestions
--compare      Compare against baseline metrics

# Examples
/abstract:evaluate-skill imbue:proof-of-work --metrics
/abstract:evaluate-skill sanctum:pr-prep --suggestions
```

Attune Plugin

/attune:init

Initialize project with complete development infrastructure.

```shell
# Usage
/attune:init [--lang LANGUAGE] [--name NAME] [--author AUTHOR]

# Options
--lang LANGUAGE         Project language: python|rust|typescript|go
--name NAME             Project name (default: directory name)
--author AUTHOR         Author name
--email EMAIL           Author email
--python-version VER    Python version (default: 3.10)
--description TEXT      Project description
--path PATH             Project path (default: .)
--force                 Overwrite existing files without prompting
--no-git                Skip git initialization

# Examples
/attune:init --lang python --name my-cli
/attune:init --lang rust --author "Your Name" --force
```

/attune:brainstorm

Brainstorm project ideas using Socratic questioning.

```shell
# Usage
/attune:brainstorm [TOPIC] [--output FILE]

# Options
--output FILE    Save brainstorm results to file
--rounds N       Number of question rounds (default: 5)
--focus AREA     Focus area: features|architecture|ux|technical

# Examples
/attune:brainstorm "CLI tool for data processing"
/attune:brainstorm --focus architecture --rounds 3
```

/attune:blueprint

Plan architecture and break down tasks.

# Usage
/attune:blueprint [--from BRAINSTORM] [--output FILE]

# Options
--from FILE      Use brainstorm results as input
--output FILE    Save plan to file
--depth LEVEL    Planning depth: high|detailed|exhaustive
--include        Include specific aspects: tests|ci|docs

# Examples
/attune:blueprint --from brainstorm.md --depth detailed
/attune:blueprint --include tests,ci

/attune:specify

Create detailed specifications from brainstorm or plan.

# Usage
/attune:specify [--from FILE] [--type TYPE]

# Options
--from FILE    Input file (brainstorm or plan)
--type TYPE    Spec type: technical|functional|api|data-model
--output DIR   Output directory for specs

# Examples
/attune:specify --from plan.md --type technical
/attune:specify --type api --output .specify/

/attune:execute

Execute implementation tasks systematically.

# Usage
/attune:execute [--plan FILE] [--phase PHASE] [--task ID]

# Options
--plan FILE     Task plan file (default: .specify/tasks.md)
--phase PHASE   Execute specific phase: setup|tests|core|integration|polish
--task ID       Execute specific task by ID
--parallel      Enable parallel execution where marked [P]
--continue      Resume from last checkpoint

# Examples
/attune:execute --plan tasks.md --phase setup
/attune:execute --task T1.2 --parallel

/attune:validate

Validate project structure against best practices.

# Usage
/attune:validate [--strict] [--fix]

# Options
--strict    Fail on warnings
--fix       Auto-fix correctable issues
--config    Path to custom validation config

# Examples
/attune:validate --strict
/attune:validate --fix

/attune:upgrade-project

Add or update configurations in existing project.

# Usage
/attune:upgrade-project [--component COMPONENT] [--force]

# Options
--component    Specific component: makefile|precommit|workflows|gitignore
--force        Overwrite existing without prompting
--diff         Show diff before applying

# Examples
/attune:upgrade-project --component makefile
/attune:upgrade-project --component workflows --force

Conserve Plugin

/conserve:bloat-scan

Progressive bloat detection for dead code and duplication.

# Usage
/bloat-scan [--level 1|2|3] [--focus TYPE] [--report FILE] [--dry-run]

# Options
--level 1|2|3      Scan tier: 1=quick, 2=targeted, 3=deep audit
--focus TYPE       Focus area: code|docs|deps|all (default: all)
--report FILE      Save report to file
--dry-run          Preview findings without taking action
--exclude PATTERN  Additional exclude patterns

# Scan Tiers
# Tier 1 (2-5 min): Large files, stale files, commented code, old TODOs
# Tier 2 (10-20 min): Dead code, duplicate patterns, import bloat
# Tier 3 (30-60 min): All above + cyclomatic complexity, dependency graphs

# Examples
/bloat-scan                           # Quick Tier 1 scan
/bloat-scan --level 2 --focus code    # Targeted code analysis
/bloat-scan --level 3 --report Q1-audit.md  # Deep audit with report

/conserve:unbloat

Safe bloat remediation with interactive approval.

# Usage
/unbloat [--approve LEVEL] [--dry-run] [--backup]

# Options
--approve LEVEL    Auto-approve level: high|medium|low|all
--dry-run          Show what would be removed
--backup           Create backup branch before changes
--interactive      Prompt for each item (default)

# Examples
/unbloat --dry-run                    # Preview all removals
/unbloat --approve high --backup      # Auto-approve high priority, backup first
/unbloat --interactive                # Approve each item manually

/conserve:optimize-context

Optimize context window usage.

# Usage
/optimize-context [--target PERCENT] [--scope PATH]

# Options
--target PERCENT   Target context utilization (default: 50%)
--scope PATH       Limit to specific directory
--suggest          Only show suggestions, don't apply
--aggressive       Apply all optimizations

# Examples
/optimize-context --target 40%
/optimize-context --scope plugins/sanctum/ --suggest

/conserve:analyze-growth

Consolidated: This command has been merged into /bloat-scan. See bloat-scan.

Analyze skill growth patterns.

# Usage (now use /bloat-scan instead)
/bloat-scan [--level 1|2|3] [--focus TYPE] [--report FILE]

# Previous /analyze-growth options are covered by:
/bloat-scan --level 2 --focus code    # Growth pattern analysis

Imbue Plugin

/imbue:justify

Audit changes for AI additive bias and Iron Law compliance.

# Usage
/justify [--scope staged|branch|file] [path...]

# Examples
/justify                        # Audit all branch changes
/justify --scope staged         # Only staged changes
/justify src/auth.py            # Specific files

/imbue:catchup

Quick context recovery after session restart.

# Usage
/catchup [--depth LEVEL] [--focus AREA]

# Options
--depth LEVEL    Recovery depth: shallow|standard|deep (default: standard)
--focus AREA     Focus on: git|docs|issues|all
--since DATE     Catch up from specific date

# Examples
/catchup                           # Standard recovery
/catchup --depth deep              # Full context recovery
/catchup --focus git --since "3 days ago"

/imbue:feature-review

Consolidated: This command has been merged into Skill(imbue:scope-guard); invoke that skill instead.

Feature prioritization and gap analysis.

# Usage (now use Skill(imbue:scope-guard) instead)
Skill(imbue:scope-guard)

# scope-guard covers feature prioritization, gap analysis,
# and anti-overengineering evaluation

/imbue:structured-review

Structured review workflow with methodology options.

# Usage
/structured-review PATH [--methodology METHOD]

# Options
--methodology METHOD    Review methodology: evidence-based|checklist|formal
--todos                 Generate TodoWrite items
--summary               Include executive summary

# Examples
/structured-review plugins/sanctum/ --methodology evidence-based
/structured-review . --todos --summary

Sanctum Plugin

/sanctum:prepare-pr (alias: /pr)

Complete PR preparation workflow.

# Usage
/prepare-pr [--no-code-review] [--reviewer-scope SCOPE] [--skip-updates] [FILE]
/pr [options...]  # Alias

# Options
--no-code-review           Skip automated code review (faster)
--reviewer-scope SCOPE     Review strictness: strict|standard|lenient
--skip-updates             Skip documentation/test updates (Phase 0)
FILE                       Output file for PR description (default: pr_description.md)

# Reviewer Scope Levels
# strict   - All suggestions must be addressed
# standard - Critical issues must be fixed, suggestions are recommendations
# lenient  - Focus on blocking issues only

# Examples
/prepare-pr                                    # Full workflow
/pr                                            # Alias for full workflow
/prepare-pr --skip-updates                     # Skip Phase 0 updates
/prepare-pr --no-code-review                   # Skip code review
/prepare-pr --reviewer-scope strict            # Strict review for critical changes
/prepare-pr --skip-updates --no-code-review    # Fastest (legacy behavior)

/sanctum:acp

Add, commit, push. Stages all changes, generates a conventional commit message, commits, and pushes to the current branch.

# Usage
/acp
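
Under the hood, the three steps /acp automates correspond to a plain git sequence. A minimal sketch in a throwaway repository (the commit message below is a placeholder; /acp generates a conventional message from the diff instead):

```shell
set -e
repo=$(mktemp -d); remote=$(mktemp -d)

# A bare repo stands in for the real remote
git init -q --bare "$remote"
git -C "$repo" init -q
git -C "$repo" remote add origin "$remote"
echo "hello" > "$repo/file.txt"

# The three steps /acp automates: stage, commit, push
git -C "$repo" add -A
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "chore: add file.txt"
git -C "$repo" push -q origin HEAD
```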

/sanctum:commit-msg

Generate commit message.

# Usage
/commit-msg [--type TYPE] [--scope SCOPE]

# Options
--type TYPE      Force commit type: feat|fix|docs|refactor|test|chore
--scope SCOPE    Force commit scope
--breaking       Include breaking change footer
--issue N        Reference issue number

# Examples
/commit-msg
/commit-msg --type feat --scope api
/commit-msg --breaking --issue 42
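
These flags map onto the Conventional Commits format: a `type(scope): subject` header plus optional footers. A rough sketch of that mapping (the `build_commit_message` helper is illustrative, not the command's actual implementation):

```python
def build_commit_message(ctype, subject, scope=None, breaking=False, issue=None):
    """Assemble a Conventional Commits message from the flag values."""
    header = f"{ctype}({scope}): {subject}" if scope else f"{ctype}: {subject}"
    footers = []
    if breaking:
        footers.append("BREAKING CHANGE: describe the incompatibility here")
    if issue is not None:
        footers.append(f"Refs: #{issue}")
    if footers:
        return header + "\n\n" + "\n".join(footers)
    return header
```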

/sanctum:do-issue

Fix GitHub issues.

# Usage
/do-issue ISSUE_NUMBER [--branch NAME]

# Options
--branch NAME    Branch name (default: issue-N)
--auto-merge     Attempt auto-merge after PR
--draft          Create draft PR

# Examples
/do-issue 42
/do-issue 123 --branch fix/auth-bug
/do-issue 99 --draft

/sanctum:fix-pr

Address PR review comments.

# Usage
/fix-pr [PR_NUMBER] [--auto-resolve]

# Options
PR_NUMBER        PR number (default: current branch's PR)
--auto-resolve   Auto-resolve addressed comments
--batch          Address all comments in batch
--interactive    Address one comment at a time

# Examples
/fix-pr 42
/fix-pr --auto-resolve
/fix-pr 42 --batch

/sanctum:fix-workflow

Workflow retrospective with automatic improvement context.

# Usage
/fix-workflow [WORKFLOW_NAME] [--context]

# Options
WORKFLOW_NAME    Specific workflow to analyze
--context        Gather improvement context automatically
--lessons        Generate lessons learned
--improvements   Suggest workflow improvements

# Examples
/fix-workflow pr-review --context
/fix-workflow --lessons --improvements

/sanctum:pr-review

Enhanced PR review.

# Usage
/pr-review [PR_NUMBER] [--thorough]

# Options
PR_NUMBER    PR to review (default: current)
--thorough   Deep review with all checks
--quick      Fast review of critical issues only
--security   Security-focused review

# Examples
/pr-review 42
/pr-review --thorough
/pr-review --quick --security

/sanctum:update-docs

Update project documentation.

# Usage
/update-docs [--scope SCOPE] [--check]

# Options
--scope SCOPE    Scope: all|api|readme|guides
--check          Check only, don't modify
--sync           Sync with code changes

# Examples
/update-docs
/update-docs --scope api
/update-docs --check

/sanctum:update-readme

Consolidated: This command has been merged into /update-docs. Use /update-docs --scope readme for README-specific updates.

Modernize README.

# Usage (now use /update-docs instead)
/update-docs --scope readme

# Previous /update-readme options are covered by /update-docs:
/update-docs --scope readme    # README-specific updates
/update-docs --scope all       # Full documentation refresh

/sanctum:update-tests

Maintain tests.

# Usage
/update-tests [PATH] [--coverage]

# Options
PATH            Test path to update
--coverage      Ensure coverage targets
--missing       Add missing tests
--modernize     Update to modern patterns

# Examples
/update-tests tests/
/update-tests --missing --coverage

/sanctum:update-version

Bump versions.

# Usage
/update-version [VERSION] [--type TYPE]

# Options
VERSION        Explicit version (e.g., 1.2.3)
--type TYPE    Bump type: major|minor|patch|prerelease
--tag          Create git tag
--push         Push tag to remote

# Examples
/update-version 2.0.0
/update-version --type minor --tag
/update-version --type patch --tag --push
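
The bump types follow semantic versioning: `major` zeroes the minor and patch components, `minor` zeroes patch, and `patch` increments the last digit. A minimal sketch of that arithmetic (the `bump` helper is illustrative only; prerelease handling is omitted):

```python
def bump(version, kind):
    """Return version with the requested component incremented."""
    major, minor, patch = (int(part) for part in version.split("."))
    if kind == "major":
        return f"{major + 1}.0.0"
    if kind == "minor":
        return f"{major}.{minor + 1}.0"
    if kind == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump type: {kind}")
```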

/sanctum:update-dependencies

Update project dependencies.

# Usage
/update-dependencies [--type TYPE] [--dry-run]

# Options
--type TYPE    Dependency type: all|prod|dev|security
--dry-run      Preview updates without applying
--major        Include major version updates
--security     Security updates only

# Examples
/update-dependencies
/update-dependencies --dry-run
/update-dependencies --type security
/update-dependencies --major

/sanctum:git-catchup

Git repository catchup.

# Usage
/git-catchup [--since DATE] [--author AUTHOR]

# Options
--since DATE      Start date for catchup
--author AUTHOR   Filter by author
--branch BRANCH   Specific branch
--format FORMAT   Output format: summary|detailed|log

# Examples
/git-catchup --since "1 week ago"
/git-catchup --author "user@example.com"

/sanctum:create-tag

Create git tags for releases.

# Usage
/create-tag VERSION [--message MSG] [--sign]

# Options
VERSION        Tag version (e.g., v1.0.0)
--message MSG  Tag message
--sign         Create signed tag
--push         Push tag to remote

# Examples
/create-tag v1.0.0
/create-tag v1.0.0 --message "Release 1.0.0" --sign --push

Extended plugins: Memory Palace, Pensive, Parseltongue, Spec-Kit, Scribe, Scry, Hookify, Leyline

See also: Skills | Agents | Hooks | Workflows

Command Reference — Extended Plugins

Flag and option documentation for extended plugin commands (memory-palace, parseltongue, pensive, spec-kit, scribe, scry, hookify, leyline).

Core plugins: Abstract, Attune, Conserve, Imbue, Sanctum

See also: Capabilities Reference | Skills | Agents | Hooks | Workflows


Memory Palace Plugin

/memory-palace:garden

Manage digital gardens.

# Usage
/garden [ACTION] [--path PATH]

# Actions
tend           Review and update garden entries
prune          Remove stale/low-value entries
cultivate      Add new entries from queue
status         Show garden health metrics

# Options
--path PATH    Garden path (default: docs/knowledge-corpus/)
--dry-run      Preview changes
--score N      Minimum score threshold for cultivation

# Examples
/garden tend                    # Review garden entries
/garden prune --dry-run         # Preview what would be removed
/garden cultivate --score 70    # Add high-quality entries
/garden status                  # Show health metrics

/memory-palace:navigate

Search across knowledge palaces.

# Usage
/navigate QUERY [--scope SCOPE] [--type TYPE]

# Options
--scope SCOPE    Search scope: local|corpus|all
--type TYPE      Content type: docs|code|web|all
--limit N        Maximum results (default: 10)
--relevance N    Minimum relevance score

# Examples
/navigate "authentication patterns" --scope corpus
/navigate "pytest fixtures" --type docs --limit 5

/memory-palace:palace

Manage knowledge palaces.

# Usage
/palace [ACTION] [PALACE_NAME]

# Actions
create NAME    Create new palace
list           List all palaces
status NAME    Show palace status
archive NAME   Archive palace

# Options
--template TEMPLATE    Palace template: session|project|topic
--from FILE           Initialize from existing content

# Examples
/palace create project-x --template project
/palace list
/palace status project-x
/palace archive old-project

/memory-palace:review-room

Review items in the knowledge queue.

# Usage
/review-room [--status STATUS] [--source SOURCE]

# Options
--status STATUS    Filter by status: pending|approved|rejected
--source SOURCE    Filter by source: webfetch|websearch|manual
--batch N          Review N items at once
--auto-score       Auto-generate scores

# Examples
/review-room --status pending --batch 10
/review-room --source webfetch --auto-score

Parseltongue Plugin

/parseltongue:analyze-tests

Test suite health report.

# Usage
/analyze-tests [PATH] [--coverage] [--flaky]

# Options
--coverage    Include coverage analysis
--flaky       Detect potentially flaky tests
--slow N      Flag tests slower than N seconds
--missing     Find untested code

# Examples
/analyze-tests tests/ --coverage
/analyze-tests --flaky --slow 5
/analyze-tests src/api/ --missing

/parseltongue:run-profiler

Profile code execution.

# Usage
/run-profiler [COMMAND] [--type TYPE]

# Options
--type TYPE    Profiler type: cpu|memory|line|call
--output FILE  Output file for profile data
--flame        Generate flame graph
--top N        Show top N hotspots

# Examples
/run-profiler "python main.py" --type cpu
/run-profiler "pytest tests/" --type memory --flame
/run-profiler --type line --top 20

/parseltongue:check-async

Async pattern validation.

# Usage
/check-async [PATH] [--strict]

# Options
--strict      Strict async compliance
--suggest     Suggest async improvements
--blocking    Find blocking calls in async code

# Examples
/check-async src/ --strict
/check-async --blocking --suggest

Pensive Plugin

/pensive:full-review

Unified code review.

# Usage
/full-review [PATH] [--scope SCOPE] [--output FILE]

# Options
--scope SCOPE    Review scope: changed|staged|all
--output FILE    Save review to file
--severity MIN   Minimum severity: critical|high|medium|low
--categories     Include categories: bugs|security|style|perf

# Examples
/full-review src/ --scope staged
/full-review --scope changed --severity high
/full-review . --output review.md --categories bugs,security

/pensive:code-review

Expert code review.

# Usage
/code-review [FILES...] [--focus FOCUS]

# Options
--focus FOCUS    Focus area: bugs|api|tests|security|style
--evidence       Include evidence logging
--lsp            Enable LSP-enhanced review (requires ENABLE_LSP_TOOL=1)

# Examples
/code-review src/api.py --focus bugs
/code-review --focus security --evidence
ENABLE_LSP_TOOL=1 /code-review src/ --lsp

/pensive:architecture-review

Architecture assessment.

# Usage
/architecture-review [PATH] [--depth DEPTH]

# Options
--depth DEPTH    Analysis depth: surface|standard|deep
--patterns       Identify architecture patterns
--anti-patterns  Flag anti-patterns
--suggestions    Generate improvement suggestions

# Examples
/architecture-review src/ --depth deep
/architecture-review --patterns --anti-patterns

/pensive:rust-review

Rust-specific review.

# Usage
/rust-review [PATH] [--safety]

# Options
--safety     Focus on unsafe code analysis
--lifetimes  Analyze lifetime patterns
--memory     Memory safety review
--perf       Performance-focused review

# Examples
/rust-review src/lib.rs --safety
/rust-review --lifetimes --memory

/pensive:test-review

Test quality review.

# Usage
/test-review [PATH] [--coverage]

# Options
--coverage     Include coverage analysis
--patterns     Review test patterns (AAA, BDD)
--flaky        Detect flaky test patterns
--gaps         Find testing gaps

# Examples
/test-review tests/ --coverage
/test-review --patterns --gaps

/pensive:shell-review

Shell script safety and portability review.

# Usage
/shell-review [FILES...] [--strict]

# Options
--strict       Strict POSIX compliance
--security     Security-focused review
--portability  Check cross-shell compatibility

# Examples
/shell-review scripts/*.sh --strict
/shell-review --security install.sh

/pensive:skill-review

Analyze skill runtime metrics and stability. This is the canonical command for skill performance analysis (execution counts, success rates, stability gaps).

For static quality analysis (frontmatter, structure), use abstract:skill-auditor.

# Usage
/skill-review [--plugin PLUGIN] [--recommendations]

# Options
--plugin PLUGIN      Limit to specific plugin
--all-plugins        Aggregate metrics across all plugins
--unstable-only      Only show skills with stability_gap > 0.3
--skill NAME         Deep-dive specific skill
--recommendations    Generate improvement recommendations

# Examples
/skill-review --plugin sanctum
/skill-review --unstable-only
/skill-review --skill imbue:proof-of-work
/skill-review --all-plugins --recommendations

Spec-Kit Plugin

/speckit-startup

Bootstrap specification workflow.

# Usage
/speckit-startup [--dir DIR]

# Options
--dir DIR    Specification directory (default: .specify/)
--template   Use template structure
--minimal    Minimal specification setup

# Examples
/speckit-startup
/speckit-startup --dir specs/
/speckit-startup --minimal

/speckit-clarify

Generate clarifying questions.

# Usage
/speckit-clarify [TOPIC] [--rounds N]

# Options
TOPIC        Topic to clarify
--rounds N   Number of question rounds
--depth      Deep clarification
--technical  Technical focus

# Examples
/speckit-clarify "user authentication"
/speckit-clarify --rounds 3 --technical

/speckit-specify

Create specification.

# Usage
/speckit-specify [--from FILE] [--output DIR]

# Options
--from FILE    Input source (brainstorm, requirements)
--output DIR   Output directory
--type TYPE    Spec type: full|api|data|ui

# Examples
/speckit-specify --from requirements.md
/speckit-specify --type api --output .specify/

/speckit-plan

Generate implementation plan.

# Usage
/speckit-plan [--from SPEC] [--phases]

# Options
--from SPEC    Source specification
--phases       Include phase breakdown
--estimates    Include time estimates
--dependencies Show task dependencies

# Examples
/speckit-plan --from .specify/spec.md
/speckit-plan --phases --estimates

/speckit-tasks

Generate task breakdown.

# Usage
/speckit-tasks [--from PLAN] [--parallel]

# Options
--from PLAN      Source plan
--parallel       Mark parallelizable tasks
--granularity    Task granularity: coarse|medium|fine
--assignable     Make tasks assignable

# Examples
/speckit-tasks --from .specify/plan.md
/speckit-tasks --parallel --granularity fine

/speckit-implement

Execute implementation plan.

# Usage
/speckit-implement [--phase PHASE] [--task ID] [--continue]

# Options
--phase PHASE   Execute specific phase
--task ID       Execute specific task
--continue      Resume from checkpoint
--parallel      Enable parallel execution

# Examples
/speckit-implement --phase setup
/speckit-implement --task T1.2
/speckit-implement --continue

/speckit-checklist

Generate implementation checklist.

# Usage
/speckit-checklist [--type TYPE] [--output FILE]

# Options
--type TYPE    Checklist type: ux|test|security|deployment
--output FILE  Output file
--interactive  Interactive completion mode

# Examples
/speckit-checklist --type security
/speckit-checklist --type ux --output checklists/ux.md

/speckit-analyze

Check artifact consistency.

# Usage
/speckit-analyze [--strict] [--fix]

# Options
--strict    Strict consistency checking
--fix       Auto-fix inconsistencies
--report    Generate consistency report

# Examples
/speckit-analyze
/speckit-analyze --strict --report

Scribe Plugin

/slop-scan

Consolidated: This command wrapper has been removed. slop-scan is now agent-only via the slop-hunter agent. Invoke directly with Agent(scribe:slop-hunter).

Scan files for AI-generated content markers.

# Usage (now agent-only)
Agent(scribe:slop-hunter)

# Or use the slop-detector skill directly:
Skill(scribe:slop-detector)

/style-learn

Create style profile from examples.

# Usage
/style-learn [FILES] --name NAME

# Options
FILES         Example files to learn from
--name NAME   Profile name
--merge       Merge with existing profile

# Examples
/style-learn good-examples/*.md --name house-style
/style-learn docs/api.md --name api-docs --merge

/doc-polish

Clean up AI-generated content.

# Usage
/doc-polish [FILES] [--style NAME] [--dry-run]

# Options
FILES         Files to polish
--style NAME  Apply learned style
--dry-run     Preview changes without writing

# Examples
/doc-polish README.md
/doc-polish docs/*.md --style house-style
/doc-polish **/*.md --dry-run

/doc-generate

Generate new documentation.

# Usage
/doc-generate TYPE [--style NAME] [--output FILE]

# Options
TYPE          Document type: readme|api|changelog|usage
--style NAME  Apply learned style
--output FILE Output file path

# Examples
/doc-generate readme
/doc-generate api --style api-docs
/doc-generate changelog --output CHANGELOG.md

/doc-verify

Consolidated: This command wrapper has been removed. doc-verify is now agent-only via the doc-verifier agent. Invoke directly with Agent(scribe:doc-verifier).

Validate documentation claims with proof-of-work.

# Usage (now agent-only)
Agent(scribe:doc-verifier)

# Or use the doc-generator skill with verification mode:
Skill(scribe:doc-generator)

Scry Plugin

/scry:record-terminal

Create terminal recording.

# Usage
/record-terminal [COMMAND] [--output FILE] [--format FORMAT]

# Options
COMMAND         Command to record
--output FILE   Output file (default: recording.gif)
--format FORMAT Output format: gif|svg|mp4|tape
--width N       Terminal width
--height N      Terminal height
--speed N       Playback speed multiplier

# Examples
/record-terminal "make test" --output demo.gif
/record-terminal --format svg --width 80 --height 24

/scry:record-browser

Record browser session.

# Usage
/record-browser [URL] [--output FILE] [--actions FILE]

# Options
URL             Starting URL
--output FILE   Output file
--actions FILE  Playwright actions script
--headless      Run headless
--viewport WxH  Viewport size

# Examples
/record-browser "http://localhost:3000" --output demo.mp4
/record-browser --actions test-flow.js --headless

Hookify Plugin

/hookify:install

Install hooks.

# Usage
/hookify:install [HOOK_NAME] [--plugin PLUGIN]

# Options
HOOK_NAME       Specific hook to install
--plugin PLUGIN Install hooks from plugin
--all           Install all available hooks
--dry-run       Preview installation

# Examples
/hookify:install memory-palace-web-processor
/hookify:install --plugin conserve
/hookify:install --all --dry-run

/hookify:configure

Configure hook settings.

# Usage
/hookify:configure [HOOK_NAME] [--enable|--disable] [--set KEY=VALUE]

# Options
HOOK_NAME         Hook to configure
--enable          Enable hook
--disable         Disable hook
--set KEY=VALUE   Set configuration value
--reset           Reset to defaults

# Examples
/hookify:configure memory-palace --set research_mode=cache_first
/hookify:configure context-warning --disable

/hookify:list

List installed hooks.

# Usage
/hookify:list [--plugin PLUGIN] [--status]

# Options
--plugin PLUGIN  Filter by plugin
--status         Show enabled/disabled status
--verbose        Show full configuration

# Examples
/hookify:list
/hookify:list --plugin memory-palace --status

Leyline Plugin

/leyline:reinstall-all-plugins

Refresh all plugins.

# Usage
/reinstall-all-plugins [--force] [--clean]

# Options
--force    Force reinstall even if up-to-date
--clean    Clean install (remove then reinstall)
--verify   Verify installation after reinstall

# Examples
/reinstall-all-plugins
/reinstall-all-plugins --clean --verify

/leyline:update-all-plugins

Update all plugins.

# Usage
/update-all-plugins [--check] [--exclude PLUGINS]

# Options
--check           Check for updates only
--exclude PLUGINS Comma-separated plugins to skip
--major           Include major version updates

# Examples
/update-all-plugins
/update-all-plugins --check
/update-all-plugins --exclude "experimental,beta"

Core plugins: Abstract, Attune, Conserve, Imbue, Sanctum

See also: Skills | Agents | Hooks | Workflows

Superpowers Integration

How Claude Night Market plugins integrate with the superpowers skills.

Last synced: superpowers v5.0.7 (2026-03-31)

Overview

Many Night Market capabilities achieve their full potential when used alongside superpowers. While all plugins work standalone, superpowers provides foundational methodology skills that enhance workflows.

Since v4.0.0, superpowers enforces workflows via hard gates, DOT flowcharts, and mandatory checklists rather than simply describing them. Since v5.0.6, inline self-review replaces subagent review loops, cutting review overhead from ~25 minutes to ~30 seconds.

Installation

# Add the superpowers marketplace
/plugin marketplace add obra/superpowers

# Install the superpowers plugin
/plugin install superpowers@superpowers-marketplace

Dependency Matrix

| Plugin | Component | Type | Superpowers Dependency | Enhancement |
| --- | --- | --- | --- | --- |
| abstract | /create-skill | Command | brainstorming | Socratic questioning |
| abstract | /create-command | Command | brainstorming | Concept development |
| abstract | /create-hook | Command | brainstorming | Security design |
| abstract | /test-skill | Command | test-driven-development | TDD methodology |
| sanctum | /pr | Command | receiving-code-review, requesting-code-review | PR validation |
| sanctum | /pr-review | Command | receiving-code-review | PR analysis |
| sanctum | /fix-pr | Command | receiving-code-review | Comment resolution |
| sanctum | /do-issue | Command | subagent-driven-development, dispatching-parallel-agents, using-git-worktrees | Full workflow |
| spec-kit | /speckit-clarify | Command | brainstorming | Clarification |
| spec-kit | /speckit-plan | Command | writing-plans | Planning |
| spec-kit | /speckit-tasks | Command | executing-plans, systematic-debugging | Task breakdown |
| spec-kit | /speckit-implement | Command | executing-plans, systematic-debugging | Execution |
| spec-kit | /speckit-analyze | Command | systematic-debugging, verification-before-completion | Consistency |
| spec-kit | /speckit-checklist | Command | verification-before-completion | Validation |
| pensive | /full-review | Command | systematic-debugging, verification-before-completion | Debugging + evidence |
| parseltongue | python-testing | Skill | test-driven-development (includes testing-anti-patterns) | TDD + anti-patterns |
| imbue | scope-guard, proof-of-work | Skill | brainstorming, writing-plans, executing-plans, verification-before-completion | Anti-overengineering, evidence-based completion |
| conserve | /optimize-context | Command | systematic-debugging (includes condition-based-waiting) | Smart waiting |
| minister | issue-management | Skill | systematic-debugging | Bug investigation |

Superpowers Skills Referenced

| Skill | Purpose | Used By |
| --- | --- | --- |
| brainstorming | Socratic questioning with hard gates and visual companion | abstract, spec-kit, imbue |
| test-driven-development | RED-GREEN-REFACTOR TDD cycle (includes testing-anti-patterns) | abstract, sanctum, parseltongue |
| receiving-code-review | Technical rigor for evaluating suggestions | sanctum |
| requesting-code-review | Quality gates for code submission | sanctum |
| writing-plans | Structured planning with inline self-review | spec-kit, imbue |
| executing-plans | Continuous task execution (no longer batches) | spec-kit |
| systematic-debugging | Four-phase framework (includes root-cause-tracing, defense-in-depth, condition-based-waiting) | spec-kit, pensive, minister, conserve |
| verification-before-completion | Evidence-based review standards | spec-kit, pensive, imbue |
| subagent-driven-development | Autonomous subagent orchestration (mandatory on capable harnesses) | sanctum |
| dispatching-parallel-agents | Parallel task dispatch for 2+ independent tasks | sanctum |
| using-git-worktrees | Isolated implementation in feature branches | sanctum |
| finishing-a-development-branch | Branch cleanup, merge strategy, and finalization | sanctum |
| writing-skills | Skill authoring with description trap guidance | abstract |

Graceful Degradation

All Night Market plugins work without superpowers:

Without Superpowers

  • Commands: Execute core functionality
  • Skills: Provide standalone guidance
  • Agents: Function with reduced automation

With Superpowers

  • Commands: Enhanced methodology phases
  • Skills: Integrated methodology patterns
  • Agents: Full automation depth

Skill Consolidation Notes (v4.0.0+)

Several standalone skills were merged into parent skills:

| Former Standalone | Now Bundled In | Access |
| --- | --- | --- |
| testing-anti-patterns | test-driven-development | Module file within TDD skill |
| root-cause-tracing | systematic-debugging | Module file within debugging skill |
| defense-in-depth | systematic-debugging | Module file within debugging skill |
| condition-based-waiting | systematic-debugging | Module file within debugging skill |

Deprecated Commands

These superpowers slash commands have shown deprecation notices since v5.0.0. Use the skill equivalents:

| Deprecated | Replacement |
| --- | --- |
| /brainstorm | Skill(superpowers:brainstorming) |
| /write-plan | Skill(superpowers:writing-plans) |
| /execute-plan | Skill(superpowers:executing-plans) |

Key Patterns

Inline Self-Review (v5.0.6)

Superpowers replaced subagent review loops with inline self-review checklists. This cut review time from ~25 minutes to ~30 seconds with comparable defect detection. Night Market review workflows (pensive, sanctum, imbue) should follow this pattern when delegating to superpowers.

SUBAGENT-STOP Gate

Superpowers skills include <SUBAGENT-STOP> blocks that prevent subagents from activating full skill workflows. Night Market dispatch patterns (sanctum:do-issue, conserve:clear-context) should be aware of this when delegating work to subagents with superpowers installed.

Instruction Priority Hierarchy

Superpowers enforces: User instructions > Superpowers skills > Default system prompt. Night Market commands respect this ordering when combining skill invocations.

Context Isolation

All superpowers delegation skills now scope subagent context explicitly. Night Market’s parallel execution patterns should follow the same principle.

Example: /do-issue Workflow

Without Superpowers

1. Parse issue
2. Analyze codebase
3. Implement fix
4. Create PR

With Superpowers

1. Parse issue
2. [using-git-worktrees] Create isolated worktree
3. [subagent-driven-development] Plan subagent tasks
4. [dispatching-parallel-agents] Dispatch parallel work
5. [writing-plans] Create structured plan
6. [test-driven-development] Write failing test
7. Implement fix
8. [requesting-code-review] Inline self-review
9. [finishing-a-development-branch] Cleanup and merge
10. Create PR

For the full Night Market experience:

# 1. Add marketplaces
/plugin marketplace add obra/superpowers
/plugin marketplace add athola/claude-night-market

# 2. Install superpowers (foundational)
/plugin install superpowers@superpowers-marketplace

# 3. Install Night Market plugins
/plugin install sanctum@claude-night-market
/plugin install spec-kit@claude-night-market
/plugin install pensive@claude-night-market

Checking Integration

Verify superpowers is available:

/plugin list
# Should show superpowers@superpowers-marketplace

Commands will automatically detect and use superpowers when available.

Function Extraction Guidelines

Last Updated: 2025-12-06

Overview

This document provides standards and guidelines for function extraction and refactoring in the Claude Night Market plugin ecosystem. Following these guidelines helps keep code maintainable, testable, and readable.

Principles

1. Single Responsibility Principle (SRP)

A function should have only one reason to change.

2. Keep Functions Small

  • Ideal: 10-20 lines of code
  • Acceptable: 20-30 lines with clear logic
  • Maximum: 50 lines with strong justification
  • Never exceed 100 lines without splitting

3. Limited Parameters

  • Ideal: 0-3 parameters
  • Acceptable: 4-5 parameters with clear types
  • Consider object parameter if 6+ parameters

4. Clear Naming

  • Functions should be verbs that describe their action
  • Use consistent naming conventions across the codebase
  • Avoid abbreviations unless widely understood

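A verb-first rename often makes call sites self-explanatory. The names below are hypothetical illustrations of the conventions above, not code from the project:

```python
# BAD - noun-ish, abbreviated name hides what the function does
def usr_dat(d):
    return {k: v for k, v in d.items() if v is not None}

# GOOD - a verb phrase that states the action; no abbreviations
def remove_empty_fields(record):
    """Return a copy of record without None values."""
    return {key: value for key, value in record.items() if value is not None}
```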
When to Extract Functions

Immediate Extraction Required

  1. Function exceeds 30 lines

    # BAD - Too long
    def process_large_content(content):
        lines = content.split('\n')
        filtered_lines = []
        for line in lines:
            if line.strip():
                if not line.startswith('#'):
                    if len(line) < 100:
                        filtered_lines.append(line.strip())
        # ... 20 more lines
    
  2. Function has multiple responsibilities

    # BAD - Multiple responsibilities
    def analyze_and_optimize(content):
        # Analysis part
        complexity = calculate_complexity(content)
        quality = assess_quality(content)
    
        # Optimization part
        optimized = remove_redundancy(content)
        optimized = shorten_sentences(optimized)
        return optimized, complexity, quality
    
  3. Nesting depth exceeds 3 levels

    # BAD - Too nested
    def process_data(data):
        if data:
            for item in data:
                if item.valid:
                    for subitem in item.children:
                        if subitem.active:
                            # Deep nesting - extract this
                            process_subitem(subitem)
    

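A guard-clause rewrite plus a small extracted generator flattens the nesting above. This is a self-contained sketch; the `Item`/`SubItem` dataclasses are hypothetical stand-ins for the real data model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubItem:
    active: bool

@dataclass
class Item:
    valid: bool
    children: List[SubItem] = field(default_factory=list)

def _active_subitems(data):
    """Yield active subitems of valid items, replacing the nested ifs."""
    for item in data or []:
        if not item.valid:
            continue  # guard clause instead of another indent level
        yield from (s for s in item.children if s.active)

def process_data(data):
    processed = []
    for subitem in _active_subitems(data):
        processed.append(subitem)  # stand-in for process_subitem(subitem)
    return processed
```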
Consider Extraction

  1. Function has 4+ parameters

    # CONSIDER - Many parameters
    def create_report(title, content, author, date, format, include_header, include_footer):
        pass
    
    # BETTER - Use configuration object
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ReportConfig:
        title: str
        content: str
        author: str
        date: datetime
        format: str = "pdf"
        include_header: bool = True
        include_footer: bool = True
    
    def create_report(config: ReportConfig):
        pass
    
  2. Complex conditional logic

    # CONSIDER - Complex conditions
    def calculate_rate(user, product, time, location, special_offer):
        if user.premium and product.category in ["electronics", "books"]:
            if time.hour < 12 and location.country == "US":
                if special_offer and not user.used_recently:
                    return 0.9
        # ... more conditions
    
    # BETTER - Extract condition checks
    def _is_eligible_for_discount(user, product, time, location, special_offer):
        return (user.premium and
                product.category in ["electronics", "books"] and
                time.hour < 12 and
                location.country == "US" and
                special_offer and
                not user.used_recently)
    
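With the predicate extracted, the rate function itself shrinks to a readable dispatch. A hedged, self-contained sketch, assuming 0.9 is the discounted multiplier and 1.0 the base rate, with `SimpleNamespace` standing in for the real domain objects:

```python
from types import SimpleNamespace

def _is_eligible_for_discount(user, product, time, location, special_offer):
    return (user.premium and
            product.category in ["electronics", "books"] and
            time.hour < 12 and
            location.country == "US" and
            special_offer and
            not user.used_recently)

def calculate_rate(user, product, time, location, special_offer):
    """Return the price multiplier: 0.9 when eligible, else the base rate."""
    if _is_eligible_for_discount(user, product, time, location, special_offer):
        return 0.9
    return 1.0  # further tiers would get their own extracted predicates

user = SimpleNamespace(premium=True, used_recently=False)
product = SimpleNamespace(category="books")
time = SimpleNamespace(hour=9)
location = SimpleNamespace(country="US")
print(calculate_rate(user, product, time, location, special_offer=True))  # 0.9
```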

Extraction Patterns

1. Extract Method Pattern

Before:

def generate_report(data):
    # Validate data
    if not data:
        raise ValueError("Data cannot be empty")
    if not all(isinstance(item, dict) for item in data):
        raise TypeError("All items must be dictionaries")

    # Process data
    processed = []
    for item in data:
        processed_item = {
            'id': item.get('id'),
            'name': item.get('name', '').title(),
            'value': float(item.get('value', 0))
        }
        processed.append(processed_item)

    # Calculate totals
    total = sum(item['value'] for item in processed)
    average = total / len(processed) if processed else 0

    return {
        'items': processed,
        'summary': {
            'total': total,
            'average': average,
            'count': len(processed)
        }
    }

After:

def generate_report(data):
    """Generate a report from data items."""
    _validate_data(data)
    processed_items = _process_data_items(data)
    summary = _calculate_summary(processed_items)

    return {
        'items': processed_items,
        'summary': summary
    }

def _validate_data(data):
    """Validate input data."""
    if not data:
        raise ValueError("Data cannot be empty")
    if not all(isinstance(item, dict) for item in data):
        raise TypeError("All items must be dictionaries")

def _process_data_items(data):
    """Process individual data items."""
    return [
        {
            'id': item.get('id'),
            'name': item.get('name', '').title(),
            'value': float(item.get('value', 0))
        }
        for item in data
    ]

def _calculate_summary(items):
    """Calculate summary statistics."""
    total = sum(item['value'] for item in items)
    return {
        'total': total,
        'average': total / len(items) if items else 0,
        'count': len(items)
    }

2. Strategy Pattern for Complex Logic

Before:

def optimize_content(content, strategy_type):
    if strategy_type == "aggressive":
        # Remove all emphasis
        lines = content.split('\n')
        cleaned = []
        for line in lines:
            if not line.strip().startswith('**'):
                cleaned.append(line)
        return '\n'.join(cleaned)
    elif strategy_type == "moderate":
        # Shorten code blocks
        # ... 20 lines of logic
    elif strategy_type == "gentle":
        # Only remove images
        # ... 20 lines of logic

After:

from abc import ABC, abstractmethod

class OptimizationStrategy(ABC):
    """Base class for content optimization strategies."""

    @abstractmethod
    def optimize(self, content: str) -> str:
        """Optimize content according to strategy."""
        pass

class AggressiveOptimizationStrategy(OptimizationStrategy):
    """Aggressive content optimization."""

    def optimize(self, content: str) -> str:
        lines = content.split('\n')
        cleaned = [
            line for line in lines
            if not line.strip().startswith('**')
        ]
        return '\n'.join(cleaned)

class ModerateOptimizationStrategy(OptimizationStrategy):
    """Moderate content optimization."""

    def optimize(self, content: str) -> str:
        # Implementation for moderate optimization
        pass

class GentleOptimizationStrategy(OptimizationStrategy):
    """Gentle content optimization."""

    def optimize(self, content: str) -> str:
        # Implementation for gentle optimization
        pass

# Strategy registry
OPTIMIZATION_STRATEGIES = {
    "aggressive": AggressiveOptimizationStrategy(),
    "moderate": ModerateOptimizationStrategy(),
    "gentle": GentleOptimizationStrategy()
}

def optimize_content(content: str, strategy_type: str) -> str:
    """Optimize content using specified strategy."""
    if strategy_type not in OPTIMIZATION_STRATEGIES:
        raise ValueError(f"Unknown strategy: {strategy_type}")

    strategy = OPTIMIZATION_STRATEGIES[strategy_type]
    return strategy.optimize(content)

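The payoff of the registry is extensibility: a new strategy is one class plus one registry entry, with `optimize_content` untouched. A condensed, self-contained sketch of the same pattern (the `NoOpStrategy` is a hypothetical extension):

```python
from abc import ABC, abstractmethod

class OptimizationStrategy(ABC):
    """Minimal restatement of the base class above."""
    @abstractmethod
    def optimize(self, content: str) -> str: ...

class AggressiveOptimizationStrategy(OptimizationStrategy):
    def optimize(self, content: str) -> str:
        return '\n'.join(line for line in content.split('\n')
                         if not line.strip().startswith('**'))

class NoOpStrategy(OptimizationStrategy):
    """Hypothetical extension: registered without touching optimize_content."""
    def optimize(self, content: str) -> str:
        return content

OPTIMIZATION_STRATEGIES = {
    "aggressive": AggressiveOptimizationStrategy(),
    "noop": NoOpStrategy(),
}

def optimize_content(content: str, strategy_type: str) -> str:
    try:
        return OPTIMIZATION_STRATEGIES[strategy_type].optimize(content)
    except KeyError:
        raise ValueError(f"Unknown strategy: {strategy_type}") from None

print(optimize_content("**Bold**\nplain text", "aggressive"))  # plain text
```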
3. Builder Pattern for Complex Construction

Before:

def create_complex_object(name, type, config, options, metadata):
    obj = ComplexObject()
    obj.name = name
    obj.type = type

    # Complex configuration
    if config.get('enabled', True):
        obj.enabled = True
        obj.timeout = config.get('timeout', 30)
        obj.retries = config.get('retries', 3)

    # Options processing
    for key, value in options.items():
        if key.startswith('custom_'):
            obj.custom_fields[key[7:]] = value
        else:
            setattr(obj, key, value)

    # Metadata handling
    obj.created_at = metadata.get('created_at', datetime.now())
    obj.created_by = metadata.get('created_by', 'system')

    return obj

After:

from datetime import datetime
from typing import Any, Dict

class ComplexObjectBuilder:
    """Builder for ComplexObject instances."""

    def __init__(self):
        self._object = ComplexObject()

    def with_name(self, name: str) -> 'ComplexObjectBuilder':
        self._object.name = name
        return self

    def with_type(self, obj_type: str) -> 'ComplexObjectBuilder':
        self._object.type = obj_type
        return self

    def with_config(self, config: Dict[str, Any]) -> 'ComplexObjectBuilder':
        self._object.enabled = config.get('enabled', True)
        self._object.timeout = config.get('timeout', 30)
        self._object.retries = config.get('retries', 3)
        return self

    def with_options(self, options: Dict[str, Any]) -> 'ComplexObjectBuilder':
        for key, value in options.items():
            if key.startswith('custom_'):
                self._object.custom_fields[key[7:]] = value
            else:
                setattr(self._object, key, value)
        return self

    def with_metadata(self, metadata: Dict[str, Any]) -> 'ComplexObjectBuilder':
        self._object.created_at = metadata.get('created_at', datetime.now())
        self._object.created_by = metadata.get('created_by', 'system')
        return self

    def build(self) -> ComplexObject:
        return self._object

# Usage
def create_complex_object(name, type, config, options, metadata):
    return (ComplexObjectBuilder()
            .with_name(name)
            .with_type(type)
            .with_config(config)
            .with_options(options)
            .with_metadata(metadata)
            .build())

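The fluent chain can be exercised end to end. A self-contained sketch with a minimal, hypothetical stand-in for `ComplexObject` (its real definition is not shown above) and a trimmed builder:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict

@dataclass
class ComplexObject:
    """Minimal stand-in for the real class (hypothetical)."""
    name: str = ""
    enabled: bool = True
    timeout: int = 30
    retries: int = 3
    custom_fields: Dict[str, Any] = field(default_factory=dict)
    created_at: datetime = field(default_factory=datetime.now)
    created_by: str = "system"

class ComplexObjectBuilder:
    def __init__(self):
        self._object = ComplexObject()

    def with_name(self, name: str) -> 'ComplexObjectBuilder':
        self._object.name = name
        return self

    def with_config(self, config: Dict[str, Any]) -> 'ComplexObjectBuilder':
        self._object.timeout = config.get('timeout', 30)
        self._object.retries = config.get('retries', 3)
        return self

    def build(self) -> ComplexObject:
        return self._object

obj = (ComplexObjectBuilder()
       .with_name("report")
       .with_config({'timeout': 60})
       .build())
print(obj.name, obj.timeout)  # report 60
```

Each `with_*` step returns the builder, so optional configuration can be skipped without a long positional parameter list.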
Testing Extracted Functions

1. Unit Test Each Extracted Function

import pytest

# Test for _validate_data
def test_validate_data_valid():
    data = [{'id': 1, 'name': 'test'}]
    # Should not raise
    _validate_data(data)

def test_validate_data_empty():
    with pytest.raises(ValueError, match="Data cannot be empty"):
        _validate_data([])

def test_validate_data_invalid_type():
    with pytest.raises(TypeError, match="All items must be dictionaries"):
        _validate_data([{'id': 1}, "invalid"])

2. Test Strategy Implementations

def test_aggressive_optimization():
    content = "**Bold text**\nNormal text\n**More bold**"
    strategy = AggressiveOptimizationStrategy()
    result = strategy.optimize(content)
    assert "Normal text" in result
    assert "**" not in result

3. Integration Tests

def test_generate_report_integration():
    data = [
        {'id': 1, 'name': 'test item', 'value': 100},
        {'id': 2, 'name': 'another item', 'value': 200}
    ]
    report = generate_report(data)

    assert report['summary']['total'] == 300
    assert report['summary']['average'] == 150
    assert len(report['items']) == 2

Code Review Checklist

When reviewing code for function extraction:

Function Size

  • Function is under 30 lines
  • If over 30 lines, there’s a clear justification
  • No function exceeds 100 lines

Responsibilities

  • Function has a single, clear purpose
  • Function name describes its purpose accurately
  • Function doesn’t mix abstraction levels

Parameters

  • Function has 0-5 parameters
  • Parameters are well-typed
  • Related parameters are grouped into objects

Complexity

  • Cyclomatic complexity is under 10
  • Nesting depth is under 4 levels
  • No deeply nested ternary operators

Testability

  • Function can be tested independently
  • Function has no hidden dependencies
  • Side effects are clearly documented

Documentation

  • Function has a clear docstring
  • Parameters are documented
  • Return value is documented
  • Exceptions are documented

Refactoring Workflow

1. Identify Refactoring Candidates

# Find the longest Python files (likely homes for long functions)
find . -name "*.py" -exec wc -l {} \; | sort -n | tail -20

# Find complex functions (manual code review)
# Look for functions with:
# - Multiple return statements
# - Deep nesting
# - Many parameters
# - Mixed responsibilities

2. Create Tests First

# Write failing tests for the current behavior
def test_existing_behavior():
    # Test the function as it exists now
    pass

3. Extract Incrementally

  1. Extract small, private helper functions
  2. Run tests after each extraction
  3. Gradually extract larger functions
  4. Keep the public API stable

4. Optimize Imports and Dependencies

  • Remove unused imports
  • Group related imports
  • Consider circular dependency issues

5. Update Documentation

  • Update function docstrings
  • Update API documentation
  • Add examples for complex functions

Tools and Automation

1. Complexity Analysis

# Using radon (complexity analyzer)
pip install radon
radon cc your_file.py -a

# Using flake8 (bundles the mccabe complexity checker)
pip install flake8
flake8 --max-complexity 10 your_file.py

2. Automated Refactoring Tools

# Using rope (refactoring library; driven from an editor
# integration or its Python API rather than a CLI)
pip install rope

# Using black for formatting (maintains consistency)
pip install black
black your_file.py

3. Pre-commit Hooks

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 4.0.1
    hooks:
      - id: flake8
        args: [--max-complexity=10, --max-line-length=100]

  - repo: https://github.com/psf/black
    rev: 22.3.0
    hooks:
      - id: black
        language_version: python3

Examples from the Codebase

Before: GrowthController.generate_control_strategies()

The original function was 60+ lines and handled multiple responsibilities.

After Refactoring:

def generate_control_strategies(self, growth_rate: float) -> StrategyPlan:
    """Generate detailed control strategies for growth management."""
    strategies = self._select_control_strategies(growth_rate)
    monitoring = self._define_monitoring_needs(strategies)
    implementation = self._plan_implementation(strategies, monitoring)

    return StrategyPlan(strategies, monitoring, implementation)

def _select_control_strategies(self, growth_rate: float) -> List[Strategy]:
    """Select appropriate control strategies based on growth rate."""
    # Extracted strategy selection logic

def _define_monitoring_needs(self, strategies: List[Strategy]) -> MonitoringPlan:
    """Define monitoring requirements for selected strategies."""
    # Extracted monitoring logic

def _plan_implementation(self, strategies: List[Strategy],
                        monitoring: MonitoringPlan) -> ImplementationPlan:
    """Plan implementation steps for strategies and monitoring."""
    # Extracted implementation planning

This refactoring:

  • Reduced main function to 5 lines
  • Created three focused helper functions
  • Made each function independently testable
  • Improved readability and maintainability

Conclusion

Following these function extraction guidelines will:

  1. Improve Maintainability: Smaller, focused functions are easier to understand and modify
  2. Enhance Testability: Each function can be tested in isolation
  3. Increase Reusability: Extracted functions can be reused in different contexts
  4. Reduce Bugs: Simpler functions have fewer edge cases and are easier to verify
  5. Improve Code Review: Smaller functions are easier to review and understand

Remember: The goal is not just to make functions smaller, but to make the code more readable, maintainable, and testable.

Achievement System

Track your learning progress through the Claude Night Market documentation.

How It Works

As you explore the documentation, complete tutorials, and try plugins, you earn achievements. Progress is saved in your browser’s local storage.

Your Progress

0 / 15 achievements unlocked

Available Achievements

Getting Started

| Achievement | Description | Status |
|---|---|---|
| Marketplace Pioneer | Add the Night Market marketplace | |
| Skill Apprentice | Use your first skill | |
| PR Pioneer | Prepare your first pull request | |

Documentation Explorer

| Achievement | Description | Status |
|---|---|---|
| Plugin Explorer | Read all plugin documentation pages | |
| Domain Master | Use all domain specialist plugins | |

Tutorial Completion

| Achievement | Description | Status |
|---|---|---|
| First Steps | Complete Your First Session | |
| Full Cycle | Complete Feature Development Lifecycle | |
| PR Pro | Complete Code Review and PR Workflow | |
| Bug Squasher | Complete Debugging and Issue Resolution | |
| Knowledge Keeper | Complete Memory Palace tutorial | |
| Tutorial Master | Complete all tutorials | |

Plugin Mastery

| Achievement | Description | Status |
|---|---|---|
| Foundation Builder | Install all foundation layer plugins | |
| Utility Expert | Install all utility layer plugins | |
| Full Stack | Install all plugins | |

Advanced

| Achievement | Description | Status |
|---|---|---|
| Spec Master | Complete a full spec-kit workflow | |
| Review Expert | Complete a full pensive review | |
| Palace Architect | Build your first memory palace | |

Reset Progress

Warning: This cannot be undone.

Achievement Tiers

| Tier | Achievements | Badge |
|---|---|---|
| Bronze | 1-5 | Night Market Visitor |
| Silver | 6-10 | Night Market Regular |
| Gold | 11-14 | Night Market Expert |
| Platinum | 15 | Night Market Master |