ContentForge

A Rust-native content creation and multi-platform publishing platform. Single binary. TUI + Web + CLI + MCP.

What is ContentForge?

ContentForge lets you write content once in Markdown and publish adapted versions to multiple platforms -- Twitter/X, LinkedIn, DEV.to, Medium, and more -- from a single binary. No Docker, no cloud services, no SaaS subscriptions.

Why ContentForge?

  • Write once, publish everywhere. Create a blog post, and ContentForge generates a tweet thread, a LinkedIn post, and a DEV.to cross-post, each respecting platform-specific formatting and character limits.
  • Single binary. One cargo install or brew install gives you CLI, TUI, Web UI, and MCP server. No infrastructure to manage.
  • Local-first. Your content lives in a local SQLite database. No data leaves your machine until you publish.
  • AI-native. Built-in LLM integration for drafting, adapting, and reviewing content. Exposes an MCP server so Claude Code can manage your content directly.
  • Developer-friendly. TOML configuration, JSON output mode, full REST API, scriptable CLI.

Quick Install

Homebrew

brew install mbaneshi/tap/contentforge

Cargo

cargo install contentforge

From Source

git clone https://github.com/mbaneshi/contentforge.git
cd contentforge
cargo build --release

First Steps

  1. Install ContentForge
  2. Follow the Quick Start tutorial (5 minutes)
  3. Configure your platform credentials

Features at a Glance

Feature               Description
-------               -----------
Multi-platform        DEV.to, Twitter/X, LinkedIn, Medium, and more
Content adaptation    Auto-adapt long-form to short-form, threads, platform limits
Scheduling            Cron-based scheduling with retry logic
AI agents             Generate, adapt, split, and review content with LLMs
MCP server            Use ContentForge from Claude Code or any MCP client
Analytics             Pull engagement metrics from all platforms into one view
Multiple interfaces   CLI, TUI, Web UI -- all from the same binary

Installation

System Requirements

  • Operating System: macOS, Linux, or Windows
  • Rust: 1.80+ (only needed for building from source)
  • Disk Space: ~20 MB for the binary

No external services (databases, Redis, Docker) are required. ContentForge uses an embedded SQLite database.

Install via Homebrew (macOS / Linux)

The recommended installation method for macOS and Linux:

brew install mbaneshi/tap/contentforge

Verify the installation:

contentforge --version

Install via Cargo

If you have Rust installed:

cargo install contentforge

This builds the release binary and installs it to ~/.cargo/bin/.

Download Pre-built Binaries

Pre-built binaries are available for each release on the GitHub Releases page.

Platform                Binary
--------                ------
macOS (Apple Silicon)   contentforge-aarch64-apple-darwin.tar.gz
macOS (Intel)           contentforge-x86_64-apple-darwin.tar.gz
Linux (x86_64)          contentforge-x86_64-unknown-linux-gnu.tar.gz
Linux (ARM64)           contentforge-aarch64-unknown-linux-gnu.tar.gz
Windows (x86_64)        contentforge-x86_64-pc-windows-msvc.zip

Download, extract, and place the binary in your PATH:

# Example for macOS Apple Silicon
curl -LO https://github.com/mbaneshi/contentforge/releases/latest/download/contentforge-aarch64-apple-darwin.tar.gz
tar xzf contentforge-aarch64-apple-darwin.tar.gz
mv contentforge /usr/local/bin/

Build from Source

Clone the repository and build in release mode:

git clone https://github.com/mbaneshi/contentforge.git
cd contentforge
cargo build --release

The binary is at target/release/contentforge. Copy it to a directory in your PATH:

cp target/release/contentforge /usr/local/bin/

Build with Frontend (Optional)

To include the SvelteKit Web UI in the binary, build the frontend first:

cd frontend
pnpm install
pnpm build
cd ..
cargo build --release

The frontend build output is embedded into the binary via rust-embed.

Shell Completions

Generate shell completions for your shell:

# Bash
contentforge completions bash > ~/.local/share/bash-completion/completions/contentforge

# Zsh
contentforge completions zsh > ~/.zfunc/_contentforge

# Fish
contentforge completions fish > ~/.config/fish/completions/contentforge.fish

Verify Installation

Run the doctor command to check that everything is working:

contentforge doctor

This verifies:

  • Binary version and build info
  • Database initialization
  • Configuration file detection
  • Platform credential status

Next Steps

Quick Start

This tutorial walks you through creating, adapting, and publishing your first content piece with ContentForge. It takes about 5 minutes.

Prerequisites

  • ContentForge installed (Installation guide)
  • A DEV.to API key (the easiest platform to start with)

Step 1: Get a DEV.to API Key

  1. Go to dev.to/settings/extensions
  2. Under "DEV Community API Keys", generate a new key
  3. Copy the key

Step 2: Configure the Platform

contentforge platforms add devto --api-key YOUR_API_KEY

Verify the connection:

contentforge platforms health

Expected output:

Platform    Status    Account
--------    ------    -------
DEV.to      OK        your-username

Step 3: Create Content

Create a new article:

contentforge new \
  --title "Getting Started with Rust Error Handling" \
  --type article \
  --tags rust,error-handling,beginners

This creates a content piece in the idea status and opens your default editor. Write your article in Markdown:

Error handling in Rust is one of the language's strongest features.
Unlike exceptions in other languages, Rust makes error handling
explicit and type-safe through the `Result` and `Option` types.

## The Result Type

The `Result<T, E>` type is an enum with two variants:

- `Ok(T)` -- the operation succeeded with value `T`
- `Err(E)` -- the operation failed with error `E`

...

Save and close the editor. Note the content ID that is printed (e.g., a1b2c3d4-...).

Step 4: Adapt for the Platform

Generate a DEV.to-specific adaptation:

contentforge adapt a1b2c3d4 --platform devto

This creates an adaptation that:

  • Keeps the Markdown format (DEV.to supports Markdown natively)
  • Limits tags to 4 (DEV.to maximum)
  • Adds any canonical URL if configured

To preview the adaptation:

contentforge show a1b2c3d4 --adaptation devto

Step 5: Publish

Publish to DEV.to:

contentforge publish a1b2c3d4 --platform devto

Output:

Published to DEV.to
URL: https://dev.to/yourusername/getting-started-with-rust-error-handling-abc

The content status automatically changes to published.

Step 6: Check Publication Status

contentforge show a1b2c3d4

This shows the content details, all adaptations, and all publication records with live URLs.

Next Steps

Now that you have published your first piece, try these:

  • Adapt for more platforms: contentforge adapt a1b2c3d4 --platform twitter to generate a tweet thread
  • Schedule for later: contentforge schedule a1b2c3d4 --platform twitter --at "2026-03-20T09:00:00Z"
  • Use AI: contentforge generate "Write a blog post about async Rust" to have AI draft content for you
  • Launch the TUI: contentforge tui for a full interactive dashboard
  • Start the web UI: contentforge serve and open http://localhost:3000

See the Content Workflow guide for the full content lifecycle.

Configuration

ContentForge uses a TOML configuration file and supports environment variables for sensitive values like API keys.

Configuration File Location

ContentForge looks for its configuration file at:

OS        Path
--        ----
macOS     ~/.config/contentforge/config.toml
Linux     ~/.config/contentforge/config.toml
Windows   %APPDATA%\contentforge\config.toml

You can also specify a custom path:

contentforge --config /path/to/config.toml <command>

Configuration File Format

# General settings
[general]
# Default editor for content creation
editor = "nvim"
# Default output format (text, json)
output_format = "text"
# Database location (default: ~/.local/share/contentforge/contentforge.db)
database_path = "~/.local/share/contentforge/contentforge.db"

# DEV.to configuration
[platforms.devto]
enabled = true
api_key = "${DEVTO_API_KEY}"  # Reference environment variable

# Twitter/X configuration
[platforms.twitter]
enabled = true
bearer_token = "${TWITTER_BEARER_TOKEN}"

# LinkedIn configuration
[platforms.linkedin]
enabled = true
access_token = "${LINKEDIN_ACCESS_TOKEN}"
author_urn = "urn:li:person:YOUR_ID"

# Medium configuration
[platforms.medium]
enabled = true
integration_token = "${MEDIUM_TOKEN}"

# Scheduling
[schedule]
# Check interval in seconds
poll_interval = 30
# Maximum retry attempts for failed publishes
max_retries = 3
# Default timezone for schedule display
timezone = "America/New_York"

# AI agent settings
[agent]
# LLM provider (openai, anthropic, local)
provider = "anthropic"
# Model to use
model = "claude-sonnet-4-20250514"
# API key for the LLM provider
api_key = "${ANTHROPIC_API_KEY}"

# Web server settings
[server]
# Host to bind to
host = "127.0.0.1"
# Port number
port = 3000
# Enable CORS (useful for development)
cors = false

Environment Variables

Sensitive values like API keys should be set as environment variables rather than stored in the config file. ContentForge supports ${VAR_NAME} syntax in the config file to reference environment variables.
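The `${VAR_NAME}` substitution can be sketched in a few lines of Rust. This is a hypothetical helper for illustration, not ContentForge's actual implementation; unset variables are left as literal placeholders here, which is one of several reasonable behaviors:

```rust
use std::env;

/// Replace every `${VAR_NAME}` occurrence with the value of that
/// environment variable, leaving the placeholder intact if unset.
fn interpolate(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut rest = input;
    while let Some(start) = rest.find("${") {
        out.push_str(&rest[..start]);
        if let Some(end) = rest[start..].find('}') {
            let name = &rest[start + 2..start + end];
            match env::var(name) {
                Ok(val) => out.push_str(&val),
                // Unset variable: keep the literal placeholder.
                Err(_) => out.push_str(&rest[start..start + end + 1]),
            }
            rest = &rest[start + end + 1..];
        } else {
            // Unterminated "${" -- copy the remainder verbatim.
            out.push_str(&rest[start..]);
            rest = "";
        }
    }
    out.push_str(rest);
    out
}
```

With `DEVTO_API_KEY=abc123` in the environment, `interpolate("api_key = \"${DEVTO_API_KEY}\"")` would yield `api_key = "abc123"`.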

Set them in your shell profile:

# ~/.bashrc or ~/.zshrc
export DEVTO_API_KEY="your-devto-api-key"
export TWITTER_BEARER_TOKEN="your-twitter-bearer-token"
export LINKEDIN_ACCESS_TOKEN="your-linkedin-access-token"
export MEDIUM_TOKEN="your-medium-integration-token"
export ANTHROPIC_API_KEY="your-anthropic-api-key"

Alternatively, use a .env file in the ContentForge config directory:

# ~/.config/contentforge/.env
DEVTO_API_KEY=your-devto-api-key
TWITTER_BEARER_TOKEN=your-twitter-bearer-token

Platform Credential Management

Add a Platform via CLI

# DEV.to (API key)
contentforge platforms add devto --api-key YOUR_KEY

# Twitter (bearer token)
contentforge platforms add twitter --bearer-token YOUR_TOKEN

# LinkedIn (OAuth)
contentforge platforms add linkedin --access-token YOUR_TOKEN --author-urn "urn:li:person:ABC123"

# Medium (integration token)
contentforge platforms add medium --token YOUR_TOKEN

List Configured Platforms

contentforge platforms list

Check Platform Health

contentforge platforms health

This calls each platform's health check endpoint to verify credentials are valid.

Remove a Platform

contentforge platforms remove twitter

Database Location

By default, the SQLite database is stored at:

OS        Path
--        ----
macOS     ~/.local/share/contentforge/contentforge.db
Linux     ~/.local/share/contentforge/contentforge.db
Windows   %LOCALAPPDATA%\contentforge\contentforge.db

Override with the database_path setting in the config file or the --db flag:

contentforge --db /path/to/database.db list

Logging

Control log verbosity with the RUST_LOG environment variable:

# Minimal logging
RUST_LOG=warn contentforge serve

# Debug logging for contentforge crates only
RUST_LOG=contentforge=debug contentforge serve

# Trace logging (very verbose)
RUST_LOG=trace contentforge serve

Platform Setup Guide

This guide covers how to set up credentials and configure each supported publishing platform.

DEV.to

Status: Ready | Auth: API Key | Difficulty: Easy

Get Your API Key

  1. Log in to dev.to
  2. Go to Settings > Extensions > DEV Community API Keys
  3. Enter a description (e.g., "ContentForge") and click Generate API Key
  4. Copy the key immediately (it will not be shown again)

Configure

contentforge platforms add devto --api-key YOUR_API_KEY

Capabilities

Feature         Supported
-------         ---------
Articles        Yes
Tags (max 4)    Yes
Series          Yes
Canonical URL   Yes
Images          Yes
Delete          Yes

Limitations

  • Maximum 4 tags per article
  • Tags must be lowercase, no spaces (use hyphens)
  • Rate limit: 30 requests per 30 seconds
  • Article body is Markdown
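The tag rules above (lowercase, no spaces, at most 4) are easy to enforce before submission. A minimal sketch, assuming a hypothetical `normalize_devto_tags` helper that silently drops extra tags:

```rust
/// Enforce DEV.to tag rules: lowercase, spaces replaced with hyphens,
/// empty entries removed, and at most 4 tags kept (extras are dropped).
fn normalize_devto_tags(tags: &[&str]) -> Vec<String> {
    tags.iter()
        .map(|t| t.trim().to_lowercase().replace(' ', "-"))
        .filter(|t| !t.is_empty())
        .take(4)
        .collect()
}
```

For example, `normalize_devto_tags(&["Rust", "Error Handling"])` yields `["rust", "error-handling"]`.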

Twitter/X

Status: Ready | Auth: OAuth 2.0 / Bearer Token | Difficulty: Medium

Get API Access

  1. Go to the Twitter Developer Portal
  2. Create a project and app
  3. Under Authentication, set up OAuth 2.0 (User context) with the tweet.read and tweet.write scopes
  4. Generate a Bearer Token for app-only access, or complete the OAuth 2.0 flow for user context

Warning: Twitter API Tiers

  • Free tier: 500 tweets/month, 1 app
  • Basic ($100/month): 10,000 tweets/month
  • Pro ($5,000/month): 300,000 tweets/month

ContentForge tracks rate limits and will report RateLimited errors with retry-after times.

Configure

contentforge platforms add twitter --bearer-token YOUR_BEARER_TOKEN

Capabilities

Feature             Supported
-------             ---------
Single tweets       Yes
Threads (chains)    Yes
Media attachments   Planned
Delete              Yes

Limitations

  • 280 characters per tweet
  • Threads are published as reply chains (sequential API calls)
  • Rate limit: 200 tweets per 15 minutes (user context)

LinkedIn

Status: Ready | Auth: OAuth 2.0 | Difficulty: Medium

Get API Access

  1. Go to LinkedIn Developer Portal
  2. Create an app
  3. Under Products, request access to Share on LinkedIn and Sign In with LinkedIn using OpenID Connect
  4. Generate an OAuth 2.0 access token with the w_member_social scope

Note: Access Token Expiry

LinkedIn access tokens expire after 60 days. You will need to refresh them periodically. ContentForge will report AuthFailed when the token expires.

Find Your Author URN

You need your LinkedIn person URN (e.g., urn:li:person:ABC123). To find it:

curl -H "Authorization: Bearer YOUR_TOKEN" https://api.linkedin.com/v2/userinfo

The sub field in the response is your person ID. Your URN is urn:li:person:{sub}.

Configure

contentforge platforms add linkedin \
  --access-token YOUR_ACCESS_TOKEN \
  --author-urn "urn:li:person:YOUR_ID"

Capabilities

Feature      Supported
-------      ---------
Text posts   Yes
Articles     Planned
Images       Planned
Delete       Yes

Limitations

  • 3,000 character limit for post text
  • Rich media requires additional API permissions
  • LinkedIn API versioning: ContentForge uses the 202501 version header

Medium

Status: Ready | Auth: Integration Token | Difficulty: Easy

Get Your Integration Token

  1. Log in to Medium
  2. Go to Settings > Security and apps > Integration tokens
  3. Enter a description and generate a token
  4. Copy the token

Warning: Medium API Status

The Medium API is officially deprecated but still functional for creating posts. There is no guarantee of continued availability. Medium does not support post deletion via API.

Configure

contentforge platforms add medium --token YOUR_INTEGRATION_TOKEN

Capabilities

Feature         Supported
-------         ---------
Articles        Yes
Tags (max 5)    Yes
Canonical URL   Yes
Markdown        Yes
Delete          No (API limitation)

Limitations

  • Maximum 5 tags per article
  • No deletion support (Medium API limitation)
  • API is deprecated but functional

YouTube

Status: Planned | Auth: OAuth 2.0 | Difficulty: Medium

YouTube support is planned for Phase 3. It will allow updating video descriptions and metadata (not video uploads).


Instagram

Status: Planned | Auth: Graph API | Difficulty: Hard

Instagram support is planned for Phase 3. It requires a Facebook Business account and Instagram Professional account, plus the Instagram Graph API.


Reddit

Status: Planned | Auth: OAuth 2.0 | Difficulty: Medium

Reddit support is planned for Phase 3 for submitting self-posts and links to subreddits.


Hacker News

Status: Planned | Auth: Cookie-based | Difficulty: Medium

Hacker News support is planned for Phase 3. Since HN has no official API for posting, this will use authenticated web requests.


Substack

Status: Planned | Auth: Cookie-based | Difficulty: Fragile

Substack support is planned but marked as fragile. Substack has no public API; integration relies on reverse-engineered web endpoints that may break without notice.


Checking Platform Health

After configuring platforms, verify they are working:

# Check all platforms
contentforge platforms health

# Check a specific platform
contentforge platforms health --platform devto

This calls each platform's health check endpoint (typically a "get current user" API call) to verify that credentials are valid and the account is accessible.

Content Workflow

ContentForge follows a structured content lifecycle from initial idea to published post with tracked analytics. This guide walks through each stage.

The Content Lifecycle

Idea  -->  Drafting  -->  Review  -->  Ready  -->  Scheduled  -->  Published  -->  Archived

Each piece of content has a status field that tracks its position in this lifecycle. Status transitions can happen manually (via CLI/TUI/Web) or automatically (e.g., publishing changes status to Published).
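The lifecycle maps naturally onto an ordered enum. The sketch below is illustrative, not ContentForge's actual types, and it assumes a simplified forward-only rule (in practice, manual transitions can also move content backwards, e.g. from Review back to Drafting):

```rust
/// Content lifecycle stages, in order. Deriving Ord lets us compare
/// positions in the pipeline directly.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Status {
    Idea,
    Drafting,
    Review,
    Ready,
    Scheduled,
    Published,
    Archived,
}

/// Forward-only transition check: content may advance to any later
/// stage but never move backwards (a simplifying assumption).
fn can_transition(from: Status, to: Status) -> bool {
    to > from
}
```

Under this rule, `can_transition(Status::Ready, Status::Published)` is true while `can_transition(Status::Published, Status::Drafting)` is false.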

Stage 1: Idea

Capture an idea before you forget it. Ideas are lightweight -- just a title and optional notes.

contentforge new --title "Why Rust's borrow checker is your friend" --type article

Or capture just a one-liner:

contentforge new --title "Thread about error handling patterns" --type thread

Content Types

Type         Description                                Best for
----         -----------                                --------
article      Long-form content in Markdown              DEV.to, Medium, Substack
thread       Multi-part content (e.g., tweet threads)   Twitter/X
short_post   Short-form text                            LinkedIn, single tweets
video        Video content metadata                     YouTube
image_post   Image with caption                         Instagram
link_share   URL with commentary                        LinkedIn, Twitter, Reddit

Stage 2: Drafting

Write the canonical version of your content in Markdown. This is the "source of truth" from which all platform adaptations are derived.

# Open in your configured editor
contentforge edit <content-id>

# Or set the body directly
contentforge edit <content-id> --body "Full markdown content here..."

Add tags for organization:

contentforge tag <content-id> --add rust,error-handling,tutorial

Associate with a project:

contentforge edit <content-id> --project contentforge

Stage 3: Adapt

Generate platform-specific versions of your content. Each adaptation respects the target platform's constraints (character limits, formatting, tag limits).

# Adapt for a specific platform
contentforge adapt <content-id> --platform twitter
contentforge adapt <content-id> --platform devto
contentforge adapt <content-id> --platform linkedin

# Adapt for all configured platforms at once
contentforge adapt <content-id> --all

What Adaptation Does

Source Type   Target Platform   Adaptation
-----------   ---------------   ----------
Article       Twitter           Splits into a thread, each tweet under 280 chars
Article       LinkedIn          Extracts key points, trims to 3,000 chars
Article       DEV.to            Keeps Markdown, limits to 4 tags
Article       Medium            Keeps Markdown, limits to 5 tags
Thread        LinkedIn          Combines thread into a single post
Short Post    Twitter           Validates under 280 chars

AI-Powered Adaptation

If an AI agent is configured, adaptations are generated intelligently:

contentforge adapt <content-id> --platform twitter --ai

The AI agent will:

  1. Read the canonical content
  2. Understand the target platform's style and constraints
  3. Generate a platform-native version (not just truncation)
  4. Preserve the core message while adjusting tone

Preview Adaptations

# See the adaptation for a specific platform
contentforge show <content-id> --adaptation twitter

# See all adaptations
contentforge show <content-id> --adaptations

Stage 4: Review (Optional)

Mark content as ready for review:

contentforge status <content-id> --set review

If using AI, request a review:

contentforge review <content-id>

The AI agent checks for:

  • Grammar and clarity
  • Platform-specific best practices
  • Hashtag suggestions
  • Engagement optimization tips

Stage 5: Schedule

Schedule content for publication at a specific time:

# Schedule for a specific time
contentforge schedule <content-id> --platform twitter --at "2026-03-20T09:00:00Z"

# Schedule for multiple platforms
contentforge schedule <content-id> --platform twitter --at "2026-03-20T09:00:00"
contentforge schedule <content-id> --platform linkedin --at "2026-03-20T09:30:00"

# View the schedule
contentforge schedule list

Recurring Schedules

Set up recurring publication rules:

# Every Friday at 9 AM, publish from the "ready" queue
contentforge schedule recurring \
  --name "weekly-roundup" \
  --cron "0 9 * * FRI" \
  --platforms twitter,linkedin

The Scheduling Engine

The scheduling engine runs as a background process (via contentforge daemon or as part of contentforge serve). It:

  1. Polls the schedule table every 30 seconds (configurable)
  2. Finds entries where scheduled_at <= now and status = pending
  3. Publishes via the appropriate adapter
  4. Updates the schedule entry status to published or failed
  5. Retries failed publishes up to 3 times (configurable) with exponential backoff
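The retry delay in step 5 could be computed along these lines. This is a sketch; the base delay and cap are illustrative values, not ContentForge's actual configuration:

```rust
use std::time::Duration;

/// Exponential backoff: base * 2^attempt, capped at `max`.
/// Saturating arithmetic avoids overflow on large attempt counts.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    base.saturating_mul(2u32.saturating_pow(attempt)).min(max)
}
```

With a 1-second base and a 60-second cap, attempts 0, 1, 2, 3 wait 1 s, 2 s, 4 s, 8 s, and later attempts plateau at the cap.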

Stage 6: Publish

Publish immediately or let the scheduler handle it:

# Publish to a specific platform now
contentforge publish <content-id> --platform devto

# Publish to all adapted platforms
contentforge publish <content-id> --all

After publishing, a Publication record is created with:

  • The live URL on the platform
  • The platform-specific post ID
  • The publication timestamp

Stage 7: Track

After publication, ContentForge can pull engagement metrics:

# View analytics for a content piece
contentforge analytics <content-id>

# View analytics across all published content
contentforge analytics --summary

Metrics tracked (where supported by platform API):

  • Views / Impressions
  • Likes / Reactions
  • Shares / Reposts
  • Comments / Replies
  • Link clicks

Bulk Operations

# List all content in a specific status
contentforge list --status drafting

# Publish all ready content to all platforms
contentforge publish --status ready --all

# Archive all content older than 90 days
contentforge archive --older-than 90d
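The `--older-than 90d` value implies a compact duration syntax. A hypothetical parser sketch follows; the supported suffixes here (d, h, m) are an assumption, not a documented list:

```rust
/// Parse a compact duration like "90d", "12h", or "30m" into seconds.
/// Returns None for anything it does not recognize.
fn parse_duration_secs(s: &str) -> Option<u64> {
    if !s.is_ascii() {
        return None; // keep split_at on a safe byte boundary
    }
    let (num, unit) = s.split_at(s.len().checked_sub(1)?);
    let n: u64 = num.parse().ok()?;
    match unit {
        "d" => Some(n * 86_400),
        "h" => Some(n * 3_600),
        "m" => Some(n * 60),
        _ => None,
    }
}
```

So `parse_duration_secs("90d")` yields `Some(7_776_000)` while `parse_duration_secs("90x")` yields `None`.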

Content Organization

Tags

Tags help organize content by topic:

contentforge list --tag rust
contentforge list --tag error-handling

Projects

Group content by project:

contentforge list --project contentforge
contentforge list --project codeilus

AI Agents

ContentForge includes a built-in AI agent pipeline that can generate, adapt, split, and review content using LLMs. AI features are opt-in and the tool is fully functional without them.

Overview

The AI agent pipeline provides four capabilities:

  1. Generate -- Create a full draft from a one-line prompt
  2. Adapt -- Intelligently adapt content for a specific platform
  3. Split -- Break long content into thread parts
  4. Review -- Check content quality and suggest improvements

Configuration

Configure your LLM provider in ~/.config/contentforge/config.toml:

[agent]
provider = "anthropic"        # openai, anthropic, or local
model = "claude-sonnet-4-20250514"  # model identifier
api_key = "${ANTHROPIC_API_KEY}"

Supported providers:

Provider    Models                        Config Key
--------    ------                        ----------
Anthropic   Claude Sonnet, Claude Haiku   ANTHROPIC_API_KEY
OpenAI      GPT-4o, GPT-4o-mini           OPENAI_API_KEY
Local       Ollama, llama.cpp             (no API key needed)

Generate Content

Create a full draft from a prompt:

contentforge generate "Write a technical blog post about Rust's async/await model, targeting intermediate Rust developers"

The agent:

  1. Generates a structured outline
  2. Writes the full article in Markdown
  3. Creates a Content entity with status drafting
  4. Returns the content ID for further editing

Options

# Specify content type
contentforge generate "Tweet about our new release" --type short_post

# Specify target length
contentforge generate "Blog post about error handling" --length 1500

# Specify tags
contentforge generate "Rust tutorial" --tags rust,tutorial,beginners

# Specify a project
contentforge generate "Changelog for v0.2.0" --project contentforge

Adapt Content

Generate intelligent platform-specific adaptations:

contentforge adapt <content-id> --platform twitter --ai

Unlike simple truncation, AI adaptation:

  • Rewrites content in the platform's native style
  • Preserves the core message and key points
  • Respects character limits naturally (not by cutting off)
  • Adds platform-appropriate elements (hashtags for Twitter, formatting for LinkedIn)
  • Generates thread breakpoints that read naturally

Example: Article to Tweet Thread

Given a 1,500-word article about Rust error handling, the AI adapter might produce:

Tweet 1/7:
Rust's error handling is one of its killer features. Here's why the
Result type makes your code more reliable than try/catch -- a thread.

Tweet 2/7:
The Result<T, E> type forces you to handle errors at compile time.
No more "I forgot to catch that exception" bugs in production.

...

Tweet 7/7:
TL;DR: Rust error handling is verbose but intentional. The compiler
catches mistakes before your users do.

If you found this useful, check out the full article: [link]

Split into Threads

For content that needs to be broken into thread parts:

contentforge split <content-id> --platform twitter

The agent:

  1. Analyzes the content structure
  2. Identifies natural break points (paragraph boundaries, topic shifts)
  3. Ensures each part is under the platform's character limit
  4. Adds thread markers (1/N) and continuity phrases
  5. Stores the result as thread_parts in the adaptation
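The splitting steps above can be sketched as a naive greedy splitter on paragraph boundaries. The real AI splitter is smarter (topic shifts, continuity phrases); the 280-character limit and marker reservation here are illustrative:

```rust
const LIMIT: usize = 280;

/// Greedily pack paragraphs into thread parts that stay under the
/// character limit, reserving `reserve` chars for a "1/N"-style marker.
fn split_into_parts(text: &str, reserve: usize) -> Vec<String> {
    let budget = LIMIT - reserve;
    let mut parts: Vec<String> = Vec::new();
    let mut current = String::new();
    for para in text.split("\n\n").map(str::trim).filter(|p| !p.is_empty()) {
        let needed = if current.is_empty() {
            para.len()
        } else {
            current.len() + 2 + para.len() // +2 for the blank-line join
        };
        if needed <= budget {
            if !current.is_empty() {
                current.push_str("\n\n");
            }
            current.push_str(para);
        } else {
            if !current.is_empty() {
                parts.push(current.clone());
                current.clear();
            }
            // A single paragraph may still exceed the budget; a real
            // splitter would subdivide it at sentence boundaries.
            current.push_str(para);
        }
    }
    if !current.is_empty() {
        parts.push(current);
    }
    parts
}
```

A short text fits in one part; two 200-character paragraphs with a 6-character marker reservation split into two parts.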

Review Content

Get AI feedback on your content before publishing:

contentforge review <content-id>

The review agent checks:

  • Clarity: Is the message clear and well-structured?
  • Grammar: Are there any grammatical or spelling issues?
  • Engagement: Will this perform well on the target platforms?
  • Platform fit: Does the adaptation match platform conventions?
  • Hashtags: Suggests relevant hashtags (for Twitter, LinkedIn)
  • Timing: Suggests optimal posting times based on content type

Review Output

Review for "Rust Error Handling" (article)

Quality: 8/10

Suggestions:
- Consider adding a concrete code example in the introduction
- The section on `?` operator could be shorter for social media adaptation
- Suggested hashtags: #RustLang #ErrorHandling #Programming

Platform Notes:
- Twitter thread: Good length (7 tweets), consider a hook in tweet 1
- LinkedIn: Add a question at the end to drive comments
- DEV.to: Add a cover image for better visibility

Agent Pipeline

For fully automated content creation, chain the agents:

# Generate, adapt for all platforms, and schedule
contentforge pipeline \
  --prompt "Write about our v0.2.0 release" \
  --platforms twitter,linkedin,devto \
  --schedule "2026-03-20T09:00:00Z"

This runs the full pipeline:

  1. Generate -- Creates the canonical content
  2. Adapt -- Generates adaptations for each platform
  3. Review -- Checks quality and makes improvements
  4. Schedule -- Queues for publication at the specified time

Using with MCP

The AI agent capabilities are also available via MCP, so Claude Code or other AI assistants can orchestrate the full pipeline. See the MCP Integration guide for details.

Cost Awareness

AI operations make LLM API calls which cost money. ContentForge shows estimated costs before executing:

contentforge generate "Blog post about Rust" --estimate
# Estimated: ~2,000 tokens input, ~3,000 tokens output
# Estimated cost: $0.02 (Claude Sonnet)

Use --confirm to skip the confirmation prompt:

contentforge generate "Blog post about Rust" --confirm

MCP Integration

ContentForge implements the Model Context Protocol (MCP) and can be used as a tool server for AI assistants like Claude Code.

What is MCP?

MCP is a protocol that allows AI assistants to use external tools. When ContentForge runs as an MCP server, an AI assistant can create content, adapt it for platforms, schedule publication, and check analytics -- all through natural language.

Setup with Claude Code

Step 1: Start the MCP Server

Add ContentForge to your Claude Code MCP configuration. Edit your Claude Code settings (typically ~/.config/claude-code/mcp.json or the project-level .mcp.json):

{
  "mcpServers": {
    "contentforge": {
      "command": "contentforge",
      "args": ["mcp"]
    }
  }
}

Step 2: Verify

In Claude Code, the ContentForge tools will appear automatically. You can verify by asking Claude:

"What ContentForge tools are available?"

Transport Modes

ContentForge supports two MCP transport modes:

Mode    Command                  Use Case
----    -------                  --------
stdio   contentforge mcp         Claude Code, local AI tools
SSE     contentforge mcp --sse   Web-based AI clients

For Claude Code, stdio (the default) is the correct choice.

Available Tools

create_content

Create a new content piece.

Parameters:

Parameter      Type     Required   Description
---------      ----     --------   -----------
title          string   yes        Content title
body           string   yes        Markdown body
content_type   string   yes        article, thread, short_post, etc.
tags           array    no         Tags for organization
project        string   no         Associated project name

Example prompt:

"Create a new article titled 'Understanding Rust Lifetimes' with tags rust and tutorial."

list_content

List content filtered by status, project, or type.

Parameters:

Parameter      Type     Required   Description
---------      ----     --------   -----------
status         string   no         Filter by status (idea, drafting, published, etc.)
project        string   no         Filter by project
content_type   string   no         Filter by content type
limit          number   no         Max results (default 20)

Example prompt:

"List all my draft articles."

get_content

Get a specific content piece by ID.

Parameters:

Parameter   Type     Required   Description
---------   ----     --------   -----------
id          string   yes        Content UUID

adapt_content

Generate a platform-specific adaptation.

Parameters:

Parameter   Type      Required   Description
---------   ----      --------   -----------
id          string    yes        Content UUID
platform    string    yes        Target platform (twitter, devto, etc.)
use_ai      boolean   no         Use AI for intelligent adaptation

Example prompt:

"Adapt my latest article for Twitter as a thread."

publish

Publish content to a platform.

Parameters:

Parameter   Type     Required   Description
---------   ----     --------   -----------
id          string   yes        Content UUID
platform    string   yes        Target platform

Example prompt:

"Publish the Rust lifetimes article to DEV.to."

schedule

Schedule content for future publication.

Parameters:

Parameter      Type     Required   Description
---------      ----     --------   -----------
id             string   yes        Content UUID
platform       string   yes        Target platform
scheduled_at   string   yes        ISO 8601 datetime

Example prompt:

"Schedule the Twitter thread for tomorrow at 9 AM EST."

list_platforms

List configured platform accounts and their health status.

Parameters: None

get_analytics

Get engagement metrics for published content.

Parameters:

Parameter   Type     Required   Description
---------   ----     --------   -----------
id          string   no         Content UUID (omit for all)

Example Conversations

Full Publishing Workflow

You: "Write a blog post about why developers should use Rust for CLI tools, then publish it to DEV.to and create a Twitter thread for it."

Claude: Uses create_content to draft the article, then adapt_content for DEV.to and Twitter, then publish to both platforms. Reports the live URLs.

Scheduled Campaign

You: "I have three articles ready. Schedule them for this week -- one on Monday, Wednesday, and Friday at 9 AM on LinkedIn and Twitter."

Claude: Uses list_content to find ready articles, then schedule for each platform and date.

Analytics Check

You: "How did my posts perform this week?"

Claude: Uses get_analytics and summarizes views, likes, and engagement across platforms.

Security Considerations

  • MCP over stdio communicates only with the parent process (Claude Code). No network exposure.
  • Platform credentials are read from your local config. The MCP server does not accept credentials as tool parameters.
  • All publish actions are explicit -- the AI must call the publish tool; no automatic publishing happens.
  • Audit: All operations are logged via the standard tracing framework. Set RUST_LOG=contentforge=info to see MCP tool invocations.

Architecture Overview

This page provides a high-level architecture overview for contributors. For the full deep-dive, see ARCHITECTURE.md in the repository.

Design Principles

  1. Layered architecture -- interfaces at the top, domain logic in the middle, infrastructure at the bottom. Dependencies flow downward only.
  2. Single binary -- all interfaces (CLI, TUI, Web, MCP) compile into one binary. No separate deployments.
  3. Trait-based extensibility -- the Publisher trait is the extension point for platform adapters. Adding a new platform requires implementing one trait.
  4. SQLite only -- no external database. The embedded SQLite database with WAL mode handles concurrent reads from the API server.

System Layers

Interface Layer

The top layer provides four ways to interact with ContentForge:

Interface   Crate              Technology            Purpose
---------   -----              ----------            -------
CLI         contentforge-cli   Clap 4 (derive)       Scriptable command-line access
TUI         contentforge-tui   Ratatui + Crossterm   Interactive terminal dashboard
Web         contentforge-api   Axum + SvelteKit      Browser-based rich interface
MCP         contentforge-mcp   rmcp                  AI assistant integration

All four interfaces share the same domain logic and database. They are different frontends to the same system.

Service Layer

The contentforge-api crate provides the Axum HTTP server that backs the Web UI and WebSocket clients. (The CLI calls domain logic directly for simple operations but shares the same crate dependencies.)

Key responsibilities:

  • REST endpoints for CRUD operations
  • WebSocket connections for real-time updates
  • Static file serving for the embedded SvelteKit frontend
  • Request validation and error mapping

Domain Layer

The core business logic lives in three crates:

  • contentforge-core -- Data types (Content, Platform, ScheduleEntry, etc.) and error types. Zero dependencies on infrastructure.
  • contentforge-agent -- AI pipeline that uses LLMs for content generation, adaptation, and review. Depends on rig-core.
  • contentforge-schedule -- Scheduling engine that polls for due entries and dispatches to publishers.

Infrastructure Layer

  • contentforge-db -- SQLite database with migrations, WAL mode, and repository pattern.
  • contentforge-publish -- Platform adapter trait and implementations. Makes HTTP calls to platform APIs.
  • contentforge-analytics -- Pulls engagement metrics from platform APIs.

Data Flow

User Input (any interface)
        |
        v
   Content CRUD (contentforge-db)
        |
        v
   AI Adaptation (contentforge-agent, optional)
        |
        v
   Platform Adaptation stored (contentforge-db)
        |
        v
   Schedule or Publish (contentforge-schedule / contentforge-publish)
        |
        v
   Platform API call (contentforge-publish adapters)
        |
        v
   Publication record stored (contentforge-db)
        |
        v
   Analytics collected (contentforge-analytics)

Key Traits

Publisher

#[async_trait]
pub trait Publisher: Send + Sync {
    fn platform(&self) -> Platform;
    fn validate(&self, adaptation: &PlatformAdaptation) -> Result<(), ContentForgeError>;
    async fn publish(&self, content: &Content, adaptation: &PlatformAdaptation) -> Result<Publication, ContentForgeError>;
    async fn delete(&self, publication: &Publication) -> Result<(), ContentForgeError>;
    async fn health_check(&self) -> Result<(), ContentForgeError>;
}

This is the only trait you need to implement to add a new platform. The PublisherRegistry manages all registered adapters and provides publish_all() for multi-platform publishing.
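To make the extension point concrete, here is a std-only sketch of a hypothetical adapter. It uses simplified, synchronous stand-ins for the real types (the shipped trait is async via async_trait, and Platform, PlatformAdaptation, and ContentForgeError live in contentforge-core); the Mastodon platform and field names are illustrative assumptions:

```rust
// Sketch only: a hypothetical Mastodon adapter with simplified, synchronous
// stand-ins for the real async trait and the types in contentforge-core.

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Platform {
    Mastodon, // hypothetical platform, not in the shipped enum
}

pub struct PlatformAdaptation {
    pub body: String,
}

#[derive(Debug)]
pub enum ContentForgeError {
    ContentTooLong { limit: usize, actual: usize },
}

pub trait Publisher {
    fn platform(&self) -> Platform;
    fn validate(&self, adaptation: &PlatformAdaptation) -> Result<(), ContentForgeError>;
}

pub struct MastodonPublisher {
    pub char_limit: usize,
}

impl Publisher for MastodonPublisher {
    fn platform(&self) -> Platform {
        Platform::Mastodon
    }

    // Reject over-limit adaptations before any network call is made.
    fn validate(&self, adaptation: &PlatformAdaptation) -> Result<(), ContentForgeError> {
        let len = adaptation.body.chars().count();
        if len > self.char_limit {
            Err(ContentForgeError::ContentTooLong { limit: self.char_limit, actual: len })
        } else {
            Ok(())
        }
    }
}
```

The real adapter would additionally implement the async publish, delete, and health_check methods and be registered with the PublisherRegistry.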

Error Handling

Two-tier approach:

  • Library crates use ContentForgeError (thiserror) with typed variants for each error category (publish failed, rate limited, auth failed, content too long, etc.)
  • Application boundaries use anyhow::Error for wrapping with context

See crates/contentforge-core/src/error.rs for the full error type.
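For illustration, here is a std-only sketch of what the thiserror derive generates for a few error variants. The variant names and fields below are assumptions; the actual definitions are in crates/contentforge-core/src/error.rs, where #[derive(thiserror::Error)] produces the Display and Error impls written by hand here:

```rust
use std::fmt;

// Illustrative subset of a ContentForgeError-style enum. The real type derives
// these impls with thiserror; this hand-rolled version shows the mechanics.
#[derive(Debug)]
pub enum ContentForgeError {
    RateLimited { retry_after_secs: u64 },
    AuthFailed(String),
    ContentTooLong { limit: usize, actual: usize },
}

impl fmt::Display for ContentForgeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::RateLimited { retry_after_secs } => {
                write!(f, "rate limited, retry after {retry_after_secs}s")
            }
            Self::AuthFailed(platform) => {
                write!(f, "authentication failed for {platform}")
            }
            Self::ContentTooLong { limit, actual } => {
                write!(f, "content too long: {actual} chars (limit {limit})")
            }
        }
    }
}

impl std::error::Error for ContentForgeError {}
```

At application boundaries these typed errors are wrapped with anyhow::Context so callers see both the category and the surrounding operation.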

Crate Map

ContentForge is organized as a Cargo workspace with 11 crates. This page describes what each crate does and how they relate.

Dependency Diagram

contentforge-app
├── contentforge-cli
├── contentforge-tui
├── contentforge-mcp
├── contentforge-api
│   ├── contentforge-publish
│   ├── contentforge-agent
│   ├── contentforge-schedule
│   └── contentforge-analytics
├── contentforge-db
└── contentforge-core (shared by all)

Crate Descriptions

contentforge-core

Path: crates/contentforge-core
Dependencies: serde, chrono, uuid, thiserror

The foundation crate with zero internal dependencies. Contains all domain types:

  • Content -- the central entity (title, body, status, tags, adaptations, media)
  • ContentStatus -- lifecycle enum (Idea, Drafting, Review, Ready, Scheduled, Published, Archived)
  • ContentType -- what kind of content (Article, Thread, ShortPost, Video, ImagePost, LinkShare)
  • Platform -- supported platforms enum with metadata (char limits, markdown support, etc.)
  • PlatformAdaptation -- platform-specific version of content
  • PlatformCredential -- authentication credential types (API key, OAuth2, token, cookie)
  • PlatformAccount -- a configured platform connection
  • Publication -- record of a successful publish
  • ScheduleEntry -- a scheduled publication event
  • RecurringSchedule -- cron-based recurring rules
  • ContentForgeError -- the project-wide error type
  • MediaAttachment -- attached files with MIME types

contentforge-db

Path: crates/contentforge-db
Dependencies: contentforge-core, rusqlite, serde_json, anyhow

SQLite persistence layer:

  • init_db(path) -- opens/creates database with WAL mode, runs migrations
  • init_memory_db() -- in-memory database for testing
  • ContentRepo -- CRUD operations for content (insert, get_by_id, list_by_status, update_status, delete)
  • Migration framework in migrations.rs with append-only SQL files

Schema tables: content, adaptations, media, platform_accounts, publications, schedule, recurring_schedules, analytics.


contentforge-publish

Path: crates/contentforge-publish
Dependencies: contentforge-core, reqwest, async-trait, serde_json

Platform adapter trait and implementations:

  • Publisher trait -- the interface every adapter implements (platform, validate, publish, delete, health_check)
  • PublisherRegistry -- container for all configured adapters with get() and publish_all()
  • DevToPublisher -- DEV.to Forem API adapter
  • TwitterPublisher -- Twitter/X API v2 adapter (single tweets + threads)
  • LinkedInPublisher -- LinkedIn REST API adapter
  • MediumPublisher -- Medium API adapter

contentforge-agent

Path: crates/contentforge-agent
Dependencies: contentforge-core, rig-core, serde_json, tokio

AI agent pipeline:

  • Content generation from prompts
  • Intelligent platform adaptation (not just truncation)
  • Thread splitting with natural break points
  • Content review and quality scoring
  • Uses rig-core for LLM provider abstraction
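As a rough illustration of the splitting step, here is a std-only greedy splitter that packs whole sentences under a per-post character limit. The function name and the naive ". " sentence segmentation are assumptions for the sketch; the real pipeline uses an LLM to choose more natural break points:

```rust
// Greedy thread-splitter sketch: pack whole sentences into parts that stay
// under a per-post character limit. A single sentence longer than the limit
// becomes its own over-limit part (the AI path handles that case better).
pub fn split_thread(text: &str, limit: usize) -> Vec<String> {
    let mut parts: Vec<String> = Vec::new();
    let mut current = String::new();
    for sentence in text.split_inclusive(". ") {
        if !current.is_empty()
            && current.chars().count() + sentence.chars().count() > limit
        {
            parts.push(current.trim_end().to_string());
            current = String::new();
        }
        current.push_str(sentence);
    }
    if !current.trim().is_empty() {
        parts.push(current.trim_end().to_string());
    }
    parts
}
```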

contentforge-schedule

Path: crates/contentforge-schedule
Dependencies: contentforge-core, contentforge-db, contentforge-publish, cron, chrono, tokio

Scheduling engine:

  • Polls the schedule table on a configurable interval
  • Dispatches due entries to the appropriate publisher
  • Handles retries with exponential backoff
  • Manages recurring schedules via cron expressions
  • Runs as a background Tokio task
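The backoff step can be sketched in a few lines. The base delay, cap, and doubling factor below are illustrative assumptions, not the engine's actual values:

```rust
use std::time::Duration;

// Exponential backoff sketch: the delay doubles with each retry attempt,
// starting from a base and capped at a maximum.
pub fn retry_delay(attempt: u32, base: Duration, cap: Duration) -> Duration {
    base.saturating_mul(2u32.saturating_pow(attempt)).min(cap)
}
```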

contentforge-analytics

Path: crates/contentforge-analytics
Dependencies: contentforge-core, contentforge-db, reqwest

Engagement metrics collection:

  • Periodically fetches metrics from platform APIs (views, likes, shares, comments, clicks)
  • Stores snapshots in the analytics table
  • Provides aggregation queries for dashboards

contentforge-api

Path: crates/contentforge-api
Dependencies: contentforge-core, contentforge-db, contentforge-publish, contentforge-agent, contentforge-schedule, contentforge-analytics, axum, tower-http, rust-embed

The Axum HTTP server:

  • REST endpoints under /api/ for all CRUD operations
  • WebSocket at /api/ws for real-time updates
  • Embedded SvelteKit static files served for all non-API routes
  • CORS configuration for development
  • Request validation and error responses

contentforge-cli

Path: crates/contentforge-cli
Dependencies: contentforge-core, contentforge-db, contentforge-publish, contentforge-api, clap

Command-line interface:

  • new -- create content
  • list -- list content with filters
  • show -- display content details
  • edit -- edit content body
  • adapt -- create platform adaptations
  • publish -- publish to platforms
  • schedule -- schedule future publications
  • platforms -- manage platform accounts
  • analytics -- view engagement metrics
  • serve -- start the web server
  • tui -- launch the TUI
  • mcp -- start the MCP server
  • daemon -- run the scheduling daemon
  • doctor -- diagnose configuration issues

contentforge-tui

Path: crates/contentforge-tui
Dependencies: contentforge-core, contentforge-db, contentforge-api, ratatui, crossterm

Terminal user interface:

  • Dashboard with content list, platform status, and schedule overview
  • Content editor with Markdown preview
  • Platform adaptation preview
  • Schedule management
  • Keyboard-driven navigation

contentforge-mcp

Path: crates/contentforge-mcp
Dependencies: contentforge-core, contentforge-db, contentforge-publish, contentforge-agent, rmcp

MCP server implementation:

  • stdio transport for Claude Code
  • SSE transport for web clients
  • Exposes tools: create_content, list_content, adapt_content, publish, schedule, list_platforms, get_analytics
  • Handles JSON-RPC requests per the MCP specification

contentforge-app

Path: crates/contentforge-app
Dependencies: all other crates, tokio, tracing-subscriber

Binary entry point:

  • Parses CLI arguments
  • Initializes logging (tracing-subscriber with env-filter)
  • Initializes the database
  • Dispatches to the appropriate interface (CLI, TUI, Web, MCP)
  • Wires all crates together

CLI Reference

Complete reference for all ContentForge CLI commands.

Global Options

contentforge [OPTIONS] <COMMAND>

Options:
  --config <PATH>      Path to config file (default: ~/.config/contentforge/config.toml)
  --db <PATH>          Path to database file (default: ~/.local/share/contentforge/contentforge.db)
  --format <FORMAT>    Output format: text, json (default: text)
  -v, --verbose        Increase log verbosity (can be repeated: -vv, -vvv)
  -h, --help           Print help
  -V, --version        Print version

Commands

new

Create a new content piece.

contentforge new [OPTIONS]

Option            Type    Required  Description
--title <TEXT>    string  yes       Content title
--type <TYPE>     string  yes       Content type (article, thread, short_post, video, image_post, link_share)
--body <TEXT>     string  no        Content body (opens editor if omitted)
--tags <TAGS>     string  no        Comma-separated tags
--project <NAME>  string  no        Associated project
--no-edit         flag    no        Skip opening the editor

Examples:

# Create an article and open the editor
contentforge new --title "Rust Error Handling Guide" --type article --tags rust,errors

# Create with inline body
contentforge new --title "Quick tip" --type short_post --body "Use ? operator for clean error propagation" --no-edit

# Create and associate with a project
contentforge new --title "v0.2.0 Release Notes" --type article --project contentforge

list

List content with optional filters.

contentforge list [OPTIONS]

Option             Type    Description
--status <STATUS>  string  Filter by status
--type <TYPE>      string  Filter by content type
--project <NAME>   string  Filter by project
--tag <TAG>        string  Filter by tag
--limit <N>        number  Maximum results (default: 50)
--offset <N>       number  Skip first N results

Examples:

# List all drafts
contentforge list --status drafting

# List articles tagged "rust"
contentforge list --type article --tag rust

# JSON output for scripting
contentforge list --status ready --format json

show

Display details of a content piece.

contentforge show <ID> [OPTIONS]

Option               Type    Description
--adaptations        flag    Show all platform adaptations
--adaptation <PLAT>  string  Show adaptation for a specific platform
--publications       flag    Show publication records

Examples:

# Show content details
contentforge show a1b2c3d4

# Show with all adaptations
contentforge show a1b2c3d4 --adaptations

# Show Twitter adaptation only
contentforge show a1b2c3d4 --adaptation twitter

edit

Edit a content piece.

contentforge edit <ID> [OPTIONS]

Option            Type    Description
--title <TEXT>    string  Update the title
--body <TEXT>     string  Update the body (opens editor if omitted)
--tags <TAGS>     string  Replace tags (comma-separated)
--project <NAME>  string  Update project association

status

Change the status of a content piece.

contentforge status <ID> --set <STATUS>

Valid statuses: idea, drafting, review, ready, scheduled, published, archived.


adapt

Create a platform-specific adaptation.

contentforge adapt <ID> [OPTIONS]

Option             Type    Required  Description
--platform <PLAT>  string  yes*      Target platform
--all              flag    yes*      Adapt for all configured platforms
--ai               flag    no        Use AI for intelligent adaptation

*One of --platform or --all is required.

Examples:

# Adapt for Twitter
contentforge adapt a1b2c3d4 --platform twitter

# AI-powered adaptation for LinkedIn
contentforge adapt a1b2c3d4 --platform linkedin --ai

# Adapt for all platforms
contentforge adapt a1b2c3d4 --all

publish

Publish content to a platform.

contentforge publish <ID> [OPTIONS]

Option             Type    Required  Description
--platform <PLAT>  string  yes*      Target platform
--all              flag    yes*      Publish to all adapted platforms

*One of --platform or --all is required.

Examples:

# Publish to DEV.to
contentforge publish a1b2c3d4 --platform devto

# Publish to all adapted platforms
contentforge publish a1b2c3d4 --all

schedule

Schedule content for future publication.

contentforge schedule <SUBCOMMAND>

schedule add

contentforge schedule add <ID> --platform <PLAT> --at <DATETIME>

Option             Type    Required  Description
--platform <PLAT>  string  yes       Target platform
--at <DATETIME>    string  yes       ISO 8601 datetime

schedule list

contentforge schedule list [OPTIONS]

Option             Type    Description
--status <STATUS>  string  Filter by schedule status (pending, published, failed)
--platform <PLAT>  string  Filter by platform

schedule cancel

contentforge schedule cancel <SCHEDULE-ID>

schedule recurring

contentforge schedule recurring --name <NAME> --cron <EXPR> --platforms <PLATS>

Examples:

# Schedule a tweet for tomorrow morning
contentforge schedule add a1b2c3d4 --platform twitter --at "2026-03-20T09:00:00Z"

# List pending schedules
contentforge schedule list --status pending

# Set up a weekly recurring schedule
contentforge schedule recurring --name "weekly-roundup" --cron "0 9 * * FRI" --platforms twitter,linkedin

platforms

Manage platform accounts.

contentforge platforms <SUBCOMMAND>

platforms add

contentforge platforms add <PLATFORM> [OPTIONS]

Platform-specific options:

  • devto: --api-key <KEY>
  • twitter: --bearer-token <TOKEN>
  • linkedin: --access-token <TOKEN> --author-urn <URN>
  • medium: --token <TOKEN>

platforms list

contentforge platforms list

platforms health

contentforge platforms health [--platform <PLAT>]

platforms remove

contentforge platforms remove <PLATFORM>

analytics

View engagement metrics.

contentforge analytics [ID] [OPTIONS]

Option             Type    Description
<ID>               string  Content ID (omit for summary)
--summary          flag    Show aggregate metrics
--platform <PLAT>  string  Filter by platform

generate

Generate content using AI.

contentforge generate <PROMPT> [OPTIONS]

Option            Type    Description
--type <TYPE>     string  Content type (default: article)
--tags <TAGS>     string  Comma-separated tags
--project <NAME>  string  Associated project
--length <N>      number  Target word count
--estimate        flag    Show cost estimate only
--confirm         flag    Skip confirmation prompt

review

Get AI feedback on content.

contentforge review <ID>

serve

Start the web server.

contentforge serve [OPTIONS]

Option         Type    Description
--host <HOST>  string  Bind address (default: 127.0.0.1)
--port <PORT>  number  Port number (default: 3000)
--cors         flag    Enable CORS headers

tui

Launch the terminal UI.

contentforge tui

mcp

Start the MCP server.

contentforge mcp [OPTIONS]

Option  Type  Description
--sse   flag  Use SSE transport instead of stdio

daemon

Run the scheduling daemon.

contentforge daemon [OPTIONS]

Option             Type    Description
--interval <SECS>  number  Poll interval in seconds (default: 30)

doctor

Diagnose configuration and environment issues.

contentforge doctor

completions

Generate shell completions.

contentforge completions <SHELL>

Supported shells: bash, zsh, fish, powershell, elvish.

REST API Reference

ContentForge exposes a REST API when running in server mode (contentforge serve). All endpoints are under the /api/ prefix.

Base URL

http://localhost:3000/api

Authentication

The API currently does not require authentication; it is designed for local use. With the default bind address of 127.0.0.1, only processes on the same machine can reach it.

Content Endpoints

List Content

GET /api/content

Query Parameters:

Parameter  Type    Description
status     string  Filter by content status
type       string  Filter by content type
project    string  Filter by project
tag        string  Filter by tag
limit      number  Max results (default: 50)
offset     number  Pagination offset

Response:

{
  "items": [
    {
      "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "title": "Rust Error Handling",
      "body": "# Rust Error Handling\n\n...",
      "content_type": "article",
      "status": "drafting",
      "tags": ["rust", "error-handling"],
      "project": "blog",
      "created_at": "2026-03-19T10:00:00Z",
      "updated_at": "2026-03-19T14:30:00Z"
    }
  ],
  "total": 42
}

Create Content

POST /api/content

Request Body:

{
  "title": "Rust Error Handling",
  "body": "# Rust Error Handling\n\nError handling in Rust...",
  "content_type": "article",
  "tags": ["rust", "error-handling"],
  "project": "blog"
}

Response: 201 Created

{
  "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "title": "Rust Error Handling",
  "status": "idea",
  "created_at": "2026-03-19T10:00:00Z"
}

Get Content

GET /api/content/:id

Response:

{
  "id": "a1b2c3d4-...",
  "title": "Rust Error Handling",
  "body": "...",
  "content_type": "article",
  "status": "drafting",
  "tags": ["rust"],
  "project": "blog",
  "adaptations": [
    {
      "platform": "twitter",
      "body": "Rust's error handling...",
      "thread_parts": ["Tweet 1...", "Tweet 2..."]
    }
  ],
  "media": [],
  "created_at": "2026-03-19T10:00:00Z",
  "updated_at": "2026-03-19T14:30:00Z"
}

Update Content

PUT /api/content/:id

Request Body: (partial updates supported)

{
  "title": "Updated Title",
  "body": "Updated body...",
  "tags": ["rust", "tutorial"],
  "status": "ready"
}

Response: 200 OK with the updated content object.

Delete Content

DELETE /api/content/:id

Response: 204 No Content


Adaptation Endpoints

Create Adaptation

POST /api/content/:id/adapt

Request Body:

{
  "platform": "twitter",
  "use_ai": true
}

Response: 201 Created

{
  "platform": "twitter",
  "body": "Rust's error handling is one of its killer features...",
  "thread_parts": [
    "1/ Rust's error handling is one of its killer features...",
    "2/ The Result<T, E> type forces you to handle errors..."
  ],
  "canonical_url": null
}

Get Adaptations

GET /api/content/:id/adaptations

Response:

{
  "adaptations": [
    {
      "platform": "twitter",
      "body": "...",
      "thread_parts": ["...", "..."]
    },
    {
      "platform": "devto",
      "body": "...",
      "title": "Rust Error Handling Guide"
    }
  ]
}

Publishing Endpoints

Publish

POST /api/content/:id/publish

Request Body:

{
  "platform": "devto"
}

Response: 200 OK

{
  "publication": {
    "id": "pub-uuid-...",
    "platform": "dev_to",
    "url": "https://dev.to/username/rust-error-handling-abc",
    "platform_post_id": "1234567",
    "published_at": "2026-03-19T15:00:00Z"
  }
}

Publish to All

POST /api/content/:id/publish-all

Response:

{
  "results": [
    {
      "platform": "dev_to",
      "status": "success",
      "url": "https://dev.to/..."
    },
    {
      "platform": "twitter",
      "status": "success",
      "url": "https://x.com/i/status/..."
    },
    {
      "platform": "linkedin",
      "status": "error",
      "error": "Authentication failed"
    }
  ]
}

List Publications

GET /api/content/:id/publications

Schedule Endpoints

Create Schedule Entry

POST /api/schedule

Request Body:

{
  "content_id": "a1b2c3d4-...",
  "platform": "twitter",
  "scheduled_at": "2026-03-20T09:00:00Z"
}

Response: 201 Created

List Schedule

GET /api/schedule

Query Parameters:

Parameter  Type    Description
status     string  Filter (pending, published, failed)
platform   string  Filter by platform

Cancel Schedule Entry

DELETE /api/schedule/:id

Response: 204 No Content


Platform Endpoints

List Platforms

GET /api/platforms

Response:

{
  "platforms": [
    {
      "platform": "dev_to",
      "display_name": "username",
      "enabled": true,
      "healthy": true
    },
    {
      "platform": "twitter",
      "display_name": "@handle",
      "enabled": true,
      "healthy": true
    }
  ]
}

Platform Health Check

GET /api/platforms/health

Analytics Endpoints

Get Analytics for Content

GET /api/content/:id/analytics

Response:

{
  "analytics": [
    {
      "platform": "dev_to",
      "url": "https://dev.to/...",
      "views": 1250,
      "likes": 45,
      "comments": 12,
      "captured_at": "2026-03-19T18:00:00Z"
    }
  ]
}

Get Analytics Summary

GET /api/analytics/summary

WebSocket

Real-time Updates

WS /api/ws

Connect to receive real-time events:

{"type": "publish_success", "content_id": "...", "platform": "twitter", "url": "..."}
{"type": "publish_failed", "content_id": "...", "platform": "linkedin", "error": "..."}
{"type": "schedule_triggered", "schedule_id": "...", "content_id": "..."}
{"type": "analytics_updated", "content_id": "...", "platform": "dev_to"}

Error Responses

All errors follow a consistent format:

{
  "error": {
    "code": "content_not_found",
    "message": "Content not found: a1b2c3d4-..."
  }
}

Common error codes:

Code                     HTTP Status  Description
content_not_found        404          Content ID does not exist
platform_not_configured  400          Platform adapter not set up
publish_failed           502          Platform API call failed
rate_limited             429          Platform rate limit reached
auth_failed              401          Platform credentials invalid
content_too_long         400          Exceeds platform char limit
validation_error         422          Invalid request body
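
Clients that branch on error codes can treat the table above as a lookup. The following is an illustrative sketch of that mapping, not the actual code in contentforge-api:

```rust
// Map an error code string from the REST API to its HTTP status, per the
// table above. The 500 fallback for unknown codes is an assumption.
pub fn http_status(code: &str) -> u16 {
    match code {
        "content_not_found" => 404,
        "platform_not_configured" | "content_too_long" => 400,
        "publish_failed" => 502,
        "rate_limited" => 429,
        "auth_failed" => 401,
        "validation_error" => 422,
        _ => 500,
    }
}
```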

MCP Tools Reference

Complete reference for all MCP tools exposed by ContentForge.

Overview

ContentForge implements the Model Context Protocol (MCP) and exposes the following tools when running as an MCP server (contentforge mcp).

Tools

create_content

Create a new content piece.

Input Schema:

{
  "type": "object",
  "properties": {
    "title": {
      "type": "string",
      "description": "The title of the content piece"
    },
    "body": {
      "type": "string",
      "description": "The content body in Markdown format"
    },
    "content_type": {
      "type": "string",
      "enum": ["article", "thread", "short_post", "video", "image_post", "link_share"],
      "description": "The type of content"
    },
    "tags": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Tags for organizing the content"
    },
    "project": {
      "type": "string",
      "description": "The project this content belongs to"
    }
  },
  "required": ["title", "body", "content_type"]
}

Output: JSON object with the created content's ID, title, and status.


list_content

List content pieces with optional filtering.

Input Schema:

{
  "type": "object",
  "properties": {
    "status": {
      "type": "string",
      "enum": ["idea", "drafting", "review", "ready", "scheduled", "published", "archived"],
      "description": "Filter by content status"
    },
    "content_type": {
      "type": "string",
      "description": "Filter by content type"
    },
    "project": {
      "type": "string",
      "description": "Filter by project name"
    },
    "limit": {
      "type": "number",
      "description": "Maximum number of results (default: 20)"
    }
  }
}

Output: JSON array of content summaries (id, title, status, content_type, tags, updated_at).


get_content

Get full details of a specific content piece.

Input Schema:

{
  "type": "object",
  "properties": {
    "id": {
      "type": "string",
      "description": "The UUID of the content piece"
    }
  },
  "required": ["id"]
}

Output: Full content object including body, adaptations, media, and publication records.


adapt_content

Generate a platform-specific adaptation of content.

Input Schema:

{
  "type": "object",
  "properties": {
    "id": {
      "type": "string",
      "description": "The UUID of the content to adapt"
    },
    "platform": {
      "type": "string",
      "enum": ["twitter", "linkedin", "devto", "medium", "youtube", "instagram", "reddit", "hackernews", "substack"],
      "description": "Target platform for adaptation"
    },
    "use_ai": {
      "type": "boolean",
      "description": "Use AI for intelligent adaptation (default: false)"
    }
  },
  "required": ["id", "platform"]
}

Output: The generated adaptation object with platform-specific body, title, and thread parts (if applicable).


publish

Publish content to a specific platform.

Input Schema:

{
  "type": "object",
  "properties": {
    "id": {
      "type": "string",
      "description": "The UUID of the content to publish"
    },
    "platform": {
      "type": "string",
      "description": "Target platform"
    }
  },
  "required": ["id", "platform"]
}

Output: Publication record with the live URL, platform post ID, and publication timestamp.


schedule

Schedule content for future publication.

Input Schema:

{
  "type": "object",
  "properties": {
    "id": {
      "type": "string",
      "description": "The UUID of the content to schedule"
    },
    "platform": {
      "type": "string",
      "description": "Target platform"
    },
    "scheduled_at": {
      "type": "string",
      "description": "ISO 8601 datetime for when to publish"
    }
  },
  "required": ["id", "platform", "scheduled_at"]
}

Output: Schedule entry with the schedule ID, content ID, platform, and scheduled time.


list_platforms

List all configured platform accounts and their health status.

Input Schema:

{
  "type": "object",
  "properties": {}
}

Output: Array of platform accounts with platform name, display name, enabled status, and health check result.


get_analytics

Get engagement metrics for published content.

Input Schema:

{
  "type": "object",
  "properties": {
    "id": {
      "type": "string",
      "description": "Content UUID (omit for summary of all content)"
    }
  }
}

Output: Analytics data including views, likes, shares, comments, and clicks per platform.

Transport Configuration

stdio (default)

{
  "mcpServers": {
    "contentforge": {
      "command": "contentforge",
      "args": ["mcp"]
    }
  }
}

SSE

{
  "mcpServers": {
    "contentforge": {
      "url": "http://localhost:3000/mcp/sse"
    }
  }
}

Error Handling

MCP tool calls return errors in the standard MCP error format:

{
  "isError": true,
  "content": [
    {
      "type": "text",
      "text": "Platform Twitter/X not configured. Run 'contentforge platforms add twitter' to set up credentials."
    }
  ]
}

Error messages are designed to be human-readable and actionable, so the AI assistant can relay them to the user or take corrective action.