Set up Claude Code to clip your podcast automatically

Published: April 23, 2026

Imagine this: you finish recording a two-hour podcast. Instead of spending the next day scrubbing through it, you simply tell an AI assistant, "Find the three most compelling, under-60-second clips from this episode, make them about the future of AI, add bold captions, and save them to my desktop." You walk away, and when you return, the files are waiting for you. This isn't science fiction; it's an agentic workflow, and you can set it up today with SwiftyClip and Claude Code.

This tutorial dives deep into the Model Context Protocol (MCP), SwiftyClip's secret weapon for automation. By enabling SwiftyClip's MCP server, you turn the app into a tool that AI agents can operate. We'll use Anthropic's Claude Code as our agent, but the principles apply to any capable AI assistant.

Step 1: Install SwiftyClip and Initial Setup

If you haven't already, your first step is to install SwiftyClip. Download it, drag it to your Applications folder, and run it once to complete the initial setup of its on-device models. This is the foundation of our workflow. The power of the MCP comes from its ability to control the sophisticated analysis engine you've already installed on your Mac.

Step 2: Enable the Model Context Protocol (MCP) Server

SwiftyClip ships with a local MCP server that listens for commands from authorized clients. It's disabled by default for security. To turn it on, open SwiftyClip, go to the menu bar, and navigate to Settings > Agents.

[Screenshot: SwiftyClip's Settings window with the 'Agents' tab selected and the 'Enable MCP Server' toggle highlighted.]

You'll see a toggle switch labeled "Enable MCP Server." Flip it on. The app will confirm that the server is running and display a port number, which is typically 8989. This means SwiftyClip is now listening for commands on localhost:8989. Keep this window open or remember the port number.
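Before moving on, you can sanity-check that the server is actually listening. This is a minimal Python sketch, assuming the default port 8989 reported in the step above; adjust it if SwiftyClip displays a different number:

```python
import socket

def mcp_listening(host: str = "localhost", port: int = 8989, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    status = "listening" if mcp_listening() else "not reachable -- is the MCP toggle on?"
    print(f"SwiftyClip MCP server on localhost:8989 is {status}")
```

If the check fails, flip the toggle off and on again and confirm no other app is using the port.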

Step 3: Create the Claude Desktop Configuration File

Claude Code needs explicit permission before it can access local tools; this is a deliberate security measure. You grant that permission by creating a specific JSON configuration file in your home directory.

Open a text editor and create a new file. Paste the following content into it:

{
  "version": "1.0",
  "tools": [
    {
      "name": "swiftyclip_mcp",
      "type": "http",
      "description": "An AI tool for finding, styling, and exporting video clips from long-form content. Use it to analyze videos and generate short-form clips with captions.",
      "endpoint": "http://localhost:8989/openapi.json"
    }
  ]
}

Save this file as claude_desktop_config.json in your user home folder (e.g., /Users/yourname/claude_desktop_config.json). This file tells Claude that a tool named swiftyclip_mcp exists and that it can learn how to use it by fetching the API schema from the provided endpoint—the very same one your local SwiftyClip MCP server is now hosting.
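If you'd rather generate this file from a script than paste it by hand, a small Python helper can build and validate the JSON before writing it. The tool name, endpoint, and file location all come from the steps above; nothing here is new configuration:

```python
import json
from pathlib import Path

# The same config shown above, expressed as a Python dict.
config = {
    "version": "1.0",
    "tools": [
        {
            "name": "swiftyclip_mcp",
            "type": "http",
            "description": (
                "An AI tool for finding, styling, and exporting video clips "
                "from long-form content."
            ),
            "endpoint": "http://localhost:8989/openapi.json",
        }
    ],
}

def render_config(cfg: dict) -> str:
    """Serialize the config and round-trip it to catch malformed JSON early."""
    text = json.dumps(cfg, indent=2)
    assert json.loads(text) == cfg  # round-trip sanity check
    return text

if __name__ == "__main__":
    path = Path.home() / "claude_desktop_config.json"
    path.write_text(render_config(config))
    print(f"Wrote {path}")
```

The round-trip check is cheap insurance: a stray trailing comma or smart quote from a text editor is the most common reason Claude silently fails to pick up a tool.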

Restart your Claude Code desktop app to ensure it loads the new configuration.

Step 4: Prompt Claude Code to Process Your Podcast

Now for the exciting part. Ensure the MCP server is running in SwiftyClip. Open Claude Code and start a new conversation. You will now "brief" the AI on its task. Your prompt needs to be specific. The more detail you provide, the better the result.

Here is an example prompt you can adapt:

Hi Claude. I need you to process a video for me using the `swiftyclip_mcp` tool. The video is located at `~/Documents/Podcast/episode_14_raw.mp4`. I need you to find the top 3 most interesting clips that are between 45 and 75 seconds long. The clips should be about the challenges of building an AI startup. Once you've identified the clips, please export them with the 'Bold & Punchy' caption style and save them in `~/Desktop/NewClips/`. Please name the files clip-1.mp4, clip-2.mp4, and clip-3.mp4.

When you send this prompt, Claude will recognize that the request calls for the swiftyclip_mcp tool. It will communicate with the SwiftyClip app over localhost to execute the commands needed to fulfill your request.

Step 5: Review the Agent's Output

Claude will think for a moment and then start showing its work. You will see it formulate a plan, which might look something like this:

Okay, I will process the video at `~/Documents/Podcast/episode_14_raw.mp4`.

1. First, I'll call `swiftyclip_mcp.analyze_video` to process the entire file.
2. Then, I'll use `swiftyclip_mcp.find_clips` with the topic "challenges of building an AI startup" and a duration between 45 and 75 seconds.
3. Finally, I will loop through the top 3 results and call `swiftyclip_mcp.export_clip` for each, using the 'Bold & Punchy' style and saving them to `~/Desktop/NewClips/`.

Executing Step 1 now...
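Under the hood, each step of that plan corresponds to a tool call against the MCP server. The exact request schema depends on SwiftyClip's OpenAPI document; the sketch below only illustrates the shape of the three calls, using the endpoint names from the plan above with hypothetical, guessed parameter names:

```python
VIDEO = "~/Documents/Podcast/episode_14_raw.mp4"
OUT_DIR = "~/Desktop/NewClips/"

# Hypothetical request bodies. Real field names come from the schema
# the server publishes at http://localhost:8989/openapi.json.
analyze_call = {"tool": "swiftyclip_mcp.analyze_video", "args": {"path": VIDEO}}

find_call = {
    "tool": "swiftyclip_mcp.find_clips",
    "args": {
        "topic": "challenges of building an AI startup",
        "min_seconds": 45,
        "max_seconds": 75,
        "limit": 3,
    },
}

# One export call per result, matching the clip-1/clip-2/clip-3 naming
# requested in the prompt.
export_calls = [
    {
        "tool": "swiftyclip_mcp.export_clip",
        "args": {
            "clip_index": i,
            "caption_style": "Bold & Punchy",
            "output": f"{OUT_DIR}clip-{i + 1}.mp4",
        },
    }
    for i in range(3)
]
```

You never write these payloads yourself; Claude constructs them from your plain-English prompt. Seeing the shape, though, makes it easier to debug when the agent reports an error from the server.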

You don't need to do anything but watch. The agent is now in control, running the commands, waiting for SwiftyClip to complete the analysis, and then exporting the final results. Once it's done, it will confirm the task is complete. Navigate to the output folder (~/Desktop/NewClips/ in this example), and you'll find your three perfectly clipped, captioned, and ready-to-post videos.

This agentic workflow marks a real shift in content creation. By connecting capable AI agents to specialized local software like SwiftyClip, you can automate complex, time-consuming tasks, freeing you to focus on what matters most: creating great content.