Batch import: drop 10 podcasts, get clips while you sleep
Published: April 22, 2026
Podcasters who run more than one show have the same complaint about every clipping tool on the market: you can only process one episode at a time. Interview shows with a weekly cadence can stack up five to ten hours of footage on a Sunday, and staring at a progress bar five times in a row is the opposite of what you bought the tool for.
SwiftyClip v1.0.4 closes that gap. Drop ten videos into the app at once and SwiftyClip queues them, analyzes the first, renders your clips, then moves on to the next. You close your laptop and come back to a library of clip candidates for the whole week.
How it works
The drop zone already accepted an array of file URLs. What changed is how the engine tracks them. A new published importQueue state on ClippingEngine surfaces every imported file as a Sendable snapshot with a display name and a status — queued, analyzing, or done. The existing workspace UI still opens when the first analysis finishes; the queue keeps processing in the background.
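As a concrete sketch, the snapshot shape described above might look like the following. The type and field names here are assumptions based on this post, not the shipping API:

```swift
import Foundation

// Hypothetical shapes modeled on the description above; the real app's
// names and fields may differ. In the app, the array would be published
// from ClippingEngine (e.g. via @Published or @Observable) so the
// SwiftUI queue table can observe it.
enum ImportStatus: String, Sendable {
    case queued, analyzing, done
}

// An immutable value snapshot: safe to hand across concurrency domains,
// carrying no reference to the live asset being analyzed.
struct ImportQueueItem: Identifiable, Sendable {
    let id = UUID()
    let displayName: String
    var status: ImportStatus = .queued
}
```

Because each element is a plain value rather than a live asset, the UI can render the queue at any moment without coordinating with the analysis pass.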
Concretely, importVideos([URL]) now appends to the queue rather than short-circuiting to a single asset. Analysis runs one video at a time (the on-device transcription model can only be in one place at once), and the UI shows a compact table of the queue so you can see what's up next.
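The one-at-a-time behavior boils down to a small drain loop. Here is a self-contained sketch using a Swift actor; it illustrates the sequencing idea only and is not SwiftyClip's actual engine code:

```swift
import Foundation

// Illustrative only: a queue that runs one async job at a time, the way
// the engine sequences per-video analysis. All names are hypothetical.
actor SequentialQueue {
    private var jobs: [@Sendable () async -> Void] = []
    private var isDraining = false

    func enqueue(_ job: @escaping @Sendable () async -> Void) {
        jobs.append(job)
        guard !isDraining else { return }
        isDraining = true          // set before suspending, so only one drain runs
        Task { await drain() }
    }

    private func drain() async {
        // Exactly one job is awaited at a time; jobs enqueued during an
        // await simply extend the loop.
        while let job = popFirst() {
            await job()
        }
        isDraining = false
    }

    private func popFirst() -> (@Sendable () async -> Void)? {
        jobs.isEmpty ? nil : jobs.removeFirst()
    }
}
```

In this shape, a call like importVideos([URL]) would append one job per file, and each job would mark its queue item analyzing, run transcription, then mark it done.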
Why one-at-a-time, not parallel
A few people have asked why we don't parallelize. The short answer: the Neural Engine is faster at one video at a time than at two in parallel. Apple's ANE is a single accelerator; splitting WhisperKit inference across two concurrent jobs actually runs slower than sequencing them, because the work stops being batch-friendly. On an M2 Pro, sequential analysis of a 60-minute podcast runs in under five minutes; two in parallel takes nine.
The longer answer is that parallelism lets the thermal envelope spike unpredictably. Users with fanless M-series Macs (MacBook Air, iPad Pro) get throttled when both the ANE and GPU fight for thermal budget. The queue approach lets the system breathe between jobs and the whole batch finishes sooner on average.
What this unlocks
If you run more than one show, the practical workflow changes. You stop treating SwiftyClip as a per-episode tool and start treating it as an overnight job. Drop your week's worth of interviews on Sunday evening, leave the Mac plugged in, wake up to a library of scored clips across every show. Review them in the morning with a cup of coffee, render the top three from each show with the aspect-ratio switcher, and schedule the week.
For agencies the math is even better. The Studio tier already shares a project library across seats via CloudKit. Combined with batch import, an agency can drop all their clients' episodes on one Mac mini and have every creator-side editor wake up to candidates in the morning without any manual hand-off.
What ships next
A few follow-ups we're already planning:
- Folder drops: drop a folder and SwiftyClip grabs every recognized video file inside (with reasonable extension filters).
- Overnight schedule: "start tonight at 11pm" option that holds the queue until low-power hours.
- MCP tool for queue state: a clip.queueStatus tool so Claude Code can poll the queue and decide what to render next automatically.
- Per-item presets: let each queued import carry its own preset (interview / solo / stream) instead of a global default.
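To give a flavor of the folder-drop item, here is a hedged sketch of what the extension filter could look like. The function name and extension list are illustrative, not the planned implementation:

```swift
import Foundation

// Hypothetical sketch of a folder-drop filter: collect recognized video
// files from a dropped folder. The extension set is illustrative.
let videoExtensions: Set<String> = ["mp4", "mov", "m4v"]

func videoFiles(inFolder folder: URL) -> [URL] {
    guard let enumerator = FileManager.default.enumerator(
        at: folder, includingPropertiesForKeys: nil
    ) else { return [] }
    return enumerator
        .compactMap { $0 as? URL }
        .filter { videoExtensions.contains($0.pathExtension.lowercased()) }
        .sorted { $0.lastPathComponent < $1.lastPathComponent }
}
```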
For the current feature set, see the changelog, the first-clip guide, and the podcast-to-shorts guide, which we'll update with the batch workflow this week.
Batch import is available on every tier — Free caps at 5 renders per month in total (not per batch), Starter unlocks unlimited. If you've been running a multi-show workflow elsewhere, this is the week to try the Starter tier.