Frequently Asked Questions
Honest, direct answers to your most common questions about how SwiftyClip works, our business model, and our commitment to your privacy.
Does SwiftyClip really run entirely on my Mac?
Yes, 100%. All transcription, AI analysis, and video rendering happen locally on your machine. We use the Mac's built-in Apple Neural Engine and GPU (via Metal) to accelerate these tasks. No content is ever uploaded to a cloud server for processing, which guarantees your privacy and eliminates processing queues.
How does SwiftyClip compare to Opus Clip / Submagic / Vugola on pricing?
SwiftyClip is a one-time purchase, not a recurring subscription. Cloud services like Opus Clip, Submagic, or Vugola charge monthly fees based on upload minutes or credits, which can become expensive for active creators. Our 'Lifetime' license gives you unlimited local processing for a single payment. This model provides predictable costs and significantly lower total cost of ownership over time, especially for high-volume workflows.
What macOS and hardware do I need?
You need a Mac with an Apple Silicon chip (M1, M2, M3, M4 series or newer) running macOS 14 (Sonoma) or later. Apple Silicon is required for its high-performance Neural Engine, which is essential for on-device transcription and analysis. We do not support Intel-based Macs due to performance constraints. 8GB of RAM is the minimum, but 16GB or more is recommended for smoother performance with large 4K video files.
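If you want to verify compatibility from a script, here is a minimal Swift sketch (our illustrative example, not code shipped with the app) that tests both requirements using the standard `sysctl` and `ProcessInfo` APIs:

```swift
import Darwin
import Foundation

// Hypothetical preflight check: confirms Apple Silicon and macOS 14+.
func isAppleSilicon() -> Bool {
    var value: Int32 = 0
    var size = MemoryLayout<Int32>.size
    // "hw.optional.arm64" reports 1 on Apple Silicon Macs.
    let result = sysctlbyname("hw.optional.arm64", &value, &size, nil, 0)
    return result == 0 && value == 1
}

let sonoma = OperatingSystemVersion(majorVersion: 14, minorVersion: 0, patchVersion: 0)
let supported = isAppleSilicon()
    && ProcessInfo.processInfo.isOperatingSystemAtLeast(sonoma)
print(supported ? "Supported Mac" : "Requires Apple Silicon and macOS 14+")
```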
How accurate is the transcription compared to cloud competitors?
Our on-device transcription, powered by a customized version of Whisper, achieves accuracy comparable to leading cloud services. In our internal tests with clear audio, we see word error rates below 5%, which is competitive with services that use similar underlying technology. Accuracy can vary with audio quality, background noise, and accents, but because processing is local, you can re-transcribe as needed without extra cost or delay.
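To make the "word error rate" figure concrete: WER is the number of word-level substitutions, deletions, and insertions divided by the number of words in the reference transcript, usually computed via Levenshtein distance over words. A minimal sketch of that standard calculation (illustrative only, not our benchmark harness):

```swift
// WER = (substitutions + deletions + insertions) / reference word count.
func wordErrorRate(reference: String, hypothesis: String) -> Double {
    let ref = reference.lowercased().split(separator: " ").map(String.init)
    let hyp = hypothesis.lowercased().split(separator: " ").map(String.init)
    guard !ref.isEmpty, !hyp.isEmpty else {
        return ref.isEmpty && hyp.isEmpty ? 0 : 1
    }
    var dist = [[Int]](repeating: [Int](repeating: 0, count: hyp.count + 1),
                       count: ref.count + 1)
    for i in 0...ref.count { dist[i][0] = i }
    for j in 0...hyp.count { dist[0][j] = j }
    for i in 1...ref.count {
        for j in 1...hyp.count {
            let cost = ref[i - 1] == hyp[j - 1] ? 0 : 1
            dist[i][j] = min(dist[i - 1][j] + 1,        // deletion
                             dist[i][j - 1] + 1,        // insertion
                             dist[i - 1][j - 1] + cost) // substitution
        }
    }
    return Double(dist[ref.count][hyp.count]) / Double(ref.count)
}

// Example: 1 error over 20 reference words = 5% WER.
```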
How does the face-tracked reframing work?
SwiftyClip uses Apple's Vision framework to detect faces in your video. Once a primary speaker is identified, the system creates a virtual camera that smoothly follows their face, keeping them centered in the frame. It's not a simple crop; it uses easing functions to create natural-looking pans and tilts, mimicking a human camera operator. This ensures the speaker remains the focal point in vertical formats like 9:16 without jarring jumps.
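For the technically curious, the core idea can be sketched in a few lines of Swift: detect the largest face with Vision, then ease the virtual camera toward it each frame. This is an illustrative simplification, not our production code, and the easing factor below is an arbitrary example value:

```swift
import Vision
import CoreGraphics

// Find the largest detected face (treated as the primary speaker).
// Returns a bounding box in Vision's normalized coordinates.
func detectPrimaryFace(in pixelBuffer: CVPixelBuffer) -> CGRect? {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
    return request.results?
        .max(by: { $0.boundingBox.width < $1.boundingBox.width })?
        .boundingBox
}

// Exponential smoothing acts as a simple easing function: the virtual
// camera drifts toward the face center each frame instead of snapping,
// producing pan/tilt motion that reads as a human operator.
func smoothedCenter(current: CGPoint, target: CGPoint,
                    easing: CGFloat = 0.15) -> CGPoint {
    CGPoint(x: current.x + (target.x - current.x) * easing,
            y: current.y + (target.y - current.y) * easing)
}
```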
Will you add Windows / Linux / iPad support?
We are currently focused exclusively on macOS. Our performance and privacy model relies heavily on Apple's integrated hardware and software stack, specifically the Neural Engine and frameworks like Metal and Vision. Porting to Windows or Linux would require a complete re-architecture and couldn't guarantee the same level of performance or security. iPadOS support is a possibility in the future as the hardware is very capable, but it is not on our immediate roadmap.
What does the MCP server let AI agents do exactly?
The MCP (Multi-Clip Project) server is a local-only API that allows AI agents or other scripts on your machine to interact with SwiftyClip programmatically. It does not connect to the internet. An agent can use it to query the status of rendering jobs, submit new videos for processing from a folder, retrieve transcripts, or get a list of generated clips. It's designed for automating your content workflow, for example, letting an agent automatically create clips from a new video file dropped into a specific directory.
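As a rough illustration, an agent or script talks to the server the way any local HTTP client would. The port and endpoint path below are hypothetical placeholders, not our documented API; the point is that the server binds only to localhost, so nothing leaves the machine:

```swift
import Foundation

// Hypothetical query for render job status; port 8787 and the /jobs
// path are placeholder assumptions for this sketch.
let url = URL(string: "http://127.0.0.1:8787/jobs")!
let task = URLSession.shared.dataTask(with: url) { data, _, error in
    if let data, let body = String(data: data, encoding: .utf8) {
        print("Render jobs:", body) // e.g. a JSON list of job IDs and statuses
    } else if let error {
        print("MCP server not reachable:", error)
    }
}
task.resume()
RunLoop.main.run(until: Date().addingTimeInterval(2)) // keep the script alive
```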
How is the Lifetime license different from a subscription?
A Lifetime license is a one-time purchase that grants you access to SwiftyClip indefinitely for the current major version and all subsequent minor updates (e.g., all 1.x versions). It includes unlimited local processing. This contrasts with a subscription, which requires ongoing monthly or annual payments to maintain access. A lifetime license does not include free upgrades to future major versions (e.g., a hypothetical version 2.0), which may be a paid upgrade.
What happens to my scheduled posts if I cancel?
Since SwiftyClip is a one-time purchase, there is no subscription to 'cancel.' If you're referring to integrations with social media platforms, SwiftyClip does not directly schedule posts. It exports video files to your local drive. You would then use your preferred scheduling tool (like Buffer, Later, or the platform's native scheduler) to upload and schedule the exported clips. Your scheduled posts in those third-party services are unaffected by anything you do in SwiftyClip.
Can I use SwiftyClip commercially or for clients?
Yes. The Lifetime license permits full commercial use. You can create clips for your own business, for social media channels you manage, or for clients as part of a service you provide. There are no restrictions on monetizing the content you create with the app.
How do I export my clips — watermark, quality, formats?
You have full control over exports. By default, there are no watermarks. You can export clips in H.264 or HEVC (H.265) codecs at resolutions up to 4K. Quality is controlled via a bitrate setting, allowing you to balance file size and visual fidelity. Clips are exported as .mp4 or .mov files, making them compatible with all social media platforms and video editors.
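Exports of this kind are typically done with Apple's AVFoundation framework. As a rough illustration of what an HEVC export looks like at that layer (a sketch with placeholder paths, not SwiftyClip's actual export code):

```swift
import AVFoundation

// Minimal HEVC (H.265) export sketch. Fine-grained bitrate control would
// instead use AVAssetWriter with AVVideoCompressionPropertiesKey /
// AVVideoAverageBitRateKey; the preset below picks quality automatically.
let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/source.mov"))
guard let session = AVAssetExportSession(
    asset: asset,
    presetName: AVAssetExportPresetHEVCHighestQuality
) else { fatalError("HEVC preset unavailable on this machine") }

session.outputURL = URL(fileURLWithPath: "/path/to/clip.mp4")
session.outputFileType = .mp4
session.exportAsynchronously {
    print(session.status == .completed
          ? "Exported"
          : "Failed: \(String(describing: session.error))")
}
```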
Do you train AI on my content?
Absolutely not. This is a core principle of SwiftyClip. All processing happens on your device. Your videos, transcripts, and any derived data never leave your Mac. We have no way to access your content, and therefore we cannot and do not use it for training any AI models.
What if I run out of disk space while rendering?
SwiftyClip pre-calculates the estimated output file size for a rendering job before it begins. If the available disk space on your selected output drive is insufficient, the app will warn you and will not start the render. This prevents render failures and potential data corruption from a full disk. You will be prompted to either free up space or choose a different output destination.
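A minimal sketch of this kind of preflight check, assuming the size estimate comes from bitrate times duration (the app's real estimator is more involved):

```swift
import Foundation

// Estimate the output size, then compare against free space on the
// destination volume. Illustrative only, not the app's actual code.
func hasRoomForRender(outputDir: URL, durationSeconds: Double,
                      averageBitrateBps: Double) -> Bool {
    // Rough size in bytes, padded 20% for audio and container overhead.
    let estimatedBytes = Int64(durationSeconds * averageBitrateBps / 8 * 1.2)
    let values = try? outputDir.resourceValues(
        forKeys: [.volumeAvailableCapacityForImportantUsageKey])
    guard let free = values?.volumeAvailableCapacityForImportantUsage else {
        return false // can't determine free space; refuse to start
    }
    return free > estimatedBytes
}

// Example: a 60 s clip at 20 Mbps needs roughly 180 MB including overhead.
```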
How do you prevent duplicate clips?
The clipping engine generates a unique hash for each potential clip based on its source video, start/end timestamps, and the transcript segment. Before proposing a new clip, it checks this hash against a local database of previously generated clips from the same source file. If a match is found, it's flagged as a duplicate and is not recommended, ensuring you don't create redundant content.
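Here is a minimal sketch of such a content hash using CryptoKit's SHA-256; the exact fields and encoding the app uses internally are an assumption on our part:

```swift
import CryptoKit
import Foundation

// Deterministic hash over the properties that define a clip. Identical
// inputs always yield the same hash, so one lookup in the local clip
// database is enough to flag a duplicate before it is proposed.
func clipHash(sourcePath: String, startMs: Int, endMs: Int,
              transcriptSegment: String) -> String {
    let key = "\(sourcePath)|\(startMs)|\(endMs)|\(transcriptSegment)"
    let digest = SHA256.hash(data: Data(key.utf8))
    return digest.map { String(format: "%02x", $0) }.joined()
}

let h = clipHash(sourcePath: "/Videos/podcast-ep12.mov",
                 startMs: 93_000, endMs: 151_000,
                 transcriptSegment: "the biggest mistake new creators make")
print(h) // 64-character hex string
```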
How is SwiftyClip audited for privacy — do you have a public report?
We designed SwiftyClip to be inherently private by processing everything on-device. The app makes zero network calls for its core functions. To verify this, we encourage technically inclined users to use network monitoring tools like Little Snitch or Wireshark. We are also in the process of commissioning an independent third-party security audit, and the report will be made public on our website once it is complete.