In-Meeting Technical Assistant
One Shortcut
- ✦ Keyboard-first — Start recording, show assistant — works with any app
- ✦ AI Talking Points — Surfaces what to ask next, keeps your goal in view
- ✦ Every speaker identified — 97% accurate on our benchmark, no meeting bots required — search by speaker
- ✦ On-device speech processing — Audio never leaves your Mac
Basic transcription is always free — no account needed.
Requires macOS 15+ and Apple Silicon · v1.0.0-rc.1
What It Does During Your Meeting
- Requirements gathering. Catches vague asks and flags what's still unclear in the conversation.
- Goal tracking. Keeps what you came to get out of the meeting in view as the discussion shifts.
- Facts from your prep. Surfaces the right note from your prep at the moment it's relevant.
Tuned against a 96-scenario benchmark to nearly eliminate hallucinations.
Customizable to fit any role
Sharpens vague requirements live
When feedback is “it’s clunky” or “that doesn’t work,” it suggests the follow-up that gets to specifics — before the meeting ends and you’re guessing.
Flags what the room missed
When a concern gets brushed past or a decision happens without a key stakeholder weighing in, it surfaces the gap while you’re still in the room.
Commitments, pinned live
Tracks who volunteered for what during the meeting — with dates — so action items don’t get lost in a fast-moving room.
"Wait, say that again — I'll start recording."
Institutional Knowledge
Add your docs in any format. The right details surface at the right point in the conversation 94.8% of the time, regardless of document size or complexity.
Drop in whatever you have
CRM exports, pricing PDFs, competitor briefs, Slack threads — paste or drag them in before your call. Messy formatting, broken layouts, embedded artifacts — it handles the cleanup.
Voices saved across meetings
Save a speaker’s name once and their voice is recognized automatically in every future meeting. Turn on calendar sync and attendee names are matched before anyone speaks.
Your vocabulary, built in
Add product names, acronyms, and internal terms. The transcription gets “Kubernetes” instead of “Cooper Netties.”
Capture Decisions and Action Items
Action items get the right owner because the transcript has the right speaker.
“3 PM my time” → 1 PM yours
When a remote coworker’s time zone is in your prep notes, the model does the math. Every deadline in the summary is resolved to your local time — no mental arithmetic.
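Under the hood, that conversion is ordinary time-zone arithmetic. A minimal Python sketch of the same math, where the zone names are illustrative assumptions rather than values from the app:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical example: a coworker in New York says "3 PM my time",
# and your prep notes place you in Denver.
their_zone = ZoneInfo("America/New_York")
my_zone = ZoneInfo("America/Denver")

stated = datetime(2025, 6, 12, 15, 0, tzinfo=their_zone)  # 3 PM their time
local = stated.astimezone(my_zone)                         # resolved to yours

print(local.strftime("%I %p").lstrip("0"))  # prints "1 PM"
```

The key detail is using IANA zone names rather than fixed offsets, so daylight-saving shifts are handled for whatever date the deadline falls on.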
Customizable to your workflow
Set custom prompts so every summary matches your format — engineering decisions, client commitments, whatever you need.
Keep Working by Voice
Correct, annotate, and draft follow-ups — all in natural language.
Corrections and notes in a sentence
Wrong speaker name? Missed detail? Say it and it’s fixed. Add context that the mic didn’t catch — client names, project codes, decisions made off-camera.
Ask anything about the conversation
“What did we decide about pricing?” — instant answers from the full transcript.
Follow-ups written from what was actually said
Draft emails, Slack messages, or status updates grounded in the real conversation — not your memory of it.
Find Anything from Any Meeting
Search by meaning, not just keywords. Pull up what Sarah said last Tuesday, or every mention of the Q4 deadline — across every meeting you've ever recorded.
V2 Beta Planning
Summary
- Beta launch gated on 1k DAU + <2s load time
- Onboarding descoped from 5 steps to 3
- Analytics instrumentation must ship before beta
Action Items
Transcript
The five-step onboarding is too heavy for beta. I think we can cut it to three steps and still validate the core flow.
Agreed. Let’s lock success metrics now — I’m thinking 1k DAU and sub-2-second load time. Raj, can analytics be ready before launch?
If we scope it to the core events, yes. Full instrumentation would push us past the date though.
Core events are fine for beta. We can layer in the rest post-launch. Who’s making the call if load time slips?
Connect Your AI Agent
Make your meetings available to any AI agent — local or remote. Prepare context before a meeting, ask questions after, or automate follow-ups.
Start Free. Go Unlimited
Zero per-minute fees — your Mac handles speech-to-text, so AI only processes text — faster and cheaper.
No credit card or account login required.
On-device — always free
Unlimited local transcription, recording & dictation
AI features — no trial expiration
2 meetings with speaker labels & summaries per day
Generous daily limits on all AI features through October 2026
$132 billed annually — save 21%
Unlimited AI Feature Use *
Meeting assist
Speaker attribution & summaries
Follow-ups & search
File transcriptions
Direct input on new features and integrations
Requires macOS 15+ and Apple Silicon · v1.0.0-rc.1
* Fair-use limits apply — set high enough that even heavy usage is unaffected.
Subscription managed by Polar. 30-day money-back guarantee.
On-Device Audio. Zero-Retention AI
We never see your audio, your transcripts, or who you talk to.
Audio Never Leaves
Your audio never leaves your Mac. AI features work with transcript text only — or enable Local Mode for a meeting and nothing leaves your device at all.
No Account Required
No login, no email, no account — no way to connect your usage to you.
Bring Your Own Key
Use your own API key — your data flows directly to your provider, never through us.
GDPR & CCPA Compliant
Completely ephemeral. We are bound by our privacy policy to never log copies of your transcript data for any reason.
Common Questions
Everything you need to know before getting started.
Does it work with Zoom, Teams, and other apps?
Yes. MimicScribe captures system audio at the OS level, so it works with any app that plays sound — Zoom, Teams, Google Meet, Slack huddles, or anything else. No plugins or browser extensions required.
Does it handle other languages and technical terms?
Yes. The speech model recognizes 25 languages, including English, Spanish, French, German, Italian, and Portuguese. Meetings that are primarily English with occasional code-switching — a customer quote in Spanish, a German product name, a French aside — transcribe cleanly and stay searchable. Technical vocabulary, proper nouns, and jargon work without a custom dictionary.
What happens if I lose my internet connection?
Transcription always works offline — it runs entirely on your Mac. AI features like text refinement, meeting summaries, and the meeting assistant require an internet connection. If you lose connectivity mid-meeting, transcription continues uninterrupted and AI enrichments are queued until you're back online.
What permissions does MimicScribe need?
- Microphone — required for all voice features.
- System Audio Recording — captures audio from video calls for meeting recording. Audio only — no screen capture or video.
- Accessibility — optional; used for voice editing only. Skip it and meeting recording still works.
- Calendar — optional; pulls attendee names into meeting context. Disabled by default.
Do I need to tell people I'm recording?
Recording laws vary by state and country — some US states and most of Europe require all parties to consent. MimicScribe doesn't announce itself to meeting participants; that's your call. Best practice: tell attendees at the top of the call, and stop recording if anyone objects. If you're in a regulated industry, check with your legal or compliance team before rolling it out.
Will MimicScribe slow down my Mac?
MimicScribe uses about 330 MB of memory during an active meeting — less than a single browser tab. ML models run on the Neural Engine, not main RAM, so your other apps aren't competing for memory. The heap stays flat even during long recordings. See full performance details →
What runs on-device vs. in the cloud?
Speech recognition, diarization, and echo cancellation run on your Mac. Speaker attribution, action items, summaries, and real-time suggestions require larger models than what can run locally with acceptable accuracy — so these are handled by cloud AI using transcript text only. Your data is never stored or used for training. See the technology page for the full breakdown.
How do I get the best results?
The AI uses whatever context you give it. Spend two minutes filling out Your Context (your role, what you care about) and add a reference document — product positioning, pricing notes, objection handlers. It turns generic summaries into briefings that actually understand your work. Setup guide with examples →