
Show HN: Sparrow-1 – Audio-native model for human-level turn-taking without ASR

94 points | 23 hours ago | tavus.io

For the past year at Tavus I've been working to rethink how AI manages timing in conversation, and I've spent a lot of time listening to conversations. Today we're announcing the release of Sparrow-1, the most advanced conversational flow model in the world.

Some technical details:

- Predicts conversational floor ownership, not speech endpoints

- Audio-native streaming model, no ASR dependency

- Human-timed responses without silence-based delays

- Zero interruptions at sub-100ms median latency

- Outperforms all existing models on real-world turn-taking benchmarks

I wrote more about the work here: https://www.tavus.io/post/sparrow-1-human-level-conversation...
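
To make "predicts floor ownership, not speech endpoints" concrete, here is a minimal, hypothetical sketch of what that kind of agent loop could look like. The model class, frame source, and threshold below are all made up for illustration; this is not Tavus's actual API, just a contrast between a running floor-ownership score and a fixed silence timeout.

    # Hypothetical sketch: a streaming floor-ownership score instead of a silence timer.
    # None of these names come from Tavus; the model is stubbed so the loop runs.
    import random
    from dataclasses import dataclass

    FRAME_MS = 20          # one audio frame per inference step
    TAKE_FLOOR_AT = 0.85   # respond once P(user has ceded the floor) is high enough

    @dataclass
    class FloorModel:
        """Stand-in for an audio-native model scoring who holds the conversational floor."""
        score: float = 0.0

        def update(self, frame: bytes) -> float:
            # A real model would consume raw audio; here we fake a drifting probability.
            self.score = min(1.0, max(0.0, self.score + random.uniform(-0.05, 0.15)))
            return self.score

    def mic_frames(n: int):
        """Fake microphone: yields n frames of 20 ms, 16 kHz, 16-bit mono silence."""
        for _ in range(n):
            yield b"\x00" * 640

    def agent_loop():
        model = FloorModel()
        for i, frame in enumerate(mic_frames(200)):
            p_ceded = model.update(frame)
            # The decision is a continuously updated score, not
            # "has the user been silent for N milliseconds".
            if p_ceded >= TAKE_FLOOR_AT:
                print(f"take the floor at ~{i * FRAME_MS} ms (p={p_ceded:.2f})")
                return
        print("kept listening; user still holds the floor")

    if __name__ == "__main__":
        agent_loop()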

nubg 27 minutes ago

Btw, while I think this is cool and useful for real-time voice interfaces for the general populace, I wonder whether for professional users (e.g. a dev coding by dictation all day) a simple push-to-talk isn't always going to be superior: you can make long pauses while you think about something, which would creep out a human, but the AI will wait patiently for your push-to-talk.

allan_s 44 minutes ago

How does it compare with https://github.com/KoljaB/RealtimeVoiceChat, which is absent from the benchmarks?

ttul 10 hours ago

I tried talking to Claude today. What a nightmare. It constantly interrupts you. I don’t mind if Claude wants to spend ten seconds thinking about its reply, but at least let ME finish my thought. Without decent turn-taking, the AI seems impolite and it’s just an icky experience. I hope tech like this gets widely distributed soon because there are so many situations in which I would love to talk with a model. If only it worked.

mavamaarten 10 hours ago

Agreed. English is not my native language. And I do speak it well, it's just that sometimes I need a second to think mid-sentence. None of the live chat models out there handle this well. Claude just starts answering before I've even had the chance to finish a sentence.

Tostino 2 hours ago

English is my native language, and I still have this problem all the time with voice models.

sigmoid10 8 hours ago

Anthropic doesn't have any realtime multimodal audio models available; they just use STT and TTS models slapped on top of Claude. So they're currently the worst provider if you actually want to use voice communication.
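
For what it's worth, this is also where the interruption problem in the parent comments comes from: a cascaded STT -> LLM -> TTS pipeline has to decide the user is done before it can even call the LLM, and that decision is usually a fixed silence timeout. A purely illustrative sketch (all functions stubbed, threshold made up, not how Anthropic's voice mode actually works):

    # Illustrative only: a cascaded STT -> LLM -> TTS turn gated by a silence timer.
    # Every function is a stub; the point is where the endpointing decision lives.
    FRAME_MS = 20
    ENDPOINT_SILENCE_MS = 700   # shorter timeout = more interruptions, longer = sluggish replies

    def is_silent(frame: bytes) -> bool:
        return not any(frame)              # stub energy check

    def transcribe(audio: bytes) -> str:
        return "<partial user utterance>"  # stub STT

    def llm_reply(text: str) -> str:
        return "<assistant reply>"         # stub LLM call

    def speak(text: str) -> None:
        print("TTS:", text)                # stub TTS

    def cascaded_turn(frames) -> None:
        buffered, silent_ms = b"", 0
        for frame in frames:
            buffered += frame
            silent_ms = silent_ms + FRAME_MS if is_silent(frame) else 0
            if silent_ms >= ENDPOINT_SILENCE_MS:
                # A mid-sentence thinking pause longer than the timeout gets cut off
                # here, because the pipeline has no notion of who holds the floor.
                speak(llm_reply(transcribe(buffered)))
                return

    # 40 frames of speech, then a one-second pause the user meant as "let me think":
    frames = [b"\x01" * 640] * 40 + [b"\x00" * 640] * 50
    cascaded_turn(frames)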

cuuupid 10 hours ago

The first time I met Tavus, their engineers (incl Brian!) were perfectly willing to sit down and build their own better Infiniband to get more juice out of H100s. There is pretty much nobody working on latency and realtime at the level they are. Sparrow-1 would be a defining achievement for most startups but will just be one of dozens for Tavus :)

lostmsu 2 hours ago

> perfectly willing

dreaming

ljoshua 3 hours ago

Hey @code_brian, would Tavus make the conversational audio model available outside of the PALs and video models? Seems like this could be a great use case for voice-only agents as well.

randyburden 10 hours ago

Awesome. We've been using Sparrow-0 in our platform since launch, and I'm excited to move to Sparrow-1 over the next few days. Our training and interview pre-screening products rely heavily on Tavus's AI avatars, and this upgrade (based on the video in your blog post) looks like it addresses some real pain points we've run into. Really nice work.

dfajgljsldkjag 10 hours ago

I am always skeptical of benchmarks that show perfect scores, especially when they come from the company selling the product. It feels like everyone claims to have solved conversational timing these days. I guess we will see if it is actually any good.

fudged71 10 hours ago

Different industry, but our marketing guy once said "You know what this [perfect] metric means? We can never use it in marketing because it's not believable"

khalic 9 hours ago

Just include some noise, it’s like the most available resource in the universe

drob518 4 hours ago

Never thought of noise as a resource, but yea.

nextaccountic 10 hours ago

> Non-verbal cues are invisible to text: Transcription-based models discard sighs, throat-clearing, hesitation sounds, and other non-verbal vocalizations that carry critical conversational-flow information. Sparrow-1 hears what ASR ignores.

Could Sparrow instead be used to produce high-quality transcriptions that incorporate non-verbal cues?

Or even use Sparrow AND an existing transcription/ASR model to augment the transcript with non-verbal cues?
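
A hedged sketch of what that augmentation could look like: interleave timestamped ASR words with a separate stream of non-verbal cue events. The event labels and data shapes below are invented for illustration and have nothing to do with Sparrow's actual output.

    # Hypothetical: merging an ASR word stream with non-verbal cue events by timestamp.
    from dataclasses import dataclass
    from heapq import merge

    @dataclass
    class Token:
        t: float     # seconds from start of audio
        text: str

    # Pretend output of a conventional ASR model (word + start time):
    asr_words = [Token(0.2, "so"), Token(0.6, "I"), Token(0.9, "was"), Token(2.4, "thinking")]

    # Pretend output of an audio-native cue detector (fillers, sighs, throat-clearing):
    nonverbal = [Token(1.3, "[uh...]"), Token(1.9, "[sigh]")]

    def annotated_transcript(words, cues) -> str:
        # Interleave both streams in time order so hesitation context stays in the text.
        return " ".join(tok.text for tok in merge(words, cues, key=lambda tok: tok.t))

    print(annotated_transcript(asr_words, nonverbal))
    # -> so I was [uh...] [sigh] thinking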

sourcetms 8 hours ago

How do I try the demo for Sparrow-1? What is pricing like?

nubg 11 hours ago

Any examples available? Sounds amazing.

orliesaurus 11 hours ago

Literally no way to sign up to try. I put in my email and password and it dropped me into some waitlist, despite the video saying I could try the model today. What makes me mad about these kinds of releases is that the marketing and the product don't talk to each other.

qfavret 9 hours ago

Try signing up for the API platform on the site. You can access it there.

mentalgear 9 hours ago

Metric | Sparrow-1
Precision | 100%
Recall | 100%

Common ...

reubenmorais 9 hours ago

If you watch the demo video you can see how they would get this: the model is not aggressive enough. While it doesn't cut you off, which is nice, it also always waits an uncanny amount of time to chime in.

oersted 8 hours ago

That should lead to a low recall: too many false negatives. I wonder how they are calculating it.
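
Assuming turn-taking is scored as a detection task over labeled turn-taking opportunities (a guess, the post doesn't say), the arithmetic backs this up: a model that declines to chime in racks up false negatives, and those only hit recall, not precision.

    # Made-up counts for a conservative turn-taker that often declines to chime in.
    tp, fp, fn = 80, 0, 20                 # 20 missed chances to take a turn
    precision = tp / (tp + fp)             # 1.00 -- it never barges in wrongly
    recall = tp / (tp + fn)                # 0.80 -- the misses show up here
    print(f"precision={precision:.2f} recall={recall:.2f}")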

krautburglar 9 hours ago

Such things were doing a good-enough job scamming the elderly as it is--even with the silence-based delays.