
Qwen3-TTS family is now open sourced: Voice design, clone, and generation

466 points · 12 hours ago · qwen.ai
simonw3 hours ago

I got this running on macOS using mlx-audio thanks to Prince Canuma: https://x.com/Prince_Canuma/status/2014453857019904423

Here's the script I'm using: https://github.com/simonw/tools/blob/main/python/q3_tts.py

You can try it with uv (downloads a 4.5GB model on first run) like this:

  uv run https://tools.simonwillison.net/python/q3_tts.py \
    'I am a pirate, give me your gold!' \
    -i 'gruff voice' -o pirate.wav
indigodaddy21 minutes ago

Simon, how do you think this would perform on CPU only? Let's say a Threadripper with 20GB RAM. (Voice cloning in particular.)

gcr2 hours ago

This is wonderful, thank you. Another win for uv!

simonw9 hours ago

If you want to try out the voice cloning yourself you can do that at this Hugging Face demo: https://huggingface.co/spaces/Qwen/Qwen3-TTS - switch to the "Voice Clone" tab, paste in some example text and use the microphone option to record yourself reading that text - then paste in other text and have it generate a version of that read using your voice.

I shared a recording of audio I generated with that here: https://simonwillison.net/2026/Jan/22/qwen3-tts/

javier1234543218 hours ago

This is terrifying. With this and z-image-turbo, we've crossed a chasm, and a very deep one. We are currently protected by screens: we can, and should, assume everything behind a screen is fake unless rigorously (and systematically, i.e. cryptographically) proven otherwise. We're sleepwalking into this, and not enough people know about it.

rdtsc8 hours ago

That was my thought too. You’d have “loved ones” calling with their faces and voices asking for money in some emergency. But you’d also have plausible deniability as anything digital can be brushed off as “that’s not evidence, it could be AI generated”.

rpdillon4 hours ago

Only if you focus on the form instead of the content. For a long time my family has had secret words and phrases we use to identify ourselves to each other over secure but unauthenticated channels (i.e. the channel is encrypted, but the source is unknown). The military has had to deal with this for some time, and developed various forms of IFF that allies could use to identify themselves, e.g. for returning aircraft, a sequence of wing movements that identified you as a friend. I think for a small group (in this case, loved ones), this could be one mitigation of that risk. My parents did this with me as a kid, ostensibly as a defense against some other adult saying "My mom sent me to pick you up...". I never did hear of that happening, though.
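
In software terms that family code word is just a pre-shared secret used for a challenge-response check over an unauthenticated channel. A minimal sketch of the idea (the passphrase and function names here are purely illustrative):

  import hmac, hashlib, secrets

  # Agreed in person, never sent over the channel itself.
  SHARED_SECRET = b"family passphrase agreed face to face"

  def challenge():
      # Ask the (alleged) loved one to answer this fresh nonce.
      return secrets.token_hex(16)

  def respond(nonce: str) -> str:
      # Only someone holding the shared secret can produce this.
      return hmac.new(SHARED_SECRET, nonce.encode(), hashlib.sha256).hexdigest()

  def verify(nonce: str, answer: str) -> bool:
      # Constant-time comparison avoids leaking partial matches.
      return hmac.compare_digest(respond(nonce), answer)

The human version is simpler - a question only the real person can answer - but the security property is the same: identity comes from a secret shared out of band, not from how the voice sounds.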

neevans8 hours ago

This was already possible with Chatterbox for a long while.

freedomben7 hours ago

Yep, this has been the reality now for years. Scammers have already had access to it. I remember an article years ago about a grandma who wired her life savings to a scammer who claimed to have her granddaughter held hostage in a foreign country. Turns out they had just cloned the granddaughter's voice from Facebook data and knew her schedule, so they timed it while she would be unreachable by phone.

DANmode7 hours ago

or anyone who refuses to use hearing aids.

u80804 hours ago
harshreality49 minutes ago

That's a reupload of Cybergem's video. https://www.youtube.com/watch?v=-gGLvg0n-uY

javier1234543214 hours ago

Oh wow. Thank you for this. Amazing, terrifying, spot on, all of it.

arcanemachiner3 hours ago

I knew what it would be before I even opened it. The crazy thing is that video is like 3 years old.

fridder5 hours ago

Admittedly I have not dived into it much, but I wonder if we might finally have a use case for NFTs and web3? We need some sort of way to denote that items are person-generated, not AI. It would certainly be easier than trying to determine whether something is AI generated.

grumbel5 hours ago

That's the idea behind C2PA[1]: your camera and your editing tools put a signature on the media to prove its provenance. That doesn't make manipulation impossible (e.g. you could photograph an AI image off a screen), but it does give you a trail of where a photo came from and thus an easier way to filter it or look up the original.

[1] https://c2pa.org/
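
Stripped of the manifest format, the core mechanism is an ordinary digital signature over the media bytes. A toy sketch with a generic Ed25519 key (this is not the actual C2PA API, just the idea):

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  from cryptography.exceptions import InvalidSignature

  # In C2PA terms the private key would live in the camera or signing tool.
  signing_key = Ed25519PrivateKey.generate()
  public_key = signing_key.public_key()

  media = open("photo.jpg", "rb").read()
  signature = signing_key.sign(media)

  # Anyone holding the public key can later check the file is untouched.
  try:
      public_key.verify(signature, media)
      print("provenance signature checks out")
  except InvalidSignature:
      print("file was altered after signing")

The real standard also chains attestations across edits, which is what gives you the trail rather than a single yes/no check.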

simonw4 hours ago

How would NFTs/web3 help differentiate between something created by a human and something that a human created with AI and then tagged with their signature using those tools?

oceanplexian6 hours ago

> This is terrifying.

Far more terrifying is Big Tech having access to a closed version of the same models, in the hands of powerful people with a history of unethical behavior (i.e. Zuckerberg's "Dumb Fucks" comments). In fact it's a miracle and a bit ironic that the Chinese would be the ones to release a plethora of capable open source models, instead of the scraps like we've seen from Google, Meta, OpenAI, etc.

javier1234543215 hours ago

I do strongly agree. Though the societal impact is only mitigated by open models, not curtailed at all.

echelon7 hours ago

We're going to be okay.

There are far more good and interesting use cases for this technology. Games will let users clone their voices and create virtual avatars and heroes. People will have access to creative tools that let them make movies and shows with their likeness. People that couldn't sing will make music.

Nothing was more scary than the invention of the nuclear weapon. And we're all still here.

Life will go on. And there will be incredible benefits that come out of this.

javier1234543216 hours ago

I'm not denigrating the tech, all I'm saying is that we've crossed into new territory and there will be consequences from this that we don't understand. The same way that social media has been particularly detrimental to young people (especially women) in a way we were not ready for. This __smells__ like it could be worse, alongside (or regardless of) the benefits of both.

I simply think people don't really know that the new world requires a new set of rules of engagement for anything that exists behind a screen (for now).

doug7137052 hours ago

> Nothing was more scary than the invention of the nuclear weapon. And we're all still here.

Except that building a nuclear weapon was not available to everyone, certainly not to dumb people whose brains have been fed with social media content.

supern0va7 hours ago

We'll be okay eventually, when society adapts to this and becomes fully aware of the capabilities and the use cases for abuse. But, that may take some time. The parent is right to be concerned about the interim, at the very least.

That said, I am likewise looking forward to the cool things to come out of this.

DANmode7 hours ago

> People that couldn't sing will make music.

I was with you, until

But, yeah. Life will go on.

magicalhippo7 hours ago

The HF demo space was overloaded, but I got the demo working locally easily enough. The voice cloning of the 1.7B model captures the tone of the speaker very well, but I found it failed at reproducing the variation in intonation, so it sounds like a monotonous reading of a boring text.

I presume this is due to using the base model, and not the one tuned for more expressiveness.

edit: Or more likely, the demo not exposing the expressiveness controls.

The 1.7B model was much better at ignoring slight background noise in the reference audio compared to the 0.6B model though. The 0.6B would inject some of that into the generated audio, whereas the 1.7B model would not.

Also, without FlashAttention it was dog slow on my 5090, running at 0.3X realtime with just 30% GPU usage. Though I guess that's to be expected. No significant difference in generation speed between the two models.

Overall though, I'm quite impressed. I haven't checked out all the recent TTS models, but I've tried a fair number, and this one is certainly one of the better ones I've heard in terms of voice cloning quality.

thedangler5 hours ago

How did you do this locally? Tools? Language?

magicalhippo2 hours ago

I just followed the Quickstart[1] in the GitHub repo, refreshingly straightforward. Using the pip package worked fine, as did installing the editable version from the git repository. Just install the CUDA version of PyTorch[2] first.

The HF demo is very similar to the GitHub demo, so easy to try out.

  pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128
  pip install qwen3-tts
  qwen-tts-demo Qwen/Qwen3-TTS-12Hz-1.7B-Base --no-flash-attn --ip 127.0.0.1 --port 8000
That's for CUDA 12.8, change PyTorch install accordingly.

Skipped FlashAttention since I'm on Windows and I haven't gotten FlashAttention 2 to work there yet (I found some precompiled FA3 files[3] but Qwen3-TTS isn't FA3 compatible yet).

[1]: https://github.com/QwenLM/Qwen3-TTS?tab=readme-ov-file#quick...

[2]: https://pytorch.org/get-started/locally/

[3]: https://windreamer.github.io/flash-attention3-wheels/

dsrtslnd233 hours ago

Any idea on the VRAM footprint for the 1.7B model? I guess it fits on consumer cards but I am wondering if it works on edge devices.

magicalhippo2 hours ago

The demo uses 6GB dedicated VRAM on Windows, but keep in mind that it's without FlashAttention. I expect it would drop a bit if I got that working.

Haven't looked into the demo to see if it could be optimized by moving certain bits to CPU for example.

pseudosavant7 hours ago

Remarkable tech that is now accessible to almost anyone. My cloned voice sounded exactly like me. The uses for this will range from good to bad and everywhere in between: a deceased grandmother reading "Goodnight Moon" to grandkids, scamming people, the ability to create podcasts with your own voices from just prompts.

KolmogorovComp3 hours ago

Hello, the recording you posted does not tell much about the cloning capability without an example from your real voice.

simonw2 hours ago

Given how easy voice cloning is with this thing I chickened out of sharing the training audio I recorded!

That's not really rational considering the internet is full of examples of my voice that anyone could use though. Here's a recent podcast clip: https://www.youtube.com/watch?v=lVDhQMiAbR8&t=3006s

KolmogorovComp1 hour ago

Thanks, so it’s in the [pretty close but still distinguishable] range.

kingstnap5 hours ago

It was fun to try out. I wonder if, at some point, given a few minutes of me talking, I could make myself read an entire book to myself.

mohsen17 hours ago

> The requested GPU duration (180s) is larger than the maximum allowed

What am I doing wrong?

gregsadetsky7 hours ago

You need to log in.

TheAceOfHearts8 hours ago

Interesting model. I've managed to get the 0.6B param model running on my old 1080 and I can generate 200-character chunks safely without going OOM, so I thought that making an audiobook of the Tao Te Ching would be a good test. Unfortunately each snippet varies drastically in quality: sometimes the speaker is clear and coherent, but other times it bursts out laughing or moaning. In a way it feels a bit like magical roulette, never being quite certain of what you're going to get. It does have a bit of charm: when you chain the various snippets together you really don't know what direction it's going to go.

Using speaker Ryan seems to be the most consistent, I tried speaker Eric and it sounded like someone putting on a fake exaggerated Chinese accent to mock speakers.

If it wasn't for the unpredictable level of emotions from each chunk, I'd say this is easily the highest quality TTS model I've tried.
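
For anyone trying the same thing, the chunk-and-stitch harness is straightforward. A minimal sketch, assuming each ≤200-character chunk gets rendered to its own WAV by whatever generation call you use (the TTS call itself is left out here):

  import re
  import wave

  def chunk_text(text, limit=200):
      # Split at sentence boundaries, keeping each chunk under the limit.
      # (A single sentence longer than the limit passes through untouched.)
      chunks, current = [], ""
      for sentence in re.split(r"(?<=[.!?])\s+", text):
          if current and len(current) + len(sentence) + 1 > limit:
              chunks.append(current.strip())
              current = ""
          current += sentence + " "
      if current.strip():
          chunks.append(current.strip())
      return chunks

  def concat_wavs(paths, out_path):
      # Stitch the per-chunk WAVs into one file (same sample format assumed).
      with wave.open(out_path, "wb") as out:
          for i, path in enumerate(paths):
              with wave.open(path, "rb") as w:
                  if i == 0:
                      out.setparams(w.getparams())
                  out.writeframes(w.readframes(w.getnframes()))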

KaoruAoiShiho7 hours ago

Have you tried specifying the emotion? There's an option to do so and if it's left empty it wouldn't surprise me if it defaulted to rng instead of bland.

TheAceOfHearts6 hours ago

For the system prompt I used:

> Read this in a calm, clear, and wise audiobook tone.

> Do not rush. Allow the meaning to sink in.

But maybe I should experiment with something more detailed. Do you have any suggestions?

dsrtslnd233 hours ago

do you have the RTF for the 1080? I am trying to figure out if the 0.6B model is viable for real-time inference on edge devices.

TheAceOfHearts2 hours ago

Yeah, it's not great. I wrote a harness that calculates it as: 3.61s Load Time, 38.78s Gen Time, 18.38s Audio Len, RTF 2.111.

The Tao Te Ching audiobook came in at 62 mins in length and it ran for 102 mins, which gives an RTF of 1.645.

I do get a warning about flash-attn not being installed, which says that it'll slow down inference. I'm not sure if that feature can be supported on the 1080 and I wasn't up for tinkering to try.
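
For clarity, the RTF figure is just generation time divided by audio duration, so anything above 1 is slower than real time; a quick check of the numbers above:

  def rtf(gen_seconds, audio_seconds):
      # Real-time factor: generation time over audio length (>1 = slower than realtime).
      return gen_seconds / audio_seconds

  print(rtf(38.78, 18.38))       # ~2.11 for the single-chunk benchmark
  print(rtf(102 * 60, 62 * 60))  # ~1.65 for the full Tao Te Ching run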

genewitch9 hours ago

it isn't often that technology gives me chills, but this did it. I've used "AI" TTS tools since 2018 or so, and i thought the stuff from two years ago was about the best we were going to get. I don't know the size of these, i scrolled to the samples. I am going to get the models set up somewhere and test them out.

Now, maybe the results were cherrypicked. i know everyone else who has released one of these cherrypicks which to publish. However, this is the first time i've considered it plausible to use AI TTS to remaster old radioplays and the like, where a section of audio is unintelligible but can be deduced from context, like a tape glitch where someone says "HEY [...]LAR!" and it's an episode of Yours Truly, Johnny Dollar...

I have dozens of hours of audio of like Bob Bailey and people of that era.

kamranjon9 hours ago

I wonder if it was trained on anime dubs, because all of the examples I listened to sounded very similar to a Miyazaki-style dub.

genewitch6 hours ago

scroll down to the second-to-last group: the second one down is Obama speaking English, the third one down is Trump speaking Japanese (a translation of the English phrase)

besides, they know which side their bread is buttered on. I feel like this is almost not the real announcement; or, the engineers that wrote this up and did the demos just ran it that way. The normal speech voices are fine (lower on the page than the anime ones). i agree that the first few are very infantile. I'll change that word if i can think of a better one.

freedomben7 hours ago

Indeed, I have a future project/goal of "restoring" Have Gun - Will Travel radio episodes to listenable quality using tech like this. There are so many lines where sound effects and tape rot and other "bad recording" issues make it very difficult to understand what was said. It will be amazing, but as with all tech the potential for abuse is very real.

genewitch6 hours ago

hey if you want to collab or trade notes, my email is in my profile. there was java software that did FANTASTIC work cleaning up crappy transfers of audio, like, specifically, it was perfect for "AM Quality Monaural Audio".

  Observe, original: https://www.youtube.com/watch?v=YiRcOVDAryM
  my edit (took about an hour, if memory serves, to set up. forgot render time...): https://www.youtube.com/watch?v=xazubVJ0jz4
i say "was [...] software" because the last 2 times i've tried to use it, it did imperceptible cleanup, making it worthless. Anyhow, all my radio plays are from OTRR, i think.

Audio.Restoration.DeNoise.DeNoiseLF.2.8.3_WiN.OSX is a more recent version i think

p.s. are you a "dude named Ben"?

throwaw1210 hours ago

Qwen team, please please please, release something to outperform and surpass the coding abilities of Opus 4.5.

Although I like the model, I don't like the leadership of that company, how closed it is, and how divisive they are in terms of politics.

mortsnort10 hours ago

They were just waiting for someone in the comments to ask!

mhuffman9 hours ago

It really is the best way to incentivize politeness!

zeppelin1014 hours ago

Someone has to take the first step. Let's be grateful to the brave anon HN poster for stepping up.

stuckkeys6 hours ago

I loled hard at this. Thank you kind stranger.

pseudony8 hours ago

Same issue (I am Danish).

Have you tested alternatives? I grabbed OpenCode and a MiniMax M2.1 subscription, even just the $10/mo one, to test with.

Result? We designed, from scratch, a spec for a slight variation of a tool I had previously spec'd with Claude - the same problem (a process supervisor tool).

Honestly, it worked great. I have played a little further with generating code (this time Go), and again, I am happy.

Beyond that, GLM 4.7 should also be great.

See https://dev.to/kilocode/open-weight-models-are-getting-serio...

It's a recent case study of vibe-coding a smaller tool with Kilo Code, comparing output from MiniMax M2.1 and GLM 4.7.

Honestly, just give it a whirl - no need to send money to companies/nations you disagree with.

nunodonato8 hours ago

I've been using GLM 4.7 with Claude Code. Best of both worlds. I canceled my Anthropic subscription due to the US politics as well. I already started my "withdrawal" in Jan 2025; Anthropic was one of the few that were left.

dsrtslnd233 hours ago

Are you using an API proxy to route GLM into the Claude Code CLI? Or do you mean side-by-side usage? Not sure if custom endpoints are supported natively yet.

stavros5 hours ago

I much prefer OpenCode these days, give it a try.

nunodonato4 hours ago

I did. I couldn't get used to it and didn't get as good results. I think Claude Code's tools are really top notch, and maybe the system prompt too.

bigyabai8 hours ago

I'm in the same boat. Sonnet was overkill for me, and GLM is cheap and smart enough to spit out boilerplate and FFMPEG commands whenever it's asked.

$20/month is a bit of an insane ask when the most valuable thing Anthropic makes is the free Claude Code CLI.

mikenew1 hour ago

I've recently switched to OpenCode and found it to be far better. Plus GLM 4.7 is free at the moment, so for now it's a great no-cost setup.

stavros5 hours ago

I don't know, I max out my Opus limits regularly. I guess it depends on usage.

TylerLives10 hours ago

>how divisive they're in terms of politics

What do you mean by this?

throwaw1210 hours ago

Dario said some not-so-nice words about China and open models in general:

https://www.bloomberg.com/news/articles/2026-01-20/anthropic...

vlovich1239 hours ago

I think the least politically divisive issue within the US is concern about China’s growth as it directly threatens the US’s ability to set the world’s agenda. It may be politically divisive if you are aligned with Chinese interests but I don’t see anything politically divisive for a US audience. I expect Chinese CEOs speak in similar terms to a Chinese audience in terms of making sure they’re decoupled from the now unstable US political machine.

subscribed6 hours ago

Looking at the last year's US agenda I'm okay with that.

cmrdporcupine8 hours ago

"... for a US audience"

And that's the rub.

Many of us are not.

giancarlostoro8 hours ago

From the perspective of competing against China in terms of AI, the argument against open models makes sense to me. It's a terrible problem to have, really. Ideally we should all be able to work together in the sandbox towards a better tomorrow, but that's not reality.

I prefer to have more open models. On the other hand China closes up their open models once they start to show a competitive edge.

Levitz8 hours ago

I mean, there's no way it's about this right?

Being critical of favorable actions towards a rival country shouldn't be divisive, and if it is, well, I don't think the problem is in the criticism.

Also, the link doesn't mention open source? From a Google search, he doesn't seem to care much for it.

Balinares8 hours ago

They're supporters of the Trump administration's military, a view which is not universally lauded.

mohsen17 hours ago

With a good harness I am getting similar results with GLM 4.7. I am paying for TWO (!) Max accounts and my agents are running 24/7.

I still have a small Claude account to do some code reviews. Opus 4.5 does good reviews but at this point GLM 4.7 usually can do the same code reviews.

If cost is an issue (for me it is, I pay out of pocket) go with GLM 4.7

imiric3 hours ago

Your GitHub profile is... disturbing. 1,354 commits and 464 pull requests in January so far.

Regardless of how productive those numbers may seem, that amount of code being published so quickly is concerning, to say the least. It couldn't have possibly been reviewed by a human or properly tested.

If this is the future of software development, society is cooked.

gcr2 hours ago

You may not like it but this is what a 10x developer looks like. :-)

amrrs10 hours ago

Have you tried the new GLM 4.7?

davely9 hours ago

I've been using GLM 4.7 alongside Opus 4.5 and I can't believe how bad it is. Seriously.

I spent 20 minutes yesterday trying to get GLM 4.7 to understand that a simple modal on a web page (vanilla JS and HTML!) wasn't displaying when a certain button was clicked. I hooked it up to Chrome MCP in Open Code as well.

It constantly told me that it fixed the problem. In frustration, I opened Claude Code and just typed "Why won't the button with ID 'edit' work???!"

It fixed the problem in one shot. This isn't even a hard problem (and I could have just fixed it myself but I guess sunk cost fallacy).

bityard8 hours ago

I've used a bunch of the SOTA models (via my work's Windsurf subscription) for HTML/CSS/JS stuff over the past few months. Mind you, I am not a web developer, these are just internal and personal projects.

My experience is that all of the models seem to do a decent job of writing a whole application from scratch, up to a certain point of complexity. But as soon as you ask them for non-trivial modifications and bugfixes, they _usually_ go deep into rationalized rabbit holes that lead nowhere.

I burned through a lot of credits to try them all and Gemini tended to work the best for the things I was doing. But as always, YMMV.

KolmogorovComp8 hours ago

Exactly the same feedback

girvo4 hours ago

> I can't believe how bad it is

This has been my consistent experience with every model prior to Opus 4.5, and every single open model I've given a go.

Hopefully we will get there in another 6 months when Opus is distilled into new open models, but I've always been shocked at some of the claims around open models, when I've been entirely unable to replicate them.

Hell, even Opus 4.5 shits the bed with semi-regularity on anything that's not completely greenfield for my usage, once I'm giving it tasks beyond some unseen complexity boundary.

Balinares8 hours ago

Amazingly, just yesterday, I had Opus 4.5 crap itself extensively on a fairly simple problem -- it was trying to override a column with an aggregation function while also using it in a group-by without referring to the original column by its full qualified name prefixed with the table -- and in typical Claude fashion it assembled an entire abstraction layer to try and hide the problem under, before finally giving up, deleting the column, and smugly informing me I didn't need it anyway.

That evening, for kicks, I brought the problem to GLM 4.7 Flash (Flash!) and it one-shot the right solution.

It's not apples to apples, because when it comes down to it LLMs are statistical token extruders, and it's a lot easier to extrude the likely tokens from an isolated query than from a whole workspace that's already been messed up somewhat by said LLM. That, and data is not the plural of anecdote. But still, I'm easily amused, and this amused me. (I haven't otherwise pushed GLM 4.7 much and I don't have a strong opinion about it.)

But seriously, given the consistent pattern of knitting ever larger carpets to sweep errors under that Claude seems to exhibit over and over instead of identifying and addressing root causes, I'm curious what the codebases of people who use it a lot look like.

throwaw1210 hours ago

yes I did, not on par with Opus 4.5.

I use Opus 4.5 for planning; when I reach my usage limits I fall back to GLM 4.7 just for implementing the plan. It still struggles, even though I configure GLM 4.7 as both the smaller model and the heavier model in Claude Code.

WarmWash9 hours ago

The Chinese labs distill the SOTA models to boost the performance of theirs. They are a trailer hooked up (with a 3-6 month long chain) to the trucks pushing the technology forwards. I've yet to see a trailer overtake its truck.

China would need an architectural breakthrough to leapfrog American labs given the huge compute disparity.

miklosz9 hours ago

I have indeed seen a trailer overtake its truck. Not a beautiful sight.

digdugdirk8 hours ago

Agreed. I do think the metaphor still holds though.

A financial jackknifing of the AI industry seems to be one very plausible outcome as the promises/expectations of the AI companies start meeting reality.

overfeed8 hours ago

Care to explain how the volume of AI research papers authored by Chinese researchers[1] has exceeded US-published ones? Time-traveling plagiarism, perhaps, since you believe the US is destined to always lead.

1. Chinese researchers in China, to be more specific.

bfeynman7 hours ago

Not a great metric; research in academia doesn't necessarily translate to value. In the US they've poached so many academics because of how much value they directly translate to.

WarmWash4 hours ago

I don't doubt that China is capable of making SOTA models; however, they are very heavily compute constrained. So they are forced to shortcut compute by riding the coattails of compute-heavy models.

They need a training-multiplier breakthrough that would allow them to train SOTA models on a fraction of the compute that the US does. And this would also have to be kept secret and well hidden (often multiple researchers from around the world put the pieces together on a problem at around the same time, so the breakthrough would have to be something pretty difficult for the greatest minds in the field to discover) to prevent the US from using it to multiply their model strength with their greater compute.

jacquesm8 hours ago

Volume is easy: they have far more people. It is quality that counts.

aaa_aaa9 hours ago

No, all they need is time. I am awaiting the downfall of the AI hegemony and hype with popcorn at hand.

mhuffman9 hours ago

I would be happy with an open-weight 3-month-old Claude.

cmrdporcupine8 hours ago

DeepSeek 3.2 is frankly fairly close to that. GLM 4.7 as well. They're basically around Sonnet 4 level.

Onavo9 hours ago

Well DeepSeek V4 is rumored to be in that range and will be released in 3 weeks.

sampton10 hours ago

Every time Dario opens his mouth it's something weird.

rahimnathwani9 hours ago

Has anyone successfully run this on a Mac? The installation instructions appear to assume an NVIDIA GPU (CUDA, FlashAttention), and I’m not sure whether it works with PyTorch’s Metal/MPS backend.

magicalhippo7 hours ago

FWIW you can run the demo without FlashAttention using the --no-flash-attn command-line parameter. I do that since I'm on Windows and haven't gotten FlashAttention 2 to work.

javier1234543218 hours ago

I recommend using Modal for renting the metal.

turnsout7 hours ago

It seems to depend on FlashAttention, so the short answer is no. Hopefully someone does the work of porting the inference code over!

satvikpendem8 hours ago

This would be great for audiobooks; some of the current AI TTS models still struggle there.

girvo4 hours ago

Amusingly, one of their examples (the final Age Control example) is prompted to have an American English accent, but to my ear it sounds like an Australian trying to sound American, haha.

PunchyHamster8 hours ago

Looking forward to my grandma being scammed by one!

jacquesm8 hours ago

So far that seems to be the main use case.

bigyabai7 hours ago

Grandmas should know better, nowadays. It's 2026, half of today's grandparents grew up with QVC and landline psychics.

jakobdabo6 hours ago

Can anyone please provide directions/links to tools that can be run locally, and that take an audio recording of a voice as an input, and produce an output with the same voice saying the same thing with the same intonations, but with a fixed/changed accent?

This is needed for processing an indie game's voice recordings, where the voice actors weren't native speakers and had some accent.

gunalx3 hours ago

Voice actors are so cooked. Some of the demos arguably sounded way better than a lot of indie voice acting.

whinvik8 hours ago

Haha something that I want to try out. I have started using voice input more and more instead of typing and now I am on my second app and second TTS model, namely Handy and Parakeet V3.

Parakeet is pretty good, but there are times it struggles. Would be interesting to see how Qwen compares once Handy has it in.

woodson4 hours ago

This is about text to speech, not speech recognition.

Footprint05216 hours ago

Why Parakeet over Whisper v3 Turbo? Just curious, as someone who heavily uses Whisper; I've seemed to get better results with that.

whinvik5 hours ago

Parakeet is much smaller and for me the perf/speed combo has just been better.

thedangler9 hours ago

Kind of a noob here: how would I implement this locally? How do I pass it audio to process? I'm assuming it's in the API spec?

dust429 hours ago

Scroll down on the Huggingface page, there are code examples and also a link to github: https://huggingface.co/Qwen/Qwen3-TTS-12Hz-0.6B-Base

daliusd6 hours ago

I wanted to try this locally as well, so I asked AI to write a CLI for me: https://github.com/daliusd/qtts

There are some samples. If you have a GPU you might want to fork and improve this; otherwise it's slow, but usable on CPU as well.

swaraj5 hours ago

Tried the voice clone with a 30s Trump clip (with reference text), and it didn't sound like him at all.

dangoodmanUT4 hours ago

Many voices clone better than ElevenLabs, though admittedly at a lower bitrate.

JonChesterfield8 hours ago

I see a lot of references to `device_map="cuda:0"` but no CUDA in the GitHub repo. Is the complete stack FlashAttention plus this Python plus the weights file, or does one need vLLM running as well?

sails6 hours ago

Any recommendations for an iOS app to test models like this? There are a few good ones for text gen, and it’s a great way to try models

bigyabai5 hours ago

Besides UTM, no.

albertwang10 hours ago

Great news, this looks great! Is it just me, or do most of the English audio samples sound like anime voices?

bityard8 hours ago

Well, if you look at the prompts, they are basically told to sound like that.

And if you ask me, I think these models were trained on tween fiction podcasts. (My kids listen to a lot of these and dramatic over-acting seems to be the industry standard.)

Also, their middle-aged adult with an "American English" accent sounds unlike any American I've ever met. More like a bad Sean Connery impersonator.

rapind10 hours ago

> do most of the english audio samples sound like anime voices?

100% I was thinking the same thing.

reactordev9 hours ago

The real value I see is being able to clone a voice and change timbre and characteristics of the voice to be able to quickly generate voice overs, narrations, voice acting, etc. It's superb!

devttyeu10 hours ago

Also like some popular youtubers and popular speakers.

pixl9710 hours ago

Hmm, wonder where they got their training data from?

thehamkercat10 hours ago

even the Japanese audio samples sound like anime

htrp9 hours ago

Subbed audio training data (much better than CC data) works better.

indigodaddy10 hours ago

How does the cloning compare to pocket TTS?

quinncom4 hours ago

Pocket TTS is much smaller: 100M parameters versus 600–1800M.

indigodaddy22 minutes ago

Ah right, so I guess Qwen3-TTS isn't going to work CPU-only like Pocket TTS can(?)

andhuman7 hours ago

It’s uncanny good. I prefer it to pocket, but then again pocket is much smaller and for realtime streaming.

ideashower9 hours ago

Huh. One of the English Voice Clone examples features Obama.

illwrks3 hours ago

I think the other sounds like Steve Jobs - I could be wrong though!

subscribed5 hours ago

Distinct, characteristic voice. My first to play with will be Morgan Freeman.

salzig8 hours ago

So now we're getting every movie in "original voice" but local language? Can't wait to view anime or Bollywood :D

wahnfrieden9 hours ago

How is it for Japanese?

salzig8 hours ago

there is a sample clone -> Trump speaks Japanese.

Edit: "Cross-lingual Voice Clone" https://qwen.ai/blog?id=qwen3tts-0115#voice-clone

lostmsu10 hours ago

I still don't know anyone who has managed to get Qwen3-Omni to work properly on a local machine.