Meta Segment Anything Model 3

178 points | 3 months ago | ai.meta.com
trevorhlynn 2 months ago

This was on the front page for a while last week

https://news.ycombinator.com/item?id=45982073

dang 2 months ago

Thanks! Macroexpanded:

Meta Segment Anything Model 3 - https://news.ycombinator.com/item?id=45982073 - Nov 2025 (133 comments)

p.s. This was lobbed onto the frontpage by the second-chance pool (https://news.ycombinator.com/item?id=26998308) and I need to make sure we don't end up with duplicate threads that way.

stronglikedan 2 months ago

what is old is new again

vessenes 2 months ago

Released last week. Looks like all the weights are now out and published. Don’t sleep on the SAM 3D series — it’s seriously impressive. They have a human pose model which actually rigs and keeps multiple humans in a scene with objects, all from one 2D photo (!), and their straight object 3D model is by far the best I’ve played with: it got a very good lamp with translucency and woven gems into usable shape in under 15 seconds.

Qwuke 2 months ago

Between this and DINOv3, Meta is doing a lot for the SOTA even if Llama 4 came up short compared to the Chinese models.

nl 2 months ago

https://ai.meta.com/blog/sam-3d/ for those interested.

Fraterkes 2 months ago

Are those the actual wireframes they're showing in the demos on that page? As in, do the produced models have "normal" topology? Or are they still just kinda blobby with a ton of polygons?

seanw265 2 months ago

I haven’t tried it myself, but if you’re asking specifically about the human models, the article says they’re not generating raw meshes from scratch. They extract the skeleton, shape, and pose from the input and feed that into their MHR system [0], which is a parametric human model with clean topology.

So the human results should have a clean mesh. But that’s separate from whatever pipeline they use for non-human objects.

[0]: https://github.com/facebookresearch/MHR
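
(To illustrate the parametric idea, here's a toy numpy sketch. The sizes and bases below are made up, not MHR's actual API; the point is that topology is fixed up front and only low-dimensional parameters vary per person.)

    import numpy as np

    N_VERTS, N_SHAPE, N_POSE = 6890, 10, 72  # illustrative sizes (SMPL-like)

    rng = np.random.default_rng(0)
    template = rng.standard_normal((N_VERTS, 3))                     # mean body mesh
    shape_basis = 0.01 * rng.standard_normal((N_SHAPE, N_VERTS, 3))  # learned blend shapes
    pose_basis = 0.001 * rng.standard_normal((N_POSE, N_VERTS, 3))   # pose correctives

    def body_mesh(betas, thetas):
        # vertices = template + shape offsets + pose-dependent offsets;
        # a real model would also apply skinning to pose the joints
        v = template + np.einsum("s,svc->vc", betas, shape_basis)
        return v + np.einsum("p,pvc->vc", thetas, pose_basis)

    verts = body_mesh(np.zeros(N_SHAPE), np.zeros(N_POSE))
    print(verts.shape)  # (6890, 3): same clean topology for every output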

vessenes 2 months ago

I’ve only used the playground. But I think they are actual meshes: they don’t have any of the weird splat noise at the edge of the objects, and they don’t seem to show the lighting artifacts typical of splat rendering.

daemonologist 2 months ago

For the objects I believe they're displaying Gaussian splats in the demo, but the model itself can also produce a proper mesh. The human poses are meshes (it's posing and adjusting a pre-defined parametric model).

retinaros 2 months ago

I looked quickly, but it does not generate a 3D model file, right?

enoch2090 2 months ago

Surprisingly, SAM3 works poorly on engineering drawings, while SAM2 kinda works, and VLMs like Qwen3-VL work as well

zubiaur 2 months ago

Had good luck with Gemini 2.5; SAM3 failed miserably with PIDs.

retinaros 2 months ago

Yeah, I tried too. I'm trying to fine-tune it on PIDs.

enoch2090 2 months ago

Looking forward to your progress! Just checked the paper, and it says the underlying backbone is still DETR. My guess is that SAM3 uses more video frames during training, which diluted the sparse engineering-drawing-like data.

the_duke 2 months ago

Side question: what are the current go-to open models for image captioning and for building image-embedding DBs, with somewhat reasonable hardware requirements?

daemonologist 2 months ago

For pure image embedding, I find DINOv3 to be quite good. For multimodal embedding, maybe RzenEmbed. For captioning I would use a regular multimodal LLM, Qwen 3 or Gemma 3 or something, if your compute budget allows.
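
For reference, a minimal embedding sketch via the DINOv2 torch.hub entry point (I believe the DINOv3 repo follows a similar pattern, but check it for the exact model names and gated weights):

    import torch
    from PIL import Image
    from torchvision import transforms

    model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14").eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(path):
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        f = model(x).squeeze(0)  # one global feature vector per image
        return f / f.norm()      # L2-normalize for cosine search

    # index the vectors with FAISS/hnswlib/pgvector for a searchable image DB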

NitpickLawyer 2 months ago

Try any of the Qwen3-VL models. They have 8B, 4B, and 2B models in this family.
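
Something like this should work for captioning (the model id and pipeline support are my assumptions; it needs a recent transformers, and the same pattern works for Qwen2-VL-class models):

    from transformers import pipeline

    # assumes a transformers version with Qwen3-VL support; model id may differ
    captioner = pipeline("image-text-to-text", model="Qwen/Qwen3-VL-2B-Instruct")

    messages = [{
        "role": "user",
        "content": [
            {"type": "image", "url": "photo.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }]
    out = captioner(text=messages, max_new_tokens=64)
    print(out[0]["generated_text"][-1]["content"])  # the assistant's caption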

Glemkloksdjf 2 months ago

I would suggest YOLO. Depending on your domain, you might also fine-tune these models. It's relatively easy, since they are not big LLMs but either image-classification or bounding-box models.

I would recommend bounding boxes.
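
For example, with the Ultralytics implementation (one common YOLO flavor; data.yaml and the image path are placeholders):

    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO("yolov8n.pt")  # small pretrained detector

    # fine-tune on your own boxes; data.yaml points at images + label files
    model.train(data="data.yaml", epochs=50, imgsz=640)

    # inference: each result carries boxes, classes and confidences
    results = model("example.jpg")
    for box in results[0].boxes:
        print(int(box.cls), float(box.conf), box.xyxy.tolist())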

jabron 2 months ago

What do you mean "bounding boxes"? They were talking about captions and embeddings, so a vision language model is required.

Glemkloksdjf 2 months ago

I suggested YOLO and non-LLM vision models as a much faster alternative.

Otherwise, CLIP would be the other option besides a big vision-language model.
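
The CLIP route, as a minimal sketch (zero-shot labels instead of free-form captions; photo.jpg is a placeholder):

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("photo.jpg").convert("RGB")
    labels = ["a dog", "a cat", "a car"]
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

    with torch.no_grad():
        out = model(**inputs)
    probs = out.logits_per_image.softmax(dim=-1)  # similarity over the labels
    print(dict(zip(labels, probs[0].tolist())))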

smallerize 2 months ago

Which YOLO?

Glemkloksdjf 2 months ago

Any current one. They're easy to use, and you can just benchmark them yourself.

I'm using the small and medium variants.

The code for using them is very short and easy to use. You can also use ChatGPT to generate small experiments to see what fits your case better.

throwaway314155 2 months ago

+1

phkahler 2 months ago

Which (if any) of these models could run on a Raspberry Pi for object recognition at several FPS?

aliljet 2 months ago

I wonder how effective this is in medical scenarios? Segmenting organs and tumors in CT scans or MRIs?

colkassad 2 months ago

Been waiting days to get approval to download this from huggingface. What's up with that?

observationist 2 months ago

Alternative downloads exist. You can find torrents and match checksums against the HF downloads, but there are also mirrors and clones right there on HF which you can download without even having to log in.
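
Something along these lines works with huggingface_hub (the repo id and filename here are placeholders, not a real mirror):

    import hashlib
    from huggingface_hub import snapshot_download

    path = snapshot_download(repo_id="some-mirror/sam3")  # placeholder repo id

    def sha256(p):
        h = hashlib.sha256()
        with open(p, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # compare against the checksums shown on the official repo's file pages
    print(sha256(f"{path}/model.safetensors"))  # placeholder filename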

colkassad 2 months ago

Thanks, got it and it's working wonders for my use case.

knicholes 2 months ago

I was approved within about 10 minutes for "Segment Anything 3"

tschellenbach 2 months ago

Same here, didn't get approval.

shashanoid 2 months ago

I miss the old Segment Anything page; I used it a lot. I found this UI very complex to use.

bradyriddle 2 months ago

Same.

Check out https://github.com/MiscellaneousStuff/meta-sam-demo

It's a rip of the previous SAM playground. I use it for a bunch of things.

SAM 3 is incredible. I'm surprised it's not getting more attention.

stronglikedan 2 months ago

> I'm surprised it's not getting more attention.

Remember, it's not the idea, it's the marketing!

vanjoe 2 months ago

For a long time I've wanted to use something like this to remove advertisements from hockey games. The moving ads on the boards are really annoying. Maybe I'll get around to actually doing that one of these days.

cheesecompiler 2 months ago

This would be convenient for post-production and video editing, e.g. to aid colour grading in DaVinci Resolve. Currently a lot of manual labour goes into tracking and hand-masking in grading.

maelito 2 months ago

I wonder if this could be used to track an object's speed, e.g. a vehicle on a road. It would need to recognize shapes, e.g. a car model or the average size of a bike, to establish real-world scale and guess a speed.
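
Back-of-envelope version of that idea (all numbers made up; this ignores perspective and camera angle):

    CAR_LENGTH_M = 4.5      # assumed real length of a typical car
    car_length_px = 150     # pixels the tracked car spans in the frame
    fps = 30                # video frame rate

    metres_per_pixel = CAR_LENGTH_M / car_length_px   # 0.03 m/px

    displacement_px = 20    # centroid shift of the tracked mask between frames
    speed_mps = displacement_px * metres_per_pixel * fps
    print(f"{speed_mps:.1f} m/s = {speed_mps * 3.6:.0f} km/h")  # 18.0 m/s = 65 km/h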

Workaccount2 2 months ago

I do a test on multimodal LLMs where I show them a dog with 5 legs and ask them to count how many legs the dog has. So far none of them can do it. They all say "4 legs".

Segment Anything, however, was able to segment all 5 dog legs when prompted to. That means Meta is doing something else under the hood here, and it may lend itself to a very powerful future LLM.

Right now some of the biggest complaints people have about LLMs stem from their incompetence at processing visual data. Maybe Meta is onto something here.

jampekka 2 months ago

Segmentation doesn't need to count legs. I'd guess something like YOLO could segment five-legged dogs too.
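
E.g. with an Ultralytics segmentation model (a stock COCO checkpoint has no "leg" class, so this assumes a model fine-tuned with one; the counting logic is the point):

    from ultralytics import YOLO

    model = YOLO("yolov8n-seg.pt")          # instance-segmentation variant
    results = model("five_legged_dog.jpg")  # placeholder image

    masks = results[0].masks  # one mask per detected instance, or None
    print(0 if masks is None else len(masks.data))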

chompychop 2 months ago

YOLO is not a segmentation model.

chompychop 2 months ago

Thanks! TIL there's a class of segmentation models with the YOLO naming scheme.

lucasban 2 months ago

I thought it was a joke about YAML

Der_Einzige 2 months ago

Lol you obviously haven't seen what cheats for FPS games look like in the last 3 years.

https://github.com/Babyhamsta/Aimmy

nerdsniper 2 months ago

You don’t need segmentation to count legs. Object detection can do that. DeepLabCut from 2020 perhaps.

PunchTornado 2 months ago

I doubt that Gemini 3 cannot do it.