Gemini 3.1 Pro

410 points | 2 hours ago | deepmind.google
simonw2 hours ago

Pretty great pelican: https://simonwillison.net/2026/Feb/19/gemini-31-pro/ - it took over 5 minutes, though I think that's because they're having performance teething problems on launch day.

embedding-shape2 hours ago

It's an excellent demonstration of the main issue I have with the Gemini family of models: they always go "above and beyond" and do a lot of stuff, even if I explicitly prompt against it. In this case, most of the SVG ends up consisting not just of a bike and a pelican, but also clouds, a sun, a hat on the pelican, and so much more.

Exactly the same thing happens when you code: it's almost impossible to get Gemini to not do "helpful" drive-by-refactors, and it keeps adding code comments no matter what I say. Very frustrating experience overall.

mullingitover1 hour ago

> it's almost impossible to get Gemini to not do "helpful" drive-by-refactors

Just asking "Explain what this service does?" turns into

[No response for three minutes...]

+729 -522

cowmoo7281 hour ago

it's also so aggressive about taking out debug log statements and in-progress code. I'll ask it to fill in a new function somewhere else and it will remove all of the half written code from the piece I'm currently working on.

BartShoot15 minutes ago

if you had to ask, it obviously needs to refactor the code for clarity so the next person doesn't need to ask

kylec1 hour ago

"I don't know what did it, but here's what it does now"

quotemstr35 minutes ago

What. You don't have yours ask for edit approval?

enobrev2 hours ago

I have the same issue. Even when I ask it to do code-reviews and very explicitly tell it not to change files, it will occasionally just start "fixing" things.

mikepurvis48 minutes ago

I find Copilot leans the other way. It'll myopically focus its work on the exact function I point it at, even when it's clear that adding a new helper would allow sharing logic with the function right beside it.

Overall, I think it's probably better that it stay focused, and allow me to prompt it with "hey, go ahead and refactor these two functions". At the same time, really the ideal would be to have it proactively ask, or even pitch the refactor as a colleague would, like "based on what I see of this function, it would make most sense to XYZ, do you think that makes sense? <sure go ahead> <no just keep it a minimal change>"

Or perhaps even better, simply pursue both changes and present them as A/B options for the human reviewer to select between.

neya47 minutes ago

> it's almost impossible to get Gemini to not do "helpful" drive-by-refactors

This has not been my experience. I do Elixir primarily, and Gemini has helped me build some really cool products and handle massive refactors along the way. It would even pick up security issues and potential optimizations as it went.

What HAS been a constant issue, though, is that the model will randomly not respond at all and some random error will occur, which is embarrassing for a company like Google with the infrastructure they own.

embedding-shape41 minutes ago

Out of curiosity, do you have any public projects (with public source code) you've made exclusively with Gemini, so one could take a look? I've tried a bunch of times to use Gemini to at least finish something small, but I always end up sufficiently frustrated to abandon it, as the instruction-following seems so bad.

JLCarveth20 minutes ago

Every time I have tried using `gemini-cli` it just thinks endlessly and never actually gives a response.

tyfon1 hour ago

I was using Gemini Antigravity in opencode a few weeks ago, before they started banning everyone for that, and I got into the habit of writing "do x, then wait for instructions".

That helped quite a bit, but it would still go off on its own from time to time.

gavinray2 hours ago

Do you have Personalization Instructions set up for your LLM models?

You can make their responses fairly dry/brief.

embedding-shape2 hours ago

I'm mostly using them via my own harnesses, so I have full control of the system prompts and so on. And no matter what I try, Gemini keeps "helpfully" adding code comments every now and then. With every other model, "- Don't add code comments" tends to be enough, but with Gemini I'm not sure how I could stop the comments from eventually appearing.

metal_am1 hour ago

I'd love to hear some examples!

gavinray1 hour ago

I use LLM's outside of work primarily for research on academic topics, so mine is:

  Be a proactive research partner: challenge flawed or unproven ideas with evidence; identify inefficiencies and suggest better alternatives with reasoning; question assumptions to deepen inquiry.

zengineer1 hour ago

true, whenever I ask Gemini to help me with a prompt for generating an image of XYZ, it generates the image.

MrCheeze57 minutes ago

Does anyone understand why LLMs have gotten so good at this? Their ability to generate accurate SVG shapes seems to greatly outshine what I would expect, given their mediocre spatial understanding in other contexts.

pknerd24 minutes ago

> Does anyone understand why LLMs have gotten so good at this?

Added more IF/THEN/ELSE conditions.

kridsdale315 minutes ago

More wires and jumpers on the breadboard.

simonw37 minutes ago

My best guess is that the labs put a lot of work into HTML and CSS spatial stuff because web frontend is such an important application of the models, and those improvements leaked through to SVG as well.

WarmWash2 hours ago

Less pretty and more practical, it's really good at outputting circuit designs as SVG schematics.

https://www.svgviewer.dev/s/dEdbH8Sw

InitialLastName1 hour ago

I don't know how much of this is the prompt and how much was the output, but that's a pretty bad schematic (for both aesthetic and circuit-design reasons).

WarmWash39 minutes ago

The prompts were doing the design (reference voltage, hysteresis, output stage, all the maths), and then the SVG came from asking the model to take all that plus the current BOM and make an SVG schematic of it. In the past, models would just output totally incoherent messes of lines and shapes.

I did a larger circuit too that this is part of, but it's not really for sharing online.

svnt54 minutes ago

Yes but you concede it is a schematic.

0_____01 hour ago

that's pretty amazing for an LLM but as an EE, if my intern did this i would sigh inwardly and pull up some existing schematics for some brief guidance on symbol layout.

sam_14212 hours ago

Models are soon going to start benchmaxxing generating SVGs of pelicans on bikes

cbsks1 hour ago

That’s Simon’s goal. “All I’ve ever wanted from life is a genuinely great SVG vector illustration of a pelican riding a bicycle. My dastardly multi-year plan is to trick multiple AI labs into investing vast resources to cheat at my benchmark until I get one.”

https://simonwillison.net/2025/Nov/13/training-for-pelicans-...

travisgriggs8 minutes ago

So once that's achieved, I wonder how well it deals with unsuspected variations. E.g.

"Give me an illustration of a bicycle riding by a pelican"

"Give me an illustration of a bicycle riding over a pelican"

"Give me an illustration of a bicycle riding under a flying pelican"

So on and so forth. Or will it start to look like the Studio C sketch about Lobster Bisque: https://youtu.be/A2KCGQhVRTE

embedding-shape2 hours ago

Soon? I'd be willing to bet it's been included in the training set for at least 6 months by now. Not so obviously that it always generates perfect pelicans on bikes, but enough for the "minibench" to be less useful today than in the past.

jsheard2 hours ago

Simon's been doing this exact test for nearly 18 months now; if vendors wanted to benchmaxx it they've had more than enough time to do so already.

stri8ted1 hour ago

Exactly. As far as I'm concerned, the benchmark is useless. It's way too easy and rewarding to train on it.

Legend244058 minutes ago

Y'all are way too skeptical, no matter what cool thing AI does you'll make up an excuse for how they must somehow be cheating.

pixl9756 minutes ago

I mean if you want to make your own benchmark, simply don't make it public and don't do it often. If your salamander on skis or whatever gets better with time it likely has nothing to do with being benchmaxxed.

tarr1126 minutes ago

What do you think this particular prompt is evaluating for?

The more popular these particular evals are, the more likely the model will be trained for them.

SoKamil57 minutes ago

It seems they trained the model to output good SVGs.

In their blog post[1], the first use case they mention is SVG generation. Thus, it might not be any indicator at all anymore.

[1] https://blog.google/innovation-and-ai/models-and-research/ge...

Arcuru2 hours ago

Did you stop using the more detailed prompt? I think you described it here: https://simonwillison.net/2025/Nov/18/gemini-3/

simonw35 minutes ago

It seems to be having capacity problems right now but I'll run that as soon as I can get it to work.

AmazingTurtle55 minutes ago

At this point, the pelican benchmark has become so widely used that there must be high-quality pelicans in the dataset, I presume. What about generating an okapi on a bicycle instead?

tromp40 minutes ago

Or, even more challenging, an okapi on a recumbent?!

bredren2 hours ago

What is that, a snack in the basket?

sigmar2 hours ago

"integrating a bicycle basket, complete with a fish for the pelican... also ensuring the basket is on top of the bike, and that the fish is correctly positioned with its head up... basket is orange, with a fish inside for fun."

how thoughtful of the ai to include a snack. truly a "thanks for all the fish"

defen1 hour ago

A pelican already has an integrated snack-holder, though. It wouldn't need to put it in the basket.

WarmWash2 hours ago

A fish for the road

steve_adams_862 hours ago

Ugh, the gears and chain don't mesh and there's no sprocket on the rear hub

But seriously, I can't believe LLMs are able to one-shot a pelican on a bicycle this well. I wouldn't have guessed this was going to emerge as a capability from LLMs 6 years ago. I see why it does now, but... It still amazes me that they're so good at some things.

emp173441 hour ago

Is this capability “emergent”, or do AI firms specifically target SVG generation in order to improve it? How would we be able to tell?

simonw34 minutes ago

Google specifically boast about their SVG performance in the announcement post: https://blog.google/innovation-and-ai/models-and-research/ge...

You can try any combination of animal on vehicle to confirm that they likely didn't target pelicans directly though.

0_____01 hour ago

next time you host a party, have people try to draw a bicycle on your whiteboard (you have a whiteboard in your house right? you should, anyway...)

human adults are generally quite bad at drawing them, unless they spend a lot of time actually thinking about bicycles as objects

emp1734452 minutes ago

What’s your point? Yes, humans fail sometimes, as do AI models. Are you trying to imply that, in light of this, AI is now as capable as human beings? If so, that conclusion doesn’t follow logically.

0_____042 minutes ago

it's not a loaded point, i just think it's funny that humans typically cannot one-shot this. and it will make your friends laugh

HPsquared1 hour ago

And the left leg is straight while the right leg is bent.

EDIT: And the chain should pass behind the seat stay.

infthi2 hours ago

Wonder when we'll get something other than a side view

mikepurvis1 hour ago

That would be especially challenging for vector output. I tried just now on ChatGPT 5.2 to jump straight to an image, with this prompt:

"make me a cartoon image of a pelican riding a bicycle, but make it from a front 3/4 view, that is riding toward the viewer."

The result was basically a head-on view, but I expect if you then put that back in and said, "take this image and vectorize it as an SVG" you'd have a much better time than trying to one-shot the SVG directly from a description.

... but of course, if that's so, then what's preventing the model from being smart enough to identify this workflow and follow it on its own to get the task completed?

calny2 hours ago

Great pelican but what’s up with that fish in the basket?

nicr_227 minutes ago

Yeah, why only _one_ fish?

It's obvious the pelican is riding long distance; no way a single fish is sufficiently energy dense for more than a few miles.

Can't the model do basic math???

coldtea2 hours ago

It's a pelican. What do you expect a pelican to have in his bike's basket?

It's a pretty funny and coherent touch!

embedding-shape59 minutes ago

> What do you expect a pelican to have in his bike's basket?

Probably stuff it cannot fit in the gullet, or doesn't want there (think trash). I wouldn't expect a pelican to stash fish there, that's for sure.

kridsdale313 minutes ago

You never travel with a snack fish for later on? He's going to be burning calories.

gavinray2 hours ago

Where else are cycling pelicans meant to keep their fish?

calny50 minutes ago

I get it, I just meant the fish is poorly done, when I'd have guessed it would be a relatively simple part. Maybe the black-dot eye is misplaced, idk.

mohsen12 hours ago

is there something in your prompt about hats? why is the pelican always wearing a hat recently?!

bigfishrunning2 hours ago

At this point, I think maybe they're training on all of the previous pelicans, and one of them decided to put a hat on it?

Disclaimer: This is an unsubstantiated claim that I made up

benatkin51 minutes ago

I used the AI studio link and tried running it with the temperature set to 1.75: https://jsbin.com/locodaqovu/edit?html,output

xnx2 hours ago

Not even animated? This is 2026.

readitalready2 hours ago

Jeff Dean just posted an animated version: https://x.com/JeffDean/status/2024525132266688757

benbreen1 hour ago

One underrated thing about the recent frontier models, IMO, is that they are obviating the need for image gen as a standalone thing. Opus 4.6 (and apparently 3.1 Pro as well) doesn't have the ability to generate images but it is so good at making SVG that it basically doesn't matter at this point. And the benefit of SVG is that it can be animated and interactive.

I find this fascinating because it literally just happened in the past few months. Up until ~summer of 2025, the SVG these models made was consistently buggy and crude. By December of 2026, I was able to get results like this from Opus 4.5 (Henry James: the RPG, made almost entirely with SVG): https://the-ambassadors.vercel.app

And now it looks like Gemini 3.1 Pro has vaulted past it.

cachius54 minutes ago

2025 that is

bigfishrunning2 hours ago

That Ostrich Tho

cachius45 minutes ago

That Tires Tho

saberience2 hours ago

I hope we keep beating this dead horse some more, I'm still not tired of it.

xrd32 minutes ago

These models are so powerful.

It's totally possible to build entire software products in a fraction of the time it took before.

But, reading the comments here, the behavior from one point version to another (not even a major version, mind you) seems very divergent.

It feels like we are now able to manage incredibly smart engineers for a month at the price of a good sushi dinner.

But it also feels like you have to be diligent about adopting new models (even same family and just point version updates) because they operate totally differently regardless of your prompt and agent files.

Imagine managing a team of software developers where every month it was an entirely new team with radically different personalities, career experiences and guiding principles. It would be chaos.

I suspect that older models will be deprecated quickly and unexpectedly, or, worse yet, will be swapped out with subtly different behavioral characteristics without notice. It'll be quicksand.

simonw21 minutes ago

I had an interesting experience recently where I ran Opus 4.6 against a problem that o4-mini had previously convinced me wasn't tractable... and Opus 4.6 found me a great solution. https://github.com/simonw/sqlite-chronicle/issues/20

This inspired me to point the latest models at a bunch of my older projects, resulting in a flurry of fixes and unblocks.

jama21126 minutes ago

Yeah I keep maintaining a specific app I built with gpt 5.1 codex max with that exact model because it continues to work for the requests I send it, and attempts with other models even 5.2 or 5.3 codex seemed to have odd results. If I were superstitious I would say it’s almost like the model that wrote the code likes to work on the code better. Perhaps there’s something about the structure it created though that it finds easier to understand…

seizethecheese30 minutes ago

> It feels like we are now able to manage incredibly smart engineers for a month at the price of a good sushi dinner.

In my experience it’s more like idiot savant engineers. Still remarkable.

WarmWash22 minutes ago

I have long suspected that a large part of people's distaste for given models comes from their comfort with their daily driver.

Which I guess feeds back to prompting still being critical for getting the most out of a model (outside of subjective stylistic traits the models have in their outputs).

worldsavior23 minutes ago

Sushi dinner? What are you building with AI, a calculator?

spankalee1 hour ago

I hope this works better than 3.0 Pro

I'm a former Googler and know some people near the team, so I mildly root for them to at least do well, but Gemini is consistently the most frustrating model I've used for development.

It's stunningly good at reasoning, design, and generating the raw code, but it just falls over a lot when actually trying to get things done, especially compared to Claude Opus.

Within VS Code Copilot, Claude will have a good mix of thinking streams and responses to the user. Gemini will almost completely use thinking tokens, and then just do something but not tell you what it did. If you don't look at the thinking tokens you can't tell what happened, but the thinking token stream is crap. It's all "I'm now completely immersed in the problem...". Gemini also frequently gets twisted around, stuck in loops, and unable to make forward progress. It's bad at using tools and tries to edit files in weird ways instead of using the provided text editing tools. In Copilot, it won't stop and ask clarifying questions, though in Gemini CLI it will.

So I've tried to adopt a plan-in-Gemini, execute-in-Claude approach, but while I'm doing that I might as well just stay in Claude. The experience is just so much better.

For as much as I hear that Google's pulling ahead, Anthropic seems to me to be the one pulling ahead, from a practical POV. I hope Googlers on Gemini are actually trying these things out in real projects, not just one-shotting a game and calling it a win.

Oras8 minutes ago

Glad I’m not the only one who experienced this. I have a paid antigravity subscription and most of the time I use Claude models due to the exact issues you have pointed out.

s3p25 minutes ago

Don't get me started on the thinking tokens. Since 2.5P the thinking has been insane. "I'm diving in to the problem", "I'm fully immersed" or "I'm meticulously crafting the answer"

slopinthebag39 minutes ago

Hmm, interesting..

My workflow is to basically use it to explain new concepts, generate code snippets inline or fill out function bodies, etc. Not really generating code autonomously in a loop. Do you think it would excel at this?

jbellis41 minutes ago

yeah, g3p is as smart as or smarter than the other flagships, but it's just not reliable enough; it will go into "thinking loops" and burn tens of thousands of tokens repeating itself.

https://blog.brokk.ai/gemini-3-pro-preview-not-quite-baked/

hopefully 3.1 is better.

knollimar1 hour ago

Is the thinking token stream obfuscated?

I'm fully immersed

orbital-decay38 minutes ago

It's just a summary generated by a really tiny model. I guess it's also an ad-hoc way to obfuscate it, yes. In particular, they're hiding prompt injections they're dynamically adding sometimes. The actual CoT is hidden and entirely different from that summary. It's not very useful for you as a user, though (neither is the summary).

minimaxir2 hours ago

Price is unchanged from Gemini 3 Pro: $2/M input, $12/M output. https://ai.google.dev/gemini-api/docs/pricing

Knowledge cutoff is unchanged at Jan 2025. Gemini 3.1 Pro supports "medium" thinking where Gemini 3 did not: https://ai.google.dev/gemini-api/docs/gemini-3

Compare to Opus 4.6's $5/M input, $25/M output. If Gemini 3.1 Pro does indeed have similar performance, the price difference is notable.
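
As a quick worked example at those list prices (the token counts here are arbitrary, just for illustration):

    # USD per million tokens, from the pricing pages linked above
    def cost(tokens_in, tokens_out, price_in, price_out):
        return (tokens_in * price_in + tokens_out * price_out) / 1_000_000

    # Hypothetical request: 100k input tokens, 10k output tokens
    gemini = cost(100_000, 10_000, 2, 12)   # $0.32
    opus   = cost(100_000, 10_000, 5, 25)   # $0.75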

rancar255 minutes ago

If we don't see a huge gain in long-horizon thinking reflected in Vendor-Bench 2, I'm not going to switch away from CC. Until Google can beat Anthropic on that front, Claude Code paired with the top long-horizon models will continue to pull away with full-stack optimizations at every layer.

jbellis43 minutes ago

still no minimal reasoning in G3.1P :(

(this is why Opus 4.6 is worth the price -- turning off thinking makes it 3x-5x faster but it loses only a small amount of intelligence. nobody else has figured that out yet)

plaidfuji1 hour ago

Sounds like the update is mostly system prompt + changes to orchestration / tool use around the core model, if the knowledge cutoff is unchanged

brokencode30 minutes ago

This keeps getting repeated for all kinds of model releases, but isn’t necessarily true. It’s possible to make all kinds of changes without updating the pretraining data set. You can’t judge a model’s newness based on what it knows about.

sigmar1 hour ago

knowledge cutoff staying the same likely means they didn't do a new pre-train. We already knew there were plans from deepmind to integrate new RL changes in the post training of the weights. https://x.com/ankesh_anand/status/2002017859443233017

1024core49 minutes ago

It got the car wash question perfectly:

You are definitely going to have to drive it there—unless you want to put it in neutral and push!

While 200 feet is a very short and easy walk, if you walk over there without your car, you won't have anything to wash once you arrive. The car needs to make the trip with you so it can get the soap and water.

Since it's basically right next door, it'll be the shortest drive of your life. Start it up, roll on over, and get it sparkling clean.

Would you like me to check the local weather forecast to make sure it's not going to rain right after you wash it?

1024core21 minutes ago

Some people are suggesting that this might actually be in the training set. Since I can't rule that out, I tried a different version of the question, with an elephant instead of a car:

> It's a hot and dusty day in Arizona and I need to wash my elephant. There's a creek 300 feet away. Should I ride my elephant there or should I just walk there by myself?

Gemini said:

That sounds like quite the dusty predicament! Given the options, you should absolutely ride (or lead) your elephant to the creek.

Here is the straightforward reality of the situation:

    The Logistical Nightmare of Walking Alone: If you walk to the creek by yourself, your elephant stays at your starting point. Elephants require a massive amount of water for a bath. Unless you are planning an extreme workout carrying hundreds of buckets of water 300 feet back through the Arizona heat, you're going to have a very bad time—and the elephant will stay dusty.

    The Perks of Going Together: 300 feet is a very short, easy stroll for an elephant. Plus, elephants generally love water and mud! Letting your elephant get right into the creek will make the washing process much easier for you and much more enjoyable for them. They can even use their trunk to help hose themselves down.
Grab some elephant-safe soap and head down to the water together!

Would you like some tips on the best methods and tools for scrubbing down an elephant?

kovek11 minutes ago

I think that semantically this question is too similar to the car wash one. Changing subjects from car to elephant and car wash to creek does not change the fact that they are subjects. The embeddings will be similar in that dimension.

1024core8 minutes ago

I understand. But isn't it a sign of "smarts" that one can generalize from analogous tasks?

embedding-shape26 minutes ago

GPT-OSS-120b, a (downloadable) model released more than half a year ago, also gets that right; I'm not sure this is such a great success.

> Would you like me to check the local weather forecast to make sure it's not going to rain right after you wash it?

Regardless of what I do, the days I decide to wash my car, it ALWAYS rains the day after, even if I look at the weather report and it says sunny. Cute that Gemini thinks it can predict it better than the universe's will.

steve197740 minutes ago

The question of course is, did it get the car wash question right because it is "the car wash question" or because it could actually infer why the car needed to be there?

embedding-shape27 minutes ago

Wasn't that "twoot" (or whatever Mastodon calls them) made just a week ago? Unlikely to have been in the training dataset of a model becoming available for public use today, unless Google made some serious advancements on the training front.

jama21125 minutes ago

Shouldn’t be too hard to come up with a new unique reasoning question

upmind30 minutes ago

The answer here is why I dislike Gemini: though it gets the correct answer, it's far too verbose.

suddenlybananas41 minutes ago

They probably had time to toss that example in the training soup.

AlphaAndOmega035 minutes ago

Previous models from competitors usually got that correct, and the reasoning versions almost always did.

This kind of reflexive criticism isn't helpful; it's closer to a fully generalized counter-argument against LLM progress, whereas it's obvious to anyone that models today can do things they couldn't do six months ago, let alone 2 years back.

suddenlybananas31 minutes ago

I'm not denying any progress, I'm saying that reasoning failures that are simple which have gone viral are exactly the kind of thing that they will toss in the training data. Why wouldn't they? There's real reputational risks in not fixing it and no costs in fixing it.

takoid2 hours ago

Shared this in the other Gemini Pro 3.1 thread (https://news.ycombinator.com/item?id=47074735) but wanted to share it here as well.

I just tested the "generate an SVG of a pelican riding a bicycle" prompt and this is what I got: https://codepen.io/takoid/pen/wBWLOKj

The model thought for over 5 minutes to produce this. It's not quite photorealistic (some parts are definitely "off"), but it's a significant leap in complexity.

onionisafruit1 hour ago

Good to see it wearing a helmet. Their safety team must be on their game.

BrokenCogs36 minutes ago

Yes but why would a pelican need a helmet? If it falls over it can just fly away... Common sense 1 Gemini 0

nickandbro2 hours ago

Does well on SVGs outside of the "pelican riding on a bicycle" test. Like this prompt:

"create a svg of a unicorn playing xbox"

https://www.svgviewer.dev/s/NeKACuHj

There are still some tweaks needed to the final result, but I am guessing that with the ARC-AGI benchmark jumping so much, the model's visual abilities are allowing it to do this well.

simonw2 hours ago

Interesting how it went a bit more 3D with the style of that one compared to the pelican I got.

andy12_2 hours ago

I'm thinking now that as models get better and better at generating SVGs, there could be a point where we can use them to just make arbitrary UIs and interactive media with raw SVGs in real time (like Flash games).

nickandbro2 hours ago

Or quite literally a game where SVG assets are generated on the fly using this model

roryirvine1 hour ago

On the other hand, creation of other vector image formats (e.g. "create a PostScript file showing a walrus brushing its teeth") hasn't improved nearly so much.

Perhaps they're deliberately optimising for SVG generation.

WarmWash52 minutes ago

3.1 Pro is the first model to correctly count the number of legs on my "five-legged dog" test image. 3.0 Flash was the previous best, getting it after a few prompts of poking. 3.1 got it on the first prompt though, with the prompt being "How many legs does the dog have? Count Carefully".

However, it didn't get it on the first try with the original prompt ("How many legs does the dog have?"). It initially said 4; a follow-up prompt then got it to hesitantly say 5, with the suggestion that one limb must be obfuscated or hidden.

So maybe I'll give it a 90%?

This is without tools as well.

merlindru49 minutes ago

your question may have become part of the training data with how much coverage there was around it. perhaps you should devise a new test :P

WarmWash35 minutes ago

Honestly at this point I have fed this image in so many times on so many models, that it also functions as a test for "Are they training on my image specifically" (they are generally, for sure, but that's along with everything else in the ocean of info people dump in).

I genuinely don't think they are. GPT-5.2 still stands by 4 legs, and OAI has been getting this image consistently for over a year. And 3.1 still fumbled with the harder prompt "How many legs does the dog have?". I needed to add the "count carefully" part to tip it off that something was amiss.

Since it did well I'll make some other "extremely far out of the norm" images to see how it fares. A spider with 10 legs or a fish with two side fins.

gallerdude47 minutes ago

My job may have become part of the training data with how much coverage there is around it. Perhaps another career would be a better test of LLM capabilities.

suddenlybananas41 minutes ago

Have you ever heard of a black swan?

wat1000046 minutes ago

Easy fix, make a new test image with six legs, and watch all the LLMs say it has five.

mijoharas1 hour ago

Gemini 3 is still in preview (limited rate limits) and 2.5 is deprecated (still live but won't be for long).[0]

Are Google planning to put any of their models into production any time soon?

Also somewhat funny that some models are deprecated without a suggested alternative (gemini-2.5-flash-lite). Do they suggest people switch to Claude?

[0] https://ai.google.dev/gemini-api/docs/deprecations

jakejohnson34 minutes ago

I haven't seen any deprecation notices for 2.5 yet, just for 2. I'd expect (and hope) the deprecation timeline for 2.5 is longer since 3.0 is still in preview. Maybe they just default to 1 year here?

> Note: The shutdown dates listed in the table indicate the /earliest/ possible dates on which a model might be retired. We will communicate the exact shutdown date to users with advance notice to ensure a smooth transition to a replacement model.

andrewmutz1 hour ago

I agree completely. I don't know how anyone can be building on these models when all of them are either deprecated or not actually released yet. As someone who has production systems running on the deprecated models, this situation really causes me grief.

vidarh49 minutes ago

This feels very Google

davidguetta1 hour ago

Implementation and Sustainability Hardware: Gemini 3 Pro was trained using Google’s Tensor Processing Units (TPUs). TPUs are specifically designed to handle the massive computations involved in training LLMs and can speed up training considerably compared to CPUs. TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training, which can lead to better model quality. TPU Pods (large clusters of TPUs) also provide a scalable solution for handling the growing complexity of large foundation models. Training can be distributed across multiple TPU devices for faster and more efficient processing.

So Google doesn't use NVIDIA GPUs at all?

dekhn60 minutes ago

When I worked there, there was a mix of training on nvidia GPUs (especially for sparse problems when TPUs weren't as capable), CPUs, and TPUs. I've been gone for a few years but I've heard a few anecdotal statements that some of their researchers have to use nvidia GPUs because the TPUs are busy.

lejalv56 minutes ago

Bla bla bla yada sustainability yada often come with large better growing faster...

It's such an uninformative piece of marketing crap

PunchTornado1 hour ago

no. only tpus

paride57451 hour ago

Another reason to use Gemini then.

Less impact on gamers…

TiredOfLife45 minutes ago

TPUs still use RAM and chip production capacity

Robdel122 hours ago

I really want to use google’s models but they have the classic Google product problem that we all like to complain about.

I am legit scared to log in and use Gemini CLI, because the last time I thought I was using my "free" account allowance via Google Workspace. I ended up spending $10 before realizing it was API billing, and the UI was so hard to figure out I gave up. I'm sure I can spend 20-40 more mins to sort this out, but ugh, I don't want to.

With alllll that said.. is Gemini 3.1 more agentic now? That’s usually where it failed. Very smart and capable models, but hard to apply them? Just me?

alpineman2 hours ago

100% agreed. I wish someone would make a test for how reliably the LLMs follow tool use instructions etc. The pelicans are nice but not useful for me to judge how well a model will slot into a production stack.

embedding-shape1 hour ago

At first, when I got started with using LLMs, I read and analyzed benchmarks, looked at what example prompts people used, and so on. But many times a new model did best at the benchmarks, you'd think it would be better, and then in real work it completely dropped the ball. Since then I've stopped even reading benchmarks; I don't care an iota about them, they always seem more misdirected than helpful.

Today I have my own private benchmarks, with tests I run myself, with private test cases I refuse to share publicly. These have been built up over the last 1-1.5 years; whenever I find something that my current model struggles with, it becomes a new test case to include in the benchmark.

Nowadays it's as easy as `just bench $provider $model` and it runs my benchmarks against it, and I get a score that actually reflects what I use the models for, and it feels like it more or less matches with actually using the models. I recommend people who use LLMs for serious work to try the same approach, and stop relying on public benchmarks that (seemingly) are all gamed by now.

cdelsolar1 hour ago

share

embedding-shape1 hour ago

The harness? Trivial to build yourself, ask your LLM for help, it's ~1000 LOC you could hack together in 10-15 minutes.

As for the test cases themselves, that would obviously defeat the purpose, so no :)
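
For readers tempted by the same approach, here's a minimal sketch of such a harness, assuming an OpenAI-compatible chat endpoint and simple substring scoring. The case-file format and function names are made up for illustration, not embedding-shape's actual setup:

    import json, requests

    def ask(base_url, api_key, model, prompt):
        # Single chat-completion call against an OpenAI-compatible endpoint
        r = requests.post(
            f"{base_url}/v1/chat/completions",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"model": model,
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=300,
        )
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    def bench(base_url, api_key, model, cases_path="cases.json"):
        # Each private case: {"prompt": "...", "expect": "substring that must appear"}
        with open(cases_path) as f:
            cases = json.load(f)
        passed = sum(c["expect"] in ask(base_url, api_key, model, c["prompt"])
                     for c in cases)
        print(f"{model}: {passed}/{len(cases)} passed")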

phamilton1 hour ago

> For those building with a mix of bash and custom tools, Gemini 3.1 Pro Preview comes with a separate endpoint available via the API called gemini-3.1-pro-preview-customtools. This endpoint is better at prioritizing your custom tools (for example view_file or search_code).

It sounds like there was at least a deliberate attempt to improve it.
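
For anyone curious what that looks like in practice, here is a sketch of declaring such a custom tool with the google-genai Python SDK. The declaration shape follows the SDK's standard function-calling API; the schema contents are a made-up example, and the endpoint name comes from the quote above:

    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_KEY")

    # Hypothetical custom tool, mirroring the view_file example in the docs
    view_file = types.FunctionDeclaration(
        name="view_file",
        description="Return the contents of a file in the repository.",
        parameters=types.Schema(
            type=types.Type.OBJECT,
            properties={"path": types.Schema(type=types.Type.STRING)},
            required=["path"],
        ),
    )

    response = client.models.generate_content(
        model="gemini-3.1-pro-preview-customtools",
        contents="Read README.md and summarize it.",
        config=types.GenerateContentConfig(
            tools=[types.Tool(function_declarations=[view_file])],
        ),
    )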

pdntspa2 hours ago

You can delete the billing from a given API key

Stevvo1 hour ago

You could always use it through Copilot. The credits-based billing is pretty simple, without surprise charges.

horsawlarway2 hours ago

So much this.

It's absolutely amazing how hostile Google is to releasing billing options that are reasonable, controllable, or even fucking understandable.

I want to do relatively simple things like:

1. Buy shit from you

2. For a controllable amount (ex - let me pick a limit on costs)

3. Without spending literally HOURS trying to understand 17 different fucking products, all overlapping, with myriad project configs, api keys that should work, then don't actually work, even though the billing links to the same damn api key page, and says it should work.

And frankly - you can't do any of it. No controls (at best delayed alerts). No clear access. No real product differentiation pages. No guides or onboarding pages to simplify the matter. No support. SHIT LOADS of completely incorrect and outdated docs, that link to dead pages, or say incorrect things.

So I won't buy shit from them. Period.

sciencejerk1 hour ago

You think AWS is better?

3form1 hour ago

Exact reason I used none of these platforms for my personal projects, ever.

pdimitar27 minutes ago

Who is comparing to AWS and why? They can both be terrible at the same time, you know.

surgical_fire1 hour ago

May be very silly of me, but I avoid using Gemini on my personal Google account. I use it at work, because my employer provides it.

I am scared some automated system may just decide I am doing something bad and terminate my account. I have been moving important things to Proton, but there is some stuff that I couldn't change that would cause me a lot of annoyance. It's not trivial to set up an alternative account just for Gemini, because my Google account is basically on every device I use.

I mostly use LLMs as coding assistant, learning assistant, and general queries (e.g.: It helped me set up a server for self hosting), so nothing weird.

CamperBob28 minutes ago

For what it's worth, there was an (unfortunately unsuccessful) HN submission from a guy who got his Gemini account banned, apparently without losing his whole Google account: https://news.ycombinator.com/item?id=47007906

himata41132 hours ago

use openrouter instead

panarchy11 minutes ago

I had it make a simple HTML/JS canvas game (think Flappy Bird), and while it did some things mildly better (and others noticeably worse), it still fell into the exact same traps as earlier models. It also had a lot of issues generating valid JS in parts, and asking it what the code should be just made it endlessly generate the same exact incorrect code.

tenpoundhammer1 hour ago

In an attempt to get outside of benchmark gaming I had it make Platypus on a Tricycle. It's not as good as pelican on bicycle. https://www.svgviewer.dev/s/BiRht5hX

textlapse52 minutes ago

To really confuse it, ask it to take that tricycle with the platypus on it to a car wash.

dinosor1 hour ago

For a moment I assumed the output would look like Perry the Platypus from the Disney (I think?) show. It's surprising to me (as a layman) that a show with lots of media that would've made it into the training corpus didn't show up.

0_____01 hour ago

that's better than i thought it would be

the_duke2 hours ago

Gemini 3 is pretty good, even Flash is very smart for certain things, and fast!

BUT it is not good at all at tool calling and agentic workflows, especially compared to the recent two mini-generations of models (Codex 5.2/5.3, the last two versions of Anthropic models), and also fell behind a bit in reasoning.

I hope they manage to improve things on that front, because then Flash would be great for many tasks.

chermi2 hours ago

You can really notice the tool use problems. They gotta get on that. The agent trend seems real, and powerful. They can't afford to fall behind on it.

verdverm1 hour ago

I don't really have tool usage issues beyond what I'd file under "doesn't follow system prompt instructions consistently".

there are these times where it puts a prefix on all function calls, which is weird and I think a hallucination, so maybe that one counts

3.1 hopefully fixes that

anthonypasq2 hours ago

yeah, it seems to me like Gemini is a little behind on the current RL patterns, and also they don't seem interested in really creating a dedicated coding model. I think they have so much product surface (Search, AI mode, Gmail, YouTube, Chrome, etc.) that they are prioritizing making the model very general. but who knows, I'm just talking out of my ass.

verdverm2 hours ago

These improvements are one of the things specifically called out on the submitted page

spwa42 hours ago

In other words: they just need to motivate their employees while giving in to finance's demands to fire a few thousand every month or so ...

And don't forget, it's not just direct motivation. You can make yourself indispensable by sabotaging or at least not contributing to your colleagues' efforts. Not helping anyone, by the way, is exactly what your managers want you to do. They will decide what happens, thank you very much, and doing anything outside of your org ... well there's a name for that, isn't there? Betrayal, or perhaps death penalty.

syspec13 minutes ago

Does anyone know if this is in GA immediately or if it is in preview?

On our end, Gemini 3.0 Preview was very flakey (not model quality, but as in the API responses sometimes errored out), making it unreliable.

Does this mean that 3.0 is now GA at least?

mixel2 hours ago

Google seems to really be pulling ahead in this AI race. For me personally they offer the best deal, and although the software is not quite there compared to OpenAI or Anthropic (in regards to 1. the web GUI, 2. the agent CLI), I hope they can fix that in the future, and I think once Gemini 4 or whatever launches we will see a huge leap again

rubslopes1 hour ago

I don't understand this sentiment. It may hold true for other LLM use cases (image generation, creative writing, summarizing large texts), but when it comes to coding specifically, Google is *always* behind OpenAI and Anthropic, despite having virtually infinite processing power, money, and being the ones who started this race in the first place.

Until now, I've only ever used Gemini for coding tests. As long as I have access to GPT models or Sonnet/Opus, I never want to use Gemini. Hell, I even prefer Kimi 2.5 over it. I tried it again last week (Gemini Pro 3.0) and, right at the start of the conversation, it made the same mistake it's been making for years: it said "let me just run this command," and then did nothing.

My sentiment is actually the opposite of yours: how is Google *not* winning this race?

hobofan52 minutes ago

> despite having virtually infinite processing power, money

Just because they have the money doesn't mean that they spend it excessively. OpenAI and Anthropic are both offering coding plans that are possibly severely subsidized, as they are more concerned with growth at all cost, while Google is more concerned with profitability. Google has the bigger warchest and could just wait until the other two run out of money rather than forcing the growth on that product line in unprofitable means.

Maybe they are also running much closer to their compute limits than the other ones, and their TPUs are already saturated with API usage.

eknkc1 hour ago

I hope they fail.

I honestly do not wish Google to have the best model out there and be forced to use their incomprehensible subscription / billing / project management whatever shit ever again.

I don’t know what their stuff cost. I don’t know why would I use vertex or ai studio. What is included in my subscription what is billed per use.

I pray that whatever they build fails and burns.

otherme1231 hour ago

They all suck. OpenAI ignores scanning limits and disabled routes in robots.txt; after a 429 "Too Many Requests" they retry the same URL half a dozen times from different IPs in the next couple of minutes, and they once DoS'ed my small VPS trying to do a full scan of sitemaps.xml in less than one hour, trying and retrying if any endpoint failed.

Google and others at least respect both robots.txt and 429s. They invested years scanning all the internet, so they can now train on what they have stored on their servers. OpenAI seems to assume that MY resources are theirs.

toraway34 minutes ago

For a personal plan to use premium Gemini AI features, or for agentic development with Gemini CLI/Antigravity, the billing is no more or less complicated than Claude Code or Codex CLI.

You pay for the $20/mo Google AI Pro plan with a credit card via the normal personal billing flow like you would for a Google One plan without any involvement of Google Cloud billing or AI Studio. Authorize in the client with your account and you're good to go.

(With the bundled drive storage on AI Pro I'm just paying a few bucks more than I was before so for me it's my least expensive AI subscription excluding the Z.ai ultra cheap plan).

Or, just like with Anthropic or OpenAI, it's a separate process for billing/credits for an API key targeted at a developer audience. Which I don't need or use for Gemini CLI or Antigravity at all, it's a one step "click link to authorize with your Google Account" and done.

You could decide to use an API key for usage based billing instead (just like you could with Claude Code) but that's entirely unnecessary with a subscription.

Sure, for the API anything involving a hyperscaler cloud is going to have a higher complexity floor with legacy cruft here and there, but for individual subscriptions that's irrelevant, and it's pretty much as straightforward a click-and-pay flow as you'd find anywhere else.

dybber1 hour ago

Eventually the models will generally be so good that the competition moves from the best model to the best user experience, and here I think we can expect others to win, e.g. Microsoft with GitHub and VS Code

eknkc1 hour ago

That's my hope, but Google has unlimited cash to throw at model development and can basically burn more cash than OpenAI and Anthropic combined. Might tip the scale in the long run.

upmind31 minutes ago

In my experience, while Gemini does really well in benchmarks I find it much worse when I actually use the model. It's too verbose / doesn't follow instructions very well. Let's see if that changes with this model.

lysecret5 minutes ago

Please, I need 3 in GA…

timabdulla1 hour ago

Google tends to trumpet preview models that aren't actually production-grade. For instance, both 3 Pro and Flash suffer from looping and tool-calling issues.

I would love for them to eliminate these issues, because just touting benchmark scores isn't enough.

veselin26 minutes ago

I am actually going to complain about this: neither of the Gemini models is out of preview.

Anthropic seems the best at this. Everything is in the API on day one. OpenAI tends to want to ask you for a subscription, but the API gets there a week or a few later. Now, Gemini 3 is not for production use and this is already the previous iteration. So, does Google even intend to release this model?

dxbednarczyk2 hours ago

Every time I've used Gemini models for anything besides code or agentic work they lean so far into the RLHF induced bold lettering and bullet point list barf that everything they output reads as if the model was talking _at_ me and not _with_ me. In my Openclaw experiment(s) and in the Gemini web UI, I've specifically added instructions to avoid this type of behavior, but it only seemed to obey those rules when I reminded the model of them.

For conversational contexts, I don't think the (in some cases significantly) better benchmark results compared to a model like Sonnet 4.6 can convince me to switch to Gemini 3.1. Has anyone else had a similar experience, or is this just a me issue?

losvedir5 minutes ago

It definitely has the worst "voice" in my opinion. Feels very overachieving McKinsey intern to me.

augusto-moura2 hours ago

Gemini sounds less personal, but I think that is good. From my experience, the quality of response is much higher than ChatGPT or Grok, and it cites real sources. I want to have a mini-wikipedia response for my questions, not a friend's group chat response

gavinray2 hours ago

I have the opposite viewpoint:

If a model doesn't optimize the formatting of its output display for readability, I don't want to read it.

Tables, embedded images, use of bulleted lists and bold/italicizing etc.

staticman22 hours ago

I'm not familiar with Openclaw, but the trick to solve this would be to embed a style reminder at the bottom of each user message and ideally hide that from the user in the UI.

This is how roleplay apps like Sillytavern customize the experience for power users by allowing hidden style reminders as part of the user message that accompany each chat message.
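
A minimal sketch of that trick, assuming a harness where you control the message list before each request; call_model here stands in for whatever client function actually hits the API:

    STYLE_REMINDER = (
        "\n\n[Reminder: answer conversationally. No bullet points, "
        "no bold headers, no tables.]"
    )

    def send(history, user_text, call_model):
        # The visible history stores only what the user typed...
        history.append({"role": "user", "content": user_text})
        # ...but the model sees the reminder appended to the last message.
        model_view = history[:-1] + [
            {"role": "user", "content": user_text + STYLE_REMINDER}
        ]
        return call_model(model_view)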

InkCanon2 hours ago

I think they all produce that bold-lettered, point-by-point style of output. I strongly suspect it's part of a synthetic data pipeline all these AI companies have, and that it improves performance. Claude seems the least affected of them, but it will start writing code at the drop of a hat. What annoys me in Gemini is that it has a really strange tendency to come up with weird analogies, especially in Pro mode. You'll be asking it about something like red-black trees and it'll say "Red Black Trees (The F1 of Tree Data Structures)".

hydrolox29 minutes ago

Yes, the analogy habit is the most annoying of all. The overall formatting would be doable for me if it didn't divide up an answer into these silly arbitrary categories with useless analogies. I've tried adding in my user preferences to never use analogies, but it inevitably falls back into that habit.

markab212 hours ago

You just articulated why I struggle to personally connect with Gemini. It feels so unrelatable and exhausting to read its output. I prefer to read Opus/Deepseek/GLM over Gemini, Qwen and the open source GPT models. Maybe it is RLHF that is creating my distaste for using it. (I pay for Gemini; I should be using it more... but the outputs just bug me and feel like more work to get actionable insight from.)

verdverm1 hour ago

I have no issues adjusting gemini tone & style with system prompt content

pRusya42 minutes ago

I'm using gemini.google.com/app with AI Pro subscription. "Something went wrong" in FF, works in Chrome.

Below is one of my test prompts that previous Gemini models were failing. 3.1 Pro did a decent job this time.

> use c++, sdl3. use SDL_AppInit, SDL_AppEvent, SDL_AppIterate callback functions. use SDL_main instead of the default main function. make a basic hello world app.

pawelduda2 hours ago

Is it safe to assume they'll be releasing an improved Gemini Flash soon? The current one is so good & fast I rarely switch to Pro anymore

derac1 hour ago

When 3 came out, they mentioned (via an HN comment) that Flash included many improvements that didn't make it into Pro. I imagine this release includes those.

tucnak14 minutes ago

Gemini 3 Pro (high) is a joke compared to Gemini 3 Flash in Antigravity, except it's not even funny. Flash is insane value, and super capable, too. I've had it implement a decompiler for very obscure bytecode, and it was passing all tests in no time. PITA to refactor later, but not insurmountable. Gemini 3 Pro (high) choked on this problem in the early stages... I'm looking forward to comparing 3.1 Pro vs 3.0 Flash, hopefully they have improved on it enough to finally switch over.

josalhor2 hours ago

I speculated that 3 pro was 3.1... I guess I was wrong. Super impressive numbers here. Good job Google.

refulgentis2 hours ago

> I speculated that 3 pro was 3.1

?

josalhor53 minutes ago

Sorry... I speculated that 3 deep think is 3.1 pro.. model names are confusing..

janalsncm2 hours ago

This model says it accepts video inputs. I asked it to transcribe a 5-second video of a digital water curtain which spelled "Boo Happy Halloween", and it came back with "Happy", which wasn't the first frame but is also incomplete.

This kind of test is good because it requires stitching together info from the whole video.

aabhay1 hour ago

It reads videos at 1 fps by default. You have to set the video resolution to high in AI Studio
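
If you're calling the API rather than using AI Studio, the equivalent knob appears to be the media-resolution setting. A sketch with the google-genai Python SDK; the model id and file path are placeholders, and how much extra video detail this buys is worth verifying against the docs:

    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_KEY")
    video = client.files.upload(file="water_curtain.mp4")  # placeholder path

    response = client.models.generate_content(
        model="gemini-3.1-pro-preview",  # assumed model id
        contents=[video, "Transcribe all text shown in this video."],
        config=types.GenerateContentConfig(
            # Spend more tokens per frame on the video input
            media_resolution=types.MediaResolution.MEDIA_RESOLUTION_HIGH,
        ),
    )
    print(response.text)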

trilogic32 minutes ago

Humanity's Last Exam 44%, SciCode 59, that one 80, this one 78, but never 100%.

Would be nice to see one of these models, Plus, Pro, Super, God mode, do 100% on even one bench. Am I missing something here?

impulser_2 hours ago

Seems like they actually fixed some of the problems with the model. The hallucination rate seems to be much better. Seems like they also tuned the reasoning; maybe that's where they got most of the improvements from.

whynotminot1 hour ago

The hallucination rate with the Gemini family has always been my problem with them. Over the last year they’ve made a lot of progress catching the Gemini models up to/near the frontier in general capability and intelligence, but they still felt very late 2024 in terms of hallucination rate.

Which made the Gemini models untrustworthy for anything remotely serious, at least in my eyes. If they’ve fixed this or at least significantly improved, that would be a big deal.

azuanrb1 hour ago

The CLI needs work, or they should officially allow third-party harnesses. Right now, the CLI experience is noticeably behind other SOTA models. It actually works much better when paired with Opencode.

But with accounts reportedly being banned over ToS issues, similar to Claude Code, it feels risky to rely on it in a serious workflow.

onlyrealcuzzo2 hours ago

We've gone from yearly releases to quarterly releases.

If the pace of releases continues to accelerate, by mid-2027 or 2028 we're headed for weekly releases.

rubicon332 hours ago

But actual progress seems to be slower. These models are releasing more often but aren't big leaps.

gallerdude1 hour ago

We used to get one annual release which was 2x as good; now we get quarterly releases which are 25% better. So annually, we're now at 1.25^4 ≈ 2.4x better.

minimaxir1 hour ago

Due to the increasing difficulty of scaling up training, it appears the gains are instead being achieved through better model training, which seems to be working well for everyone.

wahnfrieden1 hour ago

GPT 5.3 (/Codex) was a huge leap over 5.2 for coding

Murfalo1 hour ago

I like to think that all these pelican riding a bicycle comments are unwittingly iteratively creating the optimal cyclist pelican as these comment threads are inevitably incorporated in every training set.

alpineman59 minutes ago

More like half of Google's AI team is hanging out on HN, and they can optimise for that outcome to get a good rep among the dev community.

Barbing44 minutes ago

See: fish in bike front basket

zokier2 hours ago

> Last week, we released a major update to Gemini 3 Deep Think to solve modern challenges across science, research and engineering. Today, we’re releasing the upgraded core intelligence that makes those breakthroughs possible: Gemini 3.1 Pro.

So this is same but not same as Gemini 3 Deep Think? Keeping track of these different releases is getting pretty ridiculous.

WarmWash1 hour ago

Deep Think is a few 3.1 models working together. It was suspected last week that Deep Think was composed using the new 3.1 model.

verdverm1 hour ago

3.1 == model

deep think == turning up thinking knob (I think)

deep research == agent w/ search

jcims2 hours ago

Pelican on a bicycle in drawio - https://imgur.com/a/tNgITTR

(FWIW I'm finding a lot of utility in LLMs doing diagrams in tools like drawio)

pqdbr1 hour ago

How are you prompting it to draw diagrams in drawio

ac2947 minutes ago

Drawio drawings are just XML; it's possible it can generate that directly
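
For reference, the format is simple enough to sanity-check by hand. A minimal sketch in Python emitting a one-box mxGraphModel; drawio normally wraps this in an mxfile envelope, and the geometry values here are arbitrary:

    # Write a minimal drawio diagram: one labeled, rounded box
    drawio_xml = """<mxGraphModel dx="800" dy="600" grid="1">
      <root>
        <mxCell id="0"/>
        <mxCell id="1" parent="0"/>
        <mxCell id="2" value="Pelican" style="rounded=1" vertex="1" parent="1">
          <mxGeometry x="40" y="40" width="120" height="60" as="geometry"/>
        </mxCell>
      </root>
    </mxGraphModel>"""

    with open("pelican.drawio", "w") as f:
        f.write(drawio_xml)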

hsaliak2 hours ago

The eventual nerfing gives me pause. Flash is awesome. What we really want is gemini-3.1-flash :)

makeavish2 hours ago

Great model, until it gets nerfed. I wish they had a higher paid tier to use a non-nerfed model.

Mond_1 hour ago

Bad news, John Google told me they already quantized it immediately after the benchmarks were done and it sucks now.

I miss when Gemini 3.1 was good. :(

spyckie22 hours ago

I think there's a pattern where it always gets nerfed in the few weeks before they launch a new model. Probably because they're throwing a bunch of compute at the new model.

makeavish2 hours ago

Yeah, maybe that, but at least let us know about it, or have dynamic limits? Nerfing breaks trust. Though I'm not sure if they actually nerf it intentionally; I haven't heard it from any credible source. I did experience it in my workflow though.

xnx2 hours ago

What are you talking about?

quacky_batak2 hours ago

I’m keen to know how and where are you using Gemini.

Anthropic is clearly targeted to developers and OpenAI is general go to AI model. Who are the target demographic for Gemini models? ik that they are good and Flash is super impressive. but i’m curious

jdc05892 hours ago

I use it as my main platform right now, both for work/SWE stuff and personal stuff. It works pretty well, and they have the full suite of tools I want, from general LLM chat, to NotebookLM, to Antigravity.

My main use-cases outside of SWE generally involve the ability to compare detailed product specs and come up with answers/comparisons/etc... Gemini does really well for that, probably because of the deeper google search index integration.

Also, I got a year of Pro for free with my phone... so that's a big part.

hunta20972 hours ago

I use the Gemini web interface just as I would ChatGPT. They also have coding-environment analogues of Claude Code in Antigravity and Gemini CLI.

When you sign up for the pro tier you also get 2TB of storage, Gemini for workspace and Nest Camera history.

If you're in the Google sphere it offers good value for money.

fatherwavelet1 hour ago

I feel like Gemini 3 was incredible on non-software/coding research. I have learned so much systems biology the last two months it blows my mind.

I had only started using Opus 4.6 this week. Sonnet, it seems, is much better at having a long conversation. Gemini is good for knowledge retrieval, but I think Opus 4.6 has caught up. The biggest thing that made Gemini worth it for me the last 3 months is that I crushed it with questions; with Opus I wouldn't have gotten even 10% of the use I got from Gemini before being made to slow down.

I have a deep research going right now on 3.1 for the first time and I honestly have no idea how I am going to tell if it is better than 3.

At agentic coding Gemini didn't seem as good, but for just asking it to write a function, I think it failed to one-shot what I asked only twice, and it fixed the problem on the next prompt.

I haven't logged in to bother with ChatGPT in about 3 months now.

thornewolf34 minutes ago

I have swapped to using Gemini over ChatGPT for casual conversation and question answering. There are some lacking features in the app, but I get faster and more intelligent responses.

minimaxir2 hours ago

Gemini has an obvious edge over its competitors in one specific area: Google Search. The other LLMs do have a Web Search tool but none of them are as effective.

jug2 hours ago

I personally use it as my general-purpose and coding model. It's good enough for my coding tasks most of the time, has very good and rapid web search grounding that makes the Google index almost feel like part of its training set, and Google has a family sharing plan with individual quotas for Google AI Pro at $20/month for 5 users, which also includes 2 TB in the cloud. Family sharing is a unique feature for Gemini 3 Flash Thinking (300 prompts per day per user) & Pro (100 prompts per day per user).

dinosor2 hours ago

I find Gemini to be the best at travel planning and at storytelling about geographical places. For a road trip, I tried all three mainstream providers and I liked Gemini for its responses (also personal preference, because Gemini took a verbose approach instead of the bullet points from the others): the stories it discovered about places I wanted to explore, the places it suggested for me, and the things it gave me to consider in fitting those places into the route.

dekhn1 hour ago

I am a professional software developer who has been programming for 40 years (C, C++, Python, assembly, any number of other languages). I work in ML (infrastructure, not research) and spent a decade working at Google.

In short, I consider Gemini to be a highly capable intern (grad student level) who is smarter and more tenacious than me, but also needs significant guidance to reach a useful goal.

I used Gemini to completely replace the software stack I wrote for my self-built microscope. That includes:

writing a brand new ESP32 console application for controlling all the pins of my ESP32 that drives the LED illuminator. It wrote the entire ESP-IDF project and did not make any major errors. I had to guide it with updated prompts a few times, but otherwise it wrote the entire project from scratch and ran all the build commands, fixing errors along the way. It also easily made a Python shared library so I can just import this object in my Python code. It saved me ~2-3 days of working through all the ESP-IDF details, and did a better job than I would have.

writing a brand new C++-based Qt camera interface (I have a camera with a special SDK that allows controlling strobe, trigger, and other details; it can do 500 FPS). It handled all the concurrency and message passing details. I just gave it the SDK PDF documentation for the camera (in mixed English/Chinese) and asked it to generate an entire project. I had to spend some time guiding it around making shared libraries, but otherwise it wrote the entire project from scratch, and I was able to use it to make a GUI to control the camera settings with no additional effort. It ran all the build commands and fixed errors along the way. Saved me another 2-3 days and did a better job than I could have.

Finally, I had it rewrite the entire microscope stack (Python with Qt) using the two drivers I described above, along with complex functionality like compositing multiple images during scanning, video recording during scanning, measurement tools, computer vision support, and a number of other features. This involved a lot more testing on my part, and updating prompts to guide it towards my intended destination (a fully functional replacement of my original self-written prototype). When I inspect the code, it definitely did a good job on some parts, while it came up with non-ideal solutions for some problems (for example, it does polling when it could use event-driven callbacks). This saved literally weeks' worth of work that would have been a very tedious slog.

From my perspective, it's worked extremely well: doing what I wanted in less time than it would take me (I am a bit of a slow programmer, and I'm doing this in hobby time) and doing a better job (with appropriate guidance) than I could have (even if I'd had a lot of time to work on it). This greatly enhances my enjoyment of my hobby by doing tedious work, allowing me to spend more time on the interesting problems (tracking tardigrades across a petri dish for hours at a time). I used Gemini 3 Pro for this; it seems to do better than 2.5, and Flash seemed to get stuck and loop more quickly.
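
To make the polling-vs-events point concrete, a minimal PySide6 sketch (CameraWorker, on_sdk_callback, and display_frame are invented names for illustration, not the actual microscope code):

    # Event-driven: the worker emits a signal only when a frame exists,
    # versus a timer that wakes every 50 ms whether or not there is work.
    import sys
    from PySide6.QtCore import QCoreApplication, QObject, QTimer, Signal

    app = QCoreApplication(sys.argv)

    def display_frame(frame):
        print("frame:", frame)

    class CameraWorker(QObject):
        frame_ready = Signal(object)

        def on_sdk_callback(self, frame):  # invoked by the vendor SDK
            self.frame_ready.emit(frame)

    worker = CameraWorker()
    worker.frame_ready.connect(display_frame)  # runs only on real frames

    # Polling alternative: burns a wakeup every 50 ms even when idle.
    poll = QTimer()
    poll.timeout.connect(lambda: None)  # stand-in for "check queue, maybe display"
    poll.start(50)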

I have only lightly used other tools, such as ChatGPT/Codex, and have never used Claude. I tend to stick to the Google ecosystem for several reasons, but mainly, I think they will end up exceeding the capabilities of their competitors, due to their inherent engineering talent and huge computational resources. But they clearly need to catch up in a lot of areas; for example, the VS Code Gemini extension has serious problems (frequent API call errors, messed up formatting of code/text, infinite loops, etc).

aberoham34 minutes ago

Wow, you have to try Claude Code with Opus 4.6...

mehagar2 hours ago

I use Gemini for personal stuff such as travel planning and research on how to fix something, which product to buy, etc. My company has a Pro subscription, so I use that instead of ChatGPT.

epolanski2 hours ago

Various friends of mine work in non-technology companies in Italy (banking, industry, legal), and pretty much all of them have Gemini Enterprise + NotebookLM.

In all of them the approach is: this is the solution, now find problems you can apply it to.

esafak2 hours ago

I'd use it for planning, knowledge, and anything visual.

verdverm1 hour ago

I use Gemini for everything because I trust Google to keep the data I send them safe, because they know how to run prod at scale, and because they are more environmentally friendly than everyone else (TPUs, us-central1).

This includes my custom agent / copilot / coworker (which uses Vertex AI and all the models therein). This is where I do more of my searching now (with GenAI grounding). I'm about to work on several micro projects that will use AI a little differently.

All that being said, Google's AI products suck hard. I hate using every one of them. This is more a reflection on the continued degradation of PM/Design at Big G, from before AI, but it has gotten worse at an accelerating rate since. I support removing Logan from the head of this shit show.

disclaimer: long time g-stan, not so stan any more

seizethecheese2 hours ago

I use Gemini flash lite in a side project, and it’s stuck on 2.5. It’s now well behind schedule. Any speculation as to what’s going on?

foruhar1 hour ago

Gemini-3.0-flash-preview came out right away with the 3.0 release, and I was expecting 3.0-flash-lite before a bump on the Pro model. I wonder if they have abandoned that part of the Pareto price/performance curve.

1024core1 hour ago

It's been hugged to death. I keep getting "Something went wrong".

eric153423352 hours ago

My first impression is that the model sounds slightly more human and a little more praising. Still comparing the ability.

matrix25962 hours ago

Gemini 3.1 Pro is based on Gemini 3 Pro

skerit2 hours ago

Lol, and this line:

> Geminin 3.1 Pro can comprehend vast datasets

Someone was in a hurry to get this out the door.

LZ_Khan2 hours ago

Biggest problem is that it's slow. Also, safety seems overtuned at the moment; I'm getting some really silly refusals. Everything else is pretty good.

naiv2 hours ago

OK, so they are scared that 5.3 (Pro) will be released today/tomorrow and blow it out of the water, and rushed this out while they could still reference 5.2 benchmarks.

PunchTornado2 hours ago

I don't think models blow other models anymore. We have the big 3, which are neck and neck in most benchmarks, and then the rest. I doubt that 5.3 will blow the others.

scld2 hours ago

easy now

himata411338 minutes ago

The visual capabilities of this model are frankly kind of ridiculous, what the hell.

PunchTornado2 hours ago

The biggest increase is LiveCodeBench Pro: 2887. The rest are in line with Opus 4.6 or slightly better or slightly worse.

shmoogy2 hours ago

but is it still terrible at tool calls in actual agentic flows?

mustaphah2 hours ago

Google is terrible at marketing, but this feels like a big step forward.

As per the announcement, Gemini 3.1 Pro scores 68.5% on Terminal-Bench 2.0, which makes it the top performer on the Terminus 2 harness [1]. That harness is a "neutral agent scaffold" built by the Terminal-Bench researchers to compare different LLMs in the same standardized setup (same tools, prompts, etc.).

It's also taken the top model spot on both the Intelligence Index & Coding Index of Artificial Analysis [2], but on their Agentic Index it's still lagging behind Opus 4.6, GLM-5, Sonnet 4.6, and GPT-5.2.

---

[1] https://www.tbench.ai/leaderboard/terminal-bench/2.0?agents=...

[2] https://artificialanalysis.ai

saberience2 hours ago

Benchmarks aren't everything.

Gemini consistently has the best benchmarks but the worst actual real-world results.

Every time they announce the best benchmarks, I try their tools and products again, and each time I immediately go back to the Claude and Codex models, because Google is just so terrible at building actual products.

They are good at research and benchmaxxing, but the day to day usage of the products and tools is horrible.

Try using Google Antigravity and you will not make it an hour before switching back to Codex or Claude Code, it's so incredibly shitty.

mustaphah2 hours ago

That's been my experience too; can't disagree. Still, when it comes to tasks that require deep intelligence (esp. mathematical reasoning [1]), Gemini has consistently been the best.

[1] https://arxiv.org/abs/2602.10177

gregorygoc2 hours ago

What’s so shitty about it?

nautilus122 hours ago

OK, why don't you work on getting 3.0 out of preview first? 10-minute response times are pretty heinous.

mucai821 hour ago

I agree; according to Google's terms you are not allowed to use the preview model for production use cases. And 3.0 has been in preview for a loooong time now :(

jeffbee2 hours ago

Relatedly, Gemini chat seems to be, if not down, then extremely slow.

ETA: They apparently wiped out everyone's chats (including mine). "Our engineering team has identified a background process that was causing the missing user conversation metadata and has successfully stopped the process to prevent further impact." El Mao.

sergiotapia2 hours ago

To use in OpenCode, you can update the models it has:

    opencode models --refresh

Then /models and choose Gemini 3.1 Pro.

You can use the model through OpenCode Zen right away and avoid that Google UI craziness.

---

It is quite pricey! Good speed and nailed all my tasks so far. For example:

    @app-api/app/controllers/api/availability_controller.rb 
    @.claude/skills/healthie/SKILL.md 

    Find Alex's id, and add him to the block list, leave a comment 
    that he has churned and left the company. we can't disable him 
    properly on the Healthie EMR for now so 
    this dumb block will be added as a quick fix.

Result was:

    29,392 tokens
    $0.27 spent

So a relatively small task, hitting an API and using one of my skills, but it cost a quarter. Pricey!
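
For scale, here's what that per-task cost implies at volume; a rough sketch, where the 100-tasks/day workload is made up for illustration:

    # Back-of-envelope extrapolation of the $0.27/task figure above.
    cost_per_task = 0.27
    tasks_per_day = 100  # invented workload for illustration
    print(f"~${cost_per_task * tasks_per_day:.0f}/day, "
          f"~${cost_per_task * tasks_per_day * 30:.0f}/month")
    # -> ~$27/day, ~$810/month
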
gbalduzzi1 hour ago

I don't see it even after refresh. Are you using the opencode-gemini-auth plugin as well?

sergiotapia1 hour ago

No, I'm not; just vanilla OpenCode. I do have OpenCode Zen credits, and I did opencode login (or whatever their command is) to auth against OpenCode itself. Maybe that's the reason I see these premium models.

johnwheeler1 hour ago

I know Google has Antigravity, but do they have anything like Claude Code in terms of user interface, i.e. a terminal, basically a TUI?

johnwheeler47 minutes ago

Thanks!!

dude2507112 hours ago

I hereby allow you to release models not at the same time as your competitors.

sigmar2 hours ago

It is super interesting that this is the same thing that happened in November (i.e. all labs shipping around the same week, 11/12-11/23).

zozbot2341 hour ago

They're just throwing a big Chinese New Year celebration.

throwaw121 hour ago

Can we switch from Claude Code to Google yet?

Benchmarks are saying: just try

But the real world could be different.

foruhar31 minutes ago

My sense is that the Gemini models are very capable, but the Gemini CLI experience is subpar compared to Claude Code and Codex. I'm guessing that it's the harness, since it can get confused, fall into doom loops, and generally lose the plot in a way that the model does not in Gemini Studio or the Gemini app.

I think a bunch of these harnesses are open source, so it surprises me that there can be such a gulf between them.

boxingdog1 hour ago

[dead]