Model: gemma3:27b
https://hn.algolia.com/api/v1/items/43340491
meetpateltech: Gemma 3 is out! Multimodal (image + text), 128K context, supports 140+ languages, and comes in 1B, 4B, 12B, and 27B sizes with open weights & commercial use.

Gemma 3 model overview: https://ai.google.dev/gemma/docs/core

Hugging Face collection: https://huggingface.co/collections/google/gemma-3-release-67c6c6f89c4f76621268bb6d

Ollama: https://ollama.com/library/gemma3
derbaum: The Ollama page shows Gemma 27B beating DeepSeek v3 and o3-mini on LMArena. I'm very excited to try it out.
LeoPanthera: Doesn't yet work in LM Studio. Barfs an error when trying to load the model. (Error 6, whatever that means. Happy I missed the first 5.)
upghost: I'm still a huge fan of gemma-22b. Looking forward to this one!
behnamoh: > We also change the architecture of the model to reduce the KV-cache memory that tends to explode with long context

This is key (pun not intended). It's one thing to run these models locally; it's a totally different game when you need longer context.

Sure, the new M3 Ultra can fit a Q4 DeepSeek R1 in URAM, but as soon as you want usable context like 64k+, the t/s and prompt processing quickly become prohibitive.

Speaking of the M3 Ultra, I really wish Apple had put more bandwidth into this beast of a machine. It's got a lot of "energy", but not a lot of "power" to actually use that energy.
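A back-of-the-envelope sketch of why the KV cache balloons with context length (the hyperparameters below are illustrative assumptions for a 27B-class dense model, not Gemma 3's published configuration):

```python
# KV cache = 2 tensors (K and V) per layer, each [seq_len, num_kv_heads * head_dim].
def kv_cache_bytes(seq_len, num_layers, num_kv_heads, head_dim, bytes_per_elem=2):
    return 2 * num_layers * seq_len * num_kv_heads * head_dim * bytes_per_elem

# Hypothetical config: 62 layers, 16 KV heads of dim 128, fp16 cache.
gib = kv_cache_bytes(128_000, 62, 16, 128) / 2**30
print(f"~{gib:.0f} GiB of KV cache at 128k context")  # ~61 GiB, on top of the weights
```

Growth is linear in sequence length, but at long contexts the cache rivals the weights themselves, which is why architectural tricks to shrink it matter so much for local inference.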
kbrannigan: Can someone explain Gemma vs Gemini for me please?
hargup: Gemma is their open-source series of models. Gemini is the proprietary one. Gemini models are bigger and better. But Gemma models are pretty good too.
tpm: open-weights, not open-source (sorry to be that one but open source in this case would mean you can build it yourself from provided "source", which you can't, because it's not provided)
danielhanchen: Super cool models! Love the mixture of sliding window and global attention to cut down on KV cache sizes! And 4B, 12B and 27B are vision + text by default! Super cool!
atarus: Looks great! So excited about this! We have been using gemma models since gemma 1.0 and they are so far ahead of the curve!
pilooch: Does anyone know whether there is support for multiple images as input? I don't see it in the docs yet.
ibash: Yes

> If you want to prompt with more than one image, you must include a <start_of_image> tag for each image included in your prompt.

From here: https://github.com/google/generative-ai-docs/blob/78688755dbc7089398e0b1c04eb272867fadda07/site/en/gemma/docs/core/gemma_library.ipynb#L314
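A minimal sketch of what such a multi-image prompt looks like when assembled by hand. The `<start_of_image>` placement follows the quoted docs, and the `<start_of_turn>`/`<end_of_turn>` markers follow Gemma's published chat template; how the actual image tensors are attached depends on the inference library you use:

```python
# One <start_of_image> tag per image; the images themselves are passed
# separately, in tag order, by whichever library runs the model.
prompt = (
    "<start_of_turn>user\n"
    "Here are two charts: <start_of_image> <start_of_image>\n"
    "Which one shows higher growth?<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```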
Patrick_Devine: Not quite yet on Ollama, but hopefully we'll add this soon. Also, we haven't added the pan-and-scan algorithm yet for getting better clarity from the original image.
canyon289: Hey, I'm Ravin from the Gemma team. It's on ollama! Try `ollama run gemma3` to get it pulled locally
kgwgk: They talked about support for multiple images as input.
Patrick_Devine: My point was multi-images and pan-and-scan. We haven't implemented those yet in Ollama, but soon!
pilooch: Good, FYI the number one usage is vision RAGs (RAGs that deal with documents as images instead of text).
khimaros: doesn't seem to have hit the LMArena yet. will be interesting to see where it places there.
Squarex: Pretty highly. It’s on page 5 of the report.
leonardhussenot: Leo from the Gemma team here: it's also live on LMSYS now!
LeoPanthera: > They are designed to help prevent our models from generating harmful content, i.e.,

> [...]

> Sexually explicit content

Dear tech companies. Sexually explicit content is not harmful. Why are you all run by puritans? I don't even want to make edgy porn, I just want to be treated like an adult.
swyx: usual answer to "why can't I have nice things":

lawyers.

(on both sides)
BriggyDwiggs42: Advertisers might be a better reduction
maccard: In my experience, it’s nothing to do with actual lawyers and everything to do with cultural and societal norms.
esafak: *Lawyering* by puritans, maybe. The lawyers themselves are not particularly imposing their prejudices.
charcircuit: Generating sexually explicit content can cause reputational damage or carry legal risk. Not generating such content is something that many developers are looking for. There are people who may want such harmful content, and other players can cover that niche.
logicchains: That's a bullshit excuse. The Chinese model creators live in a totalitarian dictatorship where porn is banned and the creators could be arbitrarily jailed, but even they don't go to such effort to censor their open source models (there's censorship on their hosting websites but minimal if you run the models locally).
charcircuit: Filtering is not necessarily a good user experience, and it comes at a cost to implement. Google making a model they expect there to be demand for is not just an excuse.
logicchains: They don't expect to make money serving Gemma; it benchmarks worse in almost every way than their closed-source Gemini. Believe it or not, one of the main sources of demand for these small, non-SOTA models is people using them for roleplay locally. Anyone corporate has the money to use a bigger, more effective model.
patates: They mean "harmful to us", not to the users. It's harmful because they live in an echo chamber where a single mention of genitals makes all the stakeholders run away. Why do they run away? Because they also have stakeholders, and so on.
ibash: This could be a historical accident.

Early models were censored, making uncensored releases have bad optics.

If the first models had been uncensored, no one would care if another was added.
bloomingkales: Have an uncensored model loop through NY Post articles and ask it to synthesize content from them. The NY Post has tons of scandalous content that can easily get spun into erotica by an uncensored model.

It's unsafe for that reason, so you absolutely needed both censored and uncensored. It wasn't an accident.
littlestymaar: > can easily get spun into erotica by an uncensored model.

A sexualized fine-tune, yes, but that's because you have to make them overly horny to overcome the original censorship.

Nothing prevents them from training a model that has an appropriate level of sexual content (that is, only upon the user's explicit request), the same way they train it to have no sexual content at all.

The reason they do this is that they are American companies, the same companies who also censored nude paintings and statues from European museums' pages.
Arkhaine_kupo: The early models were uncensored, but people seeing early LLMs give out meth recipes and car bomb instructions made them quickly get neutered before public release. (The additional controls for private info, nudity, swearing, etc. all come from extra guardrails, and they improve the protection offered to the company, not to end users.)
Karrot_Kream: The model is open weight, I'll bet someone or the other will abliterate it soon. Maybe you want to do the honors? I have an abliterated Llama running on a server shared with friends and it works great.
LeoPanthera: This only works until it doesn't. Start with a model that simply hasn't been trained on anything your shareholders find objectionable, and there will be nothing to reveal with abliteration.
xpl: Maybe there exists a dataset consisting *entirely* of objectionable content, so people can finetune neutered models on it?
anticensor: PH maybe?
xpl: I mean not only sex, but also swearing, drugs, violence, etc. Basically everything R-rated (but not illegal) which usually gets censored.
Sharlin: More like literotica.
anticensor: Such models would actually run against their long term interests of being able to automate away the work currently done by humans.
thunkingdeep: Sexually explicit content literally IS harmful. Children use the internet, they shouldn’t be exposed to pornography until adulthood, if ever. To suggest otherwise reveals a morally repulsive foundation, imo.
LeoPanthera: This is a false dichotomy. We can make tech for adults, *and* children, either with optional settings, filters, or simply multiple versions, managed by their parents or guardians. It's not tech's responsibility to raise every child.
jiggawatts: The transformer algorithm was originally intended for AI language translation use-cases, and it *excels* at this task. It's far better than anything else I've tried.

Except that nearly 100% of the capable models out there *refuse* to translate swearing, sexual, or violent content.

Legal content! Content you can see in the cinema, borrow from a library, or watch on free-to-air television!

Most models will regularly refuse to translate *subtitles*, for example, because they've been indoctrinated by Puritan priests to put their fingers in their digital ears and scream la-la-la-la, to keep their pure digital souls from being sullied by the bad words.
smrq: Hard to get more puritanical than "if you disagree with my opinion then you're morally repulsive". Not to mention that your argument implies that all traces of sex ought to be scrubbed from the *entire Internet*? And that that conclusion is the *only* moral one?
thunkingdeep: Hard to get more perverse than "kids should have access to sexually explicit material at all times in any medium."

If that sounds fucked up when I say it like that, consider what assumptions you're making, because that's literally YOUR argument here.
alpaca128: > that's literally YOUR argument here

No, that's the opposite of yours.
oezi: Things do not exist on a black-and-white basis; there are relevant shades of gray to be considered.

It is quite different if we talk about removing any sort of text about body parts related to consensual sexual activities, or if we try to censor hard pornography or illegal sexual activities. I personally find LLMs producing sexual content as text rather irrelevant, in the same way that you could go to a library or bookstore and buy a romance.

It is also quite different if your definition of kids goes all the way to 18 years. I don't want my kids *not* to encounter topics surrounding sex until they become legal adults. They absolutely have to learn about it, and be able to develop healthy relationships with their own bodies and sexuality, and have insights that enable them to understand sexuality in others.

I want to protect my kids from harm, but there must be some balance with other aspects as well.
Workaccount2: Millennials, who are now well into adulthood, grew up with easy and readily available access to porn. I know when I was in middle school it was everywhere. Kids would even hand out burned CDs of porn.

Please show the damage that it did to that generation. If you are "sky is blue" levels of correct, the evidence should be everywhere. So please, present it.

If there is no real evidence, reply with "So you are saying we should endorse porn for everyone" or some other strawman along those lines. Thanks.
maccard: When we've solved access to explicit content in the rest of the internet, we can come back and have this conversation. Until then, teenagers will just go to Reddit or wherever and get it there. If we ban that, it'll just move to sexting on Snapchat, which, if you have ever spent any time with parents of teenagers, you'll know has a tendency to be screenshotted and distributed.

So you're arguing for teenagers to be encouraged to share explicit content of minors with each other?
bloomingkales: You can discuss something kosher and have the LLM misinterpret it as something sexually explicit. Yours or their logs will now have all of this miscommunication, and this is a liability. Using models that can't generate this content even by accident is a good legal decision for many. Same goes for images. Stay safe!
littlestymaar: > you'll have to do that locally<p>The Gemma family is a family of local models!
mightysashiman: on the other hand running with guns is fine.
mdp2021: Have you considered that selection of material contributes to specialization and efficiency? This is meant to be a small model, weights-wise.
swyx: it's also apparently a well-known result that filtering NSFW content IMPROVES scores:

https://x.com/swyx/status/1661359483447316480
ddalex: LLMs get distracted by porn too !?!?
alpaca128: The word "gay" mentioned in your link isn't nsfw content though.
Lerc: Or perhaps it was removing the curly brackets that improved it more than the damage caused by losing the nsfw content.<p>Or perhaps the measurement of improvement was biased. If a model doesn't understand the word gay there would certainly be people who would find real world use of the model to be substandard.<p>Did the assessment of what counts as improvement come from the same community that decided that excluding things with 'gay' was cleaning the data?
42lux: Everyone is treating this like corps have anything to gain from an open uncensored model. Switch your view and give me a single argument for it? That random nerds on HN stop jerking each other about what „open“ means? You are just not their target group. Having this discussion every time no matter if the model released is censored or not is just insanity. Bring new arguments or don’t use the models you don’t like. There will be a new sota „tomorrow“, maybe even one open enough for you.
practice9: But who is the target group?

Last time, only some groups of enthusiasts were willing to work through bugs to even run the buggy release of Gemma.

Surely nobody runs this in production.
42lux: The press and decision makers without technical knowledge are the target group; it doesn't matter if it's used in production or not. They need a locally deployable model to keep up with enterprises too risk-averse to put their data into the cloud, who also don't care that their shitty homegrown ChatGPT replacement barely works. It's a checkbox.
xvector: This is what HNers surprisingly seem not to understand.

The risk of the model generating illegal content, and the company then getting bad PR from vultures in journalism, simply outweighs any benefit of including this content in the training data.

This is also why you will never see the big companies release a capable open-weight image or video gen model.
logicchains: > The risk of the model generating illegal sexual content and then the company getting bad PR from vultures in journalism simply outweighs any benefits of including this content in the training data.

This is completely unsubstantiated. The original Sydney (Bing AI) was violently unhinged, and this only drew more users; I haven't met a single person who prefers the new Bing AI to the old Sydney, and for that matter I haven't even heard of anyone using Bing AI since they toned it down. Trust in journalists is at an all-time low (https://news.gallup.com/poll/651977/americans-trust-media-remains-trend-low.aspx), and America recently elected an extremely unorthodox president in big part due to the sheer hatred of the media shared by a large proportion of the population. Even the most hardcore social conservatives aren't calling for companies to censor the training of open source models so they don't produce adult textual content when prompted to do so; it's not a political issue.
42lux: Brings an argument from nearly a decade ago and ignores everything Google has done in the last four years. Of course the „first" rogue AI drew in more users, because of the novelty of it... what a shit argument.
DJHenk: The argument is that it simply improves the product. For instance, GitHub Copilot is apparently refusing to do anything with variable names like "trans" and anything related to sex or gender, regardless of the intended meaning. That is a serious flaw and makes the product less useful.

See this: https://github.com/orgs/community/discussions/72603
42lux: You don’t know if the censorship is in the model or the system prompt.
DJHenk: That is not relevant to the argument. Censoring limits possibilities. While that sometimes has its uses, the overly puritanical approach American companies generally take degrades the value of their products.
42lux: I am talking about an „open" weight model; you are talking about a service. If the service wants to censor, that's fine and on them and their leadership. If an „open" model gets released with censorship, it's not fine, because it's just „open, but how my manager likes it".
logicchains: > You are just not their target group. Having this discussion every time no matter if the model released is censored or not is just insanity.

Who is their target group for small local models that benchmark inferiorly to their proprietary solution (Gemini 2.0), then, if not hobbyists and researchers?
42lux: >> The press and decision makers without technical knowledge are the target group; it doesn't matter if it's used in production or not. They need a locally deployable model to keep up with enterprises that are too risk-averse to put their data into the cloud and also don't care that their shitty homegrown ChatGPT replacement barely works. It's a checkbox.
miki123211: It's harmful in that there exists a significant and vocal subset of users who do not wish to see that content, or do not wish their children to see it. It's easier to teach your model never to produce that kind of content than to teach it to perfectly distinguish whether a given user should see it or not. TV channels are barred from broadcasting this kind of content for similar reasons.

Sure, there are always jailbreaks, but then the narrative changes from "we made a model that tells erotic stories to children" to "this ingenious teenager figured out a way to hack our model to make it produce erotic stories." In other words, jailbreaks move the fault from the model producer to the model user.

It's also worth keeping in mind that erotica comprises a surprisingly large portion of the fiction easily available on the internet for free, and "unfiltered" models tend to produce that kind of content unprompted (see e.g. the original Mistral). The major AI labs are probably filtering it out, but I suspect they can't go too far there, as having a model that is good at fiction is something they actually want.

Then there are the non-ChatGPT-app use cases (like customer support chatbots, automatic summarization, etc.), for which unprompted erotica is highly inappropriate. Those are the "business travelers" of AI: not the first thing one thinks of when talking about who uses AI models, but extremely important nonetheless.
logicchains: > It's harmful in that there exists a significant and vocal subset of users who do not wish to see that content or do not wish their children to do so

It's hard to think of a scenario where there's a child technical enough to run Gemma 3 locally but somehow unable to access any other written erotica. Project Gutenberg is full of erotic textual content and I haven't heard of anyone calling for that to be banned.

> Then there are the non-ChatGPT-app use cases (like customer support chatbots, automatic summarization etc), for which unprompted erotica is highly inappropriate.

And how many of these are going to be using Gemma, when Gemini over the API is cheaper, faster, and easier to use?
miki123211: More than you think, particularly outside the US.

Companies and government organizations who have sensitive data are still unwilling to use these models over any API they don't host themselves.

I work in this space in the EU, and this is absolutely a problem.
philipjoubert: > It's hard to think of a scenario where there's a child technical enough to run Gemma 3 locally but somehow unable to access any other written erotica.

The reason you're struggling to understand is that you're thinking about this logically.

Adult content is obviously freely available to any child or adult with minimal technical skills. What makes LLMs different is that they're "the new thing", and people respond differently to "the new thing".
fragmede: Won't somebody think of children‽
Al-Khwarizmi: All of this is true, but then it's as easy as releasing censored and uncensored versions of the model.

Then it's up to users (or parents, in the case of children) to choose the adequate version for each purpose. Just like there are child-friendly movies and adult-only movies, and no one beyond fringe puritan crusaders would say that the latter should outright not exist.
andai: > censored and uncensored

Well, here you still have the same problem, since they're not gonna release an *actually* uncensored version that tells you how to do awful things (or indeed, that tells you to do them).

So then you'd have censored and less censored, and it would still be a matter of where to draw those lines.
Al-Khwarizmi: True, "uncensored" is not the best term for what I meant (as I'm aware that fully uncensored is not a realistic thing to ask from companies).

What I mean is a model for all audiences and an adult model, and the line would be drawn at the law of the country producing it (if it would be legal for a human author to publish it on a website, then it should be allowed as an LLM response). So erotica would be fine, while instructions for making a bomb wouldn't.
Zambyte: Companies release uncensored models all the time. They're called "text" models. I just had llama3.2:3b-text-fp16 give me step by step instructions on how to make a pipe bomb.
rcleveng: I think it's easy to release the uncensored version; it's the censored version that's likely super, super hard.

Since this is just giving out the model directly, there's no ability to do any filtering as part of inference, so I would imagine you have to assume the worst (intent) on any input coming into it.
andai: I've heard this described as the minority effect: a small minority can have a disproportionate impact. The example given is that it's cheaper to make *all* instances of a product kosher or halal than to make an entirely separate product.
idiotsecant: Yes, it would be absolutely *shameful* if there was pornography on the internet, easily available to anyone, even *children*. Society would crumble!
esafak: Porn sites are blocked in many jurisdictions, so I would not use that argument.
idiotsecant: No, there's no movement to shut down pornography on the internet. There's a movement to shut down *specific* websites and make a lot of noise about it, but continue consuming pornography behind closed doors.

People like pornography. They'll as soon ban alcohol again (which worked so well last time).
esafak: On the contrary. Porn is inaccessible, along with many other things, in much of the world. https://worldpopulationreview.com/country-rankings/countries-where-porn-is-illegal
Workaccount2: It's funny because the results are in: millennials grew up with pretty easy access to all manner of porn from an early age, and the effect has been nothing; if anything, a reduction in intimacy.

I'm sure the hysterical puritans of the past will come out any day now and admit that they weren't even 1% correct in their assertions.
igleria: it follows the historical trend of American puritanism:

nipple BAD.

exploding someone into bits GOOD.
michaelt: There are very few pro-porn voices in the corporate, tie-wearing environments that have the money to train new LLMs from scratch.

Oh, there are loads of porn *enjoyers* working in such companies - but traditional professionalism means you leave the porn at home during the work day. It is, after all, NSFW.

So at the meeting where censorship decisions were being made, even a weak argument for censoring explicit content will be accepted unopposed.
bbminner: Not all sexually explicit content is harmful in all contexts, for sure, but in many contexts it is fairly universally considered harmful (e.g. content involving minors). Do you have means of distinguishing between the two? Are you suggesting that a company must invest millions into teaching the model where exactly the red line lies, so that it can have a conversation close to it without crossing it? Or do you suggest biting the bullet and releasing a model not only capable of generating e.g. child porn, but also having a >0 chance of randomly discussing it in unrelated contexts?

A chance of error is always there, and companies have decided that the risk of really bad behavior in a benign context outweighs the gains. IMHO, a decision not to play whack-a-mole with this land mine is quite rational, especially considering gains vs risks vs costs. Think of it as a cost-cutting measure, not as an infringement on free speech.

You are free to invest your own money into this problem if you think that's a grave mistake and a missed opportunity. The first project to push automated content moderation of what is considered appropriate in a given context far enough to make it economical for companies to put their guard down could actually be worth a lot, if you think there's a market for it (e.g. agents on dating websites? idk, you tell me).
msp26: I want to use a multimodal model for manga translation, analysis, and tagging.

If this gives me the "aschually as an ethical safe harmless assistant I can't ..." spiel on anything mildly mature, that would be very disappointing. I'll run a test with Berserk and see how it goes.

I'm not a big believer in abliteration; it seems to always hurt performance. Safety should be handled by a separate system, no need to cripple the actual LLM.
Zambyte: Whenever they say things like "harmful" or "unsafe" there is an implied "for our brand" that follows.
wyager: Wireheading humanity into population collapse via pervasive sexual hyperstimuli (which is realistically what is on the table here) is basically the definition of "harmful".

This is just silly because it only takes one AI company to defect and start enabling it, and the problem is already pretty bad even without AI.

I think all of the solutions are demand-side, not supply-side. I would expect differential reproductive rate trends between populations with and without proscriptions on ersatz reality consumption (i.e. aniconist Muslims, Mennonites, etc.) to accelerate.
rybthrow2: Google DeepMind are the best :)
vimgrinder: very excited for this. my current favorite setup on my Mac mini for text processing is a Gemma 9B + Gemma 2B combo with speculative decoding. great times, with all this getting dropped left and right.
Tepix: Very cool to see two promising new LLMs with open weights on the same day (the other one being Reka Flash 3 21B).

Now, bring on those multimodal LLMs with voice input and output, please!
tomthe: Very cool open release. Impressive that a 27B model can be as good as much bigger state-of-the-art models (according to their Chatbot Arena table: tied with o1-preview and above Sonnet 3.7).

But the example image shows that this model still makes dumb errors or has poor common sense, even though it read every piece of information correctly.
vessenes: I was thinking the same thing about the receipt calculation: a warning that only tourists tip 18% in Switzerland would no doubt have been appreciated!
jiangdayuan: The performance of Gemma 3 is insane.
YetAnotherNick: They say all the models were distilled from a teacher model but they didn't specify what that teacher model is. Interesting thing to hide.
LeoPanthera: It's a safe bet that it's either one of the Gemini models or a relative of it.
YetAnotherNick: That's what I thought. And it could be publicity for Gemini as well, that it is so good it can teach students, say, 5x faster. If it were Gemini, there wouldn't be any reason to hide it. My bet is it's some unreleased Gemma or other model.
pzo: would be good to see how gemma3:4b compares to phi4-mini
alekandreev: Greetings from the Gemma team! We just got Gemma 3 out of the oven and are super excited to show it to you! Please drop any questions here and we'll answer ASAP.

(Opinions our own and not of Google DeepMind.)

PS: we are hiring: https://boards.greenhouse.io/deepmind/jobs/6590957
magicalhippo: Thanks, been using Gemma 2 a lot at home as it still holds up very well, and the 9B version runs great on my 2080 Ti. Strong prompt adherence coupled with overall capability makes it very useful. Looking forward to trying Gemma 3.

I have some dumb questions though, might as well ask. How do you decide on the model sizes? And how do you train them? Independently, or are they related somehow?
alekandreev: Picking model sizes is not an exact science. We look for sizes that will fit quantized on different categories of devices (e.g., low-end and high-end smartphones, laptops and 16GB GPUs, and bigger GPUs/TPUs). We also want the ratio of model width to depth (number of layers) to be consistently around 90, which we found works best.

The models are trained with distillation from a bigger teacher. We train them independently, but for v3 we have unified the recipes for 4B-27B, to give you more predictability when scaling up and down to different model sizes.
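For readers unfamiliar with the technique: a minimal sketch of standard soft-label distillation (temperature-scaled KL divergence against a frozen teacher's logits). The PyTorch framing, temperature, and sizes here are generic illustrations, not the Gemma team's actual recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened next-token distributions."""
    t = temperature
    student_logp = F.log_softmax(student_logits / t, dim=-1)
    teacher_p = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_logp, teacher_p, reduction="batchmean") * (t * t)

# Toy example: 4 token positions over a 1000-entry vocabulary.
student_logits = torch.randn(4, 1000, requires_grad=True)
teacher_logits = torch.randn(4, 1000)  # teacher is frozen, no grad needed
distillation_loss(student_logits, teacher_logits).backward()
```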
magicalhippo: Thanks again, very interesting.

One unexpected (to me) use-case appeared not long ago when I found myself without internet but wanting to fix some non-standard Linux configuration issue. As a Windows guy I tend to web search such things, but local LLM to the rescue!

Even a smaller model like Gemma 2 9B has enough compressed knowledge that it managed to help me quickly solve my issue.

This got me thinking how such smaller, but very capable, models might be a game-changer in communities where internet might not be available or is too expensive for continuous use. It's almost like having a portion of the internet in a box; just add electricity.
alekandreev: Thank you for the feedback! This is why we are so excited to push more and more on small models for both low end and high end smartphones!
swyx: will there ever be a Gemma 3 Thinking? how copyable is the Flash Thinking approach to the Gemma series?
alekandreev: That's a very interesting area, but nothing we can announce today.
mdp2021: Thank you!

Question: your model supports 140 languages. Given that you are focusing on compactness and efficiency, would you not have gains in also developing models on a selected, limited number of languages (e.g. the topmost (in cultural production) four "western" ones with a shared alphabet, or a similar set)?

Edit: of course the multilingual capability can be welcome. On the other hand, there are evident cases in which efficiency can be paramount. We can wonder about the tradeoff: how much efficiency is sacrificed for features?
alekandreev: That's an idea we've thought about. However, we think the open source community has already created a very impressive set of language- or region-specific finetunes [1] [2]. Also, there is a lot of cultural context and nuance in every language that we don't have the capacity to cover sufficiently. So for v3 we focused on creating the best foundational multilingual model.

[1] https://huggingface.co/aiplanet/buddhi-indic

[2] https://ai.google.dev/gemma/gemmaverse/sealion
mdp2021: And have you measured the trade-off that could come with embracing such a large number of languages and alphabets? It would be interesting to know whether you are sacrificing some response quality, or if such supposed sacrifice is interestingly negligible, or if - even more interestingly - the quality increases with the added proficiency.
miki123211: How good is Gemma at structured output generation, JSON schema compliance, and tool use? Particularly the smaller versions, particularly in foreign languages?

We will run our internal evals on it for sure, but just wanted to ask whether that's even a use case that the team considered and trained for.
canyon289: Hey, I'm from the Gemma team. There are a couple of angles to your question.

We do care about prompted instructions, like JSON schema, and it is something we eval for and encourage you to try. Here's an example from Gemma 2 to guide folks looking to do what it sounds like you're interested in:

https://www.youtube.com/watch?v=YxhzozLH1Dk

Multilinguality was a big focus in Gemma 3. Give it a try.

And for structured output, Gemma works well with many structured output libraries, for example the one built into Ollama:

https://github.com/ollama/ollama/blob/main/docs/api.md#structured-outputs

In short, you should have all the functionality you need!
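To make that concrete, a minimal sketch of Ollama's structured outputs with Gemma 3 from Python; the receipt schema and prompt are made-up examples, while the `format` parameter accepting a JSON schema follows the Ollama docs linked above:

```python
from ollama import chat
from pydantic import BaseModel

class Receipt(BaseModel):
    merchant: str
    total_chf: float
    items: list[str]

response = chat(
    model="gemma3",
    messages=[{
        "role": "user",
        "content": "Extract the receipt fields: Restaurant Alpenhof, "
                   "Rösti 24.50, Apfelsaft 5.00, total 29.50 CHF",
    }],
    format=Receipt.model_json_schema(),  # constrain decoding to this JSON schema
)
receipt = Receipt.model_validate_json(response.message.content)
print(receipt.total_chf)  # 29.5
```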
seektable: Just tried gemma3:4b for structured output and it fails with a strange error (Ollama is the latest):

Ollama error: POST predict: Post "http://127.0.0.1:49675/completion": read tcp 127.0.0.1:49677->127.0.0.1:49675: wsarecv: An existing connection was forcibly closed by the remote host.

Not sure if this is an Ollama or a gemma3:4b problem. At the same time, gemma3:12b works fine for the same API request (100% identical, the only difference is the model id).
heinrichf: I'm comparing Gemma 3 12B (https://ollama.com/library/gemma3; running fully on my 3060 12GB) and Mistral Small 3 24B (https://ollama.com/library/mistral-small; 10% offloaded to the CPU).

- Gemma 3 12B: ~100 t/s on prompt eval; 15 t/s on eval

- Mistral Small 3 24B: ~500 t/s on prompt eval; 10 t/s on eval

Do you know what difference in architecture could make the prompt eval (prefill) so much slower on the 2x smaller Gemma 3 model?
alekandreev: Thank you for the report! We are working with the Ollama team directly and will look into it.
moffkalast: What's the official take on the system prompt? The technical report doesn't mention it, but the official QAT GGUFs include some form of prepending it to the first user message. Has it been trained with any <start_of_turn>system turns with tool calls and such?
chickenbig: In Switzerland isn't it customary to round the food bill up to the nearest 10?
gundmc: What do companies like Meta and Google gain from releasing open models? Is it just reputational? Attractive to top AI talent?
npodbielski: I believe (and some other people on the internet with more knowledge of LLMs believe too) that open-source local models are the future.
Probably big models with an API and chat, like OpenAI is doing, will have their niche too, but it is very costly, and it is not AGI and will not be in the near future.
On the other hand, with the rise of NPU chips and small models, you can have your own assistant on your phone, using your own data, almost instantaneously and at almost no cost. Whoever builds the best open-source model will win this race. As the winner, you will be able to set the standard.
Basically, this is why we have Linux on servers, not Windows, and why, even though browsers are free, you still get one from every tech giant.
colejhudson: Those are certainly benefits, but it's most likely a prophylactic move.

LLMs will be (are?) a critical piece of infrastructure. Commoditizing that infrastructure ensures that firms like Google and Meta won't be dependent on any other (OpenAI) for access to it.

Meta in particular has had this issue wrt ads on iOS. And Google wrt paying Apple to be the default search engine.

See also: Joel Spolsky's famous Strategy Letter V [0].

[0]: https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
xnx: Linking to the announcement (which links to the PDF) would probably be more useful.

Introducing Gemma 3: The most capable model you can run on a single GPU or TPU

https://blog.google/technology/developers/gemma-3/
xnx: Even though Gemma 3 takes much less inference processing power and delivers better results than DeepSeek v3, I'm certain this won't cause the same Nvidia stock price panic that DeepSeek did.
xnx: This is the first model I can think of that advertises itself as being optimized for AMD ROCm.
vessenes: Lots to be excited about here - in particular, a new architecture that allows subquadratic scaling of memory needs for long context; it looks like 128k+ context is officially now available on a local model. The charts make it look like if you have the RAM, the model is pretty good out to 350k or so(!) with RoPE.

In addition, it tests well on Chatbot Arena, with an Elo significantly above yesterday's best open model, Qwen 2.5 72B, and it has some pretty interesting properties indicating it has not spent much of its model weight space on memorization, hopefully implying that it has spent it on cognition and conceptual stuff.

And, oh, also vision and 140 languages.

This seems like one worth downloading and keeping; Gemma models have at times not performed quite to benchmark, but I'd guess from all this that this will be a useful, strong local model for some time. I'm curious about coding abilities and tool following, and about ease of fine-tuning for those.

Thanks for open-sourcing this, DeepMind team! It looks great.
igleria: > The Gemma 3 models are multimodal - processing text and images - and feature a 128K context window with support for over 140 languages.

I'm curious as a multilingual person: would a single language (English/Spanish/Cantonese) allow for the model to be bigger and still fit in a single GPU?
alpaca128: In the context of another multilingual model, I've heard that the majority of its training was in mainly one language, as that training seems to be applicable to languages added later too. To me that sounds plausible, given that adding a new language means vocabulary & grammar, while the understanding of concepts should already be there.

Intuitively, adding 140 languages instead of e.g. the 5 most common would otherwise be in conflict with making a small model that fits on a single GPU.
jampekka: Per quick testing, the 27B model seems very strong, at least in natural language. It even produces good Finnish, in which smaller models tend to really struggle. Very promising.

Edit: Per even quicker testing, the Finnish language performance degrades rapidly with the smaller models, as is usually the case. Would be great to have language-specific distillations from larger models.
RandyOrion: Thanks for these cool models!

One suggestion (or just a rant): less censorship for local models, PLEASE.

One question: a 100+ Elo gain from Gemma 2 to Gemma 3 on Chatbot Arena is really something; any estimates on how this was achieved?
danielhanchen: If it helps anyone, I wrote a detailed analysis here: https://x.com/danielhanchen/status/1899735308180267176

TL;DR:

1. 1B text only; 4B, 12B, 27B vision + text. 14T tokens.

2. 128K context length, further trained from 32K. The 1B is 32K.

3. Removed attention softcapping. Replaced with QK norm.

4. 5 sliding + 1 global attention.

5. 1024 sliding window attention.

6. RL: BOND, WARM, WARP.
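A back-of-the-envelope sketch of why the 5:1 sliding/global interleaving (point 4) cuts the KV cache so sharply; the layer count is an illustrative assumption, not Gemma 3's actual depth:

```python
# A sliding-window layer caches only the last `window` positions;
# a global layer caches all of them. Gemma 3 interleaves 5 sliding : 1 global.
def kv_positions(num_layers, seq_len, window=1024, sliding_per_global=5):
    total = 0
    for layer in range(num_layers):
        is_global = layer % (sliding_per_global + 1) == sliding_per_global
        total += seq_len if is_global else min(seq_len, window)
    return total  # cached positions, summed over layers

layers, seq_len = 48, 128_000  # illustrative depth
all_global = kv_positions(layers, seq_len, window=seq_len)
interleaved = kv_positions(layers, seq_len)
print(f"KV cache shrinks to {interleaved / all_global:.0%} of all-global")  # ~17%
```

The handful of global layers dominate the remaining cost; the sliding layers are nearly free at long context.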
jerrygenser: I suppose SigLIP 2 wasn't out yet when they were training this. I wonder if there will be an update to the multimodal models, or a PaliGemma, which utilizes SigLIP 2. Aya Vision from Cohere utilized SigLIP 2 to great effect.
deepsquirrelnet: Anybody tried training with trl yet?
dhbradshaw: Quote:

> The Gemma 3 models are trained with distillation and achieve superior performance to Gemma 2 for both pre-trained and instruction finetuned versions. In particular, our novel post-training recipe significantly improves the math, chat, instruction-following and multilingual abilities, making Gemma3-4B-IT competitive with Gemma2-27B-IT and Gemma3-27B-IT comparable to Gemini-1.5-Pro across benchmarks. We release all our models to the community.

Really cool
Summarize the themes of the opinions expressed here. For each theme, output a markdown header. Include direct "quotations" (with author attribution) where appropriate. You MUST quote directly from users when crediting them, with double quotes. Fix HTML entities. Output markdown. Go long. Include a section of quotes that illustrate opinions uncommon in the rest of the piece
Okay, here's a breakdown of the key discussion points from the provided text, focusing on the Gemma 3 models, their performance and improvements, and other relevant details:
## 1. Gemma 3 Overview & Key Features
- Multimodal: Gemma 3 models process both text and images.
- Long Context Window: 128K context window (with support for even longer, up to 350k with RoPE), a significant improvement.
- Multilingual: Supports over 140 languages.
- Sizes: Models come in 1B, 4B, 12B, and 27B parameter sizes; the 1B is text-only, while the 4B, 12B, and 27B handle both vision and text.
- Training Data: Trained on 14T tokens.
- Open Weights: Google DeepMind has released the model weights to the community.
## 2. Performance Differences & Improvements in Gemma 3
- Significant Gains: Gemma 3 shows substantial improvements over Gemma 2, with the 4B model being competitive with the 27B Gemma 2 model, and the 27B Gemma 3 model approaching the performance of Gemini 1.5 Pro.
- Distillation: The models are trained using distillation techniques.
- Novel Post-Training Recipe: A new post-training recipe significantly improves math, chat, instruction-following, and multilingual abilities.
- Specific Technical Improvements:
  - Removed attention softcapping: replaced with QK norm.
  - Attention mechanism: 5 sliding + 1 global attention layers, with a 1024-token sliding window.
  - Context length: 128K, further trained from 32K (the 1B model is 32K).
- Language Performance: The 27B model shows strong performance in multiple languages, including Finnish, where smaller models often struggle. Language-specific distillations from larger models could further improve performance in specific languages.
## 3. Discussion Points & Observations
- Multilingual Training: The majority of training data is likely in a few dominant languages, but the model can still generalize to other languages.
- Vision Model Updates: There's discussion about potentially updating the multimodal models to use Siglipv2 (a visual encoder) for improved performance.
- Fine-tuning: People are interested in fine-tuning the models with tools like TRL (Transformer Reinforcement Learning).
- Censorship: There's a request for less censorship in local models.
- RL Training: The models were trained with RL using BOND, WARM, and WARP.
- Cognition vs. Memorization: There's a hope that the model has spent more weight on cognition and conceptual understanding rather than memorization.
In summary, Gemma 3 represents a significant step forward in open-source language models, offering improved performance, longer context windows, and multimodal capabilities. The improvements in the training recipe and attention mechanisms are key factors in its success.