r/LocalLLaMA
Qwen3.6-27B vs 35B, I prefer 35B but more people here post about 27B...
I've had better results quality-wise with 35B AND it's much faster than 27B. Just curious because I see lots of people posting about 27B. Am I doing something wrong with 27B? Use cases are multi-stage pipelines for coding and internet research. I also use Opencode a bit. All use cases I normally apply Opus to I've tried, as w…
Similar stories
- r/LocalLLaMA (distance: 0.258): Have Qwen said anything about further Qwen 3.6 models?
- r/LocalLLaMA (distance: 0.381): [RELEASE] - Finally, my first TTS model is out! Flare-TTS 28M
- Business (distance: 0.424): China is quietly upstaging America with its open models
- Ars Technica - All content (distance: 0.426): Study: AI models that consider user's feelings are more likely to make errors
- Business (distance: 0.434): Chinese AI models are popular. But can they make money?
- Ars Technica - All content (distance: 0.454): There's a lot of hype about Chinese EVs: is any of it true?
- Finance & economics (distance: 0.473): Asia's inexpensive AI stocks should worry American investors
- Business (distance: 0.487): Will the smartphone survive the AI age?
- Finance & economics (distance: 0.493): AI is not the only threat menacing big tech
- Finance & economics (distance: 0.519): The tech jobs bust is real. Don't blame AI (yet)