3.6 27B is probably the most powerful/efficient (for its size) model out there. Qwen has a history of leveraging DeepSeek as well (DeepSeek creating small models with Qwen as the base), and Alibaba is the main hosting service for DeepSeek. Alibaba/Qwen are in talks to invest in DeepSeek atm.
Been using Qwen 3.x for a while now as a local LLM with search capability. The 3.5 and 3.6 ones are great and run very fast.
Qwen 3.6 is already out? Damn, I swear I switched to 3.5 not even a month ago.
Yeah. The 80B Coder-Next runs at about the same speed on my hw too. I don’t know if it’s any better than 3.6 27B.