09:11 CET · Wednesday, May 13, 2026

shipfeed

§ models · cluster

not much happened today

Mar 11 · primary fetch · 1 source · cluster a69b4ba0 · updated Mar 11

NVIDIA’s Nemotron 3 Super is a 120B-parameter (~12B active) open model with a hybrid Mamba-Transformer / SSM Latent MoE architecture and a 1M-token context window, delivering up to 2.2x faster inference than GPT-OSS-120B in FP4 along with strong throughput gains. It supports agentic workloads and is unusually open, with weights, data, and infrastructure details all released. The model scored 36 on the AA Intelligence Index, outperforming GPT-OSS-120B but trailing Qwen3.5-122B-A10B. Community and infrastructure support was immediate, from projects including vLLM, llama.cpp, Ollama, Together, Baseten, W&B Inference, LangChain, and Unsloth GGUFs.
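The "120B total / ~12B active" framing is worth unpacking: in a sparse MoE, only the routed experts run per token, so per-token compute scales with active parameters rather than total parameters. A back-of-the-envelope sketch, using only the numbers quoted above (the 2x-params FLOPs rule of thumb is an assumption, not a figure from the article):

```python
# Rough per-token compute comparison for a sparse MoE vs. an equivalent dense
# model. Parameter counts come from the summary above; the "forward pass costs
# about 2 FLOPs per parameter" approximation is a standard rule of thumb.
total_params = 120e9   # total parameters (all experts)
active_params = 12e9   # ~active parameters routed per token

flops_dense_equiv = 2 * total_params   # dense model of the same total size
flops_moe = 2 * active_params          # sparse model: only active experts run

speedup = flops_dense_equiv / flops_moe
print(f"~{speedup:.0f}x fewer per-token FLOPs than an equivalent dense model")
```

This is only a compute ratio; realized speedups (like the quoted 2.2x over GPT-OSS-120B in FP4) also depend on memory bandwidth, routing overhead, and quantization.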

Key technical innovations include native multi-token prediction (MTP) and a significant KV-cache efficiency advantage. On the product side, the cluster highlights a shift toward persistent agent runtimes and orchestration layers, with Andrej Karpathy advocating a "bigger IDE" concept in which agents replace files as the unit of work, enabling legible, forkable agentic organizations with real-time control. New launches fitting this vision include Perplexity’s Personal Computer, an always-on local/cloud hybrid running on a Mac mini, and Computer for Enterprise…

read full article on news.smol.ai
§ sources · 1 publication · timeline below
  1. news.smol.ai — not much happened today (primary)