Inferencing at the edge has very different needs than training large language models or large-scale inferencing in AI data ...
Nvidia BlueField-4 STX adds a context memory layer to storage to close the agentic AI throughput gap
Nvidia's BlueField-4 STX reference architecture inserts a dedicated context memory layer between GPUs and traditional storage, claiming 5x token throughput and 4x energy efficiency for agentic AI ...
Nvidia's KV Cache Transform Coding (KVTC) compresses the LLM key-value cache by 20x without model changes, cutting GPU memory costs and time-to-first-token by up to 8x for multi-turn AI applications.
Phison's CEO predicts that growing interest in running AI models, such as OpenClaw, on PCs threatens to extend the memory shortage. It could also help solve the crunch.
At GTC, Nvidia announced the Groq 3 LPU chip, which uses tech licensed from the AI company Groq. The LPU is one of seven upcoming data center chips intended to supercharge AI.