Show HN: PureBee – A software-defined GPU running Llama 3.2 1B at 3.6 tok/sec https://ift.tt/ORL9wD5

This started as a question about simulation theory: if a GPU is just rules applied to a grid in parallel, do you actually need the silicon? Turns out, no.

PureBee is a complete GPU defined as a software specification — memory, engine, instruction set, runtime. It runs Llama 3.2 1B inference at 3.6 tok/sec on a single CPU core, and the model answers questions correctly.

What makes it different from llama.cpp or WebLLM: the WASM compute kernel is constructed byte-by-byte in JavaScript at runtime. No Emscripten, no Rust, no compiler, no build step. The binary that runs the Q4 SIMD matrix math is itself readable JavaScript, so every layer of the stack — including the thing executing the math — is auditable source.

The progression from first principles:

```
Baseline JS        0.08 tok/sec
Typed arrays       0.21 tok/sec
WASM kernels       0.70 tok/sec
Q4 quantization    1.30 tok/sec
SIMD               3.00 tok/sec
Worker threads     3.60 tok/sec
```

45× total, on a single CPU core, with zero npm dependencies.

The claim isn't that this is faster than a real GPU. The claim is that a GPU was never the hardware — it was always the math. The hardware is just one way to run the math fast; PureBee is another. If that's true, it changes where inference can run.

To run:

```
git clone https://ift.tt/jARCkq7
node download.js llama3
node --max-old-space-size=4096 chat-llama3.js
```

Requires Node.js ≥ 20. The heap flag is not optional.

Licensed FSL-1.1 (converts to Apache 2.0 after two years). Free for personal and internal use.

Happy to go deep on the WASM binary construction, the Q4 nibble layout, or the SharedArrayBuffer weight cache that runs a 4.5GB model in 1.8GB of RAM.

https://ift.tt/jARCkq7

February 24, 2026 at 12:45AM
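For readers unfamiliar with building WASM "byte-by-byte in JavaScript": here is a minimal sketch of the general technique, not PureBee's actual kernel — these bytes encode a trivial i32 add rather than Q4 matrix math, but the construction (hand-written sections, instantiated at runtime with no compiler) is the same idea.

```javascript
// A hand-assembled WASM module: magic, version, then type/function/export/code
// sections, each laid out as [section id, payload size, payload bytes].
// This one exports add(a, b) = a + b for i32; a real kernel just has a
// longer code section built the same way.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,             // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00,             // version: 1
  0x01, 0x07, 0x01,                   // type section: 1 entry
  0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, //   func (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,             // function section: func 0 uses type 0
  0x07, 0x07, 0x01,                   // export section: 1 entry
  0x03, 0x61, 0x64, 0x64, 0x00, 0x00, //   "add" -> func 0
  0x0a, 0x09, 0x01,                   // code section: 1 body
  0x07, 0x00,                         //   body size 7, no locals
  0x20, 0x00, 0x20, 0x01,             //   local.get 0, local.get 1
  0x6a, 0x0b,                         //   i32.add, end
]);

// No build step: the bytes become an executable module at runtime.
const inst = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(inst.exports.add(2, 3)); // 5
```

One property worth noting: `WebAssembly.Module` validates the bytes at construction, so a malformed hand-built section fails loudly at instantiation rather than silently computing garbage.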
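On the Q4 nibble layout: the post doesn't spell out the format, but llama.cpp-style block quantization gives the flavor. The sketch below is my assumption of the general idea — block size (32), scale choice, and the +8 offset are illustrative, not PureBee's actual layout.

```javascript
// Sketch of 4-bit block quantization: 32 float weights become one float scale
// plus 16 bytes, two 4-bit codes per byte. Decode: (code - 8) * scale.
// (Block size and nibble order here are assumptions, not PureBee's format.)
const clamp4 = (x) => Math.min(15, Math.max(0, x));

function quantizeBlock(weights) {          // Float32Array of length 32
  let amax = 0;
  for (const w of weights) amax = Math.max(amax, Math.abs(w));
  const scale = amax / 7 || 1;             // max magnitude maps to code 8 ± 7
  const packed = new Uint8Array(16);
  for (let i = 0; i < 32; i += 2) {
    const lo = clamp4(Math.round(weights[i] / scale) + 8);
    const hi = clamp4(Math.round(weights[i + 1] / scale) + 8);
    packed[i >> 1] = lo | (hi << 4);       // two codes per byte: low nibble first
  }
  return { scale, packed };
}

function dequantizeBlock({ scale, packed }) {
  const out = new Float32Array(32);
  for (let i = 0; i < 32; i += 2) {
    const b = packed[i >> 1];
    out[i] = ((b & 0x0f) - 8) * scale;
    out[i + 1] = ((b >> 4) - 8) * scale;
  }
  return out;
}
```

Under this layout, 32 f32 weights (128 bytes) shrink to roughly 20 bytes, and a SIMD kernel can unpack two weights per byte load — presumably the kind of win behind the 0.70 → 1.30 tok/sec step in the table.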
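On the SharedArrayBuffer weight cache: the core trick is presumably that one buffer backs every thread's view of the weights, so spinning up worker threads doesn't multiply the model's memory footprint. A minimal sketch of that sharing property — the names and sizes below are mine, not PureBee's API:

```javascript
// Hedged sketch of a shared weight cache: allocate the quantized weights once
// in a SharedArrayBuffer, then hand each layer (or each worker) a zero-copy
// view into it. WEIGHT_BYTES and layerView are illustrative stand-ins.
const WEIGHT_BYTES = 1024;                 // stand-in for the real model size
const cache = new SharedArrayBuffer(WEIGHT_BYTES);

// Each layer gets a view into the same memory — a view, never a copy.
function layerView(layer, bytesPerLayer = 256) {
  return new Uint8Array(cache, layer * bytesPerLayer, bytesPerLayer);
}

const a = layerView(0);
const b = new Uint8Array(cache, 0, 256);   // e.g. the same region seen elsewhere
a[0] = 42;
console.log(b[0]);                         // 42: both views alias one buffer
```

Passing `cache` to a `Worker` (via `workerData` or `postMessage`) shares the memory rather than cloning it — which is how N worker threads can chew on one set of weights without N copies of the model in RAM.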
