Workers AI gets a speed boost, batch workload support, more LoRAs, new models, and a refreshed dashboard
2025-04-11
We've made Workers AI inference faster with speculative decoding and prefix caching, and you can now use batch inference to handle large request volumes seamlessly.