Tech Reviews
    OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips

By gvfx00@gmail.com | February 13, 2026 | 2 Mins Read



    But 1,000 tokens per second is actually modest by Cerebras standards. The company has measured 2,100 tokens per second on Llama 3.1 70B and reported 3,000 tokens per second on OpenAI’s own open-weight gpt-oss-120B model, suggesting that Codex-Spark’s comparatively lower speed reflects the overhead of a larger or more complex model.
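Those throughput figures translate directly into wall-clock wait time for a developer. As a rough sketch (the 2,000-token completion length is an illustrative assumption, not a measured figure for any of these models):

```python
# Rough arithmetic: how long it takes to stream a completion at a
# steady decode throughput. Token count is an illustrative assumption.

def completion_seconds(tokens: int, tokens_per_second: float) -> float:
    """Wall-clock time to generate `tokens` at a steady decode rate."""
    return tokens / tokens_per_second

# A hypothetical ~2,000-token code suggestion at the rates cited above:
for rate in (1_000, 2_100, 3_000):
    print(f"{rate:>5} tok/s -> {completion_seconds(2_000, rate):.2f} s")
```

At these rates the difference between 1,000 and 3,000 tokens per second is the difference between a two-second wait and well under one second per suggestion, which compounds quickly over a day of iterating.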

AI coding agents have had a breakout year, with tools like OpenAI’s Codex and Anthropic’s Claude Code reaching a new level of usefulness for rapidly building prototypes, interfaces, and boilerplate code. OpenAI, Google, and Anthropic have all been racing to ship more capable coding agents, and latency has become a key differentiator: a model that codes faster lets a developer iterate faster.

    With fierce competition from Anthropic, OpenAI has been iterating on its Codex line at a rapid rate, releasing GPT-5.2 in December after CEO Sam Altman issued an internal “code red” memo about competitive pressure from Google, then shipping GPT-5.3-Codex just days ago.


    Diversifying away from Nvidia

    Spark’s deeper hardware story may be more consequential than its benchmark scores. The model runs on Cerebras’ Wafer Scale Engine 3, a chip the size of a dinner plate that Cerebras has built its business around since at least 2022. OpenAI and Cerebras announced their partnership in January, and Codex-Spark is the first product to come out of it.

    OpenAI has spent the past year systematically reducing its dependence on Nvidia. The company signed a massive multi-year deal with AMD in October 2025, struck a $38 billion cloud computing agreement with Amazon in November, and has been designing its own custom AI chip for eventual fabrication by TSMC.

    Meanwhile, a planned $100 billion infrastructure deal with Nvidia has fizzled so far, though Nvidia has since committed to a $20 billion investment. Reuters reported that OpenAI grew unsatisfied with the speed of some Nvidia chips for inference tasks, which is exactly the kind of workload that OpenAI designed Codex-Spark for.

Regardless of which chip is under the hood, speed matters, though it may come at the cost of accuracy. For developers who spend their days inside a code editor waiting for AI suggestions, 1,000 tokens per second may feel less like carefully guiding a jigsaw and more like running a rip saw. Just watch what you’re cutting.

