Fernando (P5) and Lance (P7) may have turned in their best purpose-built performance of the year in Hungary, but CoreWeave is the real winner here. Did you see that amazing post-race interview with Fernando?? We're so excited to help the #AMF1 team push the boundaries of AI innovation and F1 performance. Aston Martin F1 Team
CoreWeave
Technology, Information and Internet
New York, NY 76,177 followers
About us
CoreWeave is the AI Hyperscaler™
- Website: http://xmrrwallet.com/cmx.pwww.coreweave.com
- Industry: Technology, Information and Internet
- Company size: 501-1,000 employees
- Headquarters: New York, NY
- Type: Privately Held
- Founded: 2017
- Specialties: Cloud, Kubernetes, VFX Rendering, Bare Metal, GPU Compute, and AI Compute Acceleration
Locations
- New York, NY, US (Primary)
- Livingston, NJ, US
- Philadelphia, PA, US
Updates
-
Episode 2 of our AI Cloud Horizons series is live! Tune in to hear Solutions Architect Jacob Feldman and Customer Experience VP John Mancuso discuss the issues with traditional SLAs and the modern AI cloud advantage. 🎥 Watch: https://xmrrwallet.com/cmx.phubs.la/Q03zBxFy0 Make sure to sign up for early access to episode 3 here: https://xmrrwallet.com/cmx.phubs.la/Q03zBvQT0 #CoreWeave #AI #Cloud #Horizons #SLA
AI Cloud Horizons -- Episode 2: The SLA is Not Enough
https://xmrrwallet.com/cmx.pwww.youtube.com/
-
Qwen3-Thinking is live on W&B Inference ‼️ Get started here ➡️ https://xmrrwallet.com/cmx.plnkd.in/ej37JhBi #Qwen3 #CoreWeave
🚨 Qwen3-Thinking is live on W&B Inference, with FP16 serving at $0.10/M tokens for both input and output.
• 235B params (⅓ the size of DeepSeek-R1)
• 256K native context window
• Cracked the “Fantastic Four” puzzle in 10 min with 81,920 thinking tokens
It matches o3 and Gemini 2.5 Pro on MMLU-Pro and beats o3 on LiveCodeBench v6 (+15 pts) while running at ~40× lower cost. All weights are Apache 2.0 open source. Spin it up on our W&B Inference platform powered by CoreWeave GPUs! Colab to get started → https://xmrrwallet.com/cmx.plnkd.in/gdgJCkNY
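For readers who want a quick smoke test outside the linked Colab, here is a minimal sketch of calling the model through an OpenAI-compatible client. The base URL, model identifier, and authentication shown are illustrative assumptions, not values confirmed in this post; check the Colab and the W&B Inference docs for the exact details.

```python
# Minimal sketch: querying Qwen3-Thinking on W&B Inference via an
# OpenAI-compatible client. The base_url and model name below are
# illustrative assumptions -- consult the official Colab / docs for
# the exact endpoint, model ID, and auth requirements.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.inference.wandb.ai/v1",  # assumed endpoint
    api_key="YOUR_WANDB_API_KEY",                  # placeholder credential
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Thinking-2507",    # assumed model ID
    messages=[
        {"role": "user", "content": "Walk through 17 * 24 step by step."}
    ],
    max_tokens=1024,
)

print(response.choices[0].message.content)
```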
-
CoreWeave reposted this
🚨 Two new open models from Alibaba Group just dropped on W&B Inference, powered by CoreWeave, and they’re monsters! Say hello to Qwen3-2507 and Qwen3-Coder, now live and inference-ready with unbeatable pricing. These aren’t just fast. They’re state-of-the-art in alignment, reasoning, and agentic coding. Fully integrated into W&B Weave and ready to run without spinning up a single server.

🧠 Qwen3-235B-A22B-Instruct-2507-FP8
A 235B-parameter MoE (22B active) with a 256K context window that tops ARC-AGI at 41.8, outperforming Kimi-K2 just 9 days after release. It streams in ~200 ms and shows big leaps in alignment, tool use, and multilingual reasoning. Ideal for long-doc processing, tool-chaining, and fast iteration.

💻 Qwen3-Coder (released today!)
Our favorite kind of debut: a 480B-param MoE model (35B active), purpose-built for agentic coding, browser use, and function-calling workflows. It’s also one of the lowest cost per million tokens on the market today. With native 256K context (scaling to 1M), support for 358+ coding languages, and best-in-class results in tool use and fill-in-the-middle tasks, this model is already being used in real-world dev flows.

🧪 Both are available now on W&B Inference. No infra. No spin-up. Just plug and build. We’re giving builders $20 in credits to try these models for themselves. Drop a reply with “Coding Capybara” and we'll select a few builders to get the credits. Try it here: https://xmrrwallet.com/cmx.plnkd.in/gPp8kXVW

Huge props to the Qwen team for pushing open models forward in a major way!
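Since the post highlights Qwen3-Coder's function-calling workflows, here is a rough sketch of what exercising that might look like against an OpenAI-compatible chat endpoint. The endpoint, model ID, tool-call support, and the run_tests tool are all assumptions made for illustration; the actual integration details live in the W&B Inference docs.

```python
# Minimal sketch: trying Qwen3-Coder's function calling through an
# OpenAI-compatible chat API. Endpoint, model ID, and the "run_tests"
# tool are hypothetical, shown only to illustrate the request shape.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.inference.wandb.ai/v1",  # assumed endpoint
    api_key="YOUR_WANDB_API_KEY",                  # placeholder credential
)

tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",  # hypothetical tool the agent may call
        "description": "Run the project's test suite and return the results.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-480B-A35B-Instruct",   # assumed model ID
    messages=[{
        "role": "user",
        "content": "Fix the failing test in src/utils.py, running tests as needed.",
    }],
    tools=tools,
)

# If the model chose to call the tool, inspect the requested arguments.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```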
-
Episode 2 of our AI Cloud Horizons webcast series is dropping next week! Below is a sneak peek of the conversation between Jacob Feldman (Solutions Architecture) and John Mancuso (Customer Experience), discussing traditional SLA shortcomings and the CoreWeave advantage. Sign up here for early access to our episodes: https://xmrrwallet.com/cmx.phubs.la/Q03yhJ4z0 #AICloudHorizons #CoreWeave #SLA
-
Kimi K2 is now live on W&B Inference, powered by CoreWeave. Try it out for yourself on W&B Weave 👇
NEW: Kimi K2 is now live on W&B Inference (powered by CoreWeave).

Last winter, DeepSeek proved that an “open-weights” moment could shake up the LLM landscape. The team at Moonshot AI raises that bar with their Kimi K2 release. It's the first truly open challenger, ready for production with no waitlists, no restrictions, and SOTA performance. In their release blog, they showcased an agent leveraging the wandb API to analyze data and craft a report. This (non-reasoning) model shows incredible performance across SWE-bench Verified, LCBv6, GPQA-Diamond, and more, and its tool-use abilities make Kimi K2 the ultimate copilot for ML research.

ICYMI, we've rolled out a cutting-edge inference service driven by CoreWeave, the same powerhouse that OpenAI, Mistral AI, and Jane Street rely on. It's the perfect infrastructure to run K2, with its massive 1T parameters.

🔧 Ready to test-drive it? We are giving away $50 in inference credits for you to fire this model up on W&B Weave. All you have to do is repost this post. Colab notebook to get started is in the comments!
-
AI is transforming industries, and the world of sports is no exception. Recently, our Chief of Staff Paddy Woods joined The Global Game: The Future of Soccer, Tech & Media Summit at the Prudential Center, hosted by GK Digital Ventures and Harris Blitzer Sports & Entertainment, alongside global leaders in sports, media, and technology to discuss how AI and digital infrastructure are transforming soccer on a global scale.

During his panel, Paddy explored how advancements in AI, wearables, and data analytics are driving performance on the field and revolutionizing fan engagement off it. We play a critical role in powering this transformation by providing the high-performance compute infrastructure that enables teams and organizations to turn these innovations into reality.

Special thanks to Governor Phil Murphy and the New Jersey Economic Development Authority (NJEDA) for championing the event and New Jersey as a hub of innovation. We’re proud to partner with both public and private institutions to help build that ecosystem, not just for sports, but for every sector AI is set to transform. https://xmrrwallet.com/cmx.phubs.la/Q03xvHjc0
-
In a bold step forward, we are proud to announce our intent to commit more than $6 billion to equip a 100MW state-of-the-art AI data center in Lancaster, Pennsylvania, with the potential to expand to 300MW. Unveiled at the inaugural Pennsylvania Energy and Innovation Summit at Carnegie Mellon University, this strategic commitment underscores our dedication to scaling next-generation AI infrastructure and joins our expanding network of 28 data centers across the U.S.

The facility is expected to create over 600 jobs during the construction phase and approximately 175 full-time roles as operations scale. Beyond job creation, it will serve as a long-term engine for regional growth and a key contributor to strengthening America’s position in AI and high-performance computing.

In partnering with Chirisa Technology Parks and Machine Investment Group LP, we’re working to ensure this facility delivers lasting economic and community impact. We’re proud to collaborate with government leaders at all levels who share our commitment to building a strong, AI-driven future for the United States. https://xmrrwallet.com/cmx.phubs.la/Q03xnlXp0
-
In an interview with Bloomberg's Caroline Hyde, our CEO Michael Intrator shared why the recent acquisition of Core Scientific is a defining move in our mission to scale AI infrastructure. Mike spoke to the importance of owning both the physical and software layers of the AI stack, from data centers and power requirements to orchestration and monitoring, so we can move faster, operate more efficiently, and deliver for the world’s leading AI builders. As Mike put it, by integrating with Core Scientific, we’re gaining the control needed to decide “how we build, where we build, when we build.”

Watch more: https://xmrrwallet.com/cmx.phubs.la/Q03w-0zb0