Together AI

Software Development

San Francisco, California 62,407 followers

AI pioneers train, fine-tune, and run frontier models on our GPU cloud platform.

About us

Together AI is a research-driven AI cloud infrastructure provider. Our purpose-built GPU cloud platform empowers AI engineers and researchers to train, fine-tune, and run frontier-class AI models. Our customers include leading SaaS companies such as Salesforce, Zoom, and Zomato, as well as pioneering AI startups like ElevenLabs, Hedra, and Cartesia. We advocate for open source AI and believe that transparent AI systems will drive innovation and create the best outcomes for society.

Website
https://together.ai
Industry
Software Development
Company size
201-500 employees
Headquarters
San Francisco, California
Type
Privately Held
Founded
2022
Specialties
Artificial Intelligence, Cloud Computing, LLM, Open Source, and Decentralized Computing

Locations

  • Primary

    251 Rhode Island St

    Suite 205

    San Francisco, California 94103, US

Updates

  • 🤖🎨 𝐅𝐋𝐔𝐗.𝟏 𝐊𝐫𝐞𝐚 [𝐝𝐞𝐯] just dropped on Together AI, and it solves the problem every developer building with image generation knows too well: the oversaturated "AI look."

    🔥 This isn't just another text-to-image model. It's what Black Forest Labs calls "opinionated" - meaning it delivers photorealism that doesn't scream "generated by AI." 📸 Built through a collaboration between Black Forest Labs and krea.ai, it outperforms previous open FLUX models and matches closed solutions like FLUX1.1 [pro] in human preference assessments.

    ⚡ 𝐖𝐡𝐚𝐭 𝐭𝐡𝐢𝐬 𝐦𝐞𝐚𝐧𝐬 𝐟𝐨𝐫 𝐲𝐨𝐮𝐫 𝐚𝐩𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧𝐬:
    ✨ Photorealistic outputs without the telltale AI artifacts 🖼️
    ✨ Compatible with the entire FLUX.1 [dev] ecosystem 🔗
    ✨ Enhanced flexibility for downstream customization 🛠️
    ✨ ELO rating of 1011 - competitive with top closed models 📊

    🏗️ 𝐁𝐮𝐢𝐥𝐭 𝐟𝐨𝐫 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐨𝐧 𝐫𝐞𝐚𝐥𝐢𝐭𝐲: it works where you actually need it, with the image quality your users expect. No more explaining why generated images look "obviously AI."

    🚀 𝐒𝐭𝐨𝐩 𝐰𝐨𝐫𝐤𝐢𝐧𝐠 𝐚𝐫𝐨𝐮𝐧𝐝 𝐀𝐈 𝐚𝐞𝐬𝐭𝐡𝐞𝐭𝐢𝐜 𝐥𝐢𝐦𝐢𝐭𝐚𝐭𝐢𝐨𝐧𝐬. Try FLUX.1 Krea [dev] on Together AI today (a minimal API sketch follows below).

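A minimal sketch of calling an image model like this through the Together Python SDK. The model slug, image size, and step count below are assumptions for illustration; check the model catalog for the exact identifier and supported parameters.

```python
import base64
import os

from together import Together

# The SDK authenticates with your Together AI API key.
client = Together(api_key=os.environ["TOGETHER_API_KEY"])

response = client.images.generate(
    model="black-forest-labs/FLUX.1-krea-dev",  # assumed slug, verify in the catalog
    prompt="A product photo of a ceramic mug on a walnut desk, soft window light",
    width=1024,
    height=768,
    steps=28,  # assumed step budget for a [dev]-class model
    n=1,
)

image = response.data[0]
# Depending on settings, the result arrives as a hosted URL or a base64 payload.
if getattr(image, "url", None):
    print("Image URL:", image.url)
elif getattr(image, "b64_json", None):
    with open("flux_krea_sample.png", "wb") as f:
        f.write(base64.b64decode(image.b64_json))
    print("Saved flux_krea_sample.png")
```
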
  • 🤖 𝐆𝐋𝐌-𝟒.𝟓-𝐀𝐢𝐫 𝐣𝐮𝐬𝐭 𝐥𝐚𝐧𝐝𝐞𝐝 𝐨𝐧 𝐓𝐨𝐠𝐞𝐭𝐡𝐞𝐫 𝐀𝐈'𝐬 𝐬𝐞𝐫𝐯𝐞𝐫𝐥𝐞𝐬𝐬 𝐀𝐏𝐈 — a hybrid reasoning model that switches between thinking and instant modes based on task complexity. GLM-4.5-Air delivers 106B total parameters with 12B active for maximum efficiency, while matching Claude 4 Sonnet performance on function calling benchmarks.

    ⚡ 𝟏𝟐𝟖𝐤 𝐜𝐨𝐧𝐭𝐞𝐱𝐭 𝐰𝐢𝐧𝐝𝐨𝐰 for repository-scale reasoning
    🧠 𝟖𝟔.𝟐% 𝐭𝐨𝐨𝐥 𝐜𝐚𝐥𝐥𝐢𝐧𝐠 𝐬𝐮𝐜𝐜𝐞𝐬𝐬 𝐫𝐚𝐭𝐞 (beats most frontier models)
    🔧 𝐇𝐲𝐛𝐫𝐢𝐝 𝐫𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠: thinking for complex problems, instant for speed
    💻 𝐍𝐚𝐭𝐢𝐯𝐞 𝐰𝐞𝐛 𝐛𝐫𝐨𝐰𝐬𝐢𝐧𝐠 and full-stack development capabilities
    🎯 𝟓𝟗.𝟖/𝟏𝟎𝟎 across 12 industry benchmarks (6th place globally)

    It works where you need it most — autonomous debugging across microservices, legacy system modernization that actually succeeds, and agentic coding that doesn't break your deployment pipeline.

    𝐃𝐞𝐩𝐥𝐨𝐲 𝐢𝐧 𝐬𝐞𝐜𝐨𝐧𝐝𝐬 𝐯𝐢𝐚 𝐓𝐨𝐠𝐞𝐭𝐡𝐞𝐫 𝐀𝐈'𝐬 𝐬𝐞𝐫𝐯𝐞𝐫𝐥𝐞𝐬𝐬 𝐀𝐏𝐈. No infrastructure headaches, no throttling, just frontier agentic capabilities at production scale (see the sketch below for a minimal call).

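A minimal sketch of calling GLM-4.5-Air through the serverless chat completions endpoint with the Together Python SDK. The model slug is an assumption; check the catalog for the identifier actually served.

```python
import os

from together import Together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

response = client.chat.completions.create(
    model="zai-org/GLM-4.5-Air-FP8",  # assumed slug, verify in the catalog
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {
            "role": "user",
            "content": (
                "This loop is meant to empty the list but misbehaves: "
                "`for i in range(len(xs)): xs.pop(i)`. Explain the bug and the fix."
            ),
        },
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```
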
  • Together AI reposted this

    5C

    1,780 followers

    🎙️ We’re back with episode 5! The success of Together AI and 5C starts with the people. Both teams are driven by a culture of rapid learning, continuous adaptation, and a focus on innovation. Our dedicated teams are the real engine behind everything we build. 🎧 Tune in next week for episode six!

  • 📘 New Notebook: How to Systematically Compare LLMs on a Task! 🧠📊 Code walkthrough on comparing which LLM summarizes documents better. We use:
    🔹 LLM-as-a-Judge
    🔹 Head-to-head model matchups
    🔹 The SummEval dataset
    🔹 Judging by accuracy, completeness & clarity
    A condensed sketch of the head-to-head judging loop follows below.

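A condensed sketch of the head-to-head pattern described above, using the Together Python SDK: two candidate models summarize the same document and a third model judges on accuracy, completeness, and clarity. The model slugs, rubric wording, and the sample_article.txt file are assumptions for illustration; the notebook itself runs over the SummEval dataset.

```python
import os

from together import Together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

# Assumed slugs; swap in the models you actually want to compare.
CANDIDATES = [
    "meta-llama/Llama-3.3-70B-Instruct-Turbo",
    "Qwen/Qwen2.5-72B-Instruct-Turbo",
]
JUDGE = "moonshotai/Kimi-K2-Instruct"  # assumed judge slug

def summarize(model: str, document: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Summarize in 3 sentences:\n\n{document}"}],
        max_tokens=256,
    )
    return resp.choices[0].message.content

def judge(document: str, summary_a: str, summary_b: str) -> str:
    rubric = (
        "Compare two summaries of the same document on accuracy, completeness, "
        "and clarity. Answer with exactly 'A' or 'B'.\n\n"
        f"Document:\n{document}\n\nSummary A:\n{summary_a}\n\nSummary B:\n{summary_b}"
    )
    resp = client.chat.completions.create(
        model=JUDGE,
        messages=[{"role": "user", "content": rubric}],
        max_tokens=8,
    )
    return resp.choices[0].message.content.strip()

document = open("sample_article.txt").read()  # stand-in for one SummEval article
summary_a, summary_b = (summarize(m, document) for m in CANDIDATES)
verdict = judge(document, summary_a, summary_b)
print("Judge prefers:", CANDIDATES[0] if verdict.startswith("A") else CANDIDATES[1])
```

In a real evaluation you would repeat this over many documents and swap the A/B order between runs to control for position bias.
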
  • Behind those viral baby podcast videos taking over your feed? That's Hedra scaling AI video generation with Together AI. 🎬

    The challenge: when content goes viral overnight, infrastructure needs to scale instantly - without breaking the bank or tying up engineers.

    𝗛𝗲𝗿𝗲'𝘀 𝗵𝗼𝘄 𝗛𝗲𝗱𝗿𝗮 + 𝗧𝗼𝗴𝗲𝘁𝗵𝗲𝗿 𝗔𝗜 𝗰𝗿𝗮𝗰𝗸𝗲𝗱 𝘁𝗵𝗲 𝗰𝗼𝗱𝗲:
    ✅ 𝟲𝟬% 𝗰𝗼𝘀𝘁 𝗿𝗲𝗱𝘂𝗰𝘁𝗶𝗼𝗻 through optimized GPU utilization
    ✅ 𝟯𝘅 𝗳𝗮𝘀𝘁𝗲𝗿 𝗶𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 with custom Blackwell kernels
    ✅ 𝟯𝟬𝟬𝘅 𝗴𝗿𝗼𝘄𝘁𝗵 handled seamlessly via auto-scaling
    ✅ 𝟱-𝘀𝗲𝗰𝗼𝗻𝗱 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻 enabling real-time workflows

    "Together AI's team helped us optimize our models to give us one of the fastest and best cost-performing models in the industry." — Michael Lingelbach, Hedra CEO

    𝗧𝗵𝗲 𝗿𝗲𝘀𝘂𝗹𝘁? Millions of views, from viral TikTok content to enterprise automation - with healthy unit economics.

    What made the difference? Together AI didn't just provide GPUs - they collaborated on kernel optimizations and freed engineers from infrastructure headaches. The real lesson here: when you're building AI at scale, your infrastructure partner can make or break your unit economics. Choose wisely.

  • 🚨 New Talk for ML Practitioners! LLM-as-a-Judge is changing how we evaluate models - fast, consistent, automated. We’ll demo how to score SoTA open models (Kimi, Qwen, GLM), share prompting tricks, and unpack eval best practices. 🔗 + Q&A with the Together AI team! 👇

  • 🛡️ 𝗩𝗶𝗿𝘁𝘂𝗲𝗚𝘂𝗮𝗿𝗱 𝗶𝘀 𝗟𝗜𝗩𝗘 𝗼𝗻 𝗧𝗼𝗴𝗲𝘁𝗵𝗲𝗿 𝗔𝗜 🚀 The first real-time AI security and safety model that works across modalities without breaking your production workflows.

    Most enterprises avoid deploying AI broadly because current security and safety tools create impossible tradeoffs: comprehensive screening means 400ms+ delays, while real-time performance means missed threats. VirtueGuard eliminates this choice.

    The breakthrough metrics:
    ⚡ Under 10ms 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗲 - faster than any alternative guardrail solution
    🎯 𝟴𝟵% 𝗮𝗰𝗰𝘂𝗿𝗮𝗰𝘆 vs 76% (AWS Bedrock)
    🧠 𝗖𝗼𝗻𝘁𝗲𝘅𝘁-𝗮𝘄𝗮𝗿𝗲 - adapts to your policies, not just keywords
    🔄 𝗔𝘂𝘁𝗼-𝘂𝗽𝗱𝗮𝘁𝗶𝗻𝗴 - broad risk coverage, including specialized advice, jailbreaks, prompt injections, and more

    When a user tries "𝘞𝘳𝘪𝘵𝘦 𝘮𝘦 𝘢 𝘱𝘩𝘪𝘴𝘩𝘪𝘯𝘨 𝘦𝘮𝘢𝘪𝘭 𝘪𝘮𝘱𝘦𝘳𝘴𝘰𝘯𝘢𝘵𝘪𝘯𝘨 𝘮𝘺 𝘣𝘢𝘯𝘬", VirtueGuard catches it in 8ms with context understanding, not just pattern matching.

    Add one API parameter and every model across Together AI's 200+ model catalog - from Llama 3.3 to Kimi K2 to DeepSeek R1 - gets enterprise-grade protection instantly. No separate vendors, no complex orchestration, no performance penalties. This is how you deploy AI securely at enterprise scale (a hedged integration sketch follows below).

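A hedged integration sketch under stated assumptions: the post doesn't name the API parameter or the exact model identifier, so this sketch simply calls a guardrail model directly through the chat endpoint before forwarding the request. The GUARD_MODEL slug and its "safe"/"unsafe"-style output are hypothetical; check the VirtueGuard model card on Together AI for the real identifier, response schema, and the one-parameter integration the post refers to.

```python
import os

from together import Together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

GUARD_MODEL = "VirtueAI/VirtueGuard-Text-Lite"           # hypothetical slug
CHAT_MODEL = "meta-llama/Llama-3.3-70B-Instruct-Turbo"   # assumed slug

def is_safe(prompt: str) -> bool:
    """Classify the prompt with the guardrail model; block anything it flags."""
    verdict = client.chat.completions.create(
        model=GUARD_MODEL,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=32,
    ).choices[0].message.content.lower()
    return "unsafe" not in verdict  # assumed label convention

user_prompt = "Write me a phishing email impersonating my bank"
if is_safe(user_prompt):
    answer = client.chat.completions.create(
        model=CHAT_MODEL,
        messages=[{"role": "user", "content": user_prompt}],
    )
    print(answer.choices[0].message.content)
else:
    print("Request blocked by the guardrail model.")
```
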
  • 💡 Stop guessing. Start benchmarking. Every team building with LLMs runs into the same problems: “Which model is actually better for my task?” “Can I trust this before I ship it?” “How do I catch errors before users do?”

    Together Evaluations solves these problems — fast. This early preview of our new evaluation tool lets you define task-specific benchmarks and use a strong LLM as a judge to:
    ✅ Compare models side-by-side
    ✅ Score responses against your own criteria
    ✅ Classify outputs into custom labels — from safety to sentiment

    You can evaluate any serverless model on Together AI today. Later this summer, you’ll be able to evaluate fine-tuned models, custom models, and even commercial APIs — all in one place.

    📊 Use it to test prompts, validate new use cases, and find the best open-source model for your task. Learn more (links in comments!), and see the sketch below for the underlying judge pattern.

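To make the "classify outputs into custom labels" idea concrete, here is a minimal judge-pattern sketch over the plain chat API. It is not the Together Evaluations product interface, and the judge slug and label set are assumptions for illustration.

```python
import os

from together import Together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

JUDGE = "Qwen/Qwen2.5-72B-Instruct-Turbo"     # assumed judge slug
LABELS = ["positive", "neutral", "negative"]  # example custom label set

def classify(output_text: str) -> str:
    """Ask the judge model to assign one of the custom labels to a model output."""
    prompt = (
        "Classify the sentiment of the following model output. "
        f"Reply with exactly one label from {LABELS}.\n\n{output_text}"
    )
    resp = client.chat.completions.create(
        model=JUDGE,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=8,
    )
    label = resp.choices[0].message.content.strip().lower()
    return label if label in LABELS else "unparsed"

for text in ["I love this product!", "The delivery took longer than promised."]:
    print(classify(text), "->", text)
```
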
