Now it's TikTok parent ByteDance's turn for a reasoning AI: enter Seed-Thinking-v1.5!

It began with the announcement of OpenAI's o1 model in September 2024, but really took off with the release of DeepSeek R1 in January 2025.

Now, it seems most major AI model providers and trainers are in a new race to deliver better, faster, cheaper, and more powerful "reasoning" AI language models: that is, models that may take a little longer to respond to a human user, but ideally do so with better, more comprehensive, more thoroughly "reasoned" answers. This class of models arrives at those answers by performing "chain-of-thought" reasoning, reflecting on its own conclusions and interrogating them for accuracy before responding.

ByteDance, the Chinese web media giant and parent of TikTok, is the latest to join the party with the announcement and publication of the technical paper behind Seed-Thinking-v1.5, an upcoming large language model (LLM) designed to advance reasoning performance across both science, technology, engineering, and math (STEM) fields and general-purpose domains.

The model isn't yet available for download or use, and it's unclear what the licensing terms will be: whether it will be proprietary/closed source, open source/free for all to use and modify at will, or somewhere in between. But the technical paper provides some noteworthy details that are worth going over now, in advance of whenever it is made available.

Like Meta's new Llama 4 and Mistral's Mixtral before it, Seed-Thinking-v1.5 is built using a Mixture-of-Experts (MoE) architecture.

This architecture is designed to make models more efficient, essentially combining the capabilities of multiple specialized sub-models ("experts") into one, with each expert focused on a different domain and only the relevant experts activated for a given input.

In this case, the MoE architecture means that Seed-Thinking-v1.5 activates only 20 billion of its 200 billion total parameters at a time.
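
The paper doesn't reproduce the model's routing code, but the sparse-activation idea behind that figure can be sketched briefly. Below is a minimal, illustrative top-k MoE layer in PyTorch (the dimensions, expert count, and router design are assumptions for illustration, not ByteDance's configuration): only the experts the router selects actually run for each token, which is how a model can hold 200 billion parameters yet activate a small fraction per forward pass.

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Minimal sparse Mixture-of-Experts layer: a router scores experts per
    token and only the top-k experts run, keeping active parameters low."""

    def __init__(self, d_model: int = 512, n_experts: int = 16, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)  # per-token expert scores
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = torch.softmax(weights, dim=-1)      # normalize chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):                   # run only selected experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoELayer()
print(layer(torch.randn(8, 512)).shape)  # torch.Size([8, 512])
```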

ByteDance says in its technical paper, published to GitHub, that Seed-Thinking-v1.5 prioritizes structured reasoning and thoughtful response generation.

The results practically speak for themselves, with Seed-Thinking-v1.5 outperforming DeepSeek R1 and approaching Google's newly released Gemini 2.5 Pro and OpenAI's o3-mini-high reasoner on many third-party benchmark evaluations. It even exceeds those two on the ARC-AGI benchmark, which measures progress toward artificial general intelligence, seen as the goal or "Holy Grail" of AI: a model that outperforms humans on most economically valuable tasks, per OpenAI's definition.

Positioned as a compact yet capable alternative to larger state-of-the-art models, Seed-Thinking-v1.5 achieves competitive benchmark results and introduces innovations in reinforcement learning (RL), training data curation, and AI infrastructure.

Performance benchmarks and model focus

Seed-Thinking-v1.5 shows strong performance on a suite of challenging tasks, scoring 86.7% on AIME 2024, 55.0% pass@8 on Codeforces, and 77.3% on the GPQA science benchmark. These results place it close to, or matching, models like OpenAI's o3-mini-high and Google's Gemini 2.5 Pro on specific reasoning metrics.

On non-reasoning tasks, the model was evaluated through human preference comparisons and achieved an 8.0% higher win rate over DeepSeek R1, suggesting that its strengths generalize beyond logic- or math-heavy challenges.

To address saturation in common benchmarks like AIME, ByteDance introduced BeyondAIME, a new, harder math benchmark with curated problems designed to resist memorization and better discriminate between models. Both BeyondAIME and the Codeforces evaluation set are expected to be released publicly to support future research.

Data strategy

Training data played a central role in the model's development. For supervised fine-tuning (SFT), the team curated 400,000 samples: 300,000 verifiable problems (STEM, logic, and coding tasks) and 100,000 non-verifiable ones, such as creative writing and role-playing.

For RL training, data was segmented into:

  • Verifiable problems: 100,000 carefully filtered STEM questions and logic puzzles with known answers, sourced from elite competitions and expert review.
  • Non-verifiable tasks: human-preference datasets focused on open-ended prompts, evaluated using pairwise reward models.

The STEM data leaned heavily on advanced mathematics, accounting for over 80% of the problem set. Additional logic data included tasks like Sudoku and 24-point puzzles, with difficulty adjustable to match model progress.
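
Tasks like 24-point puzzles also illustrate why "verifiable" data is attractive for RL: the reward can be computed programmatically rather than by a judge model. Here is a hypothetical sketch of such a reward function in Python (not from the paper; the interface is an assumption). It checks that a proposed arithmetic expression uses exactly the four given numbers and evaluates to 24.

```python
import ast
import math

def leaves(node: ast.AST) -> list[float]:
    """Collect the numeric literals in a parsed arithmetic expression."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return [node.value]
    if isinstance(node, ast.BinOp):
        return leaves(node.left) + leaves(node.right)
    if isinstance(node, ast.UnaryOp):
        return leaves(node.operand)
    raise ValueError("only arithmetic over numeric literals is allowed")

def reward_24(expression: str, numbers: list[int]) -> float:
    """Verifiable reward: 1.0 if `expression` uses exactly `numbers`
    and evaluates to 24, else 0.0."""
    try:
        tree = ast.parse(expression, mode="eval")
        if sorted(leaves(tree.body)) != sorted(numbers):
            return 0.0  # wrong multiset of numbers
        value = eval(compile(tree, "<expr>", "eval"))  # validated: literals/ops only
        return 1.0 if math.isclose(value, 24.0) else 0.0
    except (ValueError, SyntaxError, ZeroDivisionError):
        return 0.0  # malformed or disallowed expression scores zero

print(reward_24("8 / (3 - 8 / 3)", [8, 3, 8, 3]))  # 1.0
print(reward_24("8 * 3", [8, 3, 8, 3]))            # 0.0
```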

Reinforcement learning approach

Reinforcement learning in Seed-Thinking-v1.5 is powered by custom actor-critic (VAPO) and policy-gradient (DAPO) frameworks, developed to address known instabilities in RL training. These techniques focus on reducing reward-signal sparsity and improving training stability, especially in long chain-of-thought (CoT) settings.
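
VAPO and DAPO themselves aren't spelled out here, but both build on the clipped policy-gradient family that PPO popularized. As a point of reference only, this is a generic PPO-style clipped surrogate loss in PyTorch; it illustrates the objective family these methods refine, not ByteDance's actual implementation.

```python
import torch

def clipped_policy_loss(
    logp_new: torch.Tensor,    # log-probs of sampled tokens under current policy
    logp_old: torch.Tensor,    # log-probs under the behavior (rollout) policy
    advantages: torch.Tensor,  # estimated per-token advantages
    clip_eps: float = 0.2,
) -> torch.Tensor:
    """Generic PPO-style clipped surrogate loss.

    Clipping the importance ratio bounds each policy update, one of the
    stability levers that methods like VAPO/DAPO refine for long
    chain-of-thought rollouts with sparse rewards."""
    ratio = torch.exp(logp_new - logp_old)  # importance-sampling ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()  # minimize negative surrogate
```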

Reward models play a critical role in supervising RL outputs. ByteDance introduced two key tools:

  • Seed-Verifier: a rule-based LLM that checks whether generated and reference answers are mathematically equivalent.
  • Seed-Thinking-Verifier: a step-by-step, reasoning-based judge that improves judgment consistency and resists reward hacking.

This two-tiered reward system enables nuanced evaluation of both straightforward and complex tasks.
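
How the two tiers might fit together at inference time is straightforward to sketch. The dispatcher below is hypothetical (the function names and escalation rule are assumptions, not details from the paper): a cheap rule-based equivalence check settles clear-cut answers, and ambiguous ones escalate to the slower, step-by-step judge.

```python
def rule_based_equal(answer: str, reference: str) -> bool | None:
    """Tier 1, a stand-in for a Seed-Verifier-style check: returns True/False
    when the comparison is clear-cut, None when it is ambiguous."""
    a, r = answer.strip().lower(), reference.strip().lower()
    if a == r:
        return True
    try:
        return float(a) == float(r)  # numeric equivalence, e.g. "0.50" vs "0.5"
    except ValueError:
        return None                  # not clear-cut; defer to tier 2

def reasoning_judge(answer: str, reference: str) -> bool:
    """Tier 2, a stand-in for a Seed-Thinking-Verifier-style judge; in
    practice this would prompt an LLM to reason step by step before ruling."""
    return False  # placeholder verdict; replace with a judge-model call

def reward(answer: str, reference: str) -> float:
    verdict = rule_based_equal(answer, reference)
    if verdict is None:
        verdict = reasoning_judge(answer, reference)  # escalate hard cases only
    return 1.0 if verdict else 0.0

print(reward("0.50", "0.5"))      # 1.0, settled by the rule-based tier
print(reward("one half", "0.5"))  # escalates to the reasoning judge
```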

Infrastructure and scaling

To support efficient large-scale training, ByteDance built a system on top of its HybridFlow framework, with execution handled by Ray clusters and with training and inference processes co-located to reduce GPU idle time.

A notable innovation is the Streaming Rollout System (SRS), which decouples model evolution from runtime execution. It accelerates iteration speed by asynchronously managing partially completed generations across model versions. This architecture reportedly delivers up to 3× faster RL cycles.
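
No SRS code is published, but the decoupling the paper describes maps onto a familiar producer/consumer shape: rollout workers keep generating while the trainer consumes whatever is ready, so neither side idles waiting on the other. The asyncio toy below is purely illustrative (the worker count, queue, and timings are assumptions), not the actual system.

```python
import asyncio

async def rollout_worker(queue: asyncio.Queue, worker_id: int) -> None:
    """Producer: keeps generating rollouts and streaming them out,
    never blocking on the trainer's consumption pace."""
    step = 0
    while True:
        await asyncio.sleep(0.1)            # stand-in for token generation
        await queue.put((worker_id, step))  # hand off a completed rollout
        step += 1

async def trainer(queue: asyncio.Queue, updates: int) -> None:
    """Consumer: trains on whatever rollouts are ready, so slow
    generations never stall the update loop."""
    for _ in range(updates):
        worker_id, step = await queue.get()
        print(f"policy update from worker {worker_id}, rollout {step}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=64)
    workers = [asyncio.create_task(rollout_worker(queue, i)) for i in range(4)]
    await trainer(queue, updates=10)
    for w in workers:
        w.cancel()
    await asyncio.gather(*workers, return_exceptions=True)

asyncio.run(main())
```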

Additional infrastructure techniques include:

  • Mixed precision (FP8) for memory savings
  • Expert parallelism and kernel auto-tuning for MoE efficiency
  • ByteCheckpoint for resilient and flexible checkpointing
  • AutoTuner for optimizing parallelism and memory configurations

Human evaluation and real-world impact

To evaluate alignment with human-centric preferences, ByteDance conducted human testing across a range of domains, including creative writing, humanities knowledge, and general conversation.

Seed-Thinking-v1.5 consistently outperformed DeepSeek R1 across categories, reinforcing its applicability to real-world user needs.

The development team notes that reasoning models trained primarily on verifiable tasks demonstrated strong generalization to creative domains, an outcome it attributes to the structure and rigor embedded in mathematical training workflows.

What it means for technical leaders, data engineers, and enterprise decision-makers

For technical leads managing the lifecycle of large language models, from data curation to deployment, Seed-Thinking-v1.5 presents an opportunity to rethink how reasoning capabilities are integrated into enterprise AI stacks.

Its modular training process, which includes verifiable reasoning datasets and multi-phase reinforcement learning, is particularly appealing to teams looking to scale LLM development while retaining fine-grained control.

ByteDance's move to introduce Seed-Verifier and Seed-Thinking-Verifier offers mechanisms for more trustworthy reward modeling, which can be critical when deploying models into customer-facing or regulated environments.

For teams that often operate under tight deadlines and limited bandwidth, the model's stability under reinforcement learning, enabled by innovations like VAPO and dynamic sampling, could reduce iteration cycles and streamline fine-tuning for specific tasks.

From an orchestration and deployment perspective, the model's hybrid infrastructure approach, including the Streaming Rollout System (SRS) and support for FP8 optimization, suggests significant gains in training throughput and hardware utilization.

These features would be valuable for engineers responsible for scaling LLM operations across cloud and on-prem systems. The fact that Seed-Thinking-v1.5 was trained with mechanisms to adapt reward feedback based on runtime dynamics speaks directly to the challenges of managing heterogeneous data pipelines and maintaining consistency across domains.

For teams tasked with ensuring reliability, reproducibility, and continuous integration of new tools, Seed-Thinking-v1.5's system-level design could serve as a blueprint for building robust, multi-modal orchestration systems.

For data engineering professionals, the structured approach to training data, including rigorous filtering, augmentation, and expert verification, reinforces the importance of data quality as a multiplier of model performance. It could inspire more deliberate approaches to dataset development and validation pipelines.

Future outlook

Seed-Thinking-v1.5 is the result of collaboration within ByteDance's Seed LLM Systems team, led by Yonghui Wu and publicly represented by Haibin Lin, a longtime AI contributor.

The project also draws on earlier efforts like Doubao 1.5 Pro and incorporates shared techniques in RLHF and data curation.

Looking ahead, the team plans to continue refining reinforcement learning techniques, with a focus on training efficiency and reward modeling for non-verifiable tasks. The public release of internal benchmarks such as BeyondAIME is intended to foster broader advances in reasoning-focused AI research.

