AI-powered reasoning models are taking the world by storm in 2025! With the launch of DeepSeek-R1 and o3-mini, we have seen unprecedented levels of logical reasoning capability in AI chatbots. In this article, we will access both models through their APIs and evaluate their logical reasoning skills to find out whether o3-mini can replace DeepSeek-R1. We will compare their performance on standard benchmarks as well as real-world applications such as solving logical puzzles and even building a Tetris game! So buckle up and join the journey.
DeepSeek-R1 vs o3-mini: Logical Reasoning Benchmarks
DeepSeek-R1 and o3-mini take distinct approaches to structured thinking and deduction, making them suited to different kinds of complex problem-solving tasks. Before we discuss their benchmark performance, let's first take a quick look at the architecture of these models.
o3-mini is OpenAI's most advanced reasoning model. It uses a dense transformer architecture, processing every token with all model parameters for strong performance but high resource consumption. In contrast, DeepSeek's flagship model, R1, employs a Mixture-of-Experts (MoE) framework, activating only a subset of parameters per input for greater efficiency. This makes DeepSeek-R1 more scalable and computationally optimized while maintaining solid performance.
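The practical difference between the two architectures can be sketched in a few lines of code. The snippet below is a deliberately simplified, hypothetical illustration of top-k expert routing in NumPy, not DeepSeek's actual implementation: a dense model pushes every token through all parameters, while an MoE layer runs only the k experts selected by a gating network.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, experts, gate_weights, k=2):
    """Route input x to the top-k experts chosen by a softmax gate.

    Only k of the experts run, which is why MoE models activate
    far fewer parameters per token than a dense model of equal size.
    """
    logits = x @ gate_weights                # gating scores, one per expert
    top_k = np.argsort(logits)[-k:]          # indices of the k best experts
    weights = np.exp(logits[top_k])
    weights /= weights.sum()                 # renormalized softmax over top-k
    # Weighted sum of the chosen experts' outputs; the rest stay idle.
    return sum(w * experts[i](x) for w, i in zip(weights, top_k))

d, n_experts = 8, 4
# Each "expert" is just a random linear map in this toy example
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(n_experts)]
gate_weights = rng.normal(size=(d, n_experts))

x = rng.normal(size=d)
y = moe_layer(x, experts, gate_weights, k=2)
print(y.shape)
```

Because only 2 of the 4 experts execute per input, compute per token scales with k rather than with the total parameter count, which is the efficiency argument made above.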
Learn More: Is OpenAI's o3-mini Better Than DeepSeek-R1?
Now we need to see how well these models perform on logical reasoning tasks. First, let's look at their performance on the LiveBench benchmark tests.
![Can o3-mini Replace DeepSeek-R1 for Logical Reasoning?](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/benchmarks-1.webp)
Source: livebench.ai
The benchmark results show that OpenAI's o3-mini outperforms DeepSeek-R1 in almost every category except math. With a global average score of 73.94 compared to DeepSeek's 71.38, o3-mini demonstrates slightly stronger overall performance. It particularly excels in reasoning, scoring 89.58 versus DeepSeek's 83.17, reflecting superior analytical and problem-solving capabilities.
Also Read: Google Gemini 2.0 Pro vs DeepSeek-R1: Who Does Coding Better?
DeepSeek-R1 vs o3-mini: API Pricing Comparison
Since we’re testing these fashions by their APIs, let’s see how a lot these fashions price.
| Model | Context Length | Input Cost | Cached Input Cost | Output Cost |
|---|---|---|---|---|
| o3-mini | 200k | $1.10/M tokens | $0.55/M tokens | $4.40/M tokens |
| deepseek-chat | 64k | $0.27/M tokens | $0.07/M tokens | $1.10/M tokens |
| deepseek-reasoner | 64k | $0.55/M tokens | $0.14/M tokens | $2.19/M tokens |
As the table shows, OpenAI's o3-mini is roughly twice as expensive as DeepSeek-R1 in API costs. It charges $1.10 per million input tokens and $4.40 per million output tokens, whereas DeepSeek-R1 (deepseek-reasoner) charges $0.55 per million input tokens and $2.19 per million output tokens, making it the more budget-friendly option for large-scale applications.
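To make the pricing table concrete, here is a quick back-of-the-envelope cost comparison using the listed cache-miss rates. The request size of 1,000 input and 2,000 output tokens is an arbitrary example, not a figure taken from the benchmarks:

```python
# Per-token rates from the pricing table above (cache miss, USD per token)
RATES = {
    "o3-mini":           {"input": 1.10 / 1e6, "output": 4.40 / 1e6},
    "deepseek-reasoner": {"input": 0.55 / 1e6, "output": 2.19 / 1e6},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one API request in USD."""
    r = RATES[model]
    return input_tokens * r["input"] + output_tokens * r["output"]

# Example request: 1,000 input tokens and 2,000 output tokens
for model in RATES:
    print(f"{model}: ${request_cost(model, 1_000, 2_000):.6f}")
# o3-mini: $0.009900 vs deepseek-reasoner: $0.004930 -- roughly a 2x difference
```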
Sources: DeepSeek-R1 | o3-mini
How to Access DeepSeek-R1 and o3-mini via API
Before we step into the hands-on performance comparison, let's learn how to access DeepSeek-R1 and o3-mini using their APIs.
All you have to do is import the required libraries and API keys:
```python
from openai import OpenAI
from IPython.display import display, Markdown
import tiktoken  # used below for token counting
import time

with open("path_of_api_key") as file:
    openai_api_key = file.read().strip()

with open("path_of_api_key") as file:
    deepseek_api = file.read().strip()
```
DeepSeek-R1 vs o3-mini: Logical Reasoning Comparison
Now that we have API access, let's compare DeepSeek-R1 and o3-mini on their logical reasoning capabilities. For this, we will give the same prompt to both models and evaluate their responses on these metrics:
- Time taken by the model to generate the response,
- Quality of the generated response, and
- Cost incurred in generating the response.
We will then score the models 0 or 1 on each task, depending on their performance. So let's try out the tasks and see who emerges as the winner in the DeepSeek-R1 vs o3-mini reasoning battle!
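Each task below repeats the same bookkeeping: time the call, count tokens, and estimate the cost. As a design note, that boilerplate could be factored into a small helper like the hypothetical `run_task` below (a sketch, not code from this article, which inlines these steps in every task instead):

```python
import time

def run_task(call_model, count_tokens, in_rate, out_rate):
    """Time a model call and estimate its cost.

    call_model   -- zero-argument function returning (prompt_text, reply_text)
    count_tokens -- function mapping text to a token count
    in_rate / out_rate -- USD per token for input and output
    """
    start = time.time()
    prompt, reply = call_model()
    elapsed = time.time() - start
    in_tok, out_tok = count_tokens(prompt), count_tokens(reply)
    cost = in_tok * in_rate + out_tok * out_rate
    return {"reply": reply, "seconds": elapsed,
            "input_tokens": in_tok, "output_tokens": out_tok, "cost": cost}

# Dummy usage with a stub "model" so the helper can be exercised offline
result = run_task(
    call_model=lambda: ("2+2?", "4"),
    count_tokens=lambda text: len(text.split()),
    in_rate=0.55 / 1e6,
    out_rate=2.19 / 1e6,
)
print(result["input_tokens"], result["output_tokens"])
```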
Task 1: Building a Tetris Game
This task requires the model to implement a fully functional Tetris game in Python, managing game logic, piece movement, collision detection, and rendering without relying on external game engines.
Prompt: “Write a python code for this problem: generate a python code for the Tetris game”
Input to DeepSeek-R1 API
```python
INPUT_COST_CACHE_HIT = 0.14 / 1_000_000   # $0.14 per 1M tokens
INPUT_COST_CACHE_MISS = 0.55 / 1_000_000  # $0.55 per 1M tokens
OUTPUT_COST = 2.19 / 1_000_000            # $2.19 per 1M tokens

# Start timing
task1_start_time = time.time()

# Initialize the OpenAI client for the DeepSeek API
client = OpenAI(api_key=deepseek_api, base_url="https://api.deepseek.com")

messages = [
    {
        "role": "system",
        "content": "You are a professional programmer with extensive experience."
    },
    {
        "role": "user",
        "content": "Write a python code for this problem: generate a python code for the Tetris game."
    }
]

# Get the input token count using tiktoken (adjust the tokenizer if necessary)
encoding = tiktoken.get_encoding("cl100k_base")  # use a compatible tokenizer
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

# Call the DeepSeek API
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages,
    stream=False
)

# Get the output token count
output_tokens = len(encoding.encode(response.choices[0].message.content))
task1_end_time = time.time()
total_time_taken = task1_end_time - task1_start_time

# Assume a cache miss for worst-case pricing (adjust if cache data is available);
# the rates above are already per token, so multiply the counts directly
input_cost = input_tokens * INPUT_COST_CACHE_MISS
output_cost = output_tokens * OUTPUT_COST
total_cost = input_cost + output_cost

# Print the results
print("Response:", response.choices[0].message.content)
print("------------------ Total Time Taken for Task 1: ------------------", total_time_taken)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display the result
from IPython.display import display, Markdown
display(Markdown(response.choices[0].message.content))
```
Response by DeepSeek-R1
![DeepSeek-R1 task 1 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/task1deepseekr1.webp)
You can find DeepSeek-R1's full response here.
Output token cost:
Input Tokens: 28 | Output Tokens: 3323 | Estimated Cost: $0.0073
Code Output
Input to o3-mini API
```python
task1_start_time = time.time()

client = OpenAI(api_key=openai_api_key)

messages = [
    {
        "role": "system",
        "content": "You are a professional programmer with extensive experience."
    },
    {
        "role": "user",
        "content": "Write a python code for this problem: generate a python code for the Tetris game."
    }
]

# Use a compatible encoding (cl100k_base is the best choice for newer OpenAI models)
encoding = tiktoken.get_encoding("cl100k_base")

# Calculate the input token count
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

completion = client.chat.completions.create(
    model="o3-mini-2025-01-31",
    messages=messages
)

output_tokens = len(encoding.encode(completion.choices[0].message.content))
task1_end_time = time.time()

input_cost_per_1k = 0.0011   # $1.10 per 1M input tokens
output_cost_per_1k = 0.0044  # $4.40 per 1M output tokens

# Calculate the cost
input_cost = (input_tokens / 1000) * input_cost_per_1k
output_cost = (output_tokens / 1000) * output_cost_per_1k
total_cost = input_cost + output_cost

print(completion.choices[0].message)
print("---------------- Total Time Taken for Task 1: ----------------", task1_end_time - task1_start_time)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display the result
from IPython.display import display, Markdown
display(Markdown(completion.choices[0].message.content))
```
Response by o3-mini
![o3-mini task 1 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/task1o3mini.webp)
You can find o3-mini's full response here.
Output token cost:
Input Tokens: 28 | Output Tokens: 3235 | Estimated Cost: $0.014265
Code Output
Comparative Analysis
In this task, the models were required to generate functional Tetris code that allows actual gameplay. DeepSeek-R1 successfully produced a fully working implementation, as demonstrated in the code output video. In contrast, while o3-mini's code appeared well-structured, it threw errors during execution. As a result, DeepSeek-R1 outperforms o3-mini in this scenario, delivering a more reliable and playable solution.
Score: DeepSeek-R1: 1 | o3-mini: 0
Task 2: Analyzing Relational Inequalities
This task requires the model to analyze relational inequalities efficiently rather than relying on basic sorting methods.
Prompt: “In the following question, assuming the given statements to be true, find which of the given conclusions is/are definitely true and then give your answer accordingly.
Statements:
H > F ≤ O ≤ L; F ≥ V < D
Conclusions: I. L ≥ V II. O > D
The options are:
A. Only I is true
B. Only II is true
C. Both I and II are true
D. Either I or II is true
E. Neither I nor II is true.”
Input to DeepSeek-R1 API
```python
INPUT_COST_CACHE_HIT = 0.14 / 1_000_000   # $0.14 per 1M tokens
INPUT_COST_CACHE_MISS = 0.55 / 1_000_000  # $0.55 per 1M tokens
OUTPUT_COST = 2.19 / 1_000_000            # $2.19 per 1M tokens

# Start timing
task2_start_time = time.time()

# Initialize the OpenAI client for the DeepSeek API
client = OpenAI(api_key=deepseek_api, base_url="https://api.deepseek.com")

messages = [
    {"role": "system", "content": "You are an expert in solving reasoning problems. Please solve the given problem."},
    {"role": "user", "content": """In the following question, assuming the given statements to be true, find which of the given conclusions is/are definitely true and then give your answer accordingly.
Statements: H > F ≤ O ≤ L; F ≥ V < D
Conclusions:
I. L ≥ V
II. O > D
The options are:
A. Only I is true
B. Only II is true
C. Both I and II are true
D. Either I or II is true
E. Neither I nor II is true
"""}
]

# Get the input token count using tiktoken (adjust the tokenizer if necessary)
encoding = tiktoken.get_encoding("cl100k_base")  # use a compatible tokenizer
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

# Call the DeepSeek API
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages,
    stream=False
)

# Get the output token count
output_tokens = len(encoding.encode(response.choices[0].message.content))
task2_end_time = time.time()
total_time_taken = task2_end_time - task2_start_time

# Assume a cache miss for worst-case pricing (adjust if cache data is available)
input_cost = input_tokens * INPUT_COST_CACHE_MISS
output_cost = output_tokens * OUTPUT_COST
total_cost = input_cost + output_cost

# Print the results
print("Response:", response.choices[0].message.content)
print("------------------ Total Time Taken for Task 2: ------------------", total_time_taken)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display the result
from IPython.display import display, Markdown
display(Markdown(response.choices[0].message.content))
```
Output token cost:
Input Tokens: 136 | Output Tokens: 352 | Estimated Cost: $0.000004
Response by DeepSeek-R1
![deepseek-r1 task 2 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/task2deepseekr1.webp)
Input to o3-mini API
```python
task2_start_time = time.time()

client = OpenAI(api_key=openai_api_key)

messages = [
    {
        "role": "system",
        "content": "You are an expert in solving reasoning problems. Please solve the given problem."
    },
    {
        "role": "user",
        "content": """In the following question, assuming the given statements to be true, find which of the given conclusions is/are definitely true and then give your answer accordingly.
Statements: H > F ≤ O ≤ L; F ≥ V < D
Conclusions:
I. L ≥ V
II. O > D
The options are:
A. Only I is true
B. Only II is true
C. Both I and II are true
D. Either I or II is true
E. Neither I nor II is true
"""
    }
]

# Use a compatible encoding (cl100k_base is the best choice for newer OpenAI models)
encoding = tiktoken.get_encoding("cl100k_base")

# Calculate the input token count
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

completion = client.chat.completions.create(
    model="o3-mini-2025-01-31",
    messages=messages
)

output_tokens = len(encoding.encode(completion.choices[0].message.content))
task2_end_time = time.time()

input_cost_per_1k = 0.0011   # $1.10 per 1M input tokens
output_cost_per_1k = 0.0044  # $4.40 per 1M output tokens

# Calculate the cost
input_cost = (input_tokens / 1000) * input_cost_per_1k
output_cost = (output_tokens / 1000) * output_cost_per_1k
total_cost = input_cost + output_cost

# Print the results
print(completion.choices[0].message)
print("---------------- Total Time Taken for Task 2: ----------------", task2_end_time - task2_start_time)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display the result
from IPython.display import display, Markdown
display(Markdown(completion.choices[0].message.content))
```
Output token cost:
Input Tokens: 135 | Output Tokens: 423 | Estimated Cost: $0.002010
Response by o3-mini
![o3-mini task 2 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/task2o3mini.webp)
Comparative Analysis
o3-mini delivers the more efficient solution, providing a concise yet accurate response in significantly less time. It maintains clarity while ensuring logical soundness, making it ideal for quick reasoning tasks. DeepSeek-R1, while equally correct, is far slower and more verbose. Its detailed breakdown of the logical relationships enhances explainability but may feel excessive for straightforward evaluations. Though both models arrive at the same conclusion, o3-mini's speed and directness make it the better choice for practical use.
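The answer can also be checked mechanically. The brute-force sketch below (a verification aid written for this comparison, not output from either model) enumerates small integer assignments consistent with the statements and tests whether each conclusion holds in all of them:

```python
from itertools import product

def consistent(H, F, O, L, V, D):
    # Statements: H > F <= O <= L  and  F >= V < D
    return H > F and F <= O and O <= L and F >= V and V < D

vals = range(1, 6)    # a small search space suffices to find counterexamples
conclusion_I = True   # does L >= V hold in every consistent assignment?
conclusion_II = True  # does O > D hold in every consistent assignment?
for H, F, O, L, V, D in product(vals, repeat=6):
    if consistent(H, F, O, L, V, D):
        if not (L >= V):
            conclusion_I = False
        if not (O > D):
            conclusion_II = False

print(conclusion_I, conclusion_II)
```

Conclusion I always holds because the chain L ≥ O ≥ F ≥ V is forced by the statements, while conclusion II fails on easy counterexamples, so option A (Only I is true) is the definite answer.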
Score: DeepSeek-R1: 0 | o3-mini: 1
Task 3: Logical Reasoning in Math
This task challenges the model to recognize numerical patterns, which may involve arithmetic operations, multiplication, or a combination of mathematical rules. Instead of brute-force searching, the model must adopt a structured approach to deduce the hidden logic efficiently.
Prompt: “Study the given matrix carefully and select the number from among the given options that can replace the question mark (?) in it.
____________
| 7 | 13 | 174|
| 9 | 25 | 104|
| 11 | 30 | ? |
|_____|____|___|
The options are:
A 335
B 129
C 431
D 100
Please indicate the approach you have taken at each step.”
Input to DeepSeek-R1 API
```python
INPUT_COST_CACHE_HIT = 0.14 / 1_000_000   # $0.14 per 1M tokens
INPUT_COST_CACHE_MISS = 0.55 / 1_000_000  # $0.55 per 1M tokens
OUTPUT_COST = 2.19 / 1_000_000            # $2.19 per 1M tokens

# Start timing
task3_start_time = time.time()

# Initialize the OpenAI client for the DeepSeek API
client = OpenAI(api_key=deepseek_api, base_url="https://api.deepseek.com")

messages = [
    {
        "role": "system",
        "content": "You are an expert in solving reasoning problems. Please solve the given problem."
    },
    {
        # User message reconstructed from the task prompt shown above
        "role": "user",
        "content": """Study the given matrix carefully and select the number from among the given options that can replace the question mark (?) in it.
| 7  | 13 | 174 |
| 9  | 25 | 104 |
| 11 | 30 |  ?  |
The options are:
A 335
B 129
C 431
D 100
Please indicate the approach you have taken at each step."""
    }
]

# Get the input token count using tiktoken (adjust the tokenizer if necessary)
encoding = tiktoken.get_encoding("cl100k_base")  # use a compatible tokenizer
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

# Call the DeepSeek API
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages,
    stream=False
)

# Get the output token count
output_tokens = len(encoding.encode(response.choices[0].message.content))
task3_end_time = time.time()
total_time_taken = task3_end_time - task3_start_time

# Assume a cache miss for worst-case pricing (adjust if cache data is available)
input_cost = input_tokens * INPUT_COST_CACHE_MISS
output_cost = output_tokens * OUTPUT_COST
total_cost = input_cost + output_cost

# Print the results
print("Response:", response.choices[0].message.content)
print("------------------ Total Time Taken for Task 3: ------------------", total_time_taken)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display the result
from IPython.display import display, Markdown
display(Markdown(response.choices[0].message.content))
```
Output token cost:
Input Tokens: 134 | Output Tokens: 274 | Estimated Cost: $0.000003
Response by DeepSeek-R1
![deepseek r1 task 3 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/deepseekr1task3.webp)
Input to o3-mini API
```python
task3_start_time = time.time()

client = OpenAI(api_key=openai_api_key)

messages = [
    {
        "role": "system",
        "content": "You are an expert in solving reasoning problems. Please solve the given problem."
    },
    {
        # User message reconstructed from the task prompt shown above
        "role": "user",
        "content": """Study the given matrix carefully and select the number from among the given options that can replace the question mark (?) in it.
| 7  | 13 | 174 |
| 9  | 25 | 104 |
| 11 | 30 |  ?  |
The options are:
A 335
B 129
C 431
D 100
Please indicate the approach you have taken at each step."""
    }
]

# Use a compatible encoding (cl100k_base is the best choice for newer OpenAI models)
encoding = tiktoken.get_encoding("cl100k_base")

# Calculate the input token count
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

completion = client.chat.completions.create(
    model="o3-mini-2025-01-31",
    messages=messages
)

output_tokens = len(encoding.encode(completion.choices[0].message.content))
task3_end_time = time.time()

input_cost_per_1k = 0.0011   # $1.10 per 1M input tokens
output_cost_per_1k = 0.0044  # $4.40 per 1M output tokens

# Calculate the cost
input_cost = (input_tokens / 1000) * input_cost_per_1k
output_cost = (output_tokens / 1000) * output_cost_per_1k
total_cost = input_cost + output_cost

# Print the results
print(completion.choices[0].message)
print("---------------- Total Time Taken for Task 3: ----------------", task3_end_time - task3_start_time)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display the result
from IPython.display import display, Markdown
display(Markdown(completion.choices[0].message.content))
```
Output token cost:
Input Tokens: 134 | Output Tokens: 736 | Estimated Cost: $0.003386
Output by o3-mini
![o3-mini vs DeepSeek-R1 API logical reasoning](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/o3-minitaks3_1.webp)
![logical reasoning task 3 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/o3minitask3_2.webp)
![logical reasoning task 3 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/o3minitask3_3.webp)
![](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/o3mintask3_4.webp)
Comparative Analysis
Here, the pattern followed in each row is:
(1st number)^3 − (2nd number)^2 = 3rd number
Applying this pattern:
- Row 1: 7^3 − 13^2 = 343 − 169 = 174
- Row 2: 9^3 − 25^2 = 729 − 625 = 104
- Row 3: 11^3 − 30^2 = 1331 − 900 = 431
Hence, the correct answer is 431.
DeepSeek-R1 correctly identifies and applies this pattern, arriving at the right answer. Its structured approach ensures accuracy, though it takes significantly longer to compute the result. o3-mini, on the other hand, fails to establish a consistent pattern. It attempts several operations, such as multiplication, addition, and exponentiation, but does not arrive at a definitive answer, resulting in an unclear and incorrect response. Overall, DeepSeek-R1 outperforms o3-mini in logical reasoning and accuracy, while o3-mini struggles due to its inconsistent and ineffective approach.
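The arithmetic above is easy to confirm programmatically; this short check (a sanity check added for this comparison, not part of either model's response) applies the cube-minus-square rule to each row:

```python
rows = [(7, 13, 174), (9, 25, 104), (11, 30, None)]

def third(a, b):
    # Pattern: (1st number)^3 - (2nd number)^2 = 3rd number
    return a**3 - b**2

# The first two rows confirm the rule (174 and 104)
for a, b, c in rows[:2]:
    assert third(a, b) == c

print(third(11, 30))  # fills the question mark: 431
```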
Score: DeepSeek-R1: 1 | o3-mini: 0
Final Score: DeepSeek-R1: 2 | o3-mini: 1
Logical Reasoning Comparison Summary
| Task No. | Task Type | Model | Performance | Time Taken (seconds) | Cost |
|---|---|---|---|---|---|
| 1 | Code Generation | DeepSeek-R1 | ✅ Working code | 606.45 | $0.0073 |
| 1 | Code Generation | o3-mini | ❌ Non-working code | 99.73 | $0.014265 |
| 2 | Relational Reasoning | DeepSeek-R1 | ✅ Correct | 74.28 | $0.000004 |
| 2 | Relational Reasoning | o3-mini | ✅ Correct | 8.08 | $0.002010 |
| 3 | Mathematical Reasoning | DeepSeek-R1 | ✅ Correct | 450.53 | $0.000003 |
| 3 | Mathematical Reasoning | o3-mini | ❌ Wrong answer | 12.37 | $0.003386 |
Conclusion
As this comparison shows, both DeepSeek-R1 and o3-mini have distinct strengths catering to different needs. DeepSeek-R1 excels in accuracy-driven tasks, particularly mathematical reasoning and complex code generation, making it a strong candidate for applications requiring logical depth and correctness. However, one significant drawback is its slow response times, partly due to ongoing server maintenance issues that have affected its accessibility. On the other hand, o3-mini offers significantly faster response times, but its tendency to produce incorrect results limits its reliability for high-stakes reasoning tasks.
This analysis underscores the trade-off between speed and accuracy in language models. While o3-mini may be useful for rapid, low-risk applications, DeepSeek-R1 stands out as the superior choice for reasoning-intensive tasks, provided its latency issues are addressed. As AI models continue to evolve, striking a balance between performance efficiency and correctness will be key to optimizing AI-driven workflows across domains.
Also Read: Can OpenAI's o3-mini Beat Claude Sonnet 3.5 in Coding?
Frequently Asked Questions
Q. What are the key differences between DeepSeek-R1 and o3-mini?
A. DeepSeek-R1 excels at mathematical reasoning and complex code generation, making it ideal for applications that require logical depth and accuracy. o3-mini, on the other hand, is significantly faster but sometimes sacrifices accuracy, leading to occasional incorrect outputs.
Q. Which model is better for coding tasks?
A. DeepSeek-R1 is the better choice for coding and reasoning-intensive tasks due to its superior accuracy and ability to handle complex logic. While o3-mini provides quicker responses, it may generate errors, making it less reliable for high-stakes programming tasks.
Q. What applications is o3-mini best suited for?
A. o3-mini is best suited for low-risk, speed-dependent applications, such as chatbots, casual text generation, and interactive AI experiences. However, for tasks requiring high accuracy, DeepSeek-R1 is the preferred option.
Q. Which model is stronger at reasoning and problem-solving?
A. DeepSeek-R1 has superior logical reasoning and problem-solving capabilities, making it a strong choice for mathematical computations, programming assistance, and scientific queries. o3-mini provides quick but sometimes inconsistent responses in complex problem-solving scenarios.