AI-powered reasoning models are taking the world by storm in 2025! With the launch of DeepSeek-R1 and o3-mini, we have seen unprecedented levels of logical reasoning capability in AI chatbots. In this article, we will access these models through their APIs and evaluate their logical reasoning skills to find out whether o3-mini can replace DeepSeek-R1. We will compare their performance on standard benchmarks as well as real-world applications like solving logical puzzles and even building a Tetris game! So buckle up and join the journey.
DeepSeek-R1 vs o3-mini: Logical Reasoning Benchmarks
DeepSeek-R1 and o3-mini offer distinct approaches to structured thinking and deduction, making them suited to various kinds of complex problem-solving tasks. Before we discuss their benchmark performance, let's first take a quick look at the architecture of these models.
o3-mini is OpenAI's most advanced reasoning model. It uses a dense transformer architecture, processing every token with all model parameters for strong performance but high resource consumption. In contrast, DeepSeek-R1 employs a Mixture-of-Experts (MoE) framework, activating only a subset of parameters per input for greater efficiency. This makes DeepSeek-R1 more scalable and computationally optimized while maintaining solid performance.
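To make the distinction concrete, here is a toy sketch (purely illustrative; the expert count, gate scores, and top-2 routing below are assumptions, not either model's actual design) contrasting dense computation, where every parameter block processes the input, with MoE routing, where only the top-k gated experts do:

```python
# Each "expert" here is a tiny stand-in for a block of parameters.
def make_expert(scale):
    return lambda x: scale * x

experts = [make_expert(s) for s in range(16)]

def dense_forward(x):
    # Dense transformer: every expert (parameter block) processes the input
    return sum(e(x) for e in experts)

def moe_forward(x, gate_scores, k=2):
    # Mixture-of-Experts: only the top-k experts by gate score are activated
    top_k = sorted(range(len(experts)), key=gate_scores.__getitem__)[-k:]
    return sum(experts[i](x) for i in top_k), top_k

gate = [0.1, 0.9, 0.3, 0.8] + [0.0] * 12  # illustrative gate scores
out, active = moe_forward(1.0, gate)
print(f"MoE activated {len(active)}/{len(experts)} experts")  # 2/16
```

The efficiency argument is simply that the MoE path touches 2 of 16 parameter blocks per input, while the dense path touches all 16.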
Learn More: Is OpenAI's o3-mini Better Than DeepSeek-R1?
Now let's see how well these models perform in logical reasoning tasks. First, let's look at their performance on the LiveBench benchmark tests.
![Can o3-mini Replace DeepSeek-R1 for Logical Reasoning?](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/benchmarks-1.webp)
Source: livebench.ai
The benchmark results show that OpenAI's o3-mini outperforms DeepSeek-R1 in almost all aspects, apart from math. With a global average score of 73.94 compared to DeepSeek's 71.38, o3-mini demonstrates slightly stronger overall performance. It particularly excels in reasoning, achieving 89.58 versus DeepSeek's 83.17, reflecting superior analytical and problem-solving capabilities.
Also Read: Google Gemini 2.0 Pro vs DeepSeek-R1: Who Does Coding Better?
DeepSeek-R1 vs o3-mini: API Pricing Comparison
Since we are testing these models through their APIs, let's see how much they cost.
| Model | Context Length | Input Cost | Cached Input Cost | Output Cost |
|---|---|---|---|---|
| o3-mini | 200k | $1.10/M tokens | $0.55/M tokens | $4.40/M tokens |
| deepseek-chat | 64k | $0.27/M tokens | $0.07/M tokens | $1.10/M tokens |
| deepseek-reasoner | 64k | $0.55/M tokens | $0.14/M tokens | $2.19/M tokens |
As seen in the table, OpenAI's o3-mini is almost twice as expensive as DeepSeek-R1 in terms of API costs. It charges $1.10 per million input tokens and $4.40 per million output tokens, while DeepSeek-R1 offers a more cost-effective rate of $0.55 per million input tokens and $2.19 per million output tokens, making it a more budget-friendly option for large-scale applications.
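A quick helper (a sketch using the cache-miss input rates from the table above) shows how these per-million-token rates translate into per-request costs:

```python
# Per-million-token rates (USD) from the pricing table above (cache-miss input)
RATES = {
    "o3-mini":           {"input": 1.10, "output": 4.40},
    "deepseek-reasoner": {"input": 0.55, "output": 2.19},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate request cost in USD from token counts."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# Example: a request with 1,000 input and 2,000 output tokens
for model in RATES:
    print(f"{model}: ${estimate_cost(model, 1_000, 2_000):.6f}")
# o3-mini: $0.009900
# deepseek-reasoner: $0.004930
```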
Sources: DeepSeek-R1 | o3-mini
How to Access DeepSeek-R1 and o3-mini via API
Before we step into the hands-on performance comparison, let's learn how to access DeepSeek-R1 and o3-mini using their APIs.
All you have to do is import the necessary libraries and API keys:
from openai import OpenAI
from IPython.display import display, Markdown
import tiktoken  # used below for token counting
import time

with open("path_of_api_key") as file:
    openai_api_key = file.read().strip()

with open("path_of_api_key") as file:
    deepseek_api = file.read().strip()
DeepSeek-R1 vs o3-mini: Logical Reasoning Comparison
Now that we have API access, let's compare DeepSeek-R1 and o3-mini based on their logical reasoning capabilities. For this, we will give the same prompt to both models and evaluate their responses based on these metrics:
- Time taken by the model to generate the response,
- Quality of the generated response, and
- Cost incurred to generate the response.
We'll then score the models 0 or 1 for each task, depending on their performance. So let's take a look at the tasks and see who emerges as the winner in the DeepSeek-R1 vs o3-mini reasoning battle!
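The latency measurements in the tasks below follow a simple time-the-call pattern, which could be factored into a small helper like this (a sketch; the lambda stands in for a real API call):

```python
import time

def timed_call(call_model, *args, **kwargs):
    """Run a callable and return its result with the wall-clock time taken."""
    start = time.time()
    result = call_model(*args, **kwargs)
    return result, time.time() - start

# Usage with a stand-in function instead of a real API request:
result, elapsed = timed_call(lambda prompt: f"echo: {prompt}", "hello")
print(result, f"({elapsed:.3f}s)")
```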
Task 1: Building a Tetris Game
This task requires the model to implement a fully functional Tetris game using Python, efficiently managing game logic, piece movement, collision detection, and rendering without relying on external game engines.
Prompt: “Write a python code for this problem: generate a Python code for the Tetris game”
Input to DeepSeek-R1 API
INPUT_COST_CACHE_HIT = 0.14 / 1_000_000   # $0.14 per 1M tokens
INPUT_COST_CACHE_MISS = 0.55 / 1_000_000  # $0.55 per 1M tokens
OUTPUT_COST = 2.19 / 1_000_000            # $2.19 per 1M tokens

# Start timing
task1_start_time = time.time()

# Initialize OpenAI client for the DeepSeek API
client = OpenAI(api_key=deepseek_api, base_url="https://api.deepseek.com")

messages = [
    {
        "role": "system",
        "content": """You are a professional Programmer with a large experience."""
    },
    {
        "role": "user",
        "content": """write a python code for this problem: generate a python code for Tetris game."""
    }
]

# Get token count using tiktoken (adjust model name if necessary)
encoding = tiktoken.get_encoding("cl100k_base")  # Use a compatible tokenizer
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

# Call the DeepSeek API
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages,
    stream=False
)

# Get output token count
output_tokens = len(encoding.encode(response.choices[0].message.content))

task1_end_time = time.time()
total_time_taken = task1_end_time - task1_start_time

# The cost constants above are per token, so multiply token counts directly
# (assume cache miss for worst-case pricing; adjust if cache info is available)
input_cost = input_tokens * INPUT_COST_CACHE_MISS
output_cost = output_tokens * OUTPUT_COST
total_cost = input_cost + output_cost

# Print results
print("Response:", response.choices[0].message.content)
print("------------------ Total Time Taken for Task 1: ------------------", total_time_taken)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display result
display(Markdown(response.choices[0].message.content))
Response by DeepSeek-R1
![DeepSeek-R1 task 1 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/task1deepseekr1.webp)
You can find DeepSeek-R1's full response here.
Output token cost:
Input Tokens: 28 | Output Tokens: 3323 | Estimated Cost: $0.0073
Code Output
Input to o3-mini API
task1_start_time = time.time()

client = OpenAI(api_key=openai_api_key)

messages = [
    {
        "role": "system",
        "content": """You are a professional Programmer with a large experience."""
    },
    {
        "role": "user",
        "content": """write a python code for this problem: generate a python code for Tetris game."""
    }
]

# Use a compatible encoding (cl100k_base is the best choice for newer OpenAI models)
encoding = tiktoken.get_encoding("cl100k_base")

# Calculate token counts
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

completion = client.chat.completions.create(
    model="o3-mini-2025-01-31",
    messages=messages
)

output_tokens = len(encoding.encode(completion.choices[0].message.content))

task1_end_time = time.time()

input_cost_per_1k = 0.0011   # $1.10 per 1M tokens = $0.0011 per 1,000 input tokens
output_cost_per_1k = 0.0044  # $4.40 per 1M tokens = $0.0044 per 1,000 output tokens

# Calculate cost
input_cost = (input_tokens / 1000) * input_cost_per_1k
output_cost = (output_tokens / 1000) * output_cost_per_1k
total_cost = input_cost + output_cost

print(completion.choices[0].message)
print("---------------- Total Time Taken for Task 1: ----------------", task1_end_time - task1_start_time)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display result
display(Markdown(completion.choices[0].message.content))
Response by o3-mini
![o3-mini task 1 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/task1o3mini.webp)
You can find o3-mini's full response here.
Output token cost:
Input Tokens: 28 | Output Tokens: 3235 | Estimated Cost: $0.014265
Code Output
Comparative Analysis
In this task, the models were required to generate functional Tetris code that allows for actual gameplay. DeepSeek-R1 successfully produced a fully working implementation, as demonstrated in the code output video. In contrast, while o3-mini's code appeared well-structured, it encountered errors during execution. Consequently, DeepSeek-R1 outperforms o3-mini in this scenario, delivering a more reliable and playable solution.
Score: DeepSeek-R1: 1 | o3-mini: 0
Task 2: Analyzing Relational Inequalities
This task requires the model to efficiently analyze relational inequalities rather than relying on basic sorting methods.
Prompt: “In the following question, assuming the given statements to be true, find which of the conclusions among the given conclusions is/are definitely true and then give your answers accordingly.
Statements:
H > F ≤ O ≤ L; F ≥ V < D
Conclusions: I. L ≥ V II. O > D
The options are:
A. Only I is true
B. Only II is true
C. Both I and II are true
D. Either I or II is true
E. Neither I nor II is true.”
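As an aside (a verification sketch, not part of either model's output), the puzzle can be checked mechanically: enumerate small integer assignments satisfying the statements and test each conclusion against every satisfying case:

```python
from itertools import product

def satisfies_statements(h, f, o, l, v, d):
    # Statements: H > F <= O <= L and F >= V < D
    return h > f and f <= o <= l and f >= v and v < d

# All assignments of values 0..4 to the six variables that satisfy the statements
cases = [vals for vals in product(range(5), repeat=6) if satisfies_statements(*vals)]

conclusion_i = all(l >= v for (h, f, o, l, v, d) in cases)   # I: L >= V
conclusion_ii = all(o > d for (h, f, o, l, v, d) in cases)   # II: O > D
print(f"I definitely true: {conclusion_i}, II definitely true: {conclusion_ii}")
# I definitely true: True, II definitely true: False  -> option A
```

Since V ≤ F ≤ O ≤ L chains through the statements, conclusion I always holds, while O > D fails for some assignments, so only option A is correct.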
Input to DeepSeek-R1 API
INPUT_COST_CACHE_HIT = 0.14 / 1_000_000   # $0.14 per 1M tokens
INPUT_COST_CACHE_MISS = 0.55 / 1_000_000  # $0.55 per 1M tokens
OUTPUT_COST = 2.19 / 1_000_000            # $2.19 per 1M tokens

# Start timing
task2_start_time = time.time()

# Initialize OpenAI client for the DeepSeek API
client = OpenAI(api_key=deepseek_api, base_url="https://api.deepseek.com")

messages = [
    {"role": "system", "content": "You are an expert in solving Reasoning Problems. Please solve the given problem."},
    {"role": "user", "content": """In the following question, assuming the given statements to be true, find which of the conclusions among given conclusions is/are definitely true and then give your answers accordingly.
Statements: H > F ≤ O ≤ L; F ≥ V < D
Conclusions:
I. L ≥ V
II. O > D
The options are:
A. Only I is true
B. Only II is true
C. Both I and II are true
D. Either I or II is true
E. Neither I nor II is true
"""}
]

# Get token count using tiktoken (adjust model name if necessary)
encoding = tiktoken.get_encoding("cl100k_base")  # Use a compatible tokenizer
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

# Call the DeepSeek API
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages,
    stream=False
)

# Get output token count
output_tokens = len(encoding.encode(response.choices[0].message.content))

task2_end_time = time.time()
total_time_taken = task2_end_time - task2_start_time

# The cost constants above are per token, so multiply token counts directly
# (assume cache miss for worst-case pricing; adjust if cache info is available)
input_cost = input_tokens * INPUT_COST_CACHE_MISS
output_cost = output_tokens * OUTPUT_COST
total_cost = input_cost + output_cost

# Print results
print("Response:", response.choices[0].message.content)
print("------------------ Total Time Taken for Task 2: ------------------", total_time_taken)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display result
display(Markdown(response.choices[0].message.content))
Output token cost:
Input Tokens: 136 | Output Tokens: 352 | Estimated Cost: $0.000004
Response by DeepSeek-R1
![deepseek-r1 task 2 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/task2deepseekr1.webp)
Input to o3-mini API
task2_start_time = time.time()

client = OpenAI(api_key=openai_api_key)

messages = [
    {
        "role": "system",
        "content": """You are an expert in solving Reasoning Problems. Please solve the given problem."""
    },
    {
        "role": "user",
        "content": """In the following question, assuming the given statements to be true, find which of the conclusions among given conclusions is/are definitely true and then give your answers accordingly.
Statements: H > F ≤ O ≤ L; F ≥ V < D
Conclusions:
I. L ≥ V
II. O > D
The options are:
A. Only I is true
B. Only II is true
C. Both I and II are true
D. Either I or II is true
E. Neither I nor II is true
"""
    }
]

# Use a compatible encoding (cl100k_base is the best choice for newer OpenAI models)
encoding = tiktoken.get_encoding("cl100k_base")

# Calculate token counts
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

completion = client.chat.completions.create(
    model="o3-mini-2025-01-31",
    messages=messages
)

output_tokens = len(encoding.encode(completion.choices[0].message.content))

task2_end_time = time.time()

input_cost_per_1k = 0.0011   # $1.10 per 1M tokens = $0.0011 per 1,000 input tokens
output_cost_per_1k = 0.0044  # $4.40 per 1M tokens = $0.0044 per 1,000 output tokens

# Calculate cost
input_cost = (input_tokens / 1000) * input_cost_per_1k
output_cost = (output_tokens / 1000) * output_cost_per_1k
total_cost = input_cost + output_cost

# Print results
print(completion.choices[0].message)
print("---------------- Total Time Taken for Task 2: ----------------", task2_end_time - task2_start_time)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display result
display(Markdown(completion.choices[0].message.content))
Output token cost:
Input Tokens: 135 | Output Tokens: 423 | Estimated Cost: $0.002010
Response by o3-mini
![o3-mini task 2 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/task2o3mini.webp)
Comparative Analysis
o3-mini delivers the most efficient solution, providing a concise yet accurate response in significantly less time. It maintains clarity while ensuring logical soundness, making it ideal for quick reasoning tasks. DeepSeek-R1, while equally correct, is much slower and more verbose. Its detailed breakdown of logical relationships enhances explainability but may feel excessive for simple evaluations. Though both models arrive at the same conclusion, o3-mini's speed and direct approach make it the better choice for practical use.
Score: DeepSeek-R1: 0 | o3-mini: 1
Task 3: Logical Reasoning in Math
This task challenges the model to recognize numerical patterns, which may involve arithmetic operations, multiplication, or a combination of mathematical rules. Instead of brute-force searching, the model must adopt a structured approach to deduce the hidden logic efficiently.
Prompt: “Study the given matrix carefully and select the number from among the given options that can replace the question mark (?) in it.
____________
| 7 | 13 | 174|
| 9 | 25 | 104|
| 11 | 30 | ? |
|_____|____|___|
The options are:
A 335
B 129
C 431
D 100
Please mention the approach you have taken at each step.”
Input to DeepSeek-R1 API
INPUT_COST_CACHE_HIT = 0.14 / 1_000_000   # $0.14 per 1M tokens
INPUT_COST_CACHE_MISS = 0.55 / 1_000_000  # $0.55 per 1M tokens
OUTPUT_COST = 2.19 / 1_000_000            # $2.19 per 1M tokens

# Start timing
task3_start_time = time.time()

# Initialize OpenAI client for the DeepSeek API
client = OpenAI(api_key=deepseek_api, base_url="https://api.deepseek.com")

messages = [
    {
        "role": "system",
        "content": """You are an Expert in solving Reasoning Problems. Please solve the given problem"""
    },
    {
        "role": "user",
        "content": """Study the given matrix carefully and select the number from among the given options that can replace the question mark (?) in it.
____________
| 7  | 13 | 174|
| 9  | 25 | 104|
| 11 | 30 |  ? |
|____|____|____|
The options are:
A 335
B 129
C 431
D 100
Please mention the approach you have taken at each step."""
    }
]

# Get token count using tiktoken (adjust model name if necessary)
encoding = tiktoken.get_encoding("cl100k_base")  # Use a compatible tokenizer
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

# Call the DeepSeek API
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=messages,
    stream=False
)

# Get output token count
output_tokens = len(encoding.encode(response.choices[0].message.content))

task3_end_time = time.time()
total_time_taken = task3_end_time - task3_start_time

# The cost constants above are per token, so multiply token counts directly
# (assume cache miss for worst-case pricing; adjust if cache info is available)
input_cost = input_tokens * INPUT_COST_CACHE_MISS
output_cost = output_tokens * OUTPUT_COST
total_cost = input_cost + output_cost

# Print results
print("Response:", response.choices[0].message.content)
print("------------------ Total Time Taken for Task 3: ------------------", total_time_taken)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display result
display(Markdown(response.choices[0].message.content))
Output token cost:
Input Tokens: 134 | Output Tokens: 274 | Estimated Cost: $0.000003
Response by DeepSeek-R1
![deepseek r1 task 3 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/deepseekr1task3.webp)
Input to o3-mini API
task3_start_time = time.time()

client = OpenAI(api_key=openai_api_key)

messages = [
    {
        "role": "system",
        "content": """You are an Expert in solving Reasoning Problems. Please solve the given problem"""
    },
    {
        "role": "user",
        "content": """Study the given matrix carefully and select the number from among the given options that can replace the question mark (?) in it.
____________
| 7  | 13 | 174|
| 9  | 25 | 104|
| 11 | 30 |  ? |
|____|____|____|
The options are:
A 335
B 129
C 431
D 100
Please mention the approach you have taken at each step."""
    }
]

# Use a compatible encoding (cl100k_base is the best choice for newer OpenAI models)
encoding = tiktoken.get_encoding("cl100k_base")

# Calculate token counts
input_tokens = sum(len(encoding.encode(msg["content"])) for msg in messages)

completion = client.chat.completions.create(
    model="o3-mini-2025-01-31",
    messages=messages
)

output_tokens = len(encoding.encode(completion.choices[0].message.content))

task3_end_time = time.time()

input_cost_per_1k = 0.0011   # $1.10 per 1M tokens = $0.0011 per 1,000 input tokens
output_cost_per_1k = 0.0044  # $4.40 per 1M tokens = $0.0044 per 1,000 output tokens

# Calculate cost
input_cost = (input_tokens / 1000) * input_cost_per_1k
output_cost = (output_tokens / 1000) * output_cost_per_1k
total_cost = input_cost + output_cost

# Print results
print(completion.choices[0].message)
print("---------------- Total Time Taken for Task 3: ----------------", task3_end_time - task3_start_time)
print(f"Input Tokens: {input_tokens}, Output Tokens: {output_tokens}")
print(f"Estimated Cost: ${total_cost:.6f}")

# Display result
display(Markdown(completion.choices[0].message.content))
Output token cost:
Input Tokens: 134 | Output Tokens: 736 | Estimated Cost: $0.003386
Output by o3-mini
![o3-mini vs DeepSeek-R1 API logical reasoning](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/o3-minitaks3_1.webp)
![logical reasoning task 3 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/o3minitask3_2.webp)
![logical reasoning task 3 output](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/o3minitask3_3.webp)
![](https://cdn.analyticsvidhya.com/wp-content/uploads/2025/02/o3mintask3_4.webp)
Comparative Analysis
Here, the pattern followed in each row is:
(1st number)³ − (2nd number)² = 3rd number
Applying this pattern:
- Row 1: 7³ − 13² = 343 − 169 = 174
- Row 2: 9³ − 25² = 729 − 625 = 104
- Row 3: 11³ − 30² = 1331 − 900 = 431
Hence, the correct answer is 431.
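The rule can be confirmed in a few lines (a quick verification sketch, not part of either model's output):

```python
def third_value(a, b):
    # Row rule: (1st number)^3 - (2nd number)^2 = 3rd number
    return a ** 3 - b ** 2

# The first two rows match the rule, so apply it to the third
assert third_value(7, 13) == 174
assert third_value(9, 25) == 104
print(third_value(11, 30))  # 431
```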
DeepSeek-R1 correctly identifies and applies this pattern, leading to the correct answer. Its structured approach ensures accuracy, though it takes significantly longer to compute the result. o3-mini, on the other hand, fails to identify a consistent pattern. It attempts multiple operations, such as multiplication, addition, and exponentiation, but does not arrive at a definitive answer. This results in an unclear and incorrect response. Overall, DeepSeek-R1 outperforms o3-mini in logical reasoning and accuracy, while o3-mini struggles due to its inconsistent and ineffective approach.
Score: DeepSeek-R1: 1 | o3-mini: 0
Final Score: DeepSeek-R1: 2 | o3-mini: 1
Logical Reasoning Comparison Summary
| Task No. | Task Type | Model | Performance | Time Taken (seconds) | Cost |
|---|---|---|---|---|---|
| 1 | Code Generation | DeepSeek-R1 | ✅ Working Code | 606.45 | $0.0073 |
|  |  | o3-mini | ❌ Non-working Code | 99.73 | $0.014265 |
| 2 | Relational Reasoning | DeepSeek-R1 | ✅ Correct | 74.28 | $0.000004 |
|  |  | o3-mini | ✅ Correct | 8.08 | $0.002010 |
| 3 | Mathematical Reasoning | DeepSeek-R1 | ✅ Correct | 450.53 | $0.000003 |
|  |  | o3-mini | ❌ Wrong Answer | 12.37 | $0.003386 |
Conclusion
As we have seen in this comparison, both DeepSeek-R1 and o3-mini demonstrate distinct strengths catering to different needs. DeepSeek-R1 excels in accuracy-driven tasks, particularly mathematical reasoning and complex code generation, making it a strong candidate for applications requiring logical depth and correctness. However, one significant drawback is its slower response times, partly due to ongoing server maintenance issues that have affected its accessibility. On the other hand, o3-mini offers significantly faster response times, but its tendency to produce incorrect results limits its reliability for high-stakes reasoning tasks.
This analysis underscores the trade-offs between speed and accuracy in language models. While o3-mini may be useful for rapid, low-risk applications, DeepSeek-R1 stands out as the superior choice for reasoning-intensive tasks, provided its latency issues are addressed. As AI models continue to evolve, striking a balance between performance efficiency and correctness will be key to optimizing AI-driven workflows across various domains.
Also Read: Can OpenAI's o3-mini Beat Claude Sonnet 3.5 in Coding?
Frequently Asked Questions
A. DeepSeek-R1 excels in mathematical reasoning and complex code generation, making it ideal for applications that require logical depth and accuracy. o3-mini, on the other hand, is significantly faster but often sacrifices accuracy, leading to occasional incorrect outputs.
A. DeepSeek-R1 is the better choice for coding and reasoning-intensive tasks due to its superior accuracy and ability to handle complex logic. While o3-mini provides quicker responses, it may generate errors, making it less reliable for high-stakes programming tasks.
A. o3-mini is best suited for low-risk, speed-dependent applications, such as chatbots, casual text generation, and interactive AI experiences. However, for tasks requiring high accuracy, DeepSeek-R1 is the preferred option.
A. DeepSeek-R1 has superior logical reasoning and problem-solving capabilities, making it a strong choice for mathematical computations, programming assistance, and scientific queries. o3-mini provides quick but sometimes inconsistent responses in complex problem-solving scenarios.