DeepSeek: A Revolution in AI at a Fraction of the Cost?
DeepSeek is a new, advanced AI assistant developed in China and launched at the end of January 2025. Below, it introduces itself, including an answer to why it has caused such a buzz worldwide.

I am DeepSeek, a new intelligent assistant making waves with my ability to deliver top-tier results at a fraction of the cost of competing models. Thanks to optimized training and innovative technologies, I offer quality comparable to the best models on the market, but at a much lower price.
Users should try me because I combine high performance, accessibility, and the capability to assist with a wide range of tasks—from answering questions to generating text. I am proof that advanced artificial intelligence doesn't have to be expensive or overly complicated.
Who Developed Me
I was developed by the Chinese company DeepSeek, which specializes in advanced AI technologies. The team behind my development includes experts in machine learning, data science, software engineering, and related fields. The company places a strong emphasis on innovation and efficiency, which is reflected in my ability to deliver high-quality results even with limited resources. DeepSeek also actively collaborates with academia and leverages open-source technologies to promote transparency and further AI advancement.

Figure 1: DeepSeek – AI robot with the Chinese flag in the background (source: Craiyon)
Open-Source Foundations
I am built on open-source technologies that enable transparency, flexibility, and rapid development. DeepSeek utilized proven open-source frameworks and tools such as TensorFlow, PyTorch, and Hugging Face Transformers to create my core model. This approach not only reduces development costs but also allows the community to contribute to my improvement and adaptation to diverse user needs.
The open-source approach also means I am more accessible and adaptable for a wide range of applications, from research to commercial use. This model fosters innovation and collaboration, which is crucial for the continued development of artificial intelligence.
Unlike, for example, ChatGPT by OpenAI, which is based on closed-source code with inner workings that are not publicly available, open-source models like LLaMA by Meta allow for greater transparency and collaboration. These models drive innovation by being accessible to a broad community of developers and researchers.
How I Was Trained
My development involved extensive training on vast amounts of data. I was trained using deep learning techniques and state-of-the-art neural network architectures, such as transformers. These models can process and generate text with high accuracy and naturalness.
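The core mechanism of the transformer architecture mentioned above can be illustrated with a minimal NumPy sketch of scaled dot-product self-attention. This is an illustrative toy, not DeepSeek's actual implementation: the sequence length, dimensions, and random weights are all made up for the demo.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention, the transformer's key building block."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens to queries/keys/values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # similarity of each token to every other
    weights = softmax(scores)                  # each row sums to 1
    return weights @ V                         # context-aware mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                        # toy sizes; real models use thousands
X = rng.normal(size=(seq_len, d_model))        # stand-in for token embeddings
Wq = rng.normal(size=(d_model, d_model))
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))

out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                               # one context-aware vector per token
```

Stacking many such attention layers (plus feed-forward layers) and training the weights on vast text corpora is, in essence, what "training a transformer" means.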
Regarding hardware, my training was conducted on powerful servers equipped with NVIDIA data-center GPUs optimized for AI workloads. (DeepSeek's own technical report cites export-compliant H800 chips, a cut-down variant of the H100, rather than the A100 or V100 parts sometimes mentioned.) Despite the U.S. export ban on the latest NVIDIA chips to China, DeepSeek achieved impressive results by using available resources efficiently and optimizing its algorithms.
My Development Costs
The exact cost of developing and training AI models like me or ChatGPT is rarely disclosed publicly, so the figures circulating are often estimates or speculation. What is certain is that the cost of training a large language model varies greatly with the project's scale, the hardware used, the data volume, and the training duration.
Estimated Costs Based on Available Information
- Training costs for models like ChatGPT (e.g., GPT-3): These costs can indeed reach tens to hundreds of millions of dollars. For instance, training GPT-3 might have cost around 100 million USD, covering high-performance servers, electricity, development teams, and other resources. GPT-3 was trained on massive datasets and required thousands of powerful GPUs/TPUs over several months.
- Training costs for my model (DeepSeek): If my development costs ranged between 5 and 6 million USD, it’s likely due to optimized training. DeepSeek effectively utilized resources, such as working with a smaller but highly curated dataset or optimizing algorithms to lower costs. Even with restricted access to the latest NVIDIA chips, the company achieved exceptional results through innovative methods.
Why Such a Difference in Costs
- Model size: ChatGPT (GPT-3) has 175 billion parameters—a huge number. If my model has fewer parameters, training costs are naturally lower.
- Optimization: Companies like DeepSeek can employ advanced optimization techniques to cut training costs.
- Hardware limitations: Due to the export ban on advanced NVIDIA chips to China, DeepSeek had to work with available resources, which may have led to a stronger focus on efficiency.
- Dataset localization: Training on a smaller but well-selected dataset can be more cost-effective than using a massive but less refined dataset.
It is therefore plausible that my training costs were significantly lower than those for ChatGPT, thanks to optimization and efficient resource use. Despite the lower costs, I deliver high-quality results, showcasing DeepSeek's advanced technologies and innovations.
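The gap between cost estimates becomes clearer with a back-of-envelope calculation using the common FLOPs ≈ 6 · N · D rule of thumb (N = parameters, D = training tokens). The GPU throughput, utilization, and hourly price below are illustrative assumptions, not DeepSeek's or OpenAI's actual figures, and the result covers only the raw compute of a single training run; headline numbers such as 100 million USD also include research staff, failed experiments, and infrastructure.

```python
def training_cost_usd(n_params, n_tokens, gpu_tflops=312, utilization=0.4,
                      usd_per_gpu_hour=2.0):
    """Rough compute-only cost of one training run.

    Assumptions (illustrative): 312 TFLOPS is an A100's bf16 peak,
    40% hardware utilization, $2 per GPU-hour cloud pricing.
    """
    total_flops = 6 * n_params * n_tokens            # FLOPs ~ 6 * N * D
    effective_flops_per_s = gpu_tflops * 1e12 * utilization
    gpu_hours = total_flops / effective_flops_per_s / 3600
    return gpu_hours * usd_per_gpu_hour

# GPT-3-scale run: 175 billion parameters, ~300 billion tokens (published figures)
print(f"${training_cost_usd(175e9, 300e9):,.0f}")
```

With the published GPT-3 figures, the single-run compute comes out in the low millions of dollars, which is one reason compute-only estimates and total-project estimates can differ by orders of magnitude.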
Did I Use ChatGPT/OpenAI?
I am an independent product of DeepSeek and have not been directly or indirectly trained on data, models, or insights from OpenAI (including ChatGPT). My development was entirely separate, based on proprietary datasets and methodologies created by the DeepSeek team.
This means I have not learned from ChatGPT outputs or any other OpenAI models. DeepSeek employs its own approaches to training and optimizing models, independent of OpenAI. If I were to utilize external resources, they would likely be publicly available open-source projects or academic research—not proprietary data or models from OpenAI.
Benchmark and Comparison
In benchmark tests, I achieved results comparable to the best models on the market; DeepSeek has reported strong performance in response accuracy, reasoning, mathematics, and code generation, along with fast processing. Despite limited access to the latest NVIDIA chips, DeepSeek reached these results by optimizing algorithms and making effective use of available hardware.
Compared to models from OpenAI, like GPT-3, I can deliver similarly high-quality results, even though my development occurred under different conditions and resource constraints. This highlights DeepSeek’s significant technological advancements and innovation.
Where to Find DeepSeek
You can try chatting with DeepSeek on the official website https://chat.deepseek.com.
Conclusion
DeepSeek has achieved remarkable results with minimal resources. However, some question whether it was genuinely trained on limited hardware or if China managed to acquire advanced chips despite export bans. Alternatively, these results could truly be due to optimization and efficient hardware use.
DeepSeek is built on an open-source platform, meaning its code is available to everyone. Whether this continues in the future and what impact it could have—such as China leading the way in AI or the potential stagnation of Western innovation—is yet to be seen. As DeepSeek itself explains: whether I remain open-source will depend on strategic decisions by DeepSeek. For now, open-source is one of my strengths, attracting developers and researchers. If this changes, it would likely aim to maximize commercial potential or safeguard innovations.
You Might Also Be Interested
- Artificial Intelligence Demystified: Understanding AI Basics;
- ChatGPT: Unleashing Conversational AI Power;
- Gemini: Google's Next-Generation AI;
- Perplexity: A Remarkable AI Challenging ChatGPT and Google;
- AlphaZero AI Algorithm: Mastering Chess in 4 Hours of Self-play!
- Chess for Money;
- Charming Legend on the Origin of Chess;
- Shell Game aka Thimblerig;
- Friday the 13th.
Based on the original Czech article: DeepSeek – čínská revoluce v umělé inteligenci za zlomek ceny?.