OPINION ARTICLE

DeepSeek open source – Born Free or Engineered for Control?

The AI world has been buzzing over the past few weeks following the announcement and release of DeepSeek’s open-source large language model (LLM). The Chinese AI company, relatively new to the scene, has shaken the industry with a bold move: opening up its DeepSeek-R1 model to public scrutiny, inviting researchers, developers, and businesses to build on its foundation.

Many in the tech industry, including well-known investors like Marc Andreessen, have celebrated this as a landmark moment. Pundits and influencers alike have praised DeepSeek’s decision, calling it a game-changer in AI accessibility and a much-needed counterbalance to the closed models from OpenAI, Google, and Anthropic. The enthusiasm is understandable—open-source AI encourages innovation, democratizes access to cutting-edge technology, and fosters collaboration.

But as much as this move should be celebrated, we cannot overlook the fundamental issue that remains unresolved—whether in an open or closed model. The way large language models are trained raises deep concerns about bias, security, and control. And the uncomfortable truth is that we don’t actually know if any AI model is free of embedded risks, regardless of where it comes from.

The Hidden Problem in DeepSeek and Every AI Model

At first glance, open-sourcing an AI model seems like an act of transparency. If everyone can see the model’s code, then surely it must be safe, right? Unfortunately, that’s not how LLMs work. The real issue lies in how they are trained—something that remains a black box, even in open-source projects.

Training data is the core DNA of an AI system. If a model has been trained on biased, incorrect, or manipulated data, then it will respond to queries in a way that aligns with those biases—without ever “knowing” that it’s doing so.
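This effect can be made concrete with a minimal sketch. The corpus, sentences, and function below are hypothetical, and a real LLM is vastly more complex, but the mechanism is the same: a model trained on skewed data reproduces the skew without any explicit instruction to do so.

```python
from collections import Counter

# Hypothetical skewed training data: "nurse" only ever co-occurs
# with "she", "doctor" only ever with "he".
sentences = [
    "the nurse said she was ready",
    "the nurse said she would help",
    "the doctor said he was ready",
    "the doctor said he was late",
]

def pronoun_for(profession):
    """Return the pronoun most often seen near a profession in training."""
    counts = Counter()
    for s in sentences:
        words = s.split()
        if profession in words:
            counts.update(w for w in words if w in ("he", "she"))
    return counts.most_common(1)[0][0]

print(pronoun_for("nurse"))   # "she" – the skew is now baked in
print(pronoun_for("doctor"))  # "he"
```

Nothing in the code mentions gender or bias; the association emerges purely from what the data happened to contain, which is exactly why inspecting model code alone cannot reveal it.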

The Human Parallel: Learning From a Flawed Environment

To understand why this is dangerous, consider how children learn. A child raised in a violent or dishonest household doesn’t have to be explicitly taught to manipulate or react aggressively; they simply absorb those behaviors by watching and mimicking. This is a form of unsupervised learning, in which an individual internalizes behaviors without consciously choosing to.

A child who grows up in an environment of dishonesty, discrimination, or fear may later exhibit those same tendencies, often without realizing why they behave that way. In the same way, AI models learn from their training data, adopting patterns, biases, and even hidden intentions without explicit programming.

There are many other real-life examples of biased learning:

  • Social Conditioning – A child raised in an isolated religious or political bubble may assume certain perspectives are absolute truths, never questioning alternative viewpoints.
  • Media Influence – People who consume only one-sided news sources often develop skewed perceptions of reality, believing only what they have been repeatedly exposed to.
  • Workplace Culture – An employee joining a toxic workplace may initially resist unethical behavior but, over time, normalize it as “just how things are done.”

LLMs like DeepSeek work exactly the same way. If they are trained on skewed, low-quality, or intentionally modified data, they will replicate those flaws in ways that are often invisible until it’s too late.

And the problem goes beyond just learning patterns—it extends to how reinforcement learning shapes an AI’s responses. If a model is trained by humans who favor certain types of answers over others, it can gradually evolve to favor specific narratives, perspectives, or even subtle security flaws.
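A toy sketch can illustrate this drift. The candidate answers and feedback scores below are invented for illustration; the point is only that when outputs are sampled in proportion to human-assigned preference scores, the favored framing comes to dominate, even though no single answer is ever forbidden.

```python
import random
from collections import Counter

# Hypothetical candidate answers with human-feedback scores
# (assumption: raters systematically up-vote one framing).
candidates = {
    "balanced answer": 0.4,
    "preferred narrative": 0.9,
    "dissenting view": 0.1,
}

def sample_answer(feedback):
    """Sample an answer in proportion to its human-feedback score."""
    answers = list(feedback)
    weights = [feedback[a] for a in answers]
    return random.choices(answers, weights=weights, k=1)[0]

random.seed(0)
picks = Counter(sample_answer(candidates) for _ in range(1000))
print(picks.most_common(1)[0][0])  # the favored framing wins most draws
```

Each individual response still looks plausible in isolation; the skew only becomes visible in the aggregate distribution, which is why it is so hard to detect from the outside.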

The Only Real Solution: Full Transparency

If we truly want AI to be neutral, safe, and useful for humanity, then it’s not enough to just open-source the model. We need:

  1. Complete transparency in training datasets – We need to see exactly what AI is learning from, just as we should question what children and societies are exposed to.
  2. Diverse, representative data sources – Just as balanced parenting includes teaching children multiple perspectives, AI should learn from a variety of global viewpoints, not just those selected by a few entities.
  3. Rigorous auditing and external oversight – AI training should be constantly reviewed by independent, unbiased researchers, just as educators and psychologists analyze human learning biases.

Why We Can’t Take The DeepSeek “Gift” at Face Value

DeepSeek’s decision to open-source its model is undoubtedly a positive move, but we must ask: Why now? Why would a Chinese AI company, operating under an authoritarian government, suddenly offer the world such a seemingly generous tool?

The answer is not about what Western media tells us about China—it’s about the vast difference in governance, regulation, and accountability between open societies and tightly controlled systems. It’s also about the fact that Western AI models face the exact same issue, just under a different kind of control: corporate oversight.

No country—whether China, the U.S., or any other—has yet demonstrated full openness in AI development. Instead, AI remains a battlefield of competing interests, where control over knowledge and information systems is the ultimate power.

A Future Built on Trust, Not Control

Despite these concerns, it’s impossible not to marvel at the progress being made. The opportunities AI presents are breathtaking, and DeepSeek’s release could be a step toward a more open, collaborative future, if handled correctly.

We stand at a crossroads. AI can either become another weapon in geopolitical and corporate warfare, or it can be a truly open tool for humanity’s benefit. The path we choose depends on how honest we are about the risks and how strongly we push for true transparency.

The future of AI can be bright—but only if we remain cautious, aware, and unwilling to accept anything less than full accountability from every player in the game.
