The IT world is currently in the middle of an ‘AI rush,’ and we have just been hit by a new wave with the launch of DeepSeek, an open source AI-powered chatbot that rivals OpenAI’s models. With any new artificial intelligence innovation, we must also discuss its potential data privacy impact. In the wake of Data Privacy Day, now is a good time to take a closer look at the potential of this new AI tool and its related data protection considerations.
AI is fundamentally about handling and enriching data; without data (for now) there is no artificial intelligence. The more data and compute it is fed, the more powerful the artificial intelligence becomes. The contextual engines of tools like ChatGPT and now DeepSeek rely on data as context for their modeling and outcomes. This raises the question of who controls that data and who has access to it. What data goes into these AI tools, and what biases may already exist inside the box?
The democratization of AI
DeepSeek not only claims to process massive amounts of data efficiently; it has also thrown stock markets into turmoil thanks to a substantially lower cost than its rivals. For many years, companies from the United States have dominated the digital innovation space, and in the first two years of the AI rush, many of the leading companies in the field, such as OpenAI, have also been American.
No wonder these digital natives see this AI newcomer from China as a massive threat to their land grab for artificial intelligence, much as in the cloud race and other IT land grabs before it.
DeepSeek’s entrance is expected to have a democratizing effect on AI and shows that the insular group of Silicon Valley companies is no longer the only one capable of shaping the future of this technology. The fact that DeepSeek is an open source AI platform, however, has to be evaluated carefully. While the tool’s code is open, its training data sources remain largely opaque, making it difficult to assess potential biases or security risks.
What nevertheless makes DeepSeek so powerful is its unusual level of efficiency. The biggest problems Silicon Valley has faced in the wake of the AI rush over the last two years are the enormous processing power required and the resulting energy consumption of all the chatbots and applications that are suddenly in vogue.
With the development of DeepSeek there is potential for AI to run more efficiently, and therefore consume less energy, because it needs less computing power. The compute curve had been approaching an asymptote governed by supply, and rising costs were driving up market caps for companies across the ecosystem. That supply, of GPUs for example, now faces a shift in the balance between supply and demand.
But this potential disruption is only one side of the coin. The new tool will also act as a catalyst, speeding up demand for new applications; in the short to medium term, organizations will likely accelerate AI innovation until energy and compute capacity approach the same asymptote once again. Barring breakthroughs in energy production or computing, such as quantum computing, the ecosystem will stabilize in due course.
Mature AI foundations
In the rush to roll out new AI-driven applications as fast as they can, organizations should not forget solid data protection foundations. Alongside an AI tool’s improved efficiency and performance, there are governance, privacy, security, legal, social, and ethical considerations to take into account.
Organizations have to make sure all of these components are in alignment before pushing forward. Those that have done so are ready to leap ahead flexibly and quickly, while those that haven’t will find themselves at greater risk than their peers. Each of these dimensions requires not only a framework and deliberation, but articulation and clarity as well.
When organizations accelerate the rate at which information is fed into their AI tools to supercharge adoption, they have to review the data sets for bias and be transparent about what data they are using and collecting in their models. The final step is to evaluate not only the outputs of their AI tools but also the supply chain that has access to them. From the moment data is introduced into the AI world, organizations should ensure they have the appropriate security controls in place.
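As a minimal illustration of what such a data set review might look like in practice (the records, the "region" attribute, and the `representation_share` helper here are hypothetical examples, not part of DeepSeek or any specific AI pipeline), a first-pass bias audit can be as simple as measuring how each group is represented in the training data:

```python
from collections import Counter

# Hypothetical training records; "region" stands in for whatever
# demographic or source attribute an organization wants to audit.
records = [
    {"text": "sample a", "region": "EU"},
    {"text": "sample b", "region": "EU"},
    {"text": "sample c", "region": "US"},
    {"text": "sample d", "region": "EU"},
]

def representation_share(rows, attribute):
    """Return each attribute value's share of the data set."""
    counts = Counter(row[attribute] for row in rows)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

shares = representation_share(records, "region")
print(shares)  # prints {'EU': 0.75, 'US': 0.25} - EU is over-represented
```

A real audit would go much further (label balance, outcome disparities across groups, provenance of each source), but even a simple share report like this gives organizations something concrete to publish when being transparent about the data behind their models.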
Amid all this gold-rush mindset around AI, organizations must not forget data protection. Companies that have invested time and effort in AI governance and data protection mechanisms over the last two years will be the ones to reach the AI gold first. They will have mature AI policies about who they work with and how they treat their data, plus ethical guidelines and oversight of AI projects, enabling the departments that are eagerly evaluating new AI tools and functionality.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro