Despite the market disruption caused by the technical performance of R1, the new large language model from China, privacy experts warn that companies should not be too quick to dive in head first.
Opinion in the market is already divided. Some privacy experts, marketers and tech executives advocate further testing and better guardrails before companies adopt DeepSeek’s latest AI. Meanwhile, DeepSeek’s progress has shaken the psyche of Silicon Valley, and of its investors.
After last week’s release of its open-weight LLM, the young China-based AI startup quickly attracted attention for its low cost, fast speed and high performance. DeepSeek’s own chatbot, a rival to ChatGPT, has also surged to become the top free app in the Apple App Store. (DeepSeek also released a new AI image model called Janus-Pro.)
Founded in 2023, DeepSeek was launched by Liang Wenfeng, who also founded the Chinese quantitative hedge fund High-Flyer, reportedly one of DeepSeek’s investors.
The rise of R1 comes as Chinese tech companies face increased U.S. scrutiny over privacy and national security. While TikTok and CapCut face regulatory purgatory, others, including gaming and social media giant Tencent, have recently been added to a list of companies with alleged links to the Chinese military.
Technology and marketing experts are enthusiastic about a cheaper alternative to LLMs from OpenAI, Anthropic, Google and Meta. However, privacy experts warn of potential risks around user privacy, content censorship and corporate IP theft. Will marketers rally around China’s AI, or will it wither under privacy scrutiny and uncertainty?
Key privacy considerations
According to DeepSeek’s own privacy policy, there are a number of conditions that experts say could endanger users’ privacy. Some examples:
DeepSeek user data is stored in China
DeepSeek can share information collected through use of its services with its advertising or analytics partners
DeepSeek collects personal data via cookies, web beacons, pixels and payment tags
Data collected also includes chat history, device model, IP address, keystroke patterns, operating system, payment details and system language
DeepSeek’s privacy policy allows it to share information with its corporate group, noted Carey Lening, a privacy expert at Irish consultancy Castlebridge. She also noted DeepSeek’s policy allows it to share data with third parties as part of “business transactions,” though the policy offers no detail on the subject. DeepSeek also says its partners may share data with the startup “to help match you and your actions outside of the service.” This includes:
Activities on other websites and applications or in stores
Products or services purchased online or in person
Mobile identifiers for advertising, email addresses, phone numbers, and cookie identifiers
DeepSeek collects and shares data much like its rivals, but marketing data policies vary. For example, Google uses plenty of data for ad targeting, but its policy says it doesn’t use Gemini conversations. Perplexity’s policy states it can disclose user data to third parties, including business partners and companies that run advertising on its platform or “otherwise assist with advertising.” OpenAI’s policies, however, claim it avoids using user content for marketing purposes and doesn’t build user profiles for ad targeting.
DeepSeek did not immediately respond to Digiday’s request for comment.
Divided opinions
“We think TikTok is just the thin end of a large wedge,” said Joe Jones, director of research and insights at the International Association of Privacy Professionals. “We’re seeing much greater hawkishness in terms of data in countries where there are lower standards or even where countries are perhaps an adversary.”
Despite the concerns, some AI experts think R1 can be a safe and viable enterprise-grade LLM if it’s deployed through a controlled client, such as a local laptop installation, or routed through servers hosted in the U.S. and Europe. Others suggest avoiding the API, DeepSeek’s chatbot app and the web version.
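To illustrate the “controlled client” route experts describe, the sketch below builds a chat request against a self-hosted R1 model exposing an OpenAI-style endpoint; the localhost URL, port and model name are assumptions (matching a typical local Ollama setup), not anything DeepSeek or the article specifies. The point is that the request targets only a machine the company controls, so prompt data never reaches DeepSeek’s servers.

```python
import json
import urllib.request

# Assumed local endpoint for a self-hosted R1 model (e.g. a local Ollama
# install exposing an OpenAI-compatible API); URL and model name are
# illustrative and should be adjusted to your own deployment.
LOCAL_URL = "http://localhost:11434/v1/chat/completions"
MODEL = "deepseek-r1:7b"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completion request aimed only at localhost,
    so the prompt never leaves the machine."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Summarize our Q3 campaign brief.")
print(req.full_url)  # confirms the call stays on localhost
# To actually send it (requires a running local server):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

The same pattern applies to models proxied through U.S.- or Europe-hosted servers: swap the URL for the vetted endpoint, and nothing in the client ever touches DeepSeek’s own infrastructure.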
“We may share personal information, collected through use of its services, with advertising and analytics partners.”
DeepSeek’s privacy policy
Concerns haven’t stopped some companies, such as Perplexity, from pressing ahead with adoption. On Monday, the AI search platform made R1 available to premium users to power its deep web research feature. Aravind Srinivas, Perplexity’s CEO and co-founder, wrote on X that all of Perplexity’s use of DeepSeek is “through models hosted in American and European data centers.”
Some think data protection and security concerns have been largely overlooked amid all the hype. Philipp Hacker, a professor of law and ethics at Germany’s European University Viadrina, noted that American rivals also collect plenty of data, but they also have stronger privacy policies. On LinkedIn, Hacker asked why DeepSeek feels “especially scary.”
“From the U.S. TikTok case, we know that every Chinese company must hand over its data to the Chinese government if it demands it,” Hacker wrote. “Integrating DeepSeek into your products could enable an entirely new level of industrial espionage, far beyond what TikTok is already facilitating.”
Guardrails and guidelines
Before adopting AI models, experts suggest companies carry out testing to make sure they don’t accidentally use data in ways that violate privacy laws, such as European regulations and various U.S. state laws.
Companies can improve privacy, and business value, by proactively building systems with privacy in mind, said Ron De Jesus, field chief privacy officer at Transcend, which helps companies audit how data is used across various AI models and other technologies. President Donald Trump’s recent decision to rescind then-President Joe Biden’s executive order on responsible AI has created more regulatory uncertainty, reduced guidance for responsible AI development and adoption, and left chief privacy officers worried about compliance.
“We can’t keep banning companies just because they’re based in China,” De Jesus said. “We have to have a better way to vet [companies] and look at their compliance programs.”
Privacy experts fear R1’s impact on European AI and data rules, saying it could weaken IP protections, increase content distortion and enable Chinese content censorship. The new AI’s efficiency also stokes worries about AI-generated fraud, deepfakes, disinformation and national security risks.
Marketing executives have also expressed concern. One marketer testing it in a personal capacity is Tim Hussain, global SVP of product design and solutions at Oliver. He observed that DeepSeek’s app returned “Let’s talk about something else” when he asked about Chinese state actions, such as events in the South China Sea or the Tiananmen Square massacre.
“How can we trust an AI that is so obviously censored?” Hussain wrote on LinkedIn. “While the LLM space continues to excite us with innovation and potential, the DeepSeek example raises serious concerns, especially for businesses considering embedding such models. How do you ensure reliability and integrity when the results are clearly manipulated?”