The widespread growth of Generative AI (GenAI) is changing the world of many different industries such as healthcare, finance, government, and entertainment. In addition to the incredible advantages that this technological victory provides, it also has some negative aspects, hence it contributes to privacy and data protection issues. A recent bipartisan effort embarked on by the U.S. Congress to prevent the DeepSeek AI platform from being used for federal purposes bows out the aforementioned concerns, the point being that the situation is urgently in need of stronger security measures.
According to Kurt Rohloff, CTO and co-founder of Duality Technologies, a leading company in privacy-enhancing data collaboration, this congressional move reveals a deeper systemic issue in how GenAI models are designed.
“DeepSeek’s vulnerabilities are just the tip of the iceberg,” Rohloff explains. “Most GenAI systems are built without robust privacy architectures, falling short of what regulatory frameworks demand — especially in highly regulated sectors.”
This warning comes at a time when AI adoption is accelerating rapidly, but governance and privacy controls lag behind.
Why DeepSeek AI Was Targeted: A Wake-Up Call for AI Privacy
Bill Cassidy together with Jacky Rosen were the ones that did it! They introduced a bill in May 2025 that would outlaw the awarding of federal contracts to DeepSeek. The platform openly admitted to routing sensitive user data to China, which ignited alarm bells regarding national security and user privacy.
This incident is more than a one-off controversy — it signals widespread risks associated with how GenAI platforms handle data. Many operate by collecting, storing, and analyzing massive datasets with limited transparency or accountability.
Experts argue that this regulatory action is necessary but insufficient. Rohloff stresses, “The real challenge lies in the very architecture of GenAI, which inherently stores and uses data in ways that create structural privacy flaws.”
Consumer Trust Is Cracking — And for Good Reason
Public anxiety around AI privacy is rising sharply. According to a Prosper Insights & Analytics poll conducted in 2025, 58.6% of consumers are either very or extremely concerned about their privacy using AI-powered technology
This skepticism is well-founded, particularly in sensitive industries such as:
- Healthcare: Potential exposure of patient medical records can lead to identity theft and discrimination
- Finance: Leakage of trading algorithms or client data risks financial fraud and competitive harm
- Government: Unauthorized access to classified or sensitive information can jeopardize national security
Rohloff warns, “Trust is fragile. Once lost, it can take decades to rebuild — if ever.”
Generative AI: Built to Remember, Risking Privacy
A core characteristic of GenAI models is their ability to learn from vast datasets, including:
- User inputs and prompts
- Uploaded documents and files
- Behavioral and interaction patterns
Unlike traditional software, these models retain information, effectively building an evolving “memory.” While this enhances performance, it also exposes a massive attack surface.
Without strict controls and privacy-by-design principles, sensitive data can inadvertently become part of the model’s training set, creating risks when the AI is queried or shared.
Inside Threats: Model Inversion & Prompt Injection Attacks
Two critical and growing threats have emerged:
Model Inversion Attacks
Cybercriminals use crafted inputs to reverse-engineer and extract sensitive training data, reconstructing private information from AI outputs.
Prompt Injection Attacks
Malicious actors embed harmful instructions in user inputs to bypass safeguards, tricking AI systems into revealing confidential data.
“These attacks are not theoretical — they are happening in the wild,” Rohloff warns. Security teams must prepare accordingly.
Learn more about AI security threats at MIT Technology Review.
What the Government Is Doing — And Why It’s Not Enough
The U.S. government has taken steps to address AI risks, including:
- Executive Order 14179 of January 2025, Promoting innovation and ethics in responsible AI
- OMB AI Guidelines (April 2025), setting standards for testing, monitoring, and handling Personally Identifiable Information (PII)
While these frameworks represent important milestones, Rohloff emphasizes that regulations alone cannot solve design flaws.
“Privacy by design and strong encryption technologies are the real solutions,” he insists.
For detailed government AI policies, visit the White House AI Initiatives.
The Financial Fallout: What AI Breaches Really Cost
IBM’s 2024 Cost of a Data Breach Report underlines the shocking monetary effect of data leaks:
The average per-occurrence of healthcare violations—the highest among sectors—is $9.8 million.
Costs include not only fines and litigation but also loss of customer trust and long-term brand damage, which can take years to repair.
The report is available in full and can be gone through here: IBM Data Breach Report 2024.
The Solution? Privacy-Enhancing Technologies (PETs)
Rohloff is favoring Privacy Enhancing Technologies (PET) that advance AI but also safeguard data.
A standout solution is:
Fully Homomorphic Encryption (FHE)
By allowing encrypted data to be used without exposing the data, FHE ensures the data’s confidentiality during the entire life cycle.
With FHE, the critical vulnerability found in traditional encryption systems is closed, and thus data will be safe while at rest, in transit, and during computation.
Go through more into the open-source project for FHE. OpenFHE.org.
From Theory to Reality: How Field-Programmable Gate Arrays Are Changing the Game
Duality Technologies is the first to introduce FHE-based solutions in industries such as finance, healthcare, and the government.
OpenFHE, an open-source platform that was established with the use of libraries such as PALISADE, makes it possible the use such software, but at the same time, it also boosts the speed, scalability, and accessibility factors.
“We are bridging the gap between strong encryption and practical AI application,” Rohloff notes.
Beyond Security: Safe AI Collaboration Across Borders
FHE doesn’t just protect data; it also enables secure, multi-party collaboration without sharing raw data.
Examples include:
Hospitals
Joint analysis of patient data without exposing individual records, enhancing research while complying with HIPAA.
Banks
Collaborative fraud detection without revealing proprietary algorithms, maintaining competitive advantage.
This feature assists the enterprise to respect the privacy regulations, e.g.:
- (Health Insurance Portability and Accountability Act) HIPAA
- (General Data Protection Regulation) GDPR
- (California Consumer Privacy Act) CCPA
Leadership Over Regulation: A Call to Action
Rohloff stresses that waiting for regulatory frameworks is risky.
“Organizations must lead by building privacy-first AI systems now.”
His recommendations include:
- Embedding privacy by design in AI development
- Using encrypted-by-default architectures
- Vetting AI tools for potential data leaks
- Training teams in secure AI development best practices
“Trust can’t be outsourced; it must be cultivated internally,” he concludes.
Conclusion: Privacy Must Be Built In, Not Bolted On
As GenAI drives digital transformation forward, its future success hinges on how seriously privacy is integrated from the start.
Patchwork fixes and reactive compliance won’t suffice.
Only a strategic approach featuring bold leadership and advanced privacy-first technologies like FHE can ensure these systems remain safe, trustworthy, and future-proof.
This article includes insights inspired by reporting from Forbes
Ahsan Ali is a technology blogger and the founder of Techzivo.com, a platform dedicated to delivering insightful and practical content for tech enthusiasts.He currently focuses on creating in-depth articles around cybersecurity, aiming to help readers stay safe and informed in the digital world. With a passion for emerging technologies, Ahsan plans to expand Techzivo’s coverage into other technology micro-niches such as AI, cloud computing, and digital privacy, offering valuable insights for a broader tech-savvy audience.