
Artificial Intelligence: The Enigmatic Foe of Your Privacy

Friday 07 June 2024 - 13:00

In the realm of technological advancements, the rise of artificial intelligence (AI) unveils a captivating panorama of possibilities. However, this sophisticated technology harbors an unsettling potential to gravely compromise the confidentiality of personal data.

AI and machine learning have transformed a myriad of domains, spanning computing, finance, medical research, automatic translation, and more, a list that grows with each passing month. Yet these strides are accompanied by a recurring question: what is the impact of these technologies on our privacy and the confidentiality of our data? Whatever the AI model in question, its development is fueled by ingesting an astronomical quantity of data, some of which can be highly sensitive.

The Retention of Secrets by AI

One of the principal challenges faced by companies that train artificial intelligence models lies in the inherent capacity of these technologies to learn and memorize intricate patterns in their training data. This characteristic, while advantageous for model accuracy (it helps reduce hallucinations, for instance), poses a significant risk to privacy.

Machine learning models, the algorithms or systems that enable AI to learn from data, can encompass billions of parameters; GPT-3, for example, has a staggering 175 billion. These models adjust that vast set of parameters to minimize prediction errors on their training data. Therein lies the crux of the issue: during that adjustment process, they may inadvertently retain specific pieces of information, including sensitive data.

For example, if models are trained on medical or genomic data, they can memorize private information that can later be extracted through targeted queries, jeopardizing the confidentiality of the individuals concerned. Now imagine a cyberattack or an accidental data breach at the organization holding these models: malicious actors could extract and disclose that sensitive information.
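To make the risk concrete, here is a minimal sketch of a loss-threshold membership-inference test, one common way such targeted queries are carried out. Everything in it is synthetic and illustrative: the "records", the logistic-regression model, and the sample sizes are assumptions for the example, not a description of any real system.

```python
# Minimal sketch of a loss-threshold membership-inference test on a toy model.
# All data below is synthetic; no real patient records are involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "patient records": 400 samples, 20 features, binary label.
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)
X_train, y_train = X[:200], y[:200]   # records the model was trained on
X_out, y_out = X[200:], y[200:]       # records the model never saw

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_example_loss(model, X, y):
    """Cross-entropy loss of the model on each individual record."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, 1.0))

loss_in = per_example_loss(model, X_train, y_train)
loss_out = per_example_loss(model, X_out, y_out)

# Records with unusually low loss are more likely to have been in the training
# set; that gap is exactly what a membership-inference attacker exploits.
print(f"mean loss on training records:   {loss_in.mean():.3f}")
print(f"mean loss on never-seen records: {loss_out.mean():.3f}")
```

The larger the gap between those two numbers, the more the model has memorized about the individuals it was trained on.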

AI and the Prediction of Sensitive Information

AI models can also harness seemingly innocuous data to deduce sensitive information. A striking example is that of the Target retail chain, which successfully predicted pregnancies by analyzing customers' purchasing habits. By cross-referencing data such as purchases of dietary supplements or unscented lotions, the model could identify likely pregnant customers and target them with specific advertisements. This case demonstrates that even mundane data can reveal highly personal aspects of someone's private life.
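The same kind of inference is easy to reproduce on toy data. The sketch below trains a classifier to recover a sensitive attribute from innocuous purchase counts; the feature names, correlations, and numbers are invented for illustration and have nothing to do with Target's actual data or model.

```python
# Illustrative sketch: inferring a sensitive attribute from innocuous purchase
# counts, in the spirit of the Target story. Entirely synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000

# Innocuous-looking shopping features (item counts over the past month).
unscented_lotion = rng.poisson(0.3, n)
supplements = rng.poisson(0.5, n)
cotton_balls = rng.poisson(0.4, n)

# Hidden sensitive attribute, correlated (by construction) with those purchases.
score = 2.0 * unscented_lotion + 1.5 * supplements + 1.0 * cotton_balls
sensitive_label = (score + rng.normal(scale=0.5, size=n) > 2.0).astype(int)

X = np.column_stack([unscented_lotion, supplements, cotton_balls])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:1500], sensitive_label[:1500])

# The model never needs the sensitive label at prediction time, yet it learns
# to predict it from ordinary purchase patterns; compare against the baseline.
baseline = max(sensitive_label[1500:].mean(), 1 - sensitive_label[1500:].mean())
print("majority-class baseline:", round(baseline, 3))
print("held-out accuracy:      ", round(clf.score(X[1500:], sensitive_label[1500:]), 3))
```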

Despite efforts to limit data memorization, most current methods have proven ineffective. However, there is one technique presently considered the most promising for ensuring a degree of confidentiality during model training: differential privacy. But as you will see, it is far from miraculous.

Differential Privacy: An Imperfect Solution?

To explain differential privacy in simple terms, consider this example: imagine taking part in a survey, but you do not want anyone to be able to tell whether you participated or how you answered. Differential privacy introduces a small amount of "noise", or randomness, into the survey data, so that even someone with access to the results cannot be certain of your specific responses. It blurs each individual's contribution while still allowing the data to be analyzed, without compromising your privacy.
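For readers who want to see the mechanics, here is a minimal sketch of the Laplace mechanism, the textbook way of adding that calibrated noise to a counting query. The survey data and the epsilon value are illustrative assumptions, not any vendor's settings.

```python
# Minimal sketch of the Laplace mechanism for an epsilon-differentially
# private count. Synthetic survey data; epsilon chosen only for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Toy survey: 1 means the respondent answered "yes" to a sensitive question.
responses = rng.integers(0, 2, size=1000)

def dp_count(data, epsilon):
    """Noisy count of "yes" answers.

    A counting query has sensitivity 1 (adding or removing one respondent
    changes the count by at most 1), so Laplace noise with scale 1/epsilon
    gives epsilon-differential privacy.
    """
    return int(data.sum()) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

epsilon = 1.0
print("true count :", int(responses.sum()))
print("noisy count:", round(dp_count(responses, epsilon), 1))
```

The released count is close enough to the truth to be useful in aggregate, yet no single respondent's answer can be pinned down from it.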

This method has been adopted by industry titans like Apple and Google. However, even with this protection, AI models can still draw conclusions or make predictions about personal or private information. To prevent such violations, the data must be protected before it is ever transmitted to the organization, with the noise added directly on each user's device: an approach known as local differential privacy.
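In the local variant, each person randomizes their own answer before it leaves their device. The classic randomized-response scheme below is one simple way to do this; the truth probability and population rate are invented for the example.

```python
# Sketch of local differential privacy via classic randomized response: each
# respondent flips coins locally, so the organization only ever receives
# already-noised answers. Parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

def randomize_locally(true_answer: int, p_truth: float = 0.75) -> int:
    """Report the true yes/no answer with probability p_truth, otherwise a fair coin flip."""
    if rng.random() < p_truth:
        return true_answer
    return int(rng.random() < 0.5)

# 10,000 people, 30% of whom would truthfully answer "yes".
true_answers = (rng.random(10_000) < 0.30).astype(int)
reports = np.array([randomize_locally(a) for a in true_answers])

# The organization can still estimate the population rate by inverting the
# noise, without ever seeing any individual's true answer.
p_truth = 0.75
estimated_rate = (reports.mean() - (1 - p_truth) * 0.5) / p_truth
print(f"true rate: {true_answers.mean():.3f}, estimate from noisy reports: {estimated_rate:.3f}")
```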

Despite its advantages, differential privacy is not without limitations. Its main drawback is that it can cause a significant drop in the performance of machine learning methods. Models may therefore be less accurate, occasionally returning erroneous results, and they become slower and costlier to train.
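A rough illustration of that trade-off: the smaller the privacy budget (the parameter usually called epsilon), the more noise must be added and the further the released result drifts from the truth. The numbers below are a deliberately simplified stand-in for the cost, not a benchmark of any real training pipeline.

```python
# How the error of a differentially private statistic grows as the privacy
# budget epsilon shrinks. Synthetic, simplified example.
import numpy as np

rng = np.random.default_rng(3)
sensitivity = 1.0 / 1000  # one person can shift a bounded [0, 1] mean of 1,000 records by at most 1/1000

for epsilon in (0.01, 0.1, 1.0, 10.0):
    # Typical absolute error of the Laplace-noised mean, averaged over many releases.
    noise = rng.laplace(scale=sensitivity / epsilon, size=10_000)
    print(f"epsilon={epsilon:>5}: typical error = {np.abs(noise).mean():.5f}")
```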

Therefore, a compromise must be struck between achieving satisfactory results and providing sufficient protection for individuals' privacy. A delicate balance must be found and, more importantly, maintained as the AI sector continues to expand. While AI can assist you in your daily life, whether for professional, personal, or academic purposes, do not consider it an ally of your confidentiality, far from it.

In summary, AI models can retain sensitive information during training, and even innocuous data can lead them to draw conclusions that compromise privacy. The differential privacy method is employed to limit this phenomenon, but it is far from perfect.

