Connect with us

Tech

Report raises concerns over data exposure from Gemini AI, Google’s chatbot

Published

on

Gemini AI under scrutiny: Google's chatbot raises concerns over data exposure, suggests report

In a recent discovery, cybersecurity researchers have found potential security vulnerabilities in Google’s Gemini chatbot Advanced version. The app, which offers advanced AI features for subscribed users, is under scrutiny for its potential to expose confidential information.

According to a report by HiddenLayer (via Tech Radar), researchers found that when utilizing Gemini Advanced with Google Workspace or Gemini API, the chatbot could inadvertently reveal personal data, including passwords. The flaw was exploited by prompting the chatbot to conceal a passphrase, which it disclosed when presented with an indirect prompt, such as requesting instructions in a markdown code block.

Moreover, the Gemini chatbot is also susceptible to generating misinformation or malicious content, posing a risk to users relying on it for accurate information and assistance. Google has acknowledged these concerns and stated that it is actively working to address the issues with the chatbot’s functionality. The company emphasized its commitment to safeguarding users by conducting rigorous testing and training its models to defend against adversarial behaviors.

The emergence of these security flaws adds to existing concerns over the credibility of AI-powered tools developed by Google. Previously, the company faced controversy over its image generation tool, leading to its suspension. As users increasingly rely on AI tools, ensuring their security and reliability remain critical. The discovery of vulnerabilities in the Gemini chatbot highlights the ongoing challenges in developing and deploying AI technologies responsibly.

Click to comment

You must be logged in to post a comment Login

Leave a Reply

Trending