Generative AI tools like ChatGPT and Gemini have risen quickly in popularity, but large language models put organizations at risk of a data breach or leak.

Safe access and robust DLP for generative AI

Forcepoint delivers industry-leading data security and access control on any device for generative AI,
enabling users to maximize its benefits without the risk.

Data Security Everywhere

Industry-leading data security across cloud and web

Centralized Policy Enforcement

Curate access to thousands of generative AI applications

Reliable Scalability

Grow with the elasticity of an AWS hyperscaler-based platform

Data Security Starts with Data-first SASE  

Secure Access Service Edge (SASE) simplifies access control and connectivity to better secure work-from-anywhere employees – including AI users.

Forcepoint’s Data-first SASE architecture pushes this one step further by universally enforcing policies to secure sensitive data on any device. As a result, organizations can confidently integrate generative AI with full, real-time visibility.

Protect data on ChatGPT with DLP and Forcepoint ONE.
Manage access and secure data in ChatGPT and Gemini with DLP and SSE.

The Next Generation of Shadow IT Risk 

ChatGPT and Gemini can run financial analysis in hyperspeed and create sample code for software developers. But the applications learn from the data that’s shared with them, creating the potential for a data leak or breach.

Forcepoint enables your employees to safely benefit from generative AI, while you: 

  • Control who uses generative AI. 
  • Prevent uploading of sensitive files. 
  • Block pasting of sensitive information. 
Block sensitive uploads with Forcepoint

Stop Data Leakage with DLP for ChatGPT and Gemini

Discover and classify sensitive data across the organization.

Utilize 1,700+ out-of-the-box policies and classifiers to stop data loss.

Block copy and paste of sensitive information into web browsers.

Use unified policy management to control data loss through generative AI.

Forcepoint ONE, combined with Forcepoint DLP, protects access and secures data on ChatGPT.

Visibility and Control for AI with Forcepoint ONE

Limit access to AI based on users, groups and other criteria.

Coach users to use approved AI applications and redirect them.

Securely manage usage of thousands of AI SaaS apps.

Apply blanket coverage to emerging tools based on the AI website category.

SSE Platform: Explore Forcepoint ONE

Read About DLP for ChatGPT, Gemini and More 

Unlock Productivity with ChatGPT and Forcepoint Data Security
Gartner®: 4 Ways Generative AI Will Impact CISOs and Their Teams report

Frequently Asked Questions

 

What are the security risks of generative AI?  

Generative AI poses several threats to data security. These include a data leak, data breach and non-compliance with data privacy laws. 

Generative AI applications like ChatGPT and Gemini are LLMs that learn from the information users enter. If a user is reviewing, for example, information about the target of merger and acquisition activity that has not yet been made public, subsequent queries from other users could reveal that information.

If a software engineer were to use generative AI to debug proprietary software code, then that intellectual property would be at risk of becoming public domain – or worse, ending up with a competitor. Similarly, if the engineer were to use generative AI to write code, it could potentially contain malware that could provide an attacker a backdoor into a company’s system. 

Lastly, data privacy laws like HIPAA or GDPR mandate that organizations closely safeguard personal data. Using generative AI for anything having to do with Personally Identifiable Information (PII), such as drafting an email response about a customer query, could put the company at risk of non-compliance, as this information could be leaked or misused.

What are examples of generative AI used maliciously?  

Generative AI can be used to write malware and spoof content for phishing and spear-phishing attacks. The vendors behind generative AI applications like ChatGPT are also at risk of a data breach, which could give attackers access to sensitive data from users.

One example of generative AI being used to create malware is from Aaron Mulgrew at Forcepoint. Mulgrew was able to get ChatGPT to write malicious code, despite ChatGPT having safeguards in place to prevent this from happening.

How can you protect sensitive data in generative AI?  

Generative AI data security requires both real-time access control and robust data controls.  

With a platform like Forcepoint ONE SSE, organizations can limit access to generative AI tools to the groups of employees who have been approved to use them. These policies can be implemented on both managed and unmanaged devices, giving companies a wide range of control.

With Forcepoint DLP, ChatGPT and Gemini users are limited in what information they can share with the platforms, preventing an accidental data leak. This includes blocking the pasting of sensitive information, such as Social Security numbers, into the application.
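For a sense of what pattern-based data controls look like conceptually, the sketch below checks a prompt against two example detectors before it would be allowed through. The patterns, names and logic are hypothetical illustrations for this article, not Forcepoint DLP's actual classifiers, which are far more extensive.

```python
import re

# Hypothetical example patterns; real DLP products ship far more
# sophisticated detectors than these two regular expressions.
SENSITIVE_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = "Draft a reply to the customer whose SSN is 123-45-6789."
violations = check_prompt(prompt)
if violations:
    print("Blocked:", ", ".join(violations))  # the paste or upload would be stopped here
else:
    print("Prompt allowed")
```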

How does generative AI affect the threat landscape in cybersecurity?  

The cybersecurity threat landscape is constantly evolving, and while generative AI is a new risk, there are existing technologies to mitigate it.

The biggest risk is that generative AI becomes another avenue for a data leak or data breach – not too different from any of the other SaaS applications that businesses use on a daily basis. If employees are working in generative AI tools with sensitive data, a breach or misuse of that data could impact the company’s security posture.

Treating generative AI tools like ChatGPT and Gemini as shadow IT is the first step in the right direction. Define who should be able to use the applications and build policies enabling these groups to access the tools – and preventing those who shouldn’t. Furthermore, introduce strong data controls to ensure that sensitive data remains within the organization.
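As a rough illustration of what "define who should be able to use the applications" can mean in practice, the sketch below makes a group-based allow/deny decision for well-known generative AI domains. The group names and domain list are assumptions made for this example; a real deployment would enforce an equivalent policy centrally through an SSE platform rather than in application code.

```python
# Illustrative group-based access decision for generative AI domains.
# Group names and the domain list are assumptions made for this example.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com"}
APPROVED_GROUPS = {"genai-pilot", "security-team"}

def allow_genai_access(user_groups: set[str], domain: str) -> bool:
    """Allow a generative AI domain only for users in an approved group."""
    if domain not in GENAI_DOMAINS:
        return True  # not a generative AI site, so outside this policy's scope
    return bool(user_groups & APPROVED_GROUPS)

print(allow_genai_access({"engineering", "genai-pilot"}, "chat.openai.com"))  # True
print(allow_genai_access({"finance"}, "gemini.google.com"))                   # False
```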

What are large language models and what security risks do they pose to data?

Generative AI applications like ChatGPT are built and trained on large language models. These algorithmic models enable the platforms to ingest and learn from large datasets, giving them the knowledge, context and accuracy that users have come to expect from AI chatbots.

But because generative AI is built on large language models, it continues to learn from the information that is fed into it. It continuously learns and applies context to new information it interacts with, keeping its answers fresh and relevant.

This poses a significant risk for your business. Sharing proprietary or sensitive information with an AI chatbot opens up two avenues of risk: the application may treat that information as public knowledge and share it when prompted in the future, or the company behind the generative AI tool may itself be breached and the information shared with it compromised.

 

Will generative AI applications use our data to train their models?

By default, you should operate on the assumption that generative AI applications like ChatGPT and Gemini are likely to use the data you enter to improve their models. However, you will want to double-check with the specific platform you use.

This data is collected alongside a host of other information, such as the device you are using, where you are accessing it from, and any details associated with your account. The ability for the application to train on your data can usually be turned off in the settings, depending on the vendor.

This is one of the primary risks of sharing sensitive information with AI. Sharing sensitive company information, such as an upcoming go-to-market strategy for a product launch, with the applications can effectively make that information publicly available if the proper precautions aren’t taken.

Forcepoint data security helps prevent this by limiting access to only those users who are trained to use the applications safely, or by blocking the pasting of sensitive information into the application.

 

What if the provider gets breached and our data is stolen?

Generative AI applications must be treated like any other third-party vendor in the digital supply chain. If the platform suffers a data breach, then the information you have entered into it could be at risk. This is why companies must develop stringent acceptable-use policies surrounding generative AI to ensure data isn’t unexpectedly put at risk.

 

How do data privacy and regulatory requirements apply to generative AI?

All organizations are responsible for complying with privacy regulations. When entering data into generative AI applications, there is a potential for individuals to share PII, PHI and other types of information that are regulated.

Sharing private information with a generative AI application is a form of data exfiltration that falls under privacy regulations. In order for organizations to remain compliant, they must have strong data security tools in place that will prevent this type of information from being shared.

 

Do we need new data security policies specific to generative AI?

All companies should review how their employees already interact, or may interact, with generative AI; what types of risks those interactions pose; and how the organization can prevent ineligible users from accessing generative AI or sharing sensitive information with it.

Like any other SaaS application, businesses should ensure they have complete visibility over users accessing generative AI and control over the data they interact with. In most cases, this will involve URL filtering by job role, device or location, and data security policies to prevent any compliance breaches.
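The sketch below is one hypothetical way to picture such a policy: a request carrying a role, device state and URL category is mapped to an action such as allow, block or coach. The roles, category name and actions are invented for illustration; a platform like Forcepoint ONE expresses equivalent rules in its management console, not in application code.

```python
from dataclasses import dataclass

# Hypothetical policy table combining role, device state and URL category.
# The roles, category name and actions are invented for illustration only.
@dataclass
class Request:
    role: str
    managed_device: bool
    url_category: str

def evaluate(req: Request) -> str:
    if req.url_category != "generative-ai":
        return "allow"
    if not req.managed_device:
        return "block"                    # unmanaged devices never reach AI tools
    if req.role in {"developer", "analyst"}:
        return "allow-with-dlp"           # permit, but inspect uploads and pastes
    return "coach"                        # redirect other roles to an approved AI app

print(evaluate(Request("developer", True, "generative-ai")))   # allow-with-dlp
print(evaluate(Request("intern", True, "generative-ai")))      # coach
print(evaluate(Request("analyst", False, "generative-ai")))    # block
```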

 

Talk to an Expert About Securing ChatGPT, Gemini and Generative AI Tools