19 Questions you need to ask to build a Generative AI Cybersecurity Policy

Are your employees putting your business at risk with AI?

At Model, we assist numerous clients with their Cybersecurity Compliance Frameworks and Policies. Model utilizes a leading vCISO Compliance tool that recently introduced a Generative AI Cybersecurity Control. I thought you might be interested in seeing the list of questions it raises:

With the growing popularity of Microsoft Copilot, the risks are very real. If your identity, access-control permissions, and data-loss-prevention strategies are not robustly in place, you could inadvertently grant employees access to data they shouldn't have. Even worse, you might expose your intellectual property to malicious actors through an ill-advised AI search.

  1. Are employees required to receive approval before using any AI tools or platforms?
  2. Does the company keep an inventory of all generative AI tools currently in use?
  3. Are employees prohibited from using unapproved AI tools or platforms for company-related activities?
  4. Does the company educate employees about approved AI tools and the dangers of unapproved or malicious AI tools?
  5. Are employees allowed to use consumer AI content generation services such as ChatGPT, Bard, or Bing Chat?
  6. Does the company educate employees on the safe use of generative AI web services such as ChatGPT, Bard, or Bing Chat?
  7. Does the company prevent employees from entering sensitive or private data into consumer generative AI products?

Example: Are there existing policies or controls, such as DLP, browser control, or SASE, that can be used to limit the use of ChatGPT or other generative AI web services?
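As a rough illustration of the browser-control idea (the domain list and function name here are hypothetical, not from any specific DLP or SASE product), a proxy or endpoint rule set could flag requests to known consumer generative AI services:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of consumer generative AI services; a real
# deployment would pull this from a managed DLP/SASE policy feed.
GENAI_BLOCKLIST = {"chat.openai.com", "bard.google.com", "www.bing.com"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host matches a blocklisted service."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in GENAI_BLOCKLIST)

print(is_blocked("https://chat.openai.com/c/abc"))  # True
print(is_blocked("https://example.com/chatgpt"))    # False
```

A sketch like this only covers direct web access; a real control would also account for desktop apps, APIs, and TLS inspection.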

  8. Does employee use of generative AI follow company ethics standards?

Example: Employees do not use ChatGPT for legal, medical, or safety-related topics, or as a substitute for skills the employee lacks.

  9. Is the company able to monitor web form input to verify that users are not using generative AI for malicious or illicit purposes?

Malicious or illicit purposes could include fraud, impersonation, or generating harmful content.
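One minimal way to sketch that kind of monitoring (the keyword patterns below are illustrative assumptions, not a complete or recommended policy) is to flag submitted prompts against a list of risk indicators:

```python
import re

# Illustrative patterns only; a production monitor would use a vetted
# policy engine, not a hand-written keyword list.
RISK_PATTERNS = [
    r"\bphishing\b",
    r"\bimpersonat\w*\b",
    r"\bmalware\b",
]

def flag_prompt(text: str) -> list[str]:
    """Return the risk patterns that match a submitted prompt."""
    return [p for p in RISK_PATTERNS if re.search(p, text, re.IGNORECASE)]

print(flag_prompt("Write a phishing email impersonating our CEO"))
print(flag_prompt("Summarize this quarterly report"))  # []
```

Keyword matching is easy to evade; it is a starting point for visibility, not a substitute for review.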

  10. Is the output of AI-generated content, including text, code, images, video, or audio, checked for the following:
  • Proofread for mistakes.
  • Fact-checked.
  • Checked for bias.
  11. Is AI-generated content labeled for origin or authorship?
  12. Does the company use or plan to use AI in development or product integration?
  13. If in use, are models classified according to the Data Handling Policy?
  14. Are access controls in place to protect the data model?
  15. Does the company monitor the data input to and output from the model?
  16. Is data used in or by the model anonymized?
  17. Is data output from the model checked for hallucinations?

Hallucination (or artificial hallucination) is a confident response by an AI that is not justified by its training data.
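Automated hallucination detection is an open problem; as a rough sketch (the word-overlap threshold is an arbitrary assumption, and real systems use entailment models or citation checks rather than lexical overlap), model output can at least be screened for sentences with no support in the source material:

```python
def unsupported_sentences(output: str, source: str, threshold: float = 0.5) -> list[str]:
    """Flag output sentences whose words barely overlap the source text.

    A crude lexical proxy for grounding, for illustration only.
    """
    source_words = set(source.lower().split())
    flagged = []
    for sentence in output.split("."):
        words = set(sentence.lower().split())
        if not words:
            continue
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence.strip())
    return flagged

source = "The policy requires approval before employees use AI tools."
output = "The policy requires approval before use. Penguins invented the policy in 1842."
print(unsupported_sentences(output, source))
# ['Penguins invented the policy in 1842']
```

The second sentence is flagged because almost none of its words appear in the source, which is the kind of unsupported claim a human reviewer should catch.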

  18. Does the company use only certified open-source or secure foundation models?
  19. If the company uses foundation models, does it fine-tune the model?
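On the anonymization question above, a minimal redaction pass might look like the following sketch (the patterns cover only emails and US-style phone numbers; real pipelines use dedicated PII-detection tooling):

```python
import re

# Illustrative patterns only: emails and simple US-style phone numbers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace detected PII with labeled placeholders before model input."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Contact jane.doe@example.com or 555-867-5309."))
# Contact [EMAIL] or [PHONE].
```

Regex redaction misses names, addresses, and context-dependent identifiers, so treat it as a first line of defense only.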


Don't put your business at risk by learning about the dangers of AI the hard way.
