19 Questions you need to ask to build a Generative AI Cybersecurity Policy
Are your employees putting your business at risk with AI?
At Model, we assist numerous clients with their Cybersecurity Compliance Frameworks and Policies. We use a leading vCISO compliance tool that recently introduced a Generative AI Cybersecurity Control, and I thought you might be interested in the list of questions it raises.
With the growing popularity of Microsoft Copilot, the risks are very real. If your identity, access-control permissions, and data-loss prevention strategies are not robustly in place, you could inadvertently grant employees access to data they shouldn’t have. Even worse, you might expose your intellectual property to malicious actors through an ill-advised AI search. Here are the questions:
- Are employees required to receive approval before using any AI tools or platforms?
- Does the company keep an inventory of all generative AI tools currently in use?
- Are employees prohibited from using unapproved AI tools or platforms for company-related activities?
- Does the company educate employees about approved AI tools and the dangers of unapproved or malicious AI tools?
- Are employees allowed to use consumer AI content generation services such as ChatGPT, Bard, or Bing Chat?
- Does the company educate employees on the safe use of generative AI web services such as ChatGPT, Bard, or Bing Chat?
- Does the company prevent employees from entering sensitive or private data into consumer generative AI products?
Example: Are there existing policies or controls, such as DLP (data loss prevention), browser control, or SASE (secure access service edge), that can be used to limit the use of ChatGPT or other generative AI web services? (A simple prompt-screening sketch follows this list.)
- Does employee use of generative AI follow company ethics standards?
Example: Employees do not use ChatGPT for legal, medical, or safety-related topics, or as a substitute for skills they lack.
- Is the company able to monitor web form input to verify that users are not using generative AI for malicious or illicit purposes?
Malicious or illicit purposes include fraud, impersonation, or generating harmful content.
- Is AI-generated content, including text, code, images, video, or audio, checked for the following:
  - Proofread for mistakes.
  - Fact-checked.
  - Double-checked for bias.
- Is AI-generated content labeled for origin or authorship?
- Does the company use or plan to use AI in development or product integration?
- If in use, are models classified according to the Data Handling Policy?
- Are access controls in place to protect the data model?
- Does the company monitor the data input to and output from the model?
- Is data used in or by the model anonymized? (A basic anonymization sketch also follows this list.)
- Is data output from the model checked for hallucinations?
Hallucination or artificial hallucination is a confident response by an AI that does not seem to be justified by its training data.
- Does the company use only certified open-source or secure foundation models?
- If the company uses foundation models, does it fine-tune them?
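
The DLP question above can feel abstract, so here is a minimal, hypothetical sketch of the kind of pattern-based screening a real DLP or SASE control applies to text before it is allowed to reach a generative AI web service. The patterns and the `screen_prompt` helper are illustrative assumptions, not features of any specific product.

```python
import re

# Hypothetical, simplified patterns for the kinds of sensitive data a real
# DLP control would flag before a prompt leaves the browser or network edge.
SENSITIVE_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal project code": re.compile(r"\bPROJ-\d{4}\b"),  # example company-specific marker
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories found in a prompt bound for an AI service."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize the PROJ-1234 contract paid with card 4111 1111 1111 1111."
    findings = screen_prompt(prompt)
    if findings:
        print("Blocked: prompt contains", ", ".join(findings))
    else:
        print("Prompt allowed.")
```

A commercial DLP, browser-control, or SASE product does far more than this (classification, context awareness, encrypted-traffic inspection), but the policy question is the same: is anything checking what leaves your environment?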
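The anonymization question can be made concrete in the same spirit. This is a rough sketch assuming a simple masking approach, where obvious identifiers are replaced with placeholders before data reaches a model; a production deployment would normally rely on a dedicated PII-detection service rather than a handful of hand-written regexes.

```python
import re

# Hypothetical masking rules; real systems would use a proper PII-detection
# library or service rather than these simplified patterns.
MASKS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "<PHONE>"),
]

def anonymize(text: str) -> str:
    """Replace recognizable identifiers with placeholders before text is sent to a model."""
    for pattern, placeholder in MASKS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    record = "Contact Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
    print(anonymize(record))
    # -> Contact Jane at <EMAIL> or <PHONE>; SSN <SSN>.
```

Masking like this is only one piece of anonymization; whether it is sufficient depends on your Data Handling Policy and on what data the model actually needs.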
Don't put your business at risk, and don't learn about the dangers of AI the hard way.