
Bytes and Boundaries: Why You Need an Employee AI Use Policy


Employers know that artificial intelligence (AI) is here to stay, but many are still grappling with the initial question: where do we even begin? Recognizing the benefits of AI, many companies have decided that allowing employees to use generative AI in some capacity is the path forward. Some even argue that resisting AI in the workplace will make your organization an inevitable casualty of the “AI Revolution.”

While there is certainly value in embracing AI innovation, it must be balanced against the potential legal and reputational risks. Recent studies have shown that 57% of workplaces either lacked an AI use policy or were still developing one, and that 17% of employees did not know whether their workplace had one. The true figures are likely even higher.

Regardless of whether your company has an AI use policy, we can guarantee that at least some of your employees are using generative AI (think ChatGPT, Bard, DALL-E, and many others) to enhance their productivity and performance for business-related purposes. And, of course, it is important that they are doing so in a way that mitigates legal risks while protecting your confidential and proprietary business information.

Below are several preliminary issues your AI use policy should address:

  1. Acceptable and prohibited uses. An AI use policy needs to address when and how employees may use generative AI, when they are prohibited from using AI technology, and which AI tools are approved for specific types of work.
  2. Transparency and disclosure. Consideration should also be given to whether employees will be required to disclose AI use and when approvals are needed before using AI. For example, a company may want stricter controls for customer/client or public-facing work product. Are employees required to tell their supervisors or managers whenever they use AI for any part of their job duties, or only in specific situations?
  3. Protection of trade secrets and confidential information. Perhaps the most discussed legal issue surrounding AI concerns intellectual property. As a general rule of thumb, anything that is put into a generative AI program is retained and used to train the technology. While companies can contract with AI companies to keep their information confidential, this is not the default position for free and publicly available generative AI programs like ChatGPT 3.5. Even some of the most technologically advanced companies have experienced employees putting their trade secrets in jeopardy. An AI use policy helps remedy these concerns by advising employees of the risk of inadvertent disclosure and prohibiting input of confidential and proprietary information.
  4. Accountability for ‘hallucinations.’ Generative AI is notorious for so-called “hallucinations,” the incorrect or entirely fabricated information these tools can generate. AI use policies should require employees to verify all AI-generated information before relying on it and should provide for discipline in the event of noncompliance.
  5. Accountability for bias and discrimination. Accountability requirements in an AI use policy can also help mitigate bias and discrimination risks. Generative AI programs are only as valuable as the data they are trained on. If data used in the technology contains historical biases, generative AI will perpetuate those (think: garbage in, garbage out). Even unintentional biases that have a disproportionate impact on members of a protected class can violate Title VII, and the EEOC has taken special interest in the area. An AI use policy can help mitigate this risk by requiring regular auditing and human review of AI output as well as mandatory reporting obligations in the event a user discovers discrimination or bias resulting from use of the tool.
  6. Content ownership. Employee use of publicly available generative AI programs is still subject to each program’s terms and conditions. These terms may vary widely from one program to another, and it is unlikely that employees read them at all, much less with a critical eye. This can create issues for the company if the terms do not grant ownership rights in AI-generated output or do not permit commercial use.
  7. Stakeholder trust. Although often overlooked, an AI use policy can also signal a company’s commitment to ethical innovation. While there are many supporters of AI adoption, there are also many who fear its effects on the job market. AI use policies can assuage these fears and build trust with stakeholders, namely customers and employees.

    Adopting an AI use policy shows employees that the company is aware of AI concerns and, by emphasizing the employee’s role in verifying AI-generated information, signals that the company’s goal is job augmentation, not job elimination. Further, the policy can outline employee AI-training options to ease fears of employees being “left behind.” For customers, an AI use policy can signal an outward commitment to both innovation and responsible use.

What this means to you

In short, the cost of inaction is high. Every company should be proactive in developing an AI use policy that balances employees’ ability to use AI to innovate and streamline their work against the need to limit legal risk.

Contact us

If you have questions about AI use policies, please contact Laura Malugade, Eric Locker, or your Husch Blackwell attorney.
