
How to Start an AI Governance Program

Published on September 23, 2024

Introduction:

I'm sorry to tell you this, but you are starting your AI Governance program wrong. This is not just a catchy opening. It is actually true. Let's dive into it.

Congratulations! You Get to Handle AI Governance:

Let me guess: you used to handle only legal, privacy, or cybersecurity for your organization. Then ChatGPT happened, the company looked around, and it pointed at you to take on the new challenge of "AI Governance." You may be ecstatic about this, or you may be counting the days until the company hires someone to do it full-time instead of adding it to your already heavy workload. Either way, while you step back to build out a program complete with a process and a stack of policies, guess what your employees are doing?

Yup, you guessed it, they are using GenAI tools, like ChatGPT or Perplexity.ai.

It's Worse than You Think:

While your company was looking around for someone to take charge of AI Governance, your employees were using company-approved tools that have since added GenAI capabilities.

What tools does your marketing department use? How about sales? I am going to guess that they use a lot of tools: a sales CRM, a marketing hub, etc. Most of those tool providers have launched GenAI features in the last year. So while you are trying to talk to management about what worries them in adopting GenAI, your employees are already doing it, and they are doing it with the company's permission.

And those vendors aren't rolling out those capabilities in the most data-safe way. Did you see the controversy when LinkedIn updated its privacy settings so that users had to opt out of letting LinkedIn use their data for model training?

What's the Big Deal Anyway:

So why is it a problem to use data for model training, and why should your business care that LinkedIn changed permissions for its users? The problem with using data for training, re-training, or fine-tuning is that, unlike a human, the model does not forget what it was trained on. You can feed it more information, and you can instruct it not to regurgitate the training data word for word, but the underlying data remains embedded in the model.

Samsung learned this the hard way when its engineers uploaded code into the free version of ChatGPT, only to realize their proprietary source code could be absorbed into the model's training data and potentially surface for other users. Yikes.

Imagine an employee posts something on LinkedIn that violates your social media policy. You can ask LinkedIn to take the post down, but under the new setting, that post, along with the employee's name, may have already gone into its model. You can't be sure the take-down will do anything to protect your brand; the content is now captured in a very large dataset, potentially forever.

Conclusion:

For these reasons, I recommend starting your AI Governance program with vendor management. I will cover how to build that vendor management process in the next article, but please, run, don't walk, to begin your due diligence on your current vendors' use of AI.
