ChatGPT, the consumer application, and related technologies are governed by terms of use that may affect how you or your company use them. Here's how to navigate those terms with policies and vendor management.
ChatGPT is fun.
But there are trade-offs, particularly as it pertains to confidentiality, privacy and cybersecurity. As an individual, you have to assess those risks for yourself. As a business, you have to account for your employees, their awareness and the company’s risk tolerance.
Basically, you need an approach, a process and a policy. You might already have data classification and risk assessment policies, but those won't be enough because they don't address vendor risk.
First, let’s make an important distinction between the consumer tools released by OpenAI, Microsoft, Google and soon many others, and the APIs these same companies are releasing. I won’t get too technical, but using the APIs is just not as easy as using the consumer tools, such as ChatGPT, Bard, Bing Chat and DALL-E. So, you could have multiple policies!
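To make that distinction concrete: using ChatGPT means typing into a web page, while using the API means writing code, managing an account-specific secret key and assembling request payloads. Here is a minimal sketch of what an API call involves; the endpoint, model name and field names reflect OpenAI's published chat completions API, but treat the details as illustrative rather than a definitive integration:

```python
import json

# Illustrative only: the OpenAI chat completions endpoint. Consumer
# ChatGPT users never see any of this; API users must handle all of it.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body the API expects for a single-turn prompt."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def build_headers(api_key: str) -> dict:
    """The API requires a secret key sent as a bearer token; the
    consumer web app only requires logging in."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }

# Assemble (but don't send) a request, to show the moving parts.
body = build_chat_request("Summarize our Generative AI policy in one sentence.")
print(json.dumps(body, indent=2))
```

Even this stripped-down sketch omits error handling, rate limits and response parsing, which is exactly why the API audience (developers) and the consumer-tool audience (everyone else) may need different policies.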
Second, the terms of use differ depending on whether you use the consumer tool or the API. For example, the terms of use for ChatGPT state that OpenAI (the company behind ChatGPT) may use your inputs for training purposes. The terms for the APIs, however, say that your inputs are not used for training.
This distinction makes it much more challenging to write a comprehensive policy because it requires so much explanation.
As we all know, people don’t like to read long policies.
Finally, you cannot look internally without looking externally. Are your vendors using Generative AI? In this guide, I will walk through how you should frame your Generative AI policy. For each policy point I make below, I will offer corresponding questions you should be asking your vendors.
For your vendors, you need to ask whether they have a Generative AI policy and, if so, whether it prevents employees from inputting confidential or proprietary information. Are there any exceptions to the rule?
For your vendors, you need to ask how they verify the output of any Generative AI used to create code that is then put into production. You likely already ask them about their code review process and whether they follow the OWASP Top 10. If not, now is a good time to add that to your vendor questionnaire.
For your vendors, you have already asked if they have a Generative AI policy, which is the most important question for assessing this risk. I think the biggest risk your vendors pose here is if you have asked them to do work for you that uses Generative AI and you need ownership of those deliverables. This is more likely a contract requirement than a vendor due diligence question.
In putting together your policy, there are opportunities here to provide examples, FAQs, and checks and balances. For example, if you permit the use of Generative AI in your business, perhaps you have a carve-out where confidential information may be used as input if the corresponding Generative AI license terms assert that the data is not used for training. Or maybe you determine that, for example, you trust Microsoft and you have read their terms, so they are the exception to the rule (I am not expressing an opinion here).
A tangent to this confidentiality problem is the security issue. OpenAI relies on open source code, and that reliance has already led to a security incident. Therefore, you may want to ask your vendors which Generative AI providers are permitted. Note that most companies rely on open source software these days, so while that incident was bad, it is not uncommon. It is the response that matters and, as you can see, OpenAI showed that they have an incident response process, which is likely already a question on your security questionnaire. Of course, adding questions about the use of open source and how the code is checked makes a lot of sense given this incident.
Maybe they used ChatGPT ;)
Finally, all policies have a consequences provision. Typically, it allows the company to warn or even fire the employee. I would submit that that is harsh in this context and that we should think outside the box. For example, if an employee inputs confidential information into ChatGPT, let them know about OpenAI’s opt-out process so they can fix it. Or, if you want to check how much of the output is generated by the AI for purposes of copyright protection, check out GPTZero.me. I always think empowering employees is better than scaring them.
My parting words are to keep in mind that, as with any new technology, we want to keep these policies short, concise, easy to read and not so scary that they stymie innovation.
I, for one, love ChatGPT and we are leveraging the technology at ClearOPS. Wouldn’t it make security questionnaires and vendor management, dare I say, fun?
You’re the best,
Caroline
P.S. ClearOPS is a tech startup on a mission to bridge the gap between privacy and security, empowering all businesses to build responsibly. We are intent on building an ethical A.I. company. If you support our mission, please subscribe and share.