Security questionnaires are the worst. After more than a decade spent on this pain point, I have found one constant: everyone hates them and will run as fast and as far away as they can to avoid them. Except one person: the salesperson trying to close the sale.
Unfortunately, organizations around the world will continue to invest significant resources in assessing and maintaining their cybersecurity posture by assessing vendors with questionnaires. I say unfortunately because this practice is now baked into the regulations. A security questionnaire is a set of questions designed to gauge an organization's security practices and readiness. If you're reading this, then I probably did not need to explain that to you.
However, with the growing complexity of cybersecurity, responding to these questionnaires is challenging and very, very time-consuming. This is where OpenAI comes into play. In this blog post, we'll explore how OpenAI is a game changer in the process of responding to security questionnaires.
What is a Security Questionnaire?
A security questionnaire is a comprehensive set of questions that organizations send to their vendors, partners, consultants or even internal departments to assess security. A similar practice exists in privacy, where the documents are called privacy impact assessments and the focus is more internal. I bring them up because, nowadays, the security questionnaire includes questions about privacy compliance, so the lines are blurring.
Vendor assessments almost always come in through the salesperson or through the sales process. This is why I argue that security questionnaires are the revenue generator for information security personnel. As the in-house counsel, I considered it my job to review license agreements, and review them quickly, because it was the revenue-generating side of my job. I believe the same holds true for security questionnaires. Just like you want a lawyer to review the license agreement, you want information security to review the security questionnaire (and privacy to review the privacy questionnaire).
What is this new technology?
OpenAI, Microsoft, Google, Meta and others offer a range of tools and technologies that have garnered significant attention recently, specifically because of their LLMs (large language models).
LLMs are a type of artificial intelligence (AI) model designed to understand and generate human-like text. These models are built using deep learning techniques and have the capability to process and generate natural language text based on the input they receive. LLMs have significantly advanced the field of natural language processing (NLP) and are widely used in various applications such as chatbots, language translation, text summarization, question-answering systems, and more.
The GPT (Generative Pre-trained Transformer) series is OpenAI's family of language models that can understand and generate human-like text based on the input provided. These models have been trained on vast amounts of internet data, making them incredibly capable of answering a wide range of questions. They have also come under scrutiny for various reasons that I have spoken about on LinkedIn.
ChatGPT, Bard, Bing Chat, etc. are designed for interactive and dynamic conversations and have fundamentally changed how we perceive chatbots. We used to think of them as semi-useful at best and pretty frustrating at worst because they were based on a fixed set of questions and answers. Now, with these LLM-powered chatbots, you can engage in a conversation with a chatbot that seems more "intelligent."
How LLMs Can Be Used to Respond to Security Questionnaires
The problem with security questionnaires, based on my research and now long-term experience building a tool to solve them, can be summed up as follows: they all ask basically the same questions in slightly different ways, requiring you to read every single one. There is no standard that is universally applied. When prospects come to ClearOPS, they want "magic." Before LLMs, the magic we showed them looked more like the old chatbots, i.e. not that intelligent. That is why these new LLMs and corresponding models are a game changer. Specifically, they deliver:
1. Automated Responses: Given the questionnaire as input, the AI model can actually read the questions, along with the vendor's policies, and generate comprehensive, auto-populated answers.
2. Improved Consistency: Because it understands the questions, it can repeat answers either exactly as you originally wrote them, or slightly differently to address a slightly different question, which reduces the risk of human error that comes from relying on information stored in your head. For example, many questionnaires ask the exact same question twice! If you don't remember that the question was already asked, you are likely to answer it slightly differently the second time, which could make the reviewer suspicious.
3. Enhanced Understanding: You don't have to be in information security to respond if an assistant is available to provide detailed explanations and context. And, as a critic of the standards-based questionnaires, I will say that most of these vendor assessments are written in a way that makes the questions very, very confusing, so it helps information security too.
4. Efficiency and Speed: Just like the human review of license agreements, responding to security questionnaires takes time. Unfortunately, they usually take more time due to the need for multiple stakeholders' involvement. When the LLM is directed to source its information from a dedicated knowledge base, as in ClearOPS, it can significantly reduce the need for multiple stakeholders. Add to that a machine producing the first draft, and you have a process that is a lot faster! Which means money in your pocket.
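To make the consistency and knowledge-base ideas above concrete, here is a minimal, hypothetical sketch. The tiny knowledge base and function names are my own inventions, not ClearOPS's implementation, and it uses simple string similarity where a real system would use embeddings or an LLM. The idea is the same: spot a near-duplicate of a question you have already answered and reuse the vetted answer, flagging anything unmatched for human review.

```python
from difflib import SequenceMatcher

# Hypothetical knowledge base of previously approved answers
# (a stand-in for a real, curated answer library).
KNOWLEDGE_BASE = {
    "Do you encrypt data at rest?": "Yes. All customer data is encrypted at rest using AES-256.",
    "Is data encrypted in transit?": "Yes. All traffic is protected with TLS 1.2 or higher.",
}

def similarity(a: str, b: str) -> float:
    """Rough lexical similarity between two questions, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def draft_answer(question: str, threshold: float = 0.6):
    """Reuse the best-matching approved answer, or flag for human review."""
    best_q, best_score = None, 0.0
    for known_q in KNOWLEDGE_BASE:
        score = similarity(question, known_q)
        if score > best_score:
            best_q, best_score = known_q, score
    if best_q is not None and best_score >= threshold:
        return KNOWLEDGE_BASE[best_q], best_score
    return "[NEEDS HUMAN REVIEW]", best_score

# The same question, phrased slightly differently, gets the same vetted
# answer every time, so the second occurrence never contradicts the first.
answer, score = draft_answer("Do you encrypt customer data at rest?")
print(answer)  # the approved AES-256 answer, not a fresh rewording
```

An unmatched question (say, about a bug bounty program) falls below the threshold and comes back flagged, which is the point: the machine produces the first draft, and the humans only review what it could not answer confidently.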
OpenAI as the Answer to Security Questionnaires
I feel like ClearOPS has been waiting on this technology for years. Luckily, it took very little effort to plug it in because our process for responding to security questionnaires already contemplated all of the above. While we chose to start with OpenAI's models, we will always offer our users a choice and the flexibility to establish their own process. After all, a tool is only a tool. It's the person using the tool that matters most and, for ClearOPS, that means empowering both the salesperson and the information security team (and the privacy team, and legal, and the C-suite...).