Is ChatGPT the Future of Recruitment Chatbots?

Written by Keisuke Inoue

Introduction

By now, you’ve likely heard about ChatGPT, “a chatbot technology developed by OpenAI that uses natural language processing to generate human-like conversation” (this description was written by Davinci, a GPT3 AI model from OpenAI). The release of ChatGPT took social media by storm, and for good reason—people were amazed by its ability to provide seemingly thoughtful responses to a wide range of questions, though others have pointed out inaccuracies in its responses (some going so far as to describe it as “fluent BS”). Regardless, ChatGPT represents a bold leap forward in AI’s potential to drive chatbot technology across a wide range of useful applications, without the need for extensive manual intervention during dialog setup and implementation.

ChatGPT is based on GPT3.5, the successor to GPT3, an extremely popular AI language model that was released in 2020. GPT3 (which stands for Generative Pre-trained Transformer 3) is a Large Language Model (LLM)—an expansive neural network designed to process a large data set of text to effectively learn language and generate responses based on various inputs. ChatGPT is a “fine-tuned” version of GPT3.5, with a relatively thin neural network layer added on top to optimize it for contextual dialog output.

Although similar models exist (including those developed by industry heavyweights such as Google, Meta, and Microsoft) and the list of players keeps growing alongside their ability and level of sophistication, ChatGPT seems particularly revolutionary due to its convincing ability to handle a wide array of text-based tasks—including answering questions, writing essays, summarizing long texts, translating languages, and so on.  But is ChatGPT the future of recruitment chatbots?  

In this article, PandoLogic Data Scientist Keisuke Inoue examines the potential of ChatGPT to drive recruitment chatbots forward and walks through PandoLogic’s generative AI technologies, which use the latest LLM architectures for recruitment chatbots.

Chatting with ChatGPT

Now, let’s start interacting with ChatGPT with an easy question: 

Great answer! PandoLogic also offers a robust candidate management dashboard, pandoSELECT, and access to PandoLogic’s AI chat, but overall, it’s a good description. Now, let’s get more specific and ask about the NLP Data Scientist role at PandoLogic (Note: this role is no longer open):

Not bad, ChatGPT! But wait…are you just giving us a general answer for any NLP Data Scientist role? Let’s dig deeper: 

Sorry ChatGPT—in reality, you need to be fluent in Python to become an NLP Data Scientist at PandoLogic. Maybe ChatGPT didn’t read the job description (after all, not even ChatGPT can read everything on the internet). Fair enough—let’s provide the full job description (clipped from the following screenshot to save space), and see what happens: 

There you go—just like Davinci (and other GPT3 SaaS offerings out there such as Jasper.ai), it can decipher long instructions to provide a sufficiently nuanced context. An interesting feature of ChatGPT is that it “remembers” context, so now I can ask a question like this:

Well done, ChatGPT (this answer is correct), although its memory and ability are limited—which becomes apparent after it loses context during longer back-and-forth interactions.  
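
For readers curious about the mechanics, this kind of conversational “memory” typically amounts to resending the earlier turns with every new request, which is also why the thread eventually falls out of the model’s limited context window. The snippet below is a minimal sketch of that pattern using OpenAI’s Python client; the model name and API key are placeholders, and this is not how the public ChatGPT interface used for the screenshots above works behind the scenes.

```python
# Minimal sketch (not PandoLogic production code, and not the ChatGPT web UI):
# conversational "memory" comes from resending prior turns with every request.
# Assumes the openai Python package and API access to a chat-capable model.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

history = [
    {"role": "system", "content": "You are a recruitment assistant for PandoLogic."},
    {"role": "user", "content": "Here is the job description for the NLP Data Scientist role: ..."},
]

def ask(question: str) -> str:
    """Append the question, call the model, and keep its reply in the history."""
    history.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model name; use whatever chat model is available
        messages=history,
    )
    answer = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer

# Follow-up questions work only because earlier turns travel with each request;
# once the history no longer fits the context window, that "memory" degrades.
print(ask("Do I need to be fluent in Python for this role?"))
```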

What else is ChatGPT capable of? It can also generate interview questions for a given job description, which is something PandoLogic is experimenting with, using different neural language models. 

These are all good questions and reflect the job description well. However, evaluating responses is the greater challenge. Let’s see if ChatGPT can help us with this: 

Although at first glance it may seem as if ChatGPT is poised to take over all facets of recruitment and render many of us obsolete when OpenAI makes this technology available to businesses, the truth is that it’s not perfect or omniscient, as others (including StackOverflow) have pointed out. In fact, ChatGPT failed to answer many of the typical questions candidates may ask: 

Other questions ChatGPT failed to answer include:

  • What does a typical day as an NLP Data Scientist look like at PandoLogic?
  • Where is PandoLogic’s headquarters?
  • Who is the CEO of PandoLogic?

ChatGPT does represent an improvement over GPT3 model technology (e.g., Davinci)—with GPT3, we often saw questions answered incorrectly:

Examples of incorrect responses by GPT3 (PandoLogic’s CEO is Terry Baker, and PandoLogic does offer 401K benefits)

Still, as the earlier question about Python illustrated, ChatGPT can provide incorrect answers if the right context is not provided or is lost during a conversation. As many social media posts have demonstrated, ChatGPT, while great for creative text generation, is simply not reliable enough to always provide factual information.

Generative AI at PandoLogic

Job searching and interviews—and their outcomes—are serious business, so we need to safeguard against any risk of inserting misinformation, inappropriate evaluation criteria, or unwanted biases into the process, which makes using a public LLM like ChatGPT a challenge. Another problem with ChatGPT is that it requires considerable computational power, which can make chatbot response times slower and operation more costly.

That said, operational bottlenecks arise at various points in a conversational AI system, and with careful design and execution, generative AI can be leveraged to yield a safer, more efficient system with a better user experience overall. The following are high-level overviews of the generative AI technologies PandoLogic uses to drive some of the processes behind its AI recruitment chatbot.

1. Interview Question Generation with DSLLM

In the past, our automatic chat generation process was assisted by an NLP parser, which analyzed incoming job descriptions and extracted key data, including required skills and qualifications. Interview questions were defined based on available data and client requests.

But as the interactions above demonstrated, current generation LLMs, such as GPT3, can not only extract skill requirements from job descriptions but can also form interview-format questions from them. Although these suggestions should be approved by human experts to ensure that the model isn’t inserting irrelevant or inappropriate questions, just having a list of suggested questions ready to insert into the interview design can be an improvement in efficiency over existing processes.  

The Conversational AI team at PandoLogic has developed a domain-specific LLM (DSLLM), powered by Veritone Generative AI, that is optimized for the recruitment domain, using a large collection of job descriptions and relevant datasets. Leveraging domain-specific knowledge, this DSLLM is able to produce safer and more reliable job interview questions that are suitable for incoming job descriptions.  

Interview Question Generation with DSLLM
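
As a rough illustration of the generation step (not PandoLogic’s DSLLM itself), the sketch below prompts a general-purpose GPT3 model to draft questions from a job description. The prompt wording, model name, and parameters are assumptions for the example, and the output would still go to a human reviewer before being used in an interview.

```python
# Illustrative sketch only: prompting a general-purpose GPT3 model to draft
# interview questions from a job description. PandoLogic's DSLLM is a separate,
# domain-tuned model; this only shows the shape of the generation step.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def draft_interview_questions(job_description: str, n_questions: int = 5) -> list[str]:
    """Ask the model for draft questions; a human expert reviews them before use."""
    prompt = (
        "You are helping design a screening interview.\n\n"
        f"Job description:\n{job_description}\n\n"
        f"Write {n_questions} interview questions that test the required skills, "
        "one question per line."
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT3 model; a DSLLM would replace this
        prompt=prompt,
        max_tokens=400,
        temperature=0.3,
    )
    lines = response["choices"][0]["text"].splitlines()
    questions = [line.strip("-•0123456789. ").strip() for line in lines if line.strip()]
    return questions[:n_questions]

# The suggested questions are queued for expert approval rather than being
# inserted into the interview design automatically.
```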

2. Question Answering with GPT/DSLLM

As mentioned above, there’s a set of questions that candidates often ask during job interviews. These questions relate to either the company, the job, or the process. PandoLogic’s Conversational AI is a whiz at leveraging key pieces of information that are aggregated and stored in PandoLogic’s knowledge graph and using them effectively in her responses, to help make the onboarding process as seamless and informative as possible. Traditionally, this process would rely on an intent-detection model that would be developed in-house via supervised machine learning and would handle a limited set of questions. 

The shortcomings of the traditional approach are that 1) it requires a supervised training process that is labor-intensive and time-consuming, and 2) it cannot handle questions that were not represented in the training dataset. PandoLogic’s new generative AI approach overcomes these shortcomings and is able to handle unforeseen questions by leveraging GPT3 and contextual data. However, relying solely on this new approach still poses challenges around reliability, cost, and response time. Therefore, our current approach is a hybrid of the two technologies: the traditional Conversational AI/knowledge-graph model is used primarily, and the DSLLM approach is used when the intent-detection output returns a low confidence value.

Question Answering with GPT/DSLLM 
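
A minimal sketch of that routing logic might look like the following. The helper functions and the confidence threshold are hypothetical stand-ins for the supervised intent-detection model, the knowledge-graph lookup, and the DSLLM fallback described above.

```python
# Rough sketch of the hybrid routing described above. The helper functions and
# the confidence threshold are hypothetical stand-ins, not PandoLogic's actual
# implementation.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # assumed value; tuned in practice

@dataclass
class IntentResult:
    intent: str
    confidence: float

def detect_intent(question: str) -> IntentResult:
    """Placeholder for the in-house supervised intent-detection model."""
    ...

def answer_from_knowledge_graph(intent: str) -> str:
    """Placeholder for a curated response built from the knowledge graph."""
    ...

def answer_with_dsllm(question: str, context: str) -> str:
    """Placeholder for the generative DSLLM fallback."""
    ...

def answer_candidate_question(question: str, context: str) -> str:
    result = detect_intent(question)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: use the fast, curated primary path.
        return answer_from_knowledge_graph(result.intent)
    # Low confidence (likely an unforeseen question): fall back to the DSLLM.
    return answer_with_dsllm(question, context)
```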

Conclusion (TL;DR)

In this article, I illustrated ChatGPT’s current potential as a recruitment chatbot by examining its responses to questions about an NLP Data Scientist role. ChatGPT amazed us with its ability to generate sentences that exhibit language fluency and basic contextual understanding. However, it isn’t omniscient and may help spread false information and perpetuate biases. It’s also susceptible to providing incorrect answers when the proper context isn’t provided, or when context becomes heavily nuanced or hard to discern during the back-and-forth of an ongoing conversation.

To effectively leverage this technology, we must take active steps to prevent the spread of misinformation, inappropriate evaluation criteria, and unwanted biases. PandoLogic’s generative AI approach starts by re-training existing general-knowledge LLMs into purpose-built Domain Specific LLMs (DSLLMs) for better quality, safety, security, and performance. While we are already seeing significant improvement over existing models, our journey with generative AI has just begun. We’re excited by the possibilities it offers as we move towards a bold new future in recruitment.
