
Our Insights for HR Community

AI can help accelerate transformation in the HR process.

Our technology experts, along with strong domain experts, can help you enable this transformation.

Problem Statement:

Job seekers face numerous challenges, including limited job openings, intense competition, and outdated skill sets. Upskillers and career changers struggle to identify relevant courses and certifications that align with their goals. The stress and uncertainty of job searching, and the risk of being left behind in a rapidly changing job market, are real.


While there are a lot of point solutions in the market that help you identify keywords from the job description that are missing from your resume, they lack the contextual ability to assess a candidate's skills and aspirations and design a suitable career path. There is a need to guide candidates on how to approach and apply for suitable jobs that will prepare them for the next level.


Improv was launched to help job seekers increase their chances of getting an interview through personalized resume feedback, by codifying what hiring managers look for in a resume.


Improv offers the following benefits for job seekers:

  • Sentences that help you align your resume with the job description

  • Suggestions for quantifying your experience

  • Identification of keywords required by the job description but missing from your resume

  • Detection of grammatical and spelling mistakes

  • A tailored resume summary

  • A draft email to the hiring manager


Try Improv for free: Improv


Under the hood: how Improv was developed.


Enterprise generative AI has the potential to revolutionize various industries by enabling organizations to automate content creation, improve decision-making, and streamline operations. One way to achieve this is by combining knowledge graphs (KGs) and large language models (LLMs).


In this blog, we will explore how KGs and LLMs have been used in Improv, discuss key technology software products and libraries that may be required, introduce key technical concepts, and develop a high-level target architecture specification that could be used by developers.

  • Knowledge Graphs (KGs):

A knowledge graph is a database that stores information in the form of a graph, where entities are represented as nodes, and relationships between entities are represented as edges. KGs are particularly useful for storing complex, hierarchical, and interconnected data, which makes them ideal for representing domain-specific knowledge. In the context of enterprise generative AI, KGs can be used to represent entities, their properties, and relationships, enabling machines to reason about the underlying data and generate new insights.
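
For illustration only (a minimal sketch, not Improv's actual schema; entity and relation names are hypothetical), a resume-domain knowledge graph can be represented as (subject, relation, object) triples with a simple adjacency index:

    from collections import defaultdict

    # Illustrative triples only; not the production schema.
    triples = [
        ("candidate:jane_doe", "HAS_SKILL", "skill:python"),
        ("candidate:jane_doe", "HAS_SKILL", "skill:nlp"),
        ("skill:nlp", "ADJACENT_TO", "skill:machine_learning"),
        ("job:ml_engineer", "REQUIRES_SKILL", "skill:machine_learning"),
    ]

    # Nodes are entities; edges are the labeled relations between them.
    graph = defaultdict(list)
    for subject, relation, obj in triples:
        graph[subject].append((relation, obj))

    # Example query: which skills does the candidate have?
    print([o for r, o in graph["candidate:jane_doe"] if r == "HAS_SKILL"])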

  • Large Language Models (LLMs):

Large language models are deep neural networks trained on vast amounts of text data to generate language outputs that are coherent and natural-sounding. LLMs can be fine-tuned for specific tasks, such as text classification, sentiment analysis, and machine translation. In the context of enterprise generative AI, LLMs can be used to generate text based on given prompts or topics, adapt to different styles and tones, and engage in conversations.

  • Natural Language Processing (NLP):

Natural language processing is a subfield of artificial intelligence that deals with the interaction between computers and human language. NLP is essential for analyzing and understanding natural language inputs and generating meaningful responses. In the context of enterprise generative AI, NLP can be used to analyze prompts, understand the context, and generate relevant responses.

  • Machine Learning (ML):

Machine learning is a subset of artificial intelligence that involves training algorithms on data and using those algorithms to make predictions or take actions. ML is essential for developing predictive models that can identify patterns and correlations in data. In the context of enterprise generative AI, ML can be used to train models that can automatically detect skills, group them against various candidate profiles, and correlate adjacent skills in an automated manner.

Key Technology Software Products and Libraries:

  • Knowledge Graph Storage:

Google Cloud Datastore is a fully managed NoSQL database that can be used to store and manage knowledge graphs. It provides efficient storage and querying mechanisms for knowledge graphs, making it an ideal choice for enterprise generative AI initiatives.
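
As a minimal sketch, assuming the google-cloud-datastore Python client, application-default credentials, and a hypothetical "Skill" kind (not the actual Improv schema), a graph node with its outgoing edges could be persisted like this:

    from google.cloud import datastore

    client = datastore.Client()  # assumes application-default credentials

    # A node of the hypothetical "Skill" kind; edges stored as an array property.
    skill_key = client.key("Skill", "machine_learning")
    skill = datastore.Entity(key=skill_key)
    skill.update({
        "label": "Machine Learning",
        "adjacent_skills": ["deep_learning", "nlp"],
    })
    client.put(skill)

    # Query all skills adjacent to "nlp" (array membership filter).
    query = client.query(kind="Skill")
    query.add_filter("adjacent_skills", "=", "nlp")
    for entity in query.fetch():
        print(entity.key.name, entity["label"])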

  • Large Language Model Management:

Google Vertex AI is a managed service that enables organizations to deploy, manage, and scale large language models. It provides a variety of pre-trained models that can be fine-tuned for specific tasks, as well as tools for monitoring and optimizing model performance.
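
A minimal sketch of calling a managed text model through the Vertex AI Python SDK (the project, location, model name, and prompt below are placeholders, not Improv's configuration):

    import vertexai
    from vertexai.language_models import TextGenerationModel

    vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

    model = TextGenerationModel.from_pretrained("text-bison")
    response = model.predict(
        "Rewrite this resume bullet to quantify impact: "
        "'Responsible for improving sales processes.'",
        temperature=0.2,
        max_output_tokens=128,
    )
    print(response.text)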

  • Natural Language Processing (NLP) Library:

spaCy is a Python library that provides a wide range of NLP tools, including tokenization, part-of-speech (POS) tagging, named entity recognition, and dependency parsing. It can be used to analyze prompts and understand their context.
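
For example (assuming the small English pipeline has been installed via python -m spacy download en_core_web_sm), a few lines of spaCy are enough to tokenize a resume bullet and extract named entities:

    import spacy

    nlp = spacy.load("en_core_web_sm")  # requires the pipeline to be downloaded

    doc = nlp("Led a team of 8 engineers at Acme Corp to migrate services to Google Cloud.")

    print([token.text for token in doc])                  # tokenization
    print([(token.text, token.pos_) for token in doc])    # POS tagging
    print([(ent.text, ent.label_) for ent in doc.ents])   # named entity recognition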

  • Machine Learning (ML) Library:

TensorFlow is an open-source machine learning library developed by Google. It provides a wide range of tools and functionalities for developing and training ML models. It can be used to train models that can automatically detect skills, group them against various candidate profiles, and correlate adjacent skills in an automated manner.
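
As a hedged sketch of the kind of model involved (toy dimensions and layer sizes, not Improv's production model), a multi-label skill detector over pre-computed text embeddings can be defined in a few lines of Keras:

    import tensorflow as tf

    NUM_SKILLS = 300      # hypothetical size of the skill vocabulary
    EMBEDDING_DIM = 256   # hypothetical dimensionality of the text embeddings

    # Multi-label classifier: given an embedded resume section, predict which
    # skills it evidences. Sigmoid outputs allow several skills per input.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(EMBEDDING_DIM,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_SKILLS, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    model.summary()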


Target Architecture Specification: The target architecture for supporting enterprise generative AI with KGs and LLMs comprises the following components:

  • Proprietary Knowledge Graph:

The proprietary knowledge graph is stored in Google Cloud Datastore and represents various entities, their properties, and relationships. The knowledge graph is designed to enable automatic detection of skills, grouping them against various types of candidate profiles, and correlating adjacent skills in an automated manner.

  • Large Language Model (LLM):

The LLM is managed on Google Vertex AI and is used to generate text based on given prompts or topics. The LLM is trained on a dataset of high-quality text samples and is fine-tuned for specific tasks, such as text classification, sentiment analysis, and machine translation.

  • Natural Language Processing (NLP) Engine:

The NLP engine is built using Stanford CoreNLP and is used to analyze prompts, understand the context, and generate relevant responses.

  • Machine Learning (ML) Model:

The ML model is built using TensorFlow and is used to automatically detect skills, group them against various candidate profiles, and correlate adjacent skills in an automated manner.


  • Data Security and Governance:

The target architecture includes robust security measures to protect sensitive data and ensure compliance with regulatory requirements. Access control policies are implemented to restrict unauthorized access to the knowledge graph and LLM. Data encryption is used to protect data in transit and at rest. Regular backups and disaster recovery processes are also implemented to minimize downtime and data loss.


  • Integration Layer:

The integration layer connects the knowledge graph, LLM, NLP engine, and ML model. It enables seamless communication between the components and ensures that the system works together effectively.


Configuring an API to access a knowledge graph from large language models (LLMs) requires careful consideration of several factors, including data formats, communication protocols, and authentication mechanisms. Here are some steps and technologies that can help accelerate the development process:

  • Choose a data format:

The first step is to select a data format for representing the knowledge graph. Popular choices include JSON, XML, and RDF. JSON is a lightweight and flexible format that is easy to work with, while XML provides more structure and validation capabilities. RDF is a standard for representing and exchanging data on the web and is often used in knowledge graph applications. We chose JSON as the format.
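
For illustration, a knowledge graph node returned by the API might be serialized along these lines (field names are hypothetical, not the exact Improv payload):

    {
      "id": "skill:machine_learning",
      "type": "Skill",
      "label": "Machine Learning",
      "relations": [
        {"type": "ADJACENT_TO", "target": "skill:deep_learning"},
        {"type": "REQUIRED_BY", "target": "job:ml_engineer"}
      ]
    }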

  • Select a communication protocol:

Once the data format is chosen, the next step is to select a communication protocol for accessing the knowledge graph. REST (Representational State Transfer) is a popular choice for APIs, as it is simple and widely supported. GraphQL is another option that allows for more flexible queries and can reduce the amount of data transferred. We chose REST.

  • Authenticate requests:

To prevent unauthorized access to the knowledge graph, authentication is necessary. Common authentication methods include OAuth, JWT, and basic auth. OAuth is a popular choice for APIs, as it allows clients to authenticate without sharing their credentials. JWT provides a secure way to transmit information between parties and can be used for authorization. Basic auth is simple but less secure, as it transmits credentials in plaintext. We used OAuth on Google Cloud.
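
One common pattern on Google Cloud, shown here as a hedged sketch rather than Improv's exact flow, is to verify a Google-issued ID token on the server side with the google-auth library:

    from google.oauth2 import id_token
    from google.auth.transport import requests as google_requests

    def verify_bearer_token(token: str, audience: str) -> dict:
        # Validates the signature, expiry, and audience of a Google-issued ID
        # token; raises ValueError if the token is invalid. "audience" is the
        # OAuth client ID the token was issued for (a placeholder here).
        return id_token.verify_oauth2_token(token, google_requests.Request(), audience)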

  • Implement request handling:

After configuring the API, the next step is to implement request handling. This involves creating endpoints for retrieving, updating, and deleting knowledge graph data. Endpoints can be created using frameworks like Flask or Django for Python, Express.js for Node.js, or Spring Boot for Java. We used Flask.
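
A minimal Flask sketch of a read endpoint over the knowledge graph (the route name and the in-memory stand-in for the store are hypothetical; authentication and error handling are trimmed):

    from flask import Flask, jsonify, abort

    app = Flask(__name__)

    # Stand-in for the knowledge graph store; sample data is illustrative.
    KG_NODES = {
        "skill:python": {"label": "Python", "adjacent_skills": ["skill:nlp"]},
    }

    @app.route("/kg/nodes/<node_id>", methods=["GET"])
    def read_node(node_id):
        node = KG_NODES.get(node_id)
        if node is None:
            abort(404, description="Unknown node")
        return jsonify({"id": node_id, **node})

    if __name__ == "__main__":
        app.run(debug=True)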

  • Utilize libraries and frameworks:

There are several libraries and frameworks available that can accelerate the development process. For example, PyRDF provides a Python interface for working with RDF data, while Apache Spark GraphX offers a scalable platform for processing graph data. We built a proprietary knowledge graph structure on top of Redis's in-memory capabilities.
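
The schema itself is proprietary, but the idea can be sketched with redis-py, storing each node's outgoing edges as a Redis set keyed by node and relation (key names below are illustrative):

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Edges as sets keyed by (node, relation); fast set lookups keep latency
    # low for online resume feedback.
    r.sadd("kg:skill:nlp:ADJACENT_TO", "skill:machine_learning", "skill:deep_learning")
    r.sadd("kg:candidate:jane_doe:HAS_SKILL", "skill:nlp", "skill:python")

    print(r.smembers("kg:candidate:jane_doe:HAS_SKILL"))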

  • Optimize performance:

Finally, it's important to optimize the performance of the API. Caching frequently accessed data, indexing the knowledge graph, and using distributed computing techniques can all help improve performance.


But why did we integrate the LLM with a knowledge graph?


Traditional LLMs, including the ones behind ChatGPT and Bard, are limited in their ability to capture complex relationships between entities and lack domain-specific knowledge. Incorporating KGs into the model can address these limitations by providing a rich source of structured knowledge about entities and their relationships.


The knowledge graph serves as a repository of domain-specific knowledge that can augment the large language model's understanding of text. By integrating the knowledge graph, the model can better comprehend the relationships between entities, leading to improved performance in tasks requiring such understanding.


In other words,


LLMs are good at understanding words and sentences, but they don't know much about the world. KGs, on the other hand, contain lots of information about things and how they relate to each other.


Step by Step Approach for Integrating Knowledge Graph with LLM model


Step 1: Knowledge Graph Preprocessing

  • Start by preprocessing the knowledge graph data to ensure it's in a format that can be easily integrated with the LLM.

  • This may involve converting the knowledge graph data into a matrix or vector representation, removing duplicate or redundant edges, and normalizing the data, as sketched below.
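
A small sketch of that preprocessing under toy assumptions (hypothetical identifiers): deduplicate edges, then map entities and relations to integer ids so they can be fed to an embedding model.

    raw_triples = [
        ("skill:nlp", "ADJACENT_TO", "skill:machine_learning"),
        ("skill:nlp", "ADJACENT_TO", "skill:machine_learning"),   # duplicate edge
        ("job:ml_engineer", "REQUIRES_SKILL", "skill:machine_learning"),
    ]

    # 1. Remove duplicate / redundant edges.
    triples = sorted(set(raw_triples))

    # 2. Map entities and relations to integer ids (the matrix/vector view).
    entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
    relations = sorted({r for _, r, _ in triples})
    entity_id = {e: i for i, e in enumerate(entities)}
    relation_id = {r: i for i, r in enumerate(relations)}

    indexed = [(entity_id[h], relation_id[r], entity_id[t]) for h, r, t in triples]
    print(indexed)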

Step 2: LLM Pretraining

  • Next, pretrain the LLM on a large corpus of text data. In our case, this was a large corpus of job descriptions and sanitized resumes (with names and contact details removed).

  • This step is important because it allows the LLM to learn general language representations that can be fine-tuned later for the specific task at hand.

Step 3: Knowledge Graph Embedding

  • After pretraining the LLM, create embeddings for the knowledge graph entities and relations.

  • There are several ways to do this, but one popular method is to use a knowledge graph embedding (KGE) model, such as TransE or DistMult, to map the entities and relations to dense vectors in a high-dimensional space. We used DistMult; a minimal scoring sketch follows.
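
A minimal TensorFlow sketch of the DistMult scoring function (toy sizes; the training loop, negative sampling, and loss are omitted): the score of a triple (h, r, t) is the sum over dimensions of the element-wise product of the three embeddings.

    import tensorflow as tf

    NUM_ENTITIES, NUM_RELATIONS, DIM = 1000, 20, 64   # hypothetical sizes

    entity_emb = tf.Variable(tf.random.normal([NUM_ENTITIES, DIM], stddev=0.1))
    relation_emb = tf.Variable(tf.random.normal([NUM_RELATIONS, DIM], stddev=0.1))

    def distmult_score(heads, relations, tails):
        # DistMult: score(h, r, t) = sum_k h_k * r_k * t_k
        h = tf.nn.embedding_lookup(entity_emb, heads)
        r = tf.nn.embedding_lookup(relation_emb, relations)
        t = tf.nn.embedding_lookup(entity_emb, tails)
        return tf.reduce_sum(h * r * t, axis=-1)

    # Score a batch of (head, relation, tail) index triples.
    print(distmult_score([0, 5], [1, 3], [42, 7]))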

Step 4: Integrating Knowledge Graph Embeddings with LLM

  • Once you have the knowledge graph embeddings, integrate them with the LLM using a fusion strategy.

  • One simple way to do this is to add the knowledge graph embeddings to the input embeddings of the LLM, effectively combining the linguistic information from the text with the semantic information from the knowledge graph, as sketched below.
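
A hedged sketch of that additive fusion (hypothetical shapes; in practice the KG embeddings are first projected to the LLM's hidden size, and tokens with no linked entity receive a zero vector):

    import tensorflow as tf

    SEQ_LEN, HIDDEN = 128, 768   # hypothetical LLM input shape
    KG_DIM = 64                  # dimensionality of the KG embeddings

    # Token embeddings from the LLM and entity embeddings aligned to each token.
    token_embeddings = tf.random.normal([1, SEQ_LEN, HIDDEN])
    kg_embeddings = tf.random.normal([1, SEQ_LEN, KG_DIM])

    # Project KG embeddings into the LLM's hidden space, then add them.
    project = tf.keras.layers.Dense(HIDDEN, use_bias=False)
    fused = token_embeddings + project(kg_embeddings)
    print(fused.shape)   # (1, 128, 768)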

Step 5: Fine-Tuning the LLM

  • Now that the LLM has been augmented with the knowledge graph embeddings, fine-tune it on a small set of labeled data that contains both text and knowledge graph information.

  • During fine-tuning, the model will learn to align the linguistic information from the text with the semantic information from the knowledge graph, enabling it to perform tasks like textual entailment, question answering, and knowledge grounding.

Step 6: Evaluation and Iteration

  • Evaluate the performance of the integrated LLM on a held-out test set.

  • If the performance is unsatisfactory, iterate on the integration strategy, adjusting parameters, experimenting with different fusion strategies, or even exploring alternative KGE models.

Step 7: Domain Specific Training

  • Finally, train the integrated LLM on a large dataset specifically designed for the task at hand (e.g., question answering, text classification).

  • This step allows the model to further adapt to the task-specific context and refine its performance.

Below is the sequence of steps required to achieve this.

  1. Defining Rules for Representing Information: Define clear rules for representing information in the Knowledge Graph. For example, define entities such as skills, experiences, education, etc., and their respective attributes like dates, locations, etc.

  2. Creating New Sentences: Once the LLM and Knowledge Graph are trained, use them to generate new sentences that demonstrate impact, quantify experience, and present more relevant experience upfront. These sentences can be generated based on the information extracted from the resumes and job descriptions.

  3. Demonstrating Impact: Use the LLM to generate sentences that demonstrate the impact of the candidate's previous work experience. For instance, "Led a team that increased sales by 20% within six months" or "Developed a software application that reduced customer complaints by 50%."

  4. Quantifying Experience: Use the Knowledge Graph to quantify the candidate's experience by generating sentences like "Has over 10 years of experience in project management" or "Managed teams of up to 50 people."

  5. Presenting Relevant Experience Upfront: Use both the LLM and Knowledge Graph to present the most relevant experience upfront. For example, generate sentences like "Gained extensive experience in digital marketing, resulting in a 30% increase in website traffic" or "Spearheaded several successful product launches, increasing revenue by 25%."

  6. Ensuring Accuracy and Consistency: To ensure accuracy and consistency, we have multiple models and human reviewers evaluate the generated sentences. This has helped identify any errors or biases in the output and improve the overall quality of the generated sentences.

By following this approach, organizations can integrate LLM and Knowledge Graph technology to create new sentences that effectively communicate the candidate's qualifications and experience, helping hiring managers make more informed decisions during the hiring process.


Try Improv for free and experience the joy of getting hired faster!


Summary


Combining knowledge graphs and large language models can significantly enhance enterprise generative AI capabilities. By leveraging these technologies, businesses can automate content creation, improve decision-making, and streamline operations. The target architecture specification outlined in this blog provides a comprehensive framework for developers to build and implement enterprise generative AI systems that combine KGs and LLMs. The architecture includes a proprietary knowledge graph, LLM, NLP engine, ML model, and integration layer, all working together to enable automatic detection of skills, grouping them against various candidate profiles, and correlating adjacent skills in an automated manner. The architecture also includes robust security measures to protect sensitive data and ensure compliance with regulatory requirements.


As the digital age continues to reshape industries and redefine professional landscapes, the traditional approaches to career services are no longer sufficient to guide individuals toward their desired destinations. This is where the transformative power of Artificial Intelligence (AI) steps onto the stage, revolutionizing the way we navigate our professional aspirations.


AI can significantly elevate career services for working professionals and students. Based on demand and the possibilities of the current state of technology, the following illustrate the art of the possible for transforming career services.



Personalized Career Assessments:


  • Utilize AI-powered assessment tools to evaluate students' strengths, skills, preferences, and career goals.

  • Provide individualized feedback and insights to help students align their aspirations with suitable career paths.


Customized Career Pathways:

  • Develop an AI-driven system that suggests tailored career paths based on students' assessments, program specialization, and industry trends.

  • Offer a range of potential roles, industries, and geographic locations to cater to diverse interests and backgrounds.


Support with Resume and LinkedIn Optimization:


  • AI-enabled enhancement of resumes and LinkedIn profiles, contextualized to each job description.

  • AI-generated cover letters, contextualized for each job description.


Skill Enhancement Recommendations:


  • Analyze students' current skill sets and compare them against desired job profiles.

  • Suggest relevant online courses, certifications, and skill-building activities to bridge any gaps.


Mock Interviews and Interview Preparation:


  • Provide virtual mock interview simulations using AI to simulate real interview scenarios.

  • Offer feedback and tips for improving interview skills, communication, and confidence.


Improving Recruiter Engagement:

  • Automated pre-screening of resumes prior to the job fair, to ensure recruiters target the right candidates.

  • AI-supported skill set assessments for technology roles, with simulations in the context of the job requirements.


Networking and Alumni Engagement:


  • Create an online platform that facilitates networking opportunities among students, alumni, and industry professionals.

  • Organize virtual networking events, webinars, and panel discussions to connect students with industry experts.


Career Workshops and Webinars:


  • Offer a series of online workshops and webinars on topics such as job search strategies, negotiation skills, personal branding, and industry insights.


Career Coaching and Counseling:


  • AI-powered career coaching: AI-powered career coaches could help students assess their skills and interests, develop a career plan, and prepare for job interviews. This could free up human career coaches to focus on more personalized services.


  • AI-powered mentorship: AI-powered mentors could provide students with guidance and support throughout their program. These mentors could be former students, successful professionals, or even AI-powered chatbots.


  • Provide virtual one-on-one career coaching sessions with experienced career advisors.


Continuous Learning and Upskilling:


  • Curate a repository of online resources, articles, and industry reports to keep students updated on market trends and emerging skills.

  • Recommend opportunities for continuous learning and professional development.


A comprehensive benchmark study across the top universities.


Top universities are already using AI to empower their career services and elevate the student experience.



How can iRekommend help enhance Career Services?


iRekommend, with its current set of capabilities, can help introduce AI into career services and enable it at scale, while lowering risk and improving the student and alumni experience.


1) Support with Resume and LinkedIn Optimization:


  • AI-enabled enhancement of resumes and LinkedIn profiles, contextualized to each job description.

  • AI-generated cover letters, contextualized for each job description.



Benefit: Students would have self-serve options to enhance their resumes and LinkedIn profiles and contextualize them for each job description, leaving them better prepared for interviews.


In the case of career fairs, students can optimize their resumes and LinkedIn profiles in advance to increase the likelihood of a job offer.


Target Metric:

Increase in the number of job offers per student.


 

2) Improving Recruiter engagement


Ask:

- Automated pre-screening of resumes prior to the job fair, to ensure recruiters target the right candidates.


Solution:


Benefit:

Recruiters will submit the roles in advance, and iRekommend can automatically identify the best-fit candidates for those roles. Recruiters can then engage with those candidates and encourage them to apply. 90% of the work can be automated.


Target metric:

Increase in the number of job offers per student.


 

3) Mock Interviews and Interview Preparation


Ask:

- AI-supported skill set assessments for technology roles, with simulations in the context of the job requirements

- Offer feedback and tips for improving interview skills, communication, and confidence.



Solution:

iRekommend partners with InterviewDesk to support skill set assessments and simulation assessments in the context of the job requirements.


 

4) Career Coaching and Counseling:


Ask:

  • AI-powered career coaching: AI-powered career coaches could help students assess their skills and interests, develop a career plan, and prepare for job interviews. This could free up human career coaches to focus on more personalized services.

Solution:

This is on the iRekommend roadmap for potential Q4 delivery.


Candidate sourcing and screening is the most critical aspect of the recruitment cycle. An organization may have dedicated recruiters to perform this activity or may choose to outsource it to a third-party recruitment agency. But recruitment agencies and in-house recruiters alike often struggle with candidate sourcing and screening.



Complicated workflow, mostly manual work

The entire process requires a lot of emails and Excel sheets, and there is significant information loss across the workflow.

We have mapped how the current process works and identified key bottlenecks across the entire value chain.


Because of this, recruiters spend a lot of time finding the right candidate. Candidate searching takes 45% of the entire recruitment effort. Even searching through applicants is an arduous task.


A few may also outsource it to agencies. However, the challenge in this process seems never-ending: mapping the skill sets required in the JD to those of candidates from a list of hundreds, if not thousands. Be it recruitment portals or an ATS, keywords are the main tool for carrying out candidate sourcing.


Applicant Tracking Systems (ATS) may have been part of the problem, producing a negative experience for both candidates and recruiters.



How do recruitment agencies find candidates?


Finding the right candidate for a job post can be a challenging task for recruiters. It is essential to source top talents while saving recruitment time, effort, and cost. Fortunately, there are several sourcing tools available where recruiters find candidates.



LinkedIn Recruiter as recruitment sourcing tool


LinkedIn Recruiter is a powerful sourcing tool for recruiters and one of the best talent sourcing tools available. It provides access to LinkedIn's vast network of professionals and allows recruiters to search for the best candidates based on specific criteria. With LinkedIn Recruiter, recruiters can easily connect with potential candidates, learn more about them, and develop relationships that build a long-term hiring pipeline. Using LinkedIn to find employees is one of the best ways to hire top talent.



While LinkedIn offers millions of resumes, identifying a suitable candidate requires significant manual effort. Pricing is another key factor: LinkedIn Recruiter may not be viable for SMB companies.



What happens when you apply for jobs on job boards or upload your resume to job sites?


Dice, Indeed, and ZipRecruiter are resume sourcing platforms that offer more than just traditional job postings and resume databases. They are excellent candidate sourcing tools for tech positions, and recruiters use them for features such as career fairs, webinars, and social recruiting. They are among the best sourcing websites for recruiters and come with candidate search tools.


However, their resume search is also keyword-centric and does not offer an excellent match. Recruiters have to understand and know how to use Boolean search to identify the right candidate. A recruiter for tech jobs needs to understand that "micro services" and "microservices" are synonymous, and that there is a subtle difference between a Python developer and an NLP developer.




The real problem - Too many resumes per job posting!


As you can see, there are many applicants per job posting on LinkedIn. It becomes very difficult for a recruiter to screen through resumes and identify the best candidates.



Solution - Candidates have to hyper-personalize their job search process.


The traditional spray-and-pray approach does not work anymore.


This shotgun approach might seem efficient, but it often backfires. Employers are looking for candidates who not only possess the required skills but also show genuine interest in the company and the specific role. By sending a generic resume to all, applicants miss the opportunity to highlight how their unique abilities align with the particular needs of each job. This lack of personalization can lead to the resume being overlooked, prolonging the job search and leading to frustration.


1. Lack of Personal Connection:

When you submit a generalized resume without tailoring it to the particular role, you miss the chance to connect with the employer. Recruiters often look for clues that indicate a candidate's genuine interest in their company and the specific role. By using a one-size-fits-all resume, applicants send the message that they are interested in a job, any job, rather than the job. This lack of personal connection can easily result in the resume being discarded in favor of other candidates who have taken the time to craft targeted applications.


2. Missing the Mark on Skills and Experience:

Different roles require different skills and experiences, even within the same industry. By using a generic resume, you may fail to emphasize the specific skills, qualifications, and experiences that make you the perfect fit for a particular position. As a result, your resume might not pass through Applicant Tracking Systems (ATS) that look for specific keywords related to the job, or it may be overlooked by human recruiters who don't see the unique value you could bring to that specific role.


3. Decreased Interview Chances:

Recruiters are adept at spotting generic resumes. When they see one, they are less likely to consider the applicant for an interview. A personalized resume shows that you have taken the time to research the company, understand its values, and are interested in contributing to its success. By not taking this step, you are not only reducing your chances of getting an interview but also missing the opportunity to prepare for potential interview questions related to the company's culture and mission.


4. Prolonging the Job Search Process:

While it might seem like sending the same resume to hundreds of employers would save time, this strategy often prolongs the job search process. The lack of success in receiving interview invitations can lead to frustration and disillusionment. Moreover, by not focusing on roles that genuinely align with your skills and career goals, you may miss out on opportunities that are the right fit for you, thus extending your job search unnecessarily.


Apply better using our tips. Check this link to learn more.






