Merged

Changes from all commits
53 commits
069163a
feat: integrate job recommendation functionality into API response
Linchen-Xu Dec 1, 2024
5d389f2
doc: modify ci_be
Linchen-Xu Dec 1, 2024
49518a7
doc: modify ci_be
Linchen-Xu Dec 1, 2024
54c4996
doc: modify ci_be
Linchen-Xu Dec 1, 2024
cc68e16
doc: modify ci_be
Linchen-Xu Dec 1, 2024
e3211ab
doc: modify ci_be
Linchen-Xu Dec 1, 2024
e843f68
doc: modify ci_be
Linchen-Xu Dec 1, 2024
8900e38
doc: modify ci_be
Linchen-Xu Dec 1, 2024
90b5f0a
doc: modify ci_be
Linchen-Xu Dec 1, 2024
343c7a3
doc: modify ci_be
Linchen-Xu Dec 1, 2024
b054bdc
Merge 343c7a33993b3a76c03504424c4896335a83869b into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
7707775
fix:
Linchen-Xu Dec 1, 2024
4e9e2e4
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
0bdc1a1
Merge 4e9e2e42f650eb153e10ed38e22e7a72ea86466c into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
4bb8bed
fix: automatically retrieve URI
Linchen-Xu Dec 1, 2024
c277233
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
d6130fa
Merge c2772334c3b75c61c8c5460b3b22bf6bf54c8372 into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
bc4c760
fix: automatically retrieve URI
Linchen-Xu Dec 1, 2024
a23feb7
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
96827c2
Merge a23feb74a107151de906b414ca18439a388f223f into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
44c40db
fix: automatically retrieve URI
Linchen-Xu Dec 1, 2024
e97ad19
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
c888edc
Merge e97ad19907de9690406d31469a06cc31bb86755c into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
211f63d
fix: automatically retrieve URI
Linchen-Xu Dec 1, 2024
036b29f
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
1e862db
Merge 036b29f4e7d99e33ba6273c32432101877ac62b3 into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
48312d3
fix: automatically retrieve URI
Linchen-Xu Dec 1, 2024
7641b70
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
6a2c65e
Merge 7641b70e60032c9b5a8b28b9a1471ec42b9852e4 into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
ef32850
fix: automatically retrieve URI
Linchen-Xu Dec 1, 2024
f5a720b
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
1e4e391
Merge f5a720b41688ed392e414b25d8fe563d593efe34 into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
85fdf36
fix: automatically retrieve URI
Linchen-Xu Dec 1, 2024
fc6d377
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
f36c4a4
Merge fc6d377b3eadfdb80c7e3f8566c326e462d9af90 into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
11f977d
fix: increase wait time
Linchen-Xu Dec 1, 2024
60bf2b1
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
591ae7a
Merge 60bf2b1b74bbef16032ab1757b6e69b6c77689ef into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
7f307e2
fix: increase wait time
Linchen-Xu Dec 1, 2024
8c712a3
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
70cbf1f
Merge 8c712a39a90bb32bf1834011181611a43a21e42f into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
51d87dd
fix: increase wait time
Linchen-Xu Dec 1, 2024
adf9e45
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
7acab26
Merge adf9e454e62c996ece05564392c6a341d82749b9 into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
e5e33aa
fix: increase wait time
Linchen-Xu Dec 1, 2024
f4199b1
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
515eea4
Merge f4199b179c32748317c65631e347160970eb43be into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
82e4a06
fix: increase wait time
Linchen-Xu Dec 1, 2024
5ad1c98
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
98a64dd
Merge 5ad1c983b4c4464af1086457d1ef5b243687578d into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
aa625d4
fix: increase wait time
Linchen-Xu Dec 1, 2024
000dacb
Merge remote-tracking branch 'origin/be/xlc/integrate_job_recommendat…
Linchen-Xu Dec 1, 2024
f5401ae
Merge 000dacb279ce1e3ca0a747e79cfc03e79f856616 into b9f9d0343856b8afd…
Linchen-Xu Dec 1, 2024
10 changes: 7 additions & 3 deletions .github/workflows/ci_be.yml
@@ -62,10 +62,14 @@ jobs:
run: |
sed -i '/<!-- Pytest Coverage Comment:Begin -->/,/<!-- Pytest Coverage Comment:End -->/c\<!-- Pytest Coverage Comment:Begin -->\n\${{ steps.coverageComment.outputs.coverageHtml }}\n<!-- Pytest Coverage Comment:End -->' ../README.md

- name: Debug current branch
run: |
echo "head ref: ${{ github.head_ref }} "
echo "ref name: ${{ github.ref_name }} "

- name: Commit & Push changes to Readme
uses: actions-js/push@master
uses: ad-m/github-push-action@master
with:
branch: |
${{ github.ref_name == 'main' && 'dev/backend' || github.ref_name.startsWith('be/') && github.ref_name || 'dev/backend' }}
branch: ${{ github.head_ref || github.ref_name }}
message: Update coverage on Readme
github_token: ${{ secrets.GITHUB_TOKEN }}
33 changes: 29 additions & 4 deletions be_repo/app.py
@@ -9,6 +9,7 @@
from configs.database import get_resume_database, get_user_database
from graphs.qa_graph import create_graph
from modules.evaluator import evaluate_resume, evaluate_resume_with_jd
from modules.job_recommendation_system import job_recommend
from modules.langgraph_qa import get_answer_from_langgraph
from modules.upload import upload_parse_resume

@@ -159,9 +160,17 @@ def ask_question():
return jsonify({"error": "No user ID provided."}), 400
if not question:
return jsonify({"error": "No question provided."}), 400
# Load resume from database
resume = resume_collection.find_one({"user_id": user_id})
if not resume:
return jsonify({"error": "No resume found for this user."}), 404

resume_text = resume.get('resume_text', '')
if not resume_text:
return jsonify({"error": "Resume text is empty."}), 400

# Get answer using LangGraph
response = get_answer_from_langgraph(qa_graph, resume_collection, user_state_collection, user_id, question)
response = get_answer_from_langgraph(qa_graph, resume_text, user_state_collection, user_id, question)

return jsonify({"response": response}), 200

@@ -202,8 +211,17 @@ def interview_question_suggestion():
if not user_id:
return jsonify({"error": "No user ID provided."}), 400

# Load resume from database
resume = resume_collection.find_one({"user_id": user_id})
if not resume:
return jsonify({"error": "No resume found for this user."}), 404

resume_text = resume.get('resume_text', '')
if not resume_text:
return jsonify({"error": "Resume text is empty."}), 400

# Get answer using LangGraph
response = get_answer_from_langgraph(qa_graph, resume_collection, user_state_collection, user_id, prompt)
response = get_answer_from_langgraph(qa_graph, resume_text, user_state_collection, user_id, prompt)

return jsonify({"response": response}), 200

@@ -218,10 +236,17 @@ def job_suggestion():
if not user_id:
return jsonify({"error": "No user ID provided."}), 400

# Load resume from database
resume = resume_collection.find_one({"user_id": user_id})
if not resume:
return jsonify({"error": "No resume found for this user."}), 404

# Get answer using LangGraph
response = 'Example response'
resume_text = resume.get('resume_text', '')
if not resume_text:
return jsonify({"error": "Resume text is empty."}), 400

return jsonify({"response": response}), 200
return jsonify({"response": job_recommend(resume_text, user_id)}), 200


if __name__ == '__main__':
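Note: the same three-step lookup (fetch the resume document, check it exists, check that resume_text is non-empty) is now repeated in ask_question, interview_question_suggestion, and job_suggestion. A small helper along these lines would keep the three endpoints consistent; this is only a sketch that reuses Flask's jsonify as already imported in app.py, and the helper name and return convention are assumptions, not part of this PR:

    from flask import jsonify

    def load_resume_text(resume_collection, user_id):
        """Return (resume_text, error_response); exactly one of the two is None.

        Sketch only: mirrors the lookup repeated in the three endpoints above.
        """
        resume = resume_collection.find_one({"user_id": user_id})
        if not resume:
            return None, (jsonify({"error": "No resume found for this user."}), 404)
        resume_text = resume.get('resume_text', '')
        if not resume_text:
            return None, (jsonify({"error": "Resume text is empty."}), 400)
        return resume_text, None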
72 changes: 26 additions & 46 deletions be_repo/modules/job_recommendation_system.py
@@ -1,75 +1,55 @@
# job_recommendation_system.py

from neo4j_model import Neo4jModel
from resume_processor import ResumeProcessor
from retrieval_engine import RetrievalEngine
from recommendation_generator import RecommendationGenerator
from view import CLIView
import sys
import logging

def main():


# Redirect standard output to a file
sys.stdout = open('output.log', 'w')

# Your code here
print("Lots of output")
from .neo4j_model import Neo4jModel
from .recommendation_generator import RecommendationGenerator
from .resume_processor import ResumeProcessor
from .retrieval_engine import RetrievalEngine
from .view import CLIView


def job_recommend(resume_text, user_id):
# Setup Logging
import logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)


# Get Resume Input from User
if not resume_text.strip():
logger.error(f'No resume text provided, user_id: {user_id}.')
return 'Error: No resume text provided.'

# Neo4j Connection Details
NEO4J_URI = "neo4j+ssc://7bf5a48e.databases.neo4j.io" # Replace with your Neo4j URI
NEO4J_USERNAME = "neo4j" # Replace with your Neo4j username
NEO4J_PASSWORD = "oxsK7V5_86emZlYQlvCfQHfVWS95wXz29OhtU8GAdFc" # Replace with your Neo4j password
NEO4J_USERNAME = "neo4j" # Replace with your Neo4j username
NEO4J_PASSWORD = "oxsK7V5_86emZlYQlvCfQHfVWS95wXz29OhtU8GAdFc" # Replace with your Neo4j password

# Initialize Model
neo4j_model = Neo4jModel(
uri=NEO4J_URI,
username=NEO4J_USERNAME,
password=NEO4J_PASSWORD
)

# Initialize Controller Components
resume_processor = ResumeProcessor()
retrieval_engine = RetrievalEngine(resume_processor, neo4j_model)
recommendation_generator = RecommendationGenerator()

# Initialize View
view = CLIView()

# Get Resume Input from User
resume_text = view.get_resume_input()

if not resume_text.strip():
logger.error("No resume text provided.")
print("Error: No resume text provided.")
return

# Perform Mixed Retrieval for 'JD' Node Label
node_label = "JD" # Adjust as needed; could be dynamic based on user input or other criteria
similar_docs, graph_results = retrieval_engine.perform_mixed_retrieval(resume_text, node_label=node_label)


# Perform Mixed Retrieval
similar_docs, graph_results = retrieval_engine.perform_mixed_retrieval(resume_text, node_label='JTitle')

if not similar_docs and not graph_results:
print("No job recommendations found based on your resume.")
return

return 'No job recommendations found based on your resume.'

# Generate Recommendations
try:
recommendations = recommendation_generator.generate_recommendations(similar_docs, graph_results)
except Exception as e:
print("Error: Failed to generate job recommendations.")
return

# Display Recommendations
view.display_recommendations(recommendations)
return 'Error: Failed to generate job recommendations.'

# Close the file
sys.stdout.close()

if __name__ == "__main__":
main()
# Display Recommendations
return view.display_recommendations(recommendations)
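The Neo4j URI, username, and password remain hardcoded in job_recommend (the inline comments still read "Replace with your ..."). A minimal sketch of reading them from environment variables instead; the variable names are assumptions, not part of this PR:

    import os

    # Sketch only: environment-variable names are assumptions, not part of this PR.
    NEO4J_URI = os.environ.get("NEO4J_URI", "neo4j+ssc://7bf5a48e.databases.neo4j.io")
    NEO4J_USERNAME = os.environ.get("NEO4J_USERNAME", "neo4j")
    NEO4J_PASSWORD = os.environ["NEO4J_PASSWORD"]  # fail fast if the secret is missing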
33 changes: 15 additions & 18 deletions be_repo/modules/langgraph_qa.py
@@ -1,27 +1,24 @@
def get_answer_from_langgraph(qa_graph, resume_collection, user_state_collection, user_id, question):
resume = resume_collection.find_one({"user_id": user_id})
def get_answer_from_langgraph(qa_graph, resume_text, user_state_collection, user_id, question):
user_state = user_state_collection.find_one({"user_id": user_id})
resume_text = resume.get('resume_text', '')
state = user_state.get('state', '')
thread_id = user_state.get('thread_id', '')

config = {"configurable": {"thread_id": user_id + thread_id}}

# If state is 0, send resume to LLM first
if state == '0':
events = qa_graph.stream(
{"messages": [("user", resume_text)]}, config, stream_mode="values"
)
# Update state to 1
new_state = {
"user_id": user_id,
"state": '1',
"thread_id": thread_id
}
user_state_collection.replace_one({"user_id": user_id}, new_state, upsert=True)
for event in events:
if event["messages"][-1].type == "ai":
print('User ask for the first time!')
# if state == '0':
events = qa_graph.stream(
{"messages": [("user", resume_text)]}, config, stream_mode="values"
)
# Update state to 1
new_state = {
"user_id": user_id,
"state": '1',
"thread_id": thread_id
}
user_state_collection.replace_one({"user_id": user_id}, new_state, upsert=True)
for event in events:
if event["messages"][-1].type == "ai":
print('User ask for the first time!')
# Then send the question
events = qa_graph.stream(
{"messages": [("user", question)]}, config, stream_mode="values"
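With the state == '0' check commented out, the resume is streamed to the graph on every call and the stored state is rewritten to '1' each time. For reference, the user-state document this module reads and writes has the shape below; the field names come from the code above, the values are illustrative only:

    example_user_state = {
        "user_id": "u123",    # illustrative value
        "state": "1",         # '0' before the resume has been sent, '1' afterwards
        "thread_id": "t1",    # combined with user_id to form the LangGraph thread id
    }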
29 changes: 13 additions & 16 deletions be_repo/modules/retrieval_engine.py
@@ -1,12 +1,10 @@
# retrieval_engine.py

from langchain_neo4j import GraphCypherQAChain
from langchain_openai import ChatOpenAI
from langchain.chains.retrieval import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from configs.openai_key import get_openai_api_key # New import
from langchain.chains.retrieval import create_retrieval_chain
from langchain.prompts import PromptTemplate


class RetrievalEngine:
def __init__(self, resume_processor, neo4j_model):
"""
@@ -21,7 +19,7 @@ def __init__(self, resume_processor, neo4j_model):

# Initialize Language Model (already initialized in Neo4jModel)
self.llm = self.neo4j_model.llm

# Initialize GraphCypherQAChain (already initialized in Neo4jModel)
self.graph_chain = self.neo4j_model.get_graph_chain()

@@ -53,12 +51,12 @@ def __init__(self, resume_processor, neo4j_model):
{context}
\"\"\"
""",
input_variables=["input"]
input_variables=["input"]
)

# Create a documents chain
# Create a documents chain
self.combine_docs_chain = create_stuff_documents_chain(self.llm, prompt=prompt)

# Initialize Retrieval Chain
# Default node_label is 'JD'; can be adjusted as needed
self.retrieval_chain = create_retrieval_chain(
@@ -79,13 +77,13 @@ def perform_mixed_retrieval(self, resume_text, node_label="JD"):
"""
# Process resume into a Document
doc = self.resume_processor.process_resume(resume_text)

if not doc:
return [], {}

# Store the Document in the appropriate vector store
self.neo4j_model.store_documents([doc], node_label=node_label)

# Access the schema property correctly
schema = self.neo4j_model.graph.get_schema

@@ -94,16 +92,15 @@ def perform_mixed_retrieval(self, resume_text, node_label="JD"):
similar_docs = similar_docs_result.get("output", [])
print("similar_docs_result:", similar_docs_result)
print("Keys in similar_docs_result:", similar_docs_result.keys())



for doc in similar_docs:
print("Document Metadata:", doc.metadata)

query = f"Based on the following resume, recommend relevant job positions: {resume_text}"
query = (f"Based on the following resume, recommend relevant job positions based on skills and experience, "
f"while ignoring the location: {resume_text}")
graph_response = self.graph_chain.invoke({"query": query, "schema": schema})
# After graph query
print("Graph Response:")
print(graph_response)
return similar_docs, graph_response

return similar_docs, graph_response
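For context, perform_mixed_retrieval is consumed by job_recommend roughly as follows; a sketch assuming the constructors shown in this PR and an already-initialized neo4j_model:

    # Sketch of the call site, mirroring job_recommendation_system.py above.
    resume_processor = ResumeProcessor()
    engine = RetrievalEngine(resume_processor, neo4j_model)

    # Vector-similarity hits plus a Cypher QA answer over the graph schema.
    similar_docs, graph_results = engine.perform_mixed_retrieval(resume_text, node_label='JTitle')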
8 changes: 4 additions & 4 deletions be_repo/modules/view.py
@@ -28,8 +28,8 @@ def display_recommendations(self, recommendations):
Display job recommendations to the user.
"""
if not recommendations:
print("No job recommendations found based on your resume.")
return
print("\nRecommended Jobs for You:")
return 'No job recommendations found based on your resume.'
res = '\nRecommended Jobs for You:\n'
for idx, job in enumerate(recommendations, start=1):
print(f"{idx}. {job}")
res += f'{idx}. {job}\n'
return res
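display_recommendations now returns the formatted string instead of printing it, so the Flask layer can JSON-encode it directly. A minimal usage sketch (the job titles are made up):

    view = CLIView()
    text = view.display_recommendations(["Backend Engineer", "Data Scientist"])
    # text == '\nRecommended Jobs for You:\n1. Backend Engineer\n2. Data Scientist\n'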
32 changes: 18 additions & 14 deletions be_repo/requirements.txt
@@ -1,18 +1,22 @@
Flask==3.0.3
chromedriver_autoinstaller==0.6.4
Flask==3.1.0
Flask_Cors==5.0.0
openai==1.54.4
pandas==2.2.3
langchain==0.3.9
langchain_community==0.3.8
langchain_neo4j==0.1.1
langchain_openai==0.2.10
langgraph==0.2.53
neo4j==5.27.0
openai==1.55.3
pandas==1.5.3
protobuf==3.20.2
pymongo==4.10.1
PyPDF2==3.0.1
qdrant_client==1.12.0
pytest==8.3.4
PyVirtualDisplay==3.0
qdrant_client==1.12.1
selenium==4.27.1
tqdm==4.66.2
selenium==4.26.1
google-auth==2.36.0
google-auth-oauthlib==1.2.1
chromedriver-autoinstaller==0.6.4
langgraph==0.2.48
langchain==0.3.7
langsmith==0.1.143
langchain_community==0.3.7
langchain-openai==0.2.8
pyvirtualdisplay
google-auth>=2.0.0
google-auth-oauthlib>=0.4.0
google-auth-httplib2>=0.1.0
2 changes: 1 addition & 1 deletion be_repo/tests/test_e2e.py
@@ -42,7 +42,7 @@ def driver():

@pytest.fixture
def wait(driver):
return WebDriverWait(driver, 30)
return WebDriverWait(driver, 50)


def upload_resume(driver, wait):
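The wait fixture now allows up to 50 seconds per explicit wait. A typical consumer looks like the sketch below; the locator and test name are assumptions, not taken from the test file:

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC

    def test_upload_control_visible(driver, wait):
        # Waits up to 50 s (the new fixture timeout) for the element to appear.
        element = wait.until(EC.presence_of_element_located((By.ID, "resume-upload")))
        assert element.is_displayed()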
4 changes: 2 additions & 2 deletions fe_repo/src/App.tsx
@@ -108,7 +108,7 @@ function App() {
};
// show loading message
setMessages((messages) => [...messages, {
text: 'Suggest jobs',
text: 'Suggested jobs',
isUser: true
}, sendingMessage]);
// show loading message
@@ -118,7 +118,7 @@ } else {
} else {
sendingMessage.text = (
<>
<div className="font-bold">Suggested questions:<br/>{response.response}</div>
<div className="font-bold">Suggested jobs:<br/>{response.response}</div>
</>
);
}
2 changes: 1 addition & 1 deletion fe_repo/src/functions/api.test.ts
@@ -1,5 +1,5 @@
import {expect, test} from 'vitest';
import {analyze, getHasResume, getUserId, login, sendMessage, suggest, uploadFile, suggestJob} from "./api.ts";
import {analyze, getHasResume, getUserId, login, sendMessage, suggest, suggestJob, uploadFile} from "./api.ts";

test('sendMessage', () => {
expect(sendMessage("I'm adam")).toBeDefined();