Hi all,
I'm loading a BERT model to use it for a semantic search service. Loading the model works fine and fast, but when my script reaches the point where it encodes the input sentence, it hangs and never returns.
Here is the part I'm talking about:
def getRecommends(text1):
    print("calculate_recommends")
    rel_questions = []
    rel_links = []
    query = text1
    print("encode_sent")
    query_vec = model.encode([query], show_progress_bar=True)[0]  # <--- STOPS WORKING HERE
    print("encoded_sent")
    # compute normalized dot product as score
    score = np.sum(query_vec * doc_vecs, axis=1) / np.linalg.norm(doc_vecs, axis=1)
    topk_idx = np.argsort(score)[::-1][:4]
    for idx in topk_idx:
        rel_questions.append(questions[idx])
        rel_links.append(answers[idx])
    # for a in len(rel_questions):
    print("calculated recommends")
    return rel_questions, rel_links
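For reference, the ranking step (everything after the encode call) can be tested in isolation with made-up vectors, so the similarity/top-k logic is ruled out while debugging the hang. The arrays below are hypothetical 4-dimensional stand-ins, not real BERT embeddings (those are typically 768-dimensional). Note that full cosine similarity also divides by the query vector's norm; that constant factor doesn't change the ranking, but it keeps scores in [-1, 1]:

```python
import numpy as np

# Hypothetical stand-ins for the encoded corpus and query.
doc_vecs = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
query_vec = np.array([1.0, 0.0, 0.0, 0.0])

# Full cosine similarity: normalize by BOTH norms.
score = (doc_vecs @ query_vec) / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)

# Indices of the 4 highest-scoring documents, best first.
topk_idx = np.argsort(score)[::-1][:4]
print(topk_idx)
```

If this runs instantly while the real function stalls, the problem is isolated to the `model.encode` call rather than the NumPy scoring.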
I'm using the sentence_transformers package with a base model in this example.
Thank you :)