I'm trying to run prediction for a medium-sized corpus (approx. 100k records), and the prediction speed is quite slow: it processes roughly 2-3 sentences per second. Is there any way to speed up the process?
I made a slight (admittedly naive) modification to predict.py to run over a file: it reads in a CSV, iterates through each document in it, and passes each one to the predict function.
import pandas as pd  # needed for read_csv / DataFrame below

pred_data = pd.read_csv(r"file.csv")
print("prediction data read")
result = []
for index, row in pred_data.iterrows():
    sentence = row['ColName']
    print(index)
    ids = data_loader.sentence2id(vocab, sentence)
    # Skip sentences longer than the model's maximum sequence length
    if len(ids) > Config.data.max_seq_length:
        print(f"Max length I can handle is: {Config.data.max_seq_length}")
        result.append(0)
        continue
    result.append(predict(ids))

pred_class = pd.DataFrame(result)
pred_class.to_csv(r"pred-sent.csv", index=False)
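The main cost in the loop above is calling predict once per sentence, which typically rebuilds or re-enters the TF graph/session for every call. A common fix is to batch the inputs and run the model once per batch. The sketch below only shows the batching pattern; predict_batch is a hypothetical stand-in for a model call that accepts a list of id sequences (the real project's predict takes one sentence at a time), and the token-splitting stands in for data_loader.sentence2id.

import pandas as pd

def predict_batch(id_batch):
    # Hypothetical batched model call: in the real code this would feed the
    # whole batch to the TF model in one session.run / estimator.predict pass.
    return [0 for _ in id_batch]  # stub output, one label per sentence

def batches(seq, size):
    # Yield successive chunks of `size` items from `seq`.
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

sentences = ["a b c", "d e", "f"]        # stand-in for the CSV column
ids = [s.split() for s in sentences]     # stand-in for sentence2id
results = []
for batch in batches(ids, 2):
    results.extend(predict_batch(batch))

pd.DataFrame(results).to_csv(r"pred-sent.csv", index=False)

With batching (and keeping a single session/Estimator alive across calls, e.g. by feeding all sentences through one input_fn instead of invoking predict per row), throughput usually improves by one to two orders of magnitude, though the exact gain depends on the model.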
I want to improve the performance but am not sure how. I'm quite new to TF, so it would be really helpful if you could point me in the right direction.