Description
We got the results below for the reward after training the RL model and the RL+LLM model (we used LLaMA instead of GPT-4, as you suggested).
When training with the LLM, it executed up to step 41 and finally received the reward, but the LLM accepted every decision of the RL agent and always judged it reasonable.
However, the reward went from -187165.88 (RL) to -2.29 (RL+LLM). Is the reward we are getting correct?
Also, how do we get the mean travel time, mean waiting time, and mean speed values?
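
For context, here is a minimal sketch of how we are currently assuming those three metrics could be collected, assuming the environment is a SUMO simulation driven through TraCI. The config path and variable names are placeholders and not taken from this repo, so please correct us if the project computes them differently.

```python
import traci

# Placeholder SUMO config path; the actual scenario file may differ.
traci.start(["sumo", "-c", "net.sumocfg"])

depart_time = {}      # vehicle id -> departure time (s)
travel_times = []     # completed trip durations (s)
waiting_time = {}     # vehicle id -> accumulated waiting time so far (s)
speed_samples = []    # per-vehicle, per-step speeds (m/s)

# Run until no vehicles remain in (or are still waiting to enter) the network.
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    now = traci.simulation.getTime()
    for vid in traci.simulation.getDepartedIDList():
        depart_time[vid] = now
    for vid in traci.simulation.getArrivedIDList():
        if vid in depart_time:
            travel_times.append(now - depart_time.pop(vid))
    for vid in traci.vehicle.getIDList():
        speed_samples.append(traci.vehicle.getSpeed(vid))
        waiting_time[vid] = traci.vehicle.getAccumulatedWaitingTime(vid)

traci.close()

mean_travel_time = sum(travel_times) / max(len(travel_times), 1)
mean_waiting_time = sum(waiting_time.values()) / max(len(waiting_time), 1)
mean_speed = sum(speed_samples) / max(len(speed_samples), 1)
print(f"mean travel time: {mean_travel_time:.2f} s, "
      f"mean waiting time: {mean_waiting_time:.2f} s, "
      f"mean speed: {mean_speed:.2f} m/s")
```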