src/DTO/ChatData.php (6 additions, 6 deletions)
@@ -108,12 +108,12 @@ private function validateXorFields(array $params): void
      * See LLM Parameters (https://openrouter.ai/docs#parameters) for following:
      */
     public ?int $max_tokens = 1024; // Range: [1, context_length) The maximum number of tokens that can be generated in the completion. Default 1024.
-    public ?int $temperature; // Range: [0, 2] Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
-    public ?int $top_p; // Range: (0, 1] An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
-    public ?int $top_k; // Range: [1, Infinity) Not available for OpenAI models
-    public ?int $frequency_penalty; // Range: [-2, 2] Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
-    public ?int $presence_penalty; // Range: [-2, 2] Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
-    public ?int $repetition_penalty; // Range: (0, 2]
+    public ?float $temperature; // Range: [0, 2] Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
+    public ?float $top_p; // Range: (0, 1] An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
+    public ?float $top_k; // Range: [1, Infinity) Not available for OpenAI models
+    public ?float $frequency_penalty; // Range: [-2, 2] Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
+    public ?float $presence_penalty; // Range: [-2, 2] Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
+    public ?float $repetition_penalty; // Range: (0, 2]
     public ?int $seed; // OpenAI only. This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
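The effect of widening these properties from `?int` to `?float` can be shown with a minimal sketch. The class below is a reduced stand-in containing only the properties from the hunk, not the full DTO; how `ChatData` is actually constructed elsewhere in the library is an assumption here.

```php
<?php
declare(strict_types=1);

// Reduced stand-in for the ChatData DTO: only the properties
// touched by this diff are reproduced.
class ChatData
{
    public ?int $max_tokens = 1024;           // Range: [1, context_length)
    public ?float $temperature = null;        // Range: [0, 2]
    public ?float $top_p = null;              // Range: (0, 1]
    public ?float $top_k = null;              // Range: [1, Infinity)
    public ?float $frequency_penalty = null;  // Range: [-2, 2]
    public ?float $presence_penalty = null;   // Range: [-2, 2]
    public ?float $repetition_penalty = null; // Range: (0, 2]
    public ?int $seed = null;                 // OpenAI only
}

$chat = new ChatData();
$chat->temperature = 0.8; // accepted now that the type is ?float;
                          // with the old ?int declaration this assignment
                          // would throw a TypeError under strict_types=1
$chat->top_p = 0.9;
$chat->frequency_penalty = -0.5;
```

This matters because the documented ranges for these parameters are fractional (e.g. temperature 0.8, top_p in (0, 1]); with `?int`, any non-integer value would either be rejected (strict mode) or silently coerced, losing precision.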