
Commit 1fe7833

Merge pull request #25 from moe-mizrak/fix/int-fields-to-float
int fields in ChatData fixed as float
2 parents: d8bd272 + 3b91399

File tree

2 files changed: +12 −12 lines changed

README.md

Lines changed: 6 additions & 6 deletions
@@ -89,12 +89,12 @@ The `ChatData` class is used to encapsulate the data required for making chat re
 #### LLM Parameters
 These properties control various aspects of the generated response (more [info](https://openrouter.ai/docs#parameters)):
 - **max_tokens** (int|null): The maximum number of tokens that can be generated in the completion. Default is 1024.
-- **temperature** (int|null): A value between 0 and 2 controlling the randomness of the output.
-- **top_p** (int|null): A value between 0 and 1 for nucleus sampling, an alternative to temperature sampling.
-- **top_k** (int|null): A value between 1 and infinity for top-k sampling (not available for OpenAI models).
-- **frequency_penalty** (int|null): A value between -2 and 2 for penalizing new tokens based on their existing frequency.
-- **presence_penalty** (int|null): A value between -2 and 2 for penalizing new tokens based on whether they appear in the text so far.
-- **repetition_penalty** (int|null): A value between 0 and 2 for penalizing repetitive tokens.
+- **temperature** (float|null): A value between 0 and 2 controlling the randomness of the output.
+- **top_p** (float|null): A value between 0 and 1 for nucleus sampling, an alternative to temperature sampling.
+- **top_k** (float|null): A value between 1 and infinity for top-k sampling (not available for OpenAI models).
+- **frequency_penalty** (float|null): A value between -2 and 2 for penalizing new tokens based on their existing frequency.
+- **presence_penalty** (float|null): A value between -2 and 2 for penalizing new tokens based on whether they appear in the text so far.
+- **repetition_penalty** (float|null): A value between 0 and 2 for penalizing repetitive tokens.
 - **seed** (int|null): A value for deterministic sampling (OpenAI models only, in beta).
 #### Function-calling
 Only natively supported by OpenAI models. For others, we submit a YAML-formatted string with these tools at the end of the prompt.
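The motivation for this type change can be demonstrated with plain PHP typed properties. The sketch below uses illustrative stand-in classes (not the package's own code) to show why `?int` was the wrong declaration for fractional sampling parameters such as `temperature`:

```php
<?php
declare(strict_types=1);

// Illustrative stand-ins, not the package's classes: a typed property
// declared ?int cannot hold a fractional sampling value like 0.7.
class IntTyped   { public ?int $temperature = null; }
class FloatTyped { public ?float $temperature = null; }

$ok = new FloatTyped();
$ok->temperature = 0.7; // accepted: fractional values are valid floats

$bad = new IntTyped();
try {
    $bad->temperature = 0.7; // under strict_types this throws a TypeError
} catch (TypeError $e) {
    echo "int-typed property rejected 0.7\n";
}
```

Without `strict_types`, PHP would instead coerce 0.7 toward an integer, silently losing the fractional part of the parameter, which is why declaring these fields as `?float` is the correct fix.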

src/DTO/ChatData.php

Lines changed: 6 additions & 6 deletions
@@ -108,12 +108,12 @@ private function validateXorFields(array $params): void
      * See LLM Parameters (https://openrouter.ai/docs#parameters) for following:
      */
     public ?int $max_tokens = 1024; // Range: [1, context_length) The maximum number of tokens that can be generated in the completion. Default 1024.
-    public ?int $temperature; // Range: [0, 2] Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
-    public ?int $top_p; // Range: (0, 1] An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
-    public ?int $top_k; // Range: [1, Infinity) Not available for OpenAI models
-    public ?int $frequency_penalty; // Range: [-2, 2] Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
-    public ?int $presence_penalty; // Range: [-2, 2] Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
-    public ?int $repetition_penalty; // Range: (0, 2]
+    public ?float $temperature; // Range: [0, 2] Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
+    public ?float $top_p; // Range: (0, 1] An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
+    public ?float $top_k; // Range: [1, Infinity) Not available for OpenAI models
+    public ?float $frequency_penalty; // Range: [-2, 2] Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
+    public ?float $presence_penalty; // Range: [-2, 2] Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
+    public ?float $repetition_penalty; // Range: (0, 2]
     public ?int $seed; // OpenAI only. This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
 
     // Function-calling
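After the fix, fractional sampling values assign cleanly while integer-only fields stay `?int`. The following is a minimal, self-contained stand-in that mirrors only the property declarations visible in the diff above (the real `ChatData` in src/DTO/ChatData.php carries additional fields and validation); defaults of `null` are added here so the sketch runs on its own:

```php
<?php
declare(strict_types=1);

// Stand-in mirroring the corrected declarations from the diff above;
// not the package's actual DTO, which has more fields and validation.
class ChatData
{
    public ?int $max_tokens = 1024;      // Range: [1, context_length)
    public ?float $temperature = null;   // Range: [0, 2]
    public ?float $top_p = null;         // Range: (0, 1]
    public ?float $top_k = null;         // Range: [1, Infinity)
    public ?float $frequency_penalty = null;  // Range: [-2, 2]
    public ?float $presence_penalty = null;   // Range: [-2, 2]
    public ?float $repetition_penalty = null; // Range: (0, 2]
    public ?int $seed = null;            // OpenAI only, beta
}

$data = new ChatData();
$data->temperature = 0.8;       // fractional values now type-check
$data->top_p = 0.95;
$data->frequency_penalty = -0.5;
$data->seed = 42;               // seed remains an integer field
```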
