Fix edit docs link #675
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Open
Its-Just-Nans wants to merge 54 commits into smallcloudai:dev from Its-Just-Nans:fix-edit-docs
Conversation
* un-disable input when limit reached.
* chore: add `compression_strength` to tool messages
* add paused state to thread
* add hook to pause auto send based on compression.
* ui: let the user know that their chat is being compressed.
* fix: linter issues after removing `limitReached` information call out.
* fix: also use `/links` to decide if a new chat should be suggested.
* refactor: remove `useTotalTokenUsage` hook.
* add comments about `newChatSuggested`.
* pause and unpause using `newChatSuggested`.
* fix(NewChatSuggested): use a hook to get the compression strength.
* feat: add second condition for pausing the chat.
* case: it might be possible to have many messages without compression.
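The pause-on-compression flow described in these commits could be sketched roughly as follows. The `CompressionStrength` values, the field names, and the rule that strong compression also suggests a new chat are assumptions inferred from the commit messages, not the PR's actual API:

```typescript
// Hypothetical sketch: derive the thread's paused state and the
// "suggest a new chat" flag from the compression strength reported
// on tool messages. Names are illustrative, not the PR's real types.
type CompressionStrength = "absent" | "low" | "medium" | "high";

interface ThreadFlags {
  paused: boolean;           // auto-send is paused while compression is active
  newChatSuggested: boolean; // prompt the user to start a fresh chat
}

function nextThreadFlags(strength: CompressionStrength): ThreadFlags {
  return {
    paused: strength !== "absent",
    newChatSuggested: strength === "high",
  };
}
```

A hook like the pause-auto-send hook mentioned above would presumably recompute these flags whenever a new tool message arrives.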
…ting them to last assistant message
it was only used in telemetry, and it was replaced by IntoResponse
* wip: remove attach file checkbox.
* feat: attach files button.
* test: active file is no longer attached by default.
* add an event for the ide to attach a file to chat.
* fix: remove attached files after submit.
* fix: auto attach file.
* add attached files to command preview request.
* test: attach file on new chat test
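One plausible shape for the IDE-to-chat attach-file flow and the clear-on-submit behavior mentioned above; the payload fields and function names here are guesses, not the PR's actual contract:

```typescript
// Hypothetical sketch of the attach-file flow: the IDE pushes a file
// into a pending list, and submitting the chat drains that list so
// files are not re-attached on the next message.
interface AttachedFile {
  path: string;
  content: string;
}

const pendingFiles: AttachedFile[] = [];

function onIdeAttachFile(file: AttachedFile): void {
  pendingFiles.push(file);
}

// Returns the files to include with the submitted message and clears
// the pending list, matching "remove attached files after submit".
function drainAttachedFiles(): AttachedFile[] {
  const files = [...pendingFiles];
  pendingFiles.length = 0;
  return files;
}
```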
* init
* fixes
* remove redirect
* next
* models from litellm info
* fetch providers and models
* providers improvements
* add model modal changes
* 3rd party model setup fixes
* api fixes
* renames
* api refactor
* don't show Available Chat Models
* fix adding models
* some fixes
* update api
* ann enabled to backend, fixes
* remove unneeded code
* fixes
* inference wip
* UI/models cleanup
* available models
* thirdparty utils
* don't use passthrough minidb
* refactor
* caps rebuild wip
* finetune_info is always list
* thirdparty models in caps
* small fixes
* more fixes
* spad config in model db
* tokenizer endpoints
* telemetry_endpoints
* caps v2 parsing
* update migration from v2 to internal v1
* prompt based chat
* embeddings -> embedding
* fix embeddings
* embeddings fix
* model config instead of str
* ui support buttons
* all providers
* show providers without models at all
* optional api key
* custom config
* ThirdPartyModel improvement
* no custom models for providers
* custom model config
* custom model for no-predefined providers only
* update model fixes
* api base
* fixes
* inference name
* api key is not required
* api key or dummy
* refactor of api
* to dict
* another dict conversion
* UI rework
* apiConfig update
* continue refactor
* continue refactor
* remove custom badge
* UI fixes
* badges fix
* fix provider name
* api keys UI
* show api keys
* remove api_keys from api
* remove api_keys from UI
* add custom provider
* fixes in chat/completions
* remove provider logic
* remove unneeded stuff
* remove commented old style api keys
* validation initial
* caps validation fixes
* continue
* hide api base for non custom providers
* container for extended configuration
* api key selection fixes
* collapsible advanced options
* initial tokenizers API
* ui initial
* implement tokenizer upload and get
* create tokenizers dir
* update tokenizer ui
* move thirdparty from hamburger
* fix twice click
* api enhance
* fixes
* fix api
* fix tokenizers modal
* tokenizer select in model
* get tokenizer logic
* 3rdparty utils refactor
* refactor tokenizers api
* fix upload
* default tokenizers
* ui tokenizers defaults and uploaded
* rework tokenizers section
* move scc into .css file
* enhance UI, wip
* tokenizers UI improve
* update setup, up version
* required tokenizer
* migration
* fix circular imports
* migration fixes
* another fix
* another fix 2
* migration fixes 3
* reasoning params and migration
* reasoning UI
* default tokenizer
* oops
* add custom build workflow
* remove deprecated models
* fix setup.py
* ui improvements
* model name for custom
* backend validation
* fixes
* model name validation
* gpt4o at the top
* set tokenizer if has default
* up caps version if 3rdparty updated
* remove old n_ctx arg
* static caps data
* fix caps
* backward compat with prev lsp versions
* up lsp version due to new style caps for server support
* fix parse version call
* add missed customization and default system prompt
* embedding models args

Co-authored-by: Kirill Starkov <starkov.kirill123@gmail.com>
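Going by the commit messages ("optional api key", "api key or dummy", "required tokenizer", "api base"), the third-party provider records might look roughly like this. Every field name below is a guess rather than the server's actual schema:

```typescript
// Hypothetical shapes for third-party providers and their models.
interface ThirdPartyModel {
  name: string;
  tokenizer: string;                        // "required tokenizer"
  enabled: boolean;
  apiBase?: string;                         // shown only for custom providers
  reasoningParams?: Record<string, unknown>; // "reasoning params and migration"
}

interface ThirdPartyProvider {
  name: string;
  apiKey?: string;                          // "api key is not required"
  models: ThirdPartyModel[];
}

// "api key or dummy": fall back to a placeholder when none is configured.
function resolveApiKey(provider: ThirdPartyProvider): string {
  return provider.apiKey && provider.apiKey.length > 0
    ? provider.apiKey
    : "dummy";
}
```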
…o cancellation of request does not lead to zombie processes
* gemini 25 pro and chatgpt 4o
* chatgpt4o has no tools
Last command done (1 command done):
  pick ab05f48 fix: limit the number of tokens a chat can use.
Next commands to do (6 remaining commands):
  pick 405bf45 fix: linter errors.
  pick 2098b4f fix: dispatch information callout and disable input.
You are currently rebasing branch 'fix-conflicts' on '8f8a0078'.
Changes to be committed:
  modified: refact-agent/gui/src/components/ChatForm/ChatForm.tsx
  modified: refact-agent/gui/src/hooks/index.ts
  modified: refact-agent/gui/src/hooks/useSendChatRequest.ts
  new file: refact-agent/gui/src/hooks/useTotalTokenUsage.ts
* add forceReload event on any tool result.
* rename forceReload event
* chore: rename forceReload
* fix: compression stop; only stop the chat when it's compressed and more than 40 messages have passed since the last user message.
* fix: combine compression and chat suggestion to use similar logic.
* enable send if the user dismisses the request to start a new chat
* fix: open chat suggestion box when restoring a large chat.
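The stop condition described above (compressed AND more than 40 messages since the last user message) could be sketched like this; the type and function names are illustrative, not the PR's code:

```typescript
// Sketch of the two-part stop condition: pause only when the chat is
// compressed AND more than 40 messages have arrived since the last
// user message.
interface ChatMessage {
  role: "user" | "assistant" | "tool";
}

function shouldPauseChat(
  messages: ChatMessage[],
  isCompressed: boolean,
): boolean {
  const lastUserIndex = messages.map((m) => m.role).lastIndexOf("user");
  const sinceLastUser =
    lastUserIndex === -1
      ? messages.length
      : messages.length - 1 - lastUserIndex;
  return isCompressed && sinceLastUser > 40;
}
```

Requiring both conditions avoids pausing a chat that was merely compressed once but where the user is still actively participating.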
* n_ctx from model assigner
* models_dict_patch
* fix missed fields and patch pass
@Its-Just-Nans thanks for the fix! Could you resolve all conflicts (change the base branch to dev)?
f4dd267 to 55e7fc6 (compare)
done