File: content/en/docs/eino/core_modules/flow_integration_components/multi_agent_hosting.md

```
HandOff to answer_with_journal with argument {"reason":"To find out the user's m
Answer:
You got up at 7:00 in the morning.
```
## FAQ
### Host direct answer does not have streaming effect

Host Multi-Agent provides a `StreamToolCallChecker` configuration used to determine whether the Host is outputting a direct answer.
In streaming mode, different models emit tool calls differently: some (e.g., OpenAI) output the tool call directly, while others (e.g., Claude) output text first and then the tool call, so the check has to be model-specific. This field specifies a function that determines whether the model's streaming output contains tool calls.
The field is optional. If it is not set, the default behavior is to check whether the first non-empty chunk ("package") contains tool calls:
```go
func firstChunkStreamToolCallChecker(_ context.Context, sr *schema.StreamReader[*schema.Message]) (bool, error) {
	defer sr.Close()

	for {
		msg, err := sr.Recv()
		if err == io.EOF {
			return false, nil
		}
		if err != nil {
			return false, err
		}

		if len(msg.ToolCalls) > 0 {
			return true, nil
		}

		if len(msg.Content) == 0 { // skip empty chunks at the front
			continue
		}

		return false, nil
	}
}
```
The default implementation above is applicable when the Tool Call Message output by the model contains only Tool Calls.
It is not applicable when a non-empty content chunk appears before the Tool Call. In that case, a custom tool call checker is required, for example:
386
+
387
+
```go
388
+
toolCallChecker:=func(ctx context.Context, sr *schema.StreamReader[*schema.Message]) (bool, error) {
389
+
defer sr.Close()
390
+
for {
391
+
msg, err:= sr.Recv()
392
+
if err != nil {
393
+
if errors.Is(err, io.EOF) {
394
+
// finish
395
+
break
396
+
}
397
+
398
+
returnfalse, err
399
+
}
400
+
401
+
iflen(msg.ToolCalls) > 0 {
402
+
returntrue, nil
403
+
}
404
+
}
405
+
returnfalse, nil
406
+
}
407
+
```
In the worst case, this custom `StreamToolCallChecker` has to inspect **all chunks** for the presence of Tool Calls, which loses the benefit of judging the stream early. If you want to preserve early, streaming judgment as much as possible, the suggested approach is:
Add a prompt that constrains the model not to emit extra text when making tool calls, for example: "If you need to call a tool, output the tool call directly without any accompanying text."
Different models respond to such prompts to different degrees; adjust the prompt and verify the effect with the model you actually use.
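To make the trade-off between the two checkers concrete, here is a self-contained sketch. It uses a mock message type rather than the real `schema.Message` (so it runs standalone, with hypothetical field and stream contents), and compares first-chunk checking against full-stream scanning on simulated OpenAI-style and Claude-style streams:

```go
package main

import "fmt"

// mockMsg stands in for schema.Message with only the fields the checkers read.
type mockMsg struct {
	Content   string
	ToolCalls []string
}

// firstChunkCheck mirrors the default checker's logic: decide from the first
// non-empty chunk of the stream.
func firstChunkCheck(stream []mockMsg) bool {
	for _, msg := range stream {
		if len(msg.ToolCalls) > 0 {
			return true
		}
		if len(msg.Content) == 0 { // skip empty chunks at the front
			continue
		}
		// First non-empty chunk is plain text: assume a direct answer.
		return false
	}
	return false
}

// fullScanCheck mirrors the custom checker's logic: scan every chunk.
func fullScanCheck(stream []mockMsg) bool {
	for _, msg := range stream {
		if len(msg.ToolCalls) > 0 {
			return true
		}
	}
	return false
}

func main() {
	// OpenAI-style stream: the tool call arrives in the first chunk.
	openAIStyle := []mockMsg{{ToolCalls: []string{"handoff"}}}
	// Claude-style stream: explanatory text precedes the tool call.
	claudeStyle := []mockMsg{{Content: "Let me check."}, {ToolCalls: []string{"handoff"}}}

	fmt.Println(firstChunkCheck(openAIStyle)) // true
	fmt.Println(firstChunkCheck(claudeStyle)) // false: the late tool call is missed
	fmt.Println(fullScanCheck(claudeStyle))   // true, but only after reading every chunk
}
```

The sketch shows why the default checker is fast (one chunk decides) but wrong for text-first models, while the full scan is correct but must consume the stream before deciding.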
The Host selects Specialists via Tool Calls, so it may select several Specialists at once as a list of Tool Calls. In that case, the Host Multi-Agent routes the request to all selected Specialists concurrently. Once they complete their tasks, the Summarizer node condenses their Messages into a single Message, which becomes the final output of the Host Multi-Agent.
Users can customize the Summarizer's behavior by configuring it with a ChatModel and a SystemPrompt. If no Summarizer is specified, the Host Multi-Agent concatenates the output Message Contents of the Specialists and returns the result.
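The default (no Summarizer configured) fallback can be sketched as follows. This is a standalone illustration, not the actual Eino implementation: the mock type, the `answer_with_calendar` specialist, and the newline separator are all assumptions made here for the example.

```go
package main

import (
	"fmt"
	"strings"
)

// specialistOutput stands in for one Specialist's final message.
type specialistOutput struct {
	Specialist string
	Content    string
}

// defaultSummarize mimics the documented fallback: with no Summarizer
// configured, the Contents of the Specialists' messages are concatenated
// into one result (joined with newlines here for illustration; the actual
// separator is an implementation detail).
func defaultSummarize(outputs []specialistOutput) string {
	parts := make([]string, 0, len(outputs))
	for _, o := range outputs {
		parts = append(parts, o.Content)
	}
	return strings.Join(parts, "\n")
}

func main() {
	outputs := []specialistOutput{
		{Specialist: "answer_with_journal", Content: "You got up at 7:00 in the morning."},
		{Specialist: "answer_with_calendar", Content: "Your first meeting is at 9:30."},
	}
	fmt.Println(defaultSummarize(outputs))
}
```

Configuring a Summarizer with a ChatModel and SystemPrompt replaces this plain concatenation with a model-generated summary of the Specialists' outputs.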