
CoreAi — static API for everyone

A single class, CoreAI.CoreAi, is the one entry point to the LLM and orchestrator. You do not need to know VContainer, write your own singleton, or resolve services in the scene by hand.

| Audience | What you get |
| --- | --- |
| Beginner | Copy the 3 steps below, then await CoreAi.AskAsync(...) in a script on an object. |
| Experienced developer | The same static API for prototypes and UI; for larger architecture, TryGet* + DI (see Professional stack). |

Minimum for beginners (3 steps)

  1. Scene with CoreAI: menu CoreAI → Setup → Create Chat Demo Scene (full UI demo), or CoreAI → Setup → Create Bare Scene for advanced use (CoreAILifetimeScope + assets only).
  2. Backend — in CoreAISettings set HTTP (LM Studio) or LLMUnity; see QUICK_START.
  3. Code — on any MonoBehaviour:
using CoreAI;

public class MyNpc : MonoBehaviour
{
    async void OnPlayerTalk()
    {
        if (!CoreAi.IsReady) { Debug.LogWarning("No CoreAILifetimeScope in the scene"); return; }

        string reply = await CoreAi.AskAsync("How are you?");
        Debug.Log(reply);
    }
}

Streaming "like a chat" in a single loop:

await foreach (string part in CoreAi.StreamAsync("Tell me about the quest", "SmartChat"))
    uiLabel.text += part;

Done. The "SmartChat" role must match AgentBuilder / chat config if you configured agents.

Sending messages: convenient in UI and from code

| How you interact | What you press / write | Where the request goes |
| --- | --- | --- |
| Chat window (CoreAiChatPanel) | Send button; Shift+Enter (default) or Enter, depending on CoreAiChatConfig.SendOnShiftEnter | CoreAiChatService → same ILlmClient as CoreAi |
| Script (NPC, quest, "Ask" button) | CoreAi.AskAsync("text") or CoreAi.StreamAsync (this is how a user request reaches the LLM) | The same CoreAiChatService inside CoreAi |

Both paths use the one CoreAILifetimeScope registered in the scene and one backend configuration. The only difference is UX: in the panel you type into a field; in code you pass a string to a method. Styles/streaming/roles: see README_CHAT and STREAMING_ARCHITECTURE.
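For the code path, a minimal sketch of an "Ask" button (illustrative: the class name and the Button / TMP_Text references are assumptions, not part of the package):

using CoreAI;
using TMPro;
using UnityEngine;
using UnityEngine.UI;

public class AskButton : MonoBehaviour
{
    [SerializeField] private Button askButton;     // hypothetical scene references
    [SerializeField] private TMP_Text replyLabel;

    void Awake() => askButton.onClick.AddListener(OnAsk);

    // async void is fine here: the click handler runs on the Unity main thread.
    async void OnAsk()
    {
        if (!CoreAi.IsReady) return;                      // no CoreAILifetimeScope in the scene
        replyLabel.text = await CoreAi.AskAsync("Hi!");   // same backend as CoreAiChatPanel
    }
}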

Summary: for the player in chat, use the built-in panel; for game logic without a widget, use CoreAi. Together CoreAI + CoreAiUnity cover "convenient everywhere": a demo scene in one click, hotkeys in the Inspector, and one line of CoreAi on any MonoBehaviour.


Quick cheat sheet (all methods)

| Method | Returns | When to use |
| --- | --- | --- |
| AskAsync | Task<string?> | You need the full answer as one string (logic, save, simple NPC). |
| StreamAsync | IAsyncEnumerable<string> | Live text in UI (label, TMP, UI Toolkit). |
| StreamChunksAsync | IAsyncEnumerable<LlmStreamChunk> | You need IsDone, Error, usage per chunk. |
| SmartAskAsync | Task<string?> | Both a stream to UI and the full string at the end (analytics, quests). Stream mode follows the settings hierarchy. |
| OrchestrateAsync | Task<string?> | Full game pipeline: session snapshot, authority, queue, validation, publishing a command to the bus. |
| OrchestrateStreamAsync | IAsyncEnumerable<LlmStreamChunk> | Same, but tokens as they generate, plus a final publish after the stream. |
| OrchestrateStreamCollectAsync | Task<string> | Stream + assemble full text + onChunk for UI. |
| StopAgent | void | Cancel generation and running agent tasks. |
| ClearContext | void | Clear memory (chat + long-term). |
| IsReady | bool | Whether the API can be called (scope + services). |
| Invalidate() | void | After a scene change or in tests: clear the cache. |
| TryGetChatService / TryGetOrchestrator | bool | No exceptions: check before a UI button or optional AI. |
| GetChatService / GetOrchestrator / GetSettings | services | Direct access when you need full control. |

Detailed scenarios — section When to use what below.


1. For beginners: common questions

Why does it fail, or why does nothing happen?
Ensure the active scene has a GameObject with CoreAILifetimeScope. After LoadScene, call CoreAi.Invalidate() or check CoreAi.IsReady / CoreAi.TryGetChatService(out _).

How is AskAsync different from OrchestrateAsync?

  • AskAsync: chat. Prompt + role history, returns a text answer.
  • OrchestrateAsync: game task. Session snapshot, roles like Creator, publishes a JSON command to the game bus. For "talk to an NPC" you usually use Ask / Stream.

Never block the Unity main thread with CoreAi.AskAsync(...).Result, .GetAwaiter().GetResult(), or .Wait(). The LLM stack uses player-loop / ToolInvocationMarshaler / HTTP paths that can deadlock when the managed main thread blocks while a thread-pool continuation is waiting for it. (In v1.5.14, tool marshaling skips SwitchToMainThread in Edit Mode (!isPlaying) specifically to keep Test Runner / tooling safe; still avoid blocking the main thread in gameplay.) Use await instead, e.g. async void or UniTask on UI events.

Can I call outside async void?
You can: from Start with StartCoroutine plus a wrapper (see the sketch below), but async void on the Unity main thread, or UniTask, is simpler. Do not use Task.Run: LLM calls must stay on the main thread (see STREAMING_ARCHITECTURE).
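A minimal wrapper sketch (assuming nothing beyond CoreAi.AskAsync): start the task on the main thread, then poll it from a coroutine:

using System.Collections;
using CoreAI;
using UnityEngine;

public class NpcGreeting : MonoBehaviour
{
    IEnumerator Start()
    {
        // The async call starts on the main thread; the coroutine only polls it.
        var ask = CoreAi.AskAsync("Greet the player");
        yield return new WaitUntil(() => ask.IsCompleted);
        if (ask.IsCompletedSuccessfully)
            Debug.Log(ask.Result);   // already completed, so .Result does not block
    }
}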

Where do I get roleId?
The same id as in AgentBuilder("...") and CoreAiChatConfig. Often "SmartChat".


2. When to use what (in detail)

| Layer | Method | What it does | When to pick it |
| --- | --- | --- | --- |
| Chat | CoreAi.AskAsync | Waits for the full answer; chat history by role. | Simple dialogue, log, "one line". |
| Chat | CoreAi.StreamAsync | String chunks. | Caption / chat, "typing" effect. |
| Chat | CoreAi.StreamChunksAsync | LlmStreamChunk with metadata. | Errors, IsDone, tokens. |
| Chat | CoreAi.SmartAskAsync | Chooses stream or not; onChunk + full text. | UI + saving the full answer. |
| Orchestrator | CoreAi.OrchestrateAsync | Snapshot → prompt → authority → queue → validation → ApplyAiGameCommand. | Creator / Programmer agents, scenarios with commands. |
| Orchestrator | CoreAi.OrchestrateStreamAsync | Same, but tokens along the way. | Long quest text + a command at the end. |
| Orchestrator | CoreAi.OrchestrateStreamCollectAsync | Stream + accumulate string + onChunk. | Combine live UI and a post-processed string. |

3. Code recipes

3.1. Simple question — one line

string answer = await CoreAi.AskAsync("Hi! How old are you?");
Debug.Log(answer);

3.2. Streaming in UI Toolkit / TextMeshPro

label.text = "";
await foreach (string chunk in CoreAi.StreamAsync("Tell a joke", "SmartChat"))
    label.text += chunk;
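
3.2b. Chunk metadata: per-chunk errors and completion

A sketch for when you need IsDone / Error from the cheat sheet; the name of the text member on LlmStreamChunk is an assumption, check the type for the exact API:

await foreach (LlmStreamChunk chunk in CoreAi.StreamChunksAsync("Tell a joke", "SmartChat"))
{
    if (chunk.Error != null) { Debug.LogError(chunk.Error); break; }   // error surfaced per chunk
    label.text += chunk.Text;    // ASSUMPTION: text member name; verify on LlmStreamChunk
    if (chunk.IsDone) Debug.Log("stream finished");
}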

3.3. Smart: chunks in UI and full text in a variable

string full = await CoreAi.SmartAskAsync(
    "Tell a story",
    roleId: "SmartChat",
    onChunk: c => label.text += c);

SaveToPlayerJournal(full);

Override streaming: pass uiStreamingOverride: false to force the full response in one piece, as shown below.
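For example, reusing the call above (assuming uiStreamingOverride is a parameter of SmartAskAsync, as the note implies):

string full = await CoreAi.SmartAskAsync(
    "Tell a story",
    roleId: "SmartChat",
    uiStreamingOverride: false);   // no chunk streaming: the full answer arrives in one piece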

3.4. Safe call (no exception when AI is absent)

if (CoreAi.TryGetChatService(out var chat))
{
    // ct: your CancellationToken (e.g. destroyCancellationToken in Unity 2022.2+)
    string reply = await chat.SendMessageAsync("Hi", "SmartChat", ct);
}
else
{
    // AI disabled or scene without scope — show default NPC text
}

3.4b. Agent control API

// Stop generation (e.g. Stop button in UI)
CoreAi.StopAgent("SmartChat");

// Clear chat history but keep long-term memory (facts, quests)
CoreAi.ClearContext("SmartChat", clearChatHistory: true, clearLongTermMemory: false);

// Full hard reset (amnesia)
CoreAi.ClearContext("SmartChat", clearChatHistory: true, clearLongTermMemory: true);

3.5. Orchestrator: command into the game

var task = new AiTaskRequest
{
    RoleId = "Creator",
    Hint = "Generate JSON spawn command",
    Priority = 5,
    CancellationScope = "creator"
};

string json = await CoreAi.OrchestrateAsync(task);

3.6. Orchestrator with stream to a status line

var task = new AiTaskRequest { RoleId = "Creator", Hint = "Explain the step" };

string full = await CoreAi.OrchestrateStreamCollectAsync(task,
    onChunk: c => statusLine.text += c);

4. Lifecycle and scenes

  • CoreAi caches a reference to CoreAILifetimeScope and services.
  • On SceneManager.sceneLoaded / OnDestroy on scene unload, call CoreAi.Invalidate(); otherwise you may keep a stale container (see the sketch after this list).
  • EditMode / PlayMode tests — in [SetUp]: CoreAi.Invalidate().
  • GetSettings() — may return null if the scope is not ready yet; for global defaults also use static CoreAISettings from the portable core if configured.
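
A minimal sketch of the scene hook mentioned above (illustrative; this class is not shipped with the package):

using CoreAI;
using UnityEngine;
using UnityEngine.SceneManagement;

public static class CoreAiSceneHook
{
    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.AfterSceneLoad)]
    static void Install()
    {
        // Drop the cached scope and services on every scene load so CoreAi
        // re-resolves them from the fresh CoreAILifetimeScope.
        SceneManager.sceneLoaded += (scene, mode) => CoreAi.Invalidate();
    }
}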

5. Professional stack

The static API is not an "anti-pattern" in CoreAI: it is the official facade over VContainer. It:

  • forwards calls to CoreAiChatService and IAiOrchestrationService without duplicating logic;
  • respects the same ILlmClient, queue, logs, and metrics as manual resolution.

When to keep CoreAi everywhere: prototypes, tools, scene MonoBehaviours, menu buttons, tutorial scenes.

When to inject interfaces (DI): large codebase, unit tests without a scene, multiple scopes, strict module isolation. Pattern:

// Registration (in your LifetimeScope)
builder.Register<QuestAiController>(Lifetime.Transient)
    .WithParameter<Func<CoreAiChatService?>>(() => {
        if (CoreAi.TryGetChatService(out var s)) return s;
        return null;
    });
// or
builder.Register<QuestAiController>(Lifetime.Transient)
    .WithParameter<ILlmClient>(c => c.Resolve<ILlmClient>());

CoreAi.GetChatService() remains a convenient adapter at the “object script ↔ core” boundary.

Extending behavior: register a wrapper in the container. If the wrapper is the same type that CoreAiChatService.TryCreateFromScene expects, you may need an explicit registration. For fine-grained control, use a direct IObjectResolver in your LifetimeScope and call the services from there (see the sketch below); the CoreAi facade stays valid for the default path.
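
A sketch of the direct-resolver route (assuming standard VContainer APIs and that the CoreAI services are visible to your scope):

using CoreAI;
using VContainer;
using VContainer.Unity;

public class MyGameScope : LifetimeScope
{
    protected override void Configure(IContainerBuilder builder)
    {
        // Runs once the container is built: resolve CoreAI services directly
        // instead of going through the CoreAi facade.
        builder.RegisterBuildCallback(resolver =>
        {
            var chat = resolver.Resolve<CoreAiChatService>();
            // wire `chat` into your own wrapper / decorator here
        });
    }
}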


6. Main thread (required)

// OK: from a MonoBehaviour, on the main thread
async void OnEnable()
{
    await foreach (var c in CoreAi.StreamAsync("Hi"))
        t.text += c;   // t: your TMP_Text / Label reference
}

// DO NOT: worker thread + UnityWebRequest risks deadlocks and thread-affinity errors
_ = Task.Run(() => _ = CoreAi.AskAsync("x"));

7. Related docs

| Document | Contents |
| --- | --- |
| QUICK_START | Install, scene, backend |
| README_CHAT | Chat panel, styles, events |
| STREAMING_ARCHITECTURE | SSE, LLMUnity, orchestrator stream, limits |
| DOCS_INDEX | Full documentation map |

Version: see Assets/CoreAiUnity/package.json. In the release changelogs, CoreAi changes are listed under Singleton API / Orchestrator streaming.