Document version: 1.0 | Date: April 2026
A step-by-step guide from zero to a working AI agent in 10 minutes.
- Installing LM Studio
- Downloading a model
- Starting the local server
- Configuring the Unity project
- Running the scene and sending a command
- Verifying the result
- What’s next?
- Go to https://lmstudio.ai
- Download the installer for your OS (Windows / macOS / Linux)
- Install and launch
💡 LM Studio is a free tool for running LLMs locally.
It provides an OpenAI-compatible HTTP API.
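Because the API is OpenAI-compatible, any OpenAI-style client can talk to it once the server is running (started in the steps below). A minimal Python sketch, assuming the default LM Studio endpoint at `http://localhost:1234/v1` and a loaded model named `qwen3.5-4b`:

```python
import json
import urllib.request

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.2,
    }

def send_chat_request(base_url: str, payload: dict) -> str:
    """POST the payload to an OpenAI-compatible server, return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    payload = build_chat_request("qwen3.5-4b", "Say hello")
    # Requires the LM Studio server to be running:
    # print(send_chat_request("http://localhost:1234/v1", payload))
    print(payload["messages"][0]["content"])
```

The same request shape works against any backend in this guide (LM Studio, OpenAI, DashScope); only the base URL, key, and model name change.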
| Requirement | Minimum | Recommended |
|---|---|---|
| RAM | 8 GB | 16 GB+ |
| GPU (VRAM) | 4 GB | 8 GB+ |
| Disk | 5 GB (for a 4B model) | 20 GB+ |
| Model | Size | Tool calling | Best for |
|---|---|---|---|
| ⭐ Qwen3.5-4B (Q4_K_M) | ~2.5 GB | ✅ Excellent | Best starting point |
| Qwen3.5-2B (Q4_K_M) | ~1.5 GB | | Weaker hardware |
| Gemma 4 26B | ~15 GB | ✅ Excellent | Powerful hardware |
| Qwen3.5-35B MoE | ~20 GB | ✅ Excellent | Production |
- In LM Studio, click 🔍 Search (or the search icon)
- Enter `Qwen3.5-4B`
- Pick the GGUF build with Q4_K_M quantization
- Click Download and wait for it to finish
📦 Download size: ~2.5 GB for Qwen3.5-4B Q4_K_M
Download time: 2–5 minutes (depends on your connection)
- Open the 💬 Chat tab (or Local Server)
- In the top dropdown, select the downloaded model (Qwen3.5-4B)
- Wait until it loads (the status line shows “Model loaded”)
- Open the 🖥️ Local Server tab (the <-> icon)
- Click Start Server
- Confirm status: Server running on port 1234
✅ Server is running!
URL: http://localhost:1234/v1
Model: Qwen3.5-4B-Q4_K_M
Status: Ready
Open PowerShell and run:

```powershell
# Check the server
Invoke-RestMethod -Uri "http://localhost:1234/v1/models"

# Sample request
$body = @{
    model    = "qwen3.5-4b"
    messages = @(@{ role = "user"; content = "Say hello" })
} | ConvertTo-Json -Depth 3

Invoke-RestMethod -Uri "http://localhost:1234/v1/chat/completions" `
    -Method POST -Body $body -ContentType "application/json"
```

- Unity Hub → Add → select the `CoreAI` folder
- Open the project (Unity 6000.0+)
Menu: CoreAI → Development → Open _mainCoreAI scene
Or in the Project window:
Assets/CoreAiUnity/Scenes/_mainCoreAI.unity
- In the Project window find: `Assets/CoreAiUnity/Resources/CoreAISettings.asset`
- Or create one: Create → CoreAI → CoreAI Settings
- In the Inspector configure:
┌─────────────────────────────────────────────┐
│ CoreAI Settings │
│ │
│ 🎯 LLM Backend: [OpenAiHttp] ▼ │
│ │
│ 🌐 HTTP API: │
│ Base URL: http://localhost:1234/v1 │
│ API Key: (empty) │
│ Model: qwen3.5-4b │
│ Temperature: 0.2 │
│ Max Tokens: 4096 │
│ Timeout: 120 │
│ │
│ ⚙️ General: │
│ LLM Timeout: 30 │
│ Max Concurrent: 2 │
│ │
│ [🔗 Test Connection] │
│ │
└─────────────────────────────────────────────┘
Click 🔗 Test Connection in the Inspector.
Expected result:
✅ HTTP API: Connected
Model: qwen3.5-4b
Response: "OK"
Latency: 0.3s
In Unity, click Play (▶).
In the Unity Console you should see:
[CoreAI] VContainer + MessagePipe... ready.
[CoreAI] Backend: OpenAiHttp → http://localhost:1234/v1
[CoreAI] Registered tools: memory, execute_lua, world_command, get_inventory, ...
Option A: From your own script

```csharp
using CoreAI;
using UnityEngine;
using VContainer;

public class MyGameController : MonoBehaviour
{
    [Inject] private IAiOrchestrationService _orchestrator;

    async void Start()
    {
        // Ask the Programmer agent to generate Lua
        await _orchestrator.RunTaskAsync(new AiTaskRequest
        {
            RoleId = "Programmer",
            Hint = "Write a Lua script that reports 'Hello from AI!'"
        });
        Debug.Log("✅ AI task completed!");
    }
}
```

Option B: Via hotkey (already on the scene)

- In Play Mode, press F9
- That invokes the Programmer agent via `CoreAiLuaHotkey`
- Check the logs:
[LLM ▶] traceId=abc123 role=Programmer
[LLM ◀] traceId=abc123 312 tokens, 1.8s
[Lua] Execution succeeded: "Hello from AI!"
Option C: Create a custom agent

```csharp
// Create a merchant — three lines!
var merchant = new AgentBuilder("Merchant")
    .WithSystemPrompt("You are a friendly weapon merchant. Greet customers warmly.")
    .WithTool(new InventoryLlmTool(myInventory))
    .WithMemory()
    .Build();

merchant.ApplyToPolicy(CoreAIAgent.Policy);

// Send a message:
merchant.Ask("Show me swords", (response) => {
    Debug.Log($"Merchant: {response}");
});
```

┌─ Unity Console ──────────────────────────────────────────────┐
│ │
│ [CoreAI] Backend: OpenAiHttp → http://localhost:1234/v1 │
│ [LLM ▶] traceId=abc123 role=Programmer │
│ [LLM ◀] traceId=abc123 247 tokens, 1.2s │
│ [MEAI] Tool call detected: name=execute_lua │
│ [Lua] Executing: report("Hello from AI!") │
│ [Lua] Execution succeeded │
│ ✅ AI task completed! │
│ │
└───────────────────────────────────────────────────────────────┘
| Issue | Quick fix |
|---|---|
| Backend: Stub | Confirm LM Studio is running |
| Connection refused | Check port 1234 in LM Studio |
| Empty response | Raise Timeout to 120 s |
| Tool call not recognized | Model too small; use 4B+ |
📖 Details: TROUBLESHOOTING.md
| Task | Document |
|---|---|
| Build your own agent | AGENT_BUILDER.md |
| Control the world from Lua | WORLD_COMMANDS.md |
| Configure memory | MemorySystem.md |
| Add a custom tool | TOOL_CALL_SPEC.md |
| Understand architecture | DEVELOPER_GUIDE.md |
| Roles and prompts | AI_AGENT_ROLES.md |
| Browse examples | EXAMPLES.md |
- Build an NPC merchant with inventory → CHAT_TOOL_CALLING.md
- Craft weapons via CoreMechanicAI → EXAMPLES.md
- Spawn enemies via World Commands → WORLD_COMMANDS.md
- Run tests → Window → Test Runner → EditMode → Run All
✅ LM Studio installed
✅ Model downloaded (Qwen3.5-4B Q4_K_M)
✅ Server running on port 1234
✅ Unity project open
✅ _mainCoreAI scene loaded
✅ CoreAISettings → Backend = OpenAiHttp
✅ CoreAISettings → Base URL = http://localhost:1234/v1
✅ Test Connection = ✅ Connected
✅ Play → F9 → "Hello from AI!" in the logs
🎉 Done! Move on to building your own agents.
To run a model inside Unity (no external server):
- CoreAISettings → Backend = LlmUnity (or Auto)
- On the scene, find LlmManager with `LLM` + `LLMAgent`
- In the LLM Inspector, download a model (LLMUnity Download button)
- Press Play
⚠️ LLMUnity runs the model inside the Unity process — slower, but no external tools.
CoreAISettings → Backend = OpenAiHttp
Base URL: https://api.openai.com/v1
API Key: sk-xxxxxxxxxxxxx
Model: gpt-4o-mini
or
CoreAISettings → Backend = OpenAiHttp
Base URL: https://dashscope.aliyuncs.com/compatible-mode/v1
API Key: sk-xxxxxxxxxxxxx
Model: qwen-max
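Since every backend above speaks the same OpenAI-style protocol, switching providers is just a matter of swapping the base URL, API key, and model name in CoreAISettings. A hypothetical helper sketch (the `PROVIDERS` table and `backend_settings` function are illustrative only, not part of CoreAI; keys are placeholders):

```python
# Illustrative mapping from provider name to OpenAI-compatible settings.
PROVIDERS = {
    "lmstudio": {
        "base_url": "http://localhost:1234/v1",
        "api_key": "",  # local server needs no key
        "model": "qwen3.5-4b",
    },
    "openai": {
        "base_url": "https://api.openai.com/v1",
        "api_key": "sk-xxxxxxxxxxxxx",  # placeholder
        "model": "gpt-4o-mini",
    },
    "dashscope": {
        "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "api_key": "sk-xxxxxxxxxxxxx",  # placeholder
        "model": "qwen-max",
    },
}

def backend_settings(provider: str) -> dict:
    """Return the values to enter into CoreAISettings for a given provider."""
    if provider not in PROVIDERS:
        raise KeyError(f"Unknown provider: {provider}")
    return PROVIDERS[provider]
```

The client code (and the CoreAI side, configured via CoreAISettings) stays the same regardless of which entry you pick.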
🚀 CoreAI — make your game smarter. One agent at a time.