Commit fcf2d21

Merge pull request #9 from KodyKendall/feat/send-agent-messages-from-background
Feat/send agent messages from background
2 parents f267d3b + 8421473 commit fcf2d21

16 files changed: 2332 additions & 162 deletions


Gemfile

Lines changed: 1 addition & 0 deletions
@@ -24,6 +24,7 @@ group :development, :test do
   gem 'capybara'
   gem 'selenium-webdriver'
   gem 'webdrivers'
+  gem 'webmock'
   gem 'pry-rails'
   gem 'pry-byebug'
 end
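
The new webmock dependency in the development/test group suggests specs now stub outbound HTTP calls to the LlamaBot backend instead of hitting a live server. A minimal sketch of that pattern, assuming RSpec; the spec file, the `/hello` endpoint (taken from the README health check), and the `Net::HTTP` call are illustrative and not part of this commit:

```ruby
# spec/llama_bot_backend_stub_spec.rb -- hypothetical spec, not part of this commit
require "webmock/rspec"
require "net/http"

RSpec.describe "stubbing the LlamaBot backend" do
  it "returns a canned response instead of hitting the network" do
    # /hello mirrors the README's health-check endpoint; adjust the URL to
    # whatever your code actually calls.
    stub_request(:get, "http://localhost:8000/hello")
      .to_return(status: 200, body: "Hello, World!")

    body = Net::HTTP.get(URI("http://localhost:8000/hello"))
    expect(body).to eq("Hello, World!")
  end
end
```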

Gemfile.lock

Lines changed: 12 additions & 3 deletions
@@ -1,12 +1,12 @@
 PATH
   remote: .
   specs:
-    llama_bot_rails (0.1.2)
-      actioncable (>= 7.0, < 9.0)
+    llama_bot_rails (0.1.7)
+      actioncable (>= 6.0, < 9.0)
       async
       async-http
       async-websocket
-      rails (>= 7.0, < 9.0)
+      rails (>= 6.0, < 9.0)

 GEM
   remote: https://rubygems.org/
@@ -129,6 +129,9 @@ GEM
       fiber-annotation
       fiber-local (~> 1.1)
       json
+    crack (1.0.0)
+      bigdecimal
+      rexml
     crass (1.0.6)
     date (3.4.1)
     diff-lcs (1.6.2)
@@ -148,6 +151,7 @@ GEM
     fiber-storage (1.0.1)
     globalid (1.2.1)
       activesupport (>= 6.1)
+    hashdiff (1.2.0)
     i18n (1.14.7)
       concurrent-ruby (~> 1.0)
     io-console (0.8.0)
@@ -371,6 +375,10 @@ GEM
       nokogiri (~> 1.6)
       rubyzip (>= 1.3.0)
       selenium-webdriver (~> 4.0, < 4.11)
+    webmock (3.25.1)
+      addressable (>= 2.8.0)
+      crack (>= 0.3.2)
+      hashdiff (>= 0.4.0, < 2.0.0)
     websocket (1.2.11)
     websocket-driver (0.8.0)
       base64
@@ -406,6 +414,7 @@ DEPENDENCIES
   sprockets-rails
   sqlite3
   webdrivers
+  webmock

 BUNDLED WITH
    2.6.9

README.md

Lines changed: 19 additions & 20 deletions
@@ -36,26 +36,11 @@ bundle add llama_bot_rails
 # 2. Install the routes & chat interface
 rails generate llama_bot_rails:install

-# 3. Clone & run the LangGraph backend
-git clone https://github.com/kodykendall/llamabot
-
-cd llamabot
-
-# 4. Set up your environment
-python3 -m venv venv
-
-source venv/bin/activate
-
-pip install -r requirements.txt
-
-echo "OPENAI_API_KEY=your_openai_api_key_here" > .env
-
-# 5. Run the agent
-cd backend
-uvicorn app:app --reload
-
-# 6. Confirm our agent is running properly. You should see: Hello, World! 🦙💬
-curl http://localhost:8000/hello
+# 3. Run the LlamaBot backend easily with Docker
+docker run \
+  -e OPENAI_API_KEY=(your-key) \
+  -p 8000:8000 \
+  kody06/llamabot-backend

 # 7. Start your Rails server.
 rails server
@@ -81,6 +66,20 @@ open http://localhost:3000/llama_bot/agent/chat

 ---

+## ⚙️ Rails Integration Note (for LlamaBot Rails Gem)
+
+If you're using the llama_bot_rails Gem with Docker, your Rails app must allow the Docker agent to connect back to it.
+
+Add this to your config/environments/development.rb (if it wasn't added automatically by the Gem installer):
+
+```ruby
+Rails.application.configure do
+  config.hosts << /host\.docker\.internal/ # Allow Docker agent to connect to Rails
+end
+```
+
+This allows the Docker container to reach http://host.docker.internal:3000, which maps to your Rails app on the host machine.
+
 ## 🧨 **Power & Responsibility**

 ### ⚠️ **This gem gives the agent access to your Rails console.**

app/channels/llama_bot_rails/chat_channel.rb

Lines changed: 28 additions & 0 deletions
@@ -4,6 +4,34 @@

 require 'json' # Ensure JSON is required if not already

+# Why support both a websocket connection (chat_channel.rb) and a non-websocket SSE connection?
+# Rails 6 wasn't working with our ActionCable websocket connection, so I wanted to implement SSE as well.
+
+# We want to support a generic HTML interface that isn't dependent on Rails. (In case the Rails server goes down for whatever reason, we don't lose access to LlamaBot.)
+# Why have chat_channel.rb at all?
+
+# Because Ruby on Rails lacks good tooling to handle real-time interaction that isn't through ActionCable.
+# For "cancel" requests: a websocket is a 2-way connection, so we can send a 'cancel' in.
+# To support legacy LlamaPress stuff.
+# We chose to implement it with ActionCable plus Async Websockets.
+# But it's Ruby on Rails specific, and is best for UI/UX experiences.
+
+# SSE is better for other clients that aren't Ruby on Rails specific, and if you want to handle just a simple SSE approach.
+# This does add some complexity though.
+
+# We now have 2 different paradigms of front-end JavaScript consuming from LlamaBot:
+# ActionCable consumption
+# StreamedResponse consumption
+
+# We also have 2 new middleware layers:
+# ActionCable <-> chat_channel.rb <-> /ws <-> request_handler.py
+# HTTPS <-> agent_controller.rb <-> LlamaBot.rb <-> FastAPI HTTPS
+
+# So this increases our overall surface area for the application.
+
+# This is deprecated and will be removed over time, to move towards a simple SSE approach.
+
+
 module LlamaBotRails
   class ChatChannel < ApplicationCable::Channel
     # _chat.html.erb front-end subscribes to this channel in _websocket.html.erb.

app/controllers/llama_bot_rails/agent_controller.rb

Lines changed: 78 additions & 5 deletions
@@ -1,7 +1,8 @@
 require 'llama_bot_rails/llama_bot'
 module LlamaBotRails
   class AgentController < ActionController::Base
-    skip_before_action :verify_authenticity_token, only: [:command]
+    include ActionController::Live
+    skip_before_action :verify_authenticity_token, only: [:command, :send_message]
     before_action :authenticate_agent!, only: [:command]

     # POST /agent/command
@@ -38,6 +39,11 @@ def chat
       # Render chat.html.erb
     end

+    # GET /agent/chat_ws
+    def chat_ws
+      # render chat_ws.html.erb
+    end
+
     def threads
       begin
         threads = LlamaBotRails::LlamaBot.get_threads
@@ -66,13 +72,75 @@ def chat_history
       end
     end

+    # POST /agent/send-message
+    def send_message
+      response.headers['Content-Type'] = 'text/event-stream'
+      response.headers['Cache-Control'] = 'no-cache'
+      response.headers['Connection'] = 'keep-alive'
+
+      @api_token = Rails.application.message_verifier(:llamabot_ws).generate(
+        { session_id: SecureRandom.uuid },
+        expires_in: 30.minutes
+      )
+
+      # 1. Instantiate the builder
+      builder = state_builder_class.new(
+        params: { message: params[:message] },
+        context: { thread_id: params[:thread_id], api_token: @api_token }
+      )
+
+      # 2. Construct the LangGraph-ready state
+      state_payload = builder.build
+      # sse = SSE.new(response.stream)
+
+      begin
+        LlamaBotRails::LlamaBot.send_agent_message(state_payload) do |chunk|
+          Rails.logger.info "[[LlamaBot]] Received chunk in agent_controller.rb: #{chunk}"
+          # sse.write(chunk)
+          response.stream.write "data: #{chunk.to_json}\n\n"
+
+        end
+      rescue => e
+        Rails.logger.error "Error in send_message action: #{e.message}"
+        response.stream.write "data: #{ { type: 'error', content: e.message }.to_json }\n\n"
+
+        # sse.write({ type: 'error', content: e.message })
+      ensure
+        response.stream.close
+
+        # sse.close
+      end
+    end
+
+    def test_streaming
+      response.headers['Content-Type'] = 'text/event-stream'
+      response.headers['Cache-Control'] = 'no-cache'
+      response.headers['Connection'] = 'keep-alive'
+      sse = SSE.new(response.stream)
+      sse.write({ type: 'start', content: 'Starting streaming' })
+      sleep 1
+      sse.write({ type: 'ai', content: 'This is an AI message' })
+      sleep 1
+      sse.write({ type: 'ai', content: 'This is an AI message' })
+      sleep 1
+      sse.write({ type: 'ai', content: 'This is an AI message' })
+      sleep 1
+      sse.write({ type: 'ai', content: 'This is an AI message' })
+    end
+
     private

     def safety_eval(input)
-      # Change to Rails root directory for file operations
-      Dir.chdir(Rails.root) do
-        # Create a safer evaluation context
-        binding.eval(input)
+      begin
+        # Change to Rails root directory for file operations
+        Dir.chdir(Rails.root) do
+          # Create a safer evaluation context
+          Rails.logger.info "[[LlamaBot]] Evaluating input: #{input}"
+          binding.eval(input)
+        end
+      rescue => exception
+        Rails.logger.error "Error in safety_eval: #{exception.message}"
+        return exception.message
       end
     end
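
For reference, the SSE framing that send_message writes above ("data: <json>\n\n" per chunk) can be consumed from plain Ruby. A minimal sketch, assuming the action is reachable at /llama_bot/agent/send-message on localhost:3000; the mount path, parameter names, and chunk shape are assumptions, so adapt them to your routes and backend:

```ruby
require "net/http"
require "json"
require "uri"

# Hypothetical mount path; use whatever route the install generator wires up.
uri = URI("http://localhost:3000/llama_bot/agent/send-message")

Net::HTTP.start(uri.host, uri.port) do |http|
  req = Net::HTTP::Post.new(uri)
  req.set_form_data("message" => "Hello agent", "thread_id" => "demo-thread")

  http.request(req) do |res|
    buffer = +""
    res.read_body do |fragment|
      buffer << fragment
      # SSE frames are separated by a blank line; each frame is "data: <json>\n\n".
      while (frame = buffer.slice!(/\Adata: .*\n\n/))
        chunk = JSON.parse(frame.sub(/\Adata: /, "").strip)
        puts "#{chunk['type']}: #{chunk['content']}" if chunk.is_a?(Hash)
      end
    end
  end
end
```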

@@ -83,5 +151,10 @@ def authenticate_agent!
     rescue ActiveSupport::MessageVerifier::InvalidSignature
       head :unauthorized
     end
+
+    def state_builder_class
+      # The user is responsible for creating a custom AgentStateBuilder if they want to use a custom agent. Otherwise, we default to LlamaBotRails::AgentStateBuilder.
+      LlamaBotRails.config.state_builder_class.constantize
+    end
   end
 end
