
Commit ba70131

sanderr authored and arnaudsjs committed
Made various improvements to the AutostartedAgent._ensure_agents method (Issue #7612, PR #7612)
Fixes a bug in the autostarted agent manager that occasionally causes an agent's sessions to time out, dropping calls and leaving the orchestrator "stuck" until a redeploy is triggered.

~~The bug has been there forever, but the conditions for it to trigger happen to be set up by #7278.~~

~~The core of the bug is the following. When we need a certain agent to be up, we call `AutostartedAgentManager._ensure_agents`. This makes sure an agent process is running for these agents and waits for them to be up. However, instead of starting a process to track the autostarted agent map and trusting that it would keep up with changes to it, we start a process with an explicit list of agent endpoints.~~ If a new call comes in to `_ensure_agents` and the agent is not yet up, we would kill the current process and start a new one. In killing the process, we would not expire its sessions, letting them time out on the 30s heartbeat timeout and losing any calls made to the agent in that window.

EDIT: Scratched some of the above. I wrote a test case for the first bullet point below, which I thought to be the root cause of the reason for killing the old agent process. However, it turns out the test also succeeds on master: the agent's `on_reconnect` actually updates the process' agent map. So my root cause analysis may have been wrong (or is at least less likely), but the second and third bullet points should fix the issue anyway (it doesn't matter exactly *how* we got into the situation where only part of the agent endpoints were up, as long as we handle it correctly), and even if part of the issue persists, the logging improvements should help future investigation. The first bullet point is still a good consistency fix imo.

This PR makes the following changes:

- An agent process for autostarted agents (`use_autostart_agent_map` in the config file) now ignores any agent endpoints in the config file. Instead it runs purely in autostarted agent mode, starting instances for each of the endpoints in the agent map and then tracking any changes made to the agent map. This last part was already in place, and resulted in a small inconsistency: the process would respect agent names in the config at start, then suddenly consider the agent map the authority as soon as a change came in. This inconsistency is now resolved by considering the agent map the authority for the entire lifetime of the agent process.
- The autostarted agent manager now trusts its processes to track the agent map. If a process is already running for an environment but the agent endpoints are not yet up, it waits for them rather than killing the process and starting fresh.
- When an explicit restart is requested, the autostarted agent manager now expires all sessions for the agents in the agent map, i.e. the endpoints for the killed process.
- Improved the robustness of the wait condition for an agent to be up; specifically, we make sure we don't consider expired sessions to be up.
- Made some log messages a bit more informative; no major changes.

Strike through any lines that are not applicable (`~~line~~`) then check the box

- [x] ~Attached issue to pull request~
- [x] Changelog entry
- [x] Type annotations are present
- [x] Code is clear and sufficiently documented
- [x] No (preventable) type errors (check using make mypy or make mypy-diff)
- [x] Sufficient test cases (reproduces the bug/tests the requested feature)
- [x] Correct, in line with design
- [x] ~End user documentation is included or an issue is created for end-user documentation (add ref to issue here: )~
- [x] ~If this PR fixes a race condition in the test suite, also push the fix to the relevant stable branch(es) (see [test-fixes](https://internal.inmanta.com/development/core/tasks/build-master.html#test-fixes) for more info)~
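The second and fourth bullet points can be sketched as a minimal, hypothetical model (the `Session`, `endpoints_up`, and `wait_until_up` names are illustrative, not the inmanta-core API): an endpoint only counts as up when a *non-expired* session serves it, and the manager polls for the required endpoints instead of killing the process and starting over.

```python
import asyncio
import time
from collections.abc import Callable
from dataclasses import dataclass


@dataclass
class Session:
    """Hypothetical stand-in for a server-side agent session."""

    endpoints: set[str]
    expired: bool = False


def endpoints_up(sessions: list[Session], required: set[str]) -> bool:
    """An endpoint counts as up only if a non-expired session serves it.

    This mirrors the robustness fix: an expired session must not make an
    agent look alive while its calls are being dropped.
    """
    live: set[str] = set()
    for session in sessions:
        if not session.expired:
            live |= session.endpoints
    return required <= live


async def wait_until_up(
    get_sessions: Callable[[], list[Session]],
    required: set[str],
    timeout: float = 5.0,
    interval: float = 0.05,
) -> bool:
    """Wait for all required endpoints rather than restarting the process.

    Returns True once every endpoint in `required` has a live session,
    False on timeout. A sketch, not the actual _ensure_agents code.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if endpoints_up(get_sessions(), required):
            return True
        await asyncio.sleep(interval)
    return False
```

Under this model, killing a process without expiring its sessions leaves stale `Session` objects that still look live until the heartbeat timeout; expiring them on restart (third bullet) removes that 30s window.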
1 parent 4223f75 commit ba70131

File tree

6 files changed: +381 −186 lines changed

Lines changed: 10 additions & 0 deletions

```yaml
description: "Made various improvements to the AutostartedAgent._ensure_agents method"
sections:
  bugfix: "Fixed a race condition where autostarted agents might become unresponsive for 30s when restarted"
issue-nr: 7612
change-type: patch
destination-branches:
  - master
  - iso7
  - iso6
```
src/inmanta/agent/agent.py

Lines changed: 19 additions & 10 deletions

```diff
@@ -1168,16 +1168,18 @@ async def _init_agent_map(self) -> None:
         self.agent_map = cfg.agent_map.get()
 
     async def _init_endpoint_names(self) -> None:
-        if self.hostname is not None:
-            await self.add_end_point_name(self.hostname)
-        else:
-            # load agent names from the config file
-            agent_names = cfg.agent_names.get()
-            if agent_names is not None:
-                for name in agent_names:
-                    if "$" in name:
-                        name = name.replace("$node-name", self.node_name)
-                    await self.add_end_point_name(name)
+        assert self.agent_map is not None
+        endpoints: Iterable[str] = (
+            [self.hostname]
+            if self.hostname is not None
+            else (
+                self.agent_map.keys()
+                if cfg.use_autostart_agent_map.get()
+                else (name if "$" not in name else name.replace("$node-name", self.node_name) for name in cfg.agent_names.get())
+            )
+        )
+        for endpoint in endpoints:
+            await self.add_end_point_name(endpoint)
 
     async def stop(self) -> None:
         await super().stop()
@@ -1255,6 +1257,13 @@ async def update_agent_map(self, agent_map: dict[str, str]) -> None:
         await self._update_agent_map(agent_map)
 
     async def _update_agent_map(self, agent_map: dict[str, str]) -> None:
+        if "internal" not in agent_map:
+            LOGGER.warning(
+                "Agent received an update_agent_map() trigger without internal agent in the agent_map %s",
+                agent_map,
+            )
+            agent_map = {"internal": "local:", **agent_map}
+
         async with self._instances_lock:
             self.agent_map = agent_map
             # Add missing agents
```
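The endpoint-selection precedence introduced in `_init_endpoint_names` can be illustrated as a standalone function. This is a sketch: plain-value parameters stand in for the real config reads (`cfg.use_autostart_agent_map.get()`, `cfg.agent_names.get()`), and `select_endpoints` is a hypothetical name, not part of inmanta-core.

```python
from collections.abc import Iterable
from typing import Optional


def select_endpoints(
    hostname: Optional[str],
    use_autostart_agent_map: bool,
    agent_map: dict[str, str],
    agent_names: list[str],
    node_name: str,
) -> list[str]:
    """Mirror the precedence of the new _init_endpoint_names:

    1. an explicit hostname wins;
    2. otherwise, in autostart mode, the agent map is the sole authority
       (config-file agent names are ignored for the whole process lifetime);
    3. otherwise the configured agent names are used, with $node-name expanded.
    """
    endpoints: Iterable[str] = (
        [hostname]
        if hostname is not None
        else (
            agent_map.keys()
            if use_autostart_agent_map
            else (name.replace("$node-name", node_name) for name in agent_names)
        )
    )
    return list(endpoints)
```

Making the agent map the single source of endpoints in autostart mode is what removes the start-time/update-time inconsistency described in the first bullet point above.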

src/inmanta/data/__init__.py

Lines changed: 6 additions & 4 deletions

```diff
@@ -29,7 +29,7 @@
 import warnings
 from abc import ABC, abstractmethod
 from collections import abc, defaultdict
-from collections.abc import Awaitable, Callable, Iterable, Sequence
+from collections.abc import Awaitable, Callable, Iterable, Sequence, Set
 from configparser import RawConfigParser
 from contextlib import AbstractAsyncContextManager
 from itertools import chain
@@ -1290,7 +1290,7 @@ def get_connection(
         """
         if connection is not None:
             return util.nullcontext(connection)
-        # Make pypi happy
+        # Make mypy happy
         assert cls._connection_pool is not None
         return cls._connection_pool.acquire()
@@ -3415,10 +3415,12 @@ def get_valid_field_names(cls) -> list[str]:
         return super().get_valid_field_names() + ["process_name", "status"]
 
     @classmethod
-    async def get_statuses(cls, env_id: uuid.UUID, agent_names: set[str]) -> dict[str, Optional[AgentStatus]]:
+    async def get_statuses(
+        cls, env_id: uuid.UUID, agent_names: Set[str], *, connection: Optional[asyncpg.connection.Connection] = None
+    ) -> dict[str, Optional[AgentStatus]]:
         result: dict[str, Optional[AgentStatus]] = {}
         for agent_name in agent_names:
-            agent = await cls.get_one(environment=env_id, name=agent_name)
+            agent = await cls.get_one(environment=env_id, name=agent_name, connection=connection)
             if agent:
                 result[agent_name] = agent.get_status()
             else:
```
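Widening `agent_names` from the builtin `set[str]` to the abstract `collections.abc.Set[str]` lets callers pass any read-only set-like object, such as a dict keys view (e.g. an agent map's keys) or a `frozenset`, without copying into a concrete `set` first. A small illustrative sketch (`live_agents` is a hypothetical helper, not part of inmanta-core):

```python
from collections.abc import Set
from typing import Optional


def live_agents(statuses: dict[str, Optional[str]], names: Set[str]) -> set[str]:
    """Return the subset of `names` that has a non-None status.

    Annotating `names` as collections.abc.Set (rather than builtin set)
    means set, frozenset, and dict keys views all type-check under mypy.
    """
    return {name for name in names if statuses.get(name) is not None}
```

The optional keyword-only `connection` parameter follows the same pattern as the other `data` queries: it lets `get_statuses` participate in an already-open transaction instead of acquiring a fresh connection per call.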

0 commit comments
