
fix(slack-work-app): Listen to deletion events#3050

Merged
miles-kt-inkeep merged 3 commits into main from fix/slack-app-submission
Apr 7, 2026
Conversation

@miles-kt-inkeep
Contributor

No description provided.

@changeset-bot

changeset-bot bot commented Apr 7, 2026

🦋 Changeset detected

Latest commit: 4acdbc3

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 10 packages
Name Type
@inkeep/agents-work-apps Patch
@inkeep/agents-api Patch
@inkeep/agents-manage-ui Patch
@inkeep/agents-cli Patch
@inkeep/agents-core Patch
@inkeep/agents-email Patch
@inkeep/agents-mcp Patch
@inkeep/agents-sdk Patch
@inkeep/ai-sdk-provider Patch
@inkeep/create-agents Patch

Not sure what this means? Click here to learn what changesets are.

Click here if you're a maintainer who wants to add another changeset to this PR

@vercel

vercel bot commented Apr 7, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Actions Updated (UTC)
agents-api Ready Ready Preview, Comment Apr 7, 2026 7:17pm
agents-manage-ui Ready Ready Preview, Comment Apr 7, 2026 7:17pm
1 Skipped Deployment
Project Deployment Actions Updated (UTC)
agents-docs Skipped Skipped Apr 7, 2026 7:17pm


@pullfrog
Contributor

pullfrog bot commented Apr 7, 2026

TL;DR — Handles Slack app_uninstalled and tokens_revoked events so workspace cleanup happens automatically when a user removes the app from Slack, instead of only when triggered from the admin UI. Extracts the inline cleanup logic into a shared cleanupWorkspaceInstallation service, adds a tenant ownership check to the delete route, and trims unused bot scopes from the Slack app manifest.

Key changes

  • Handle app_uninstalled and tokens_revoked events in the dispatcher — new branches in dispatchSlackEvent react to Slack lifecycle events by cleaning up workspace data as background work
  • Extract cleanupWorkspaceInstallation service — moves the multi-step teardown (token revocation, DB deletion, Nango removal, cache clear) into a dedicated reusable function with a structured WorkspaceCleanupResult type
  • Simplify the admin uninstall route and add tenant guard — replaces ~55 lines of inline cleanup in the DELETE /workspaces/:teamId handler with a single call to the shared service, and adds a verifyTenantOwnership check before cleanup runs
  • Subscribe to new Slack events and trim unused scopes — adds app_uninstalled and tokens_revoked to bot_events; removes unused channels:join, search:read.public, search:read.files, search:read.users scopes

Summary | 6 files | 3 commits | base: main ← fix/slack-app-submission


Automatic workspace cleanup on Slack app removal

Before: Workspace data was only cleaned up when an admin explicitly uninstalled via the API. If a user removed the app from Slack directly, stale records persisted indefinitely.
After: The dispatcher handles app_uninstalled and tokens_revoked events from Slack, triggering the same cleanup flow automatically.

Both events call cleanupWorkspaceInstallation with skipTokenRevocation: true — since Slack has already revoked tokens by the time these events fire. For tokens_revoked, cleanup only runs when bot tokens are among the revoked set.

dispatcher.ts · slack-app-manifest.json
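The event-handling logic described above can be sketched as follows. This is an illustrative simplification, not the PR's actual dispatcher code: the `SlackEvent` shape, the `handleLifecycleEvent` helper, and the stubbed `cleanupWorkspaceInstallation` signature are assumptions inferred from the PR description.

```typescript
// Sketch of the dispatcher branches for Slack lifecycle events.
type SlackEvent =
  | { type: 'app_uninstalled' }
  | { type: 'tokens_revoked'; tokens?: { oauth?: string[]; bot?: string[] } };

// Hypothetical stand-in for the shared service described in the PR.
async function cleanupWorkspaceInstallation(opts: {
  teamId: string;
  skipTokenRevocation?: boolean;
}): Promise<{ success: boolean }> {
  return { success: true }; // stub for illustration
}

// Returns true when the event triggered a cleanup.
async function handleLifecycleEvent(teamId: string, event: SlackEvent): Promise<boolean> {
  if (event.type === 'app_uninstalled') {
    // Slack has already revoked tokens by the time this event fires.
    await cleanupWorkspaceInstallation({ teamId, skipTokenRevocation: true });
    return true;
  }
  if (event.type === 'tokens_revoked') {
    const revokedBotTokens = event.tokens?.bot ?? [];
    // Cleanup only runs when bot tokens are among the revoked set.
    if (revokedBotTokens.length > 0) {
      await cleanupWorkspaceInstallation({ teamId, skipTokenRevocation: true });
      return true;
    }
  }
  return false;
}
```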


Shared cleanupWorkspaceInstallation service

Before: The workspace teardown sequence (revoke token → delete channel configs → delete user mappings → delete MCP configs → delete workspace row → delete Nango connection → clear cache) was inlined in the route handler.
After: A single cleanupWorkspaceInstallation function encapsulates the full flow and returns a typed WorkspaceCleanupResult indicating which steps succeeded.

The function accepts skipTokenRevocation for cases where tokens are already invalid. It gracefully handles missing workspaces by clearing the cache and returning early with success: true. Each cleanup step runs sequentially with individual error handling — partial failures are tracked and reported without aborting the remaining steps.

How does the step-by-step cleanup work? The function iterates over an ordered array of named steps (channel configs → user mappings → MCP configs → workspace row → Nango connection). Each step runs independently inside a try/catch, so a failure in one step doesn't block the rest. The result object reports `dbCleaned` (all DB steps passed) and `nangoCleaned` separately, giving callers granular insight into what succeeded.

workspace-cleanup.ts · workspaces.ts · services/index.ts
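The per-step cleanup pattern described above can be sketched like this. It is a minimal illustration of the described control flow, not code from the PR: the step-runner shape and the `failedSteps` field are assumptions, while `dbCleaned`, `nangoCleaned`, and `success` mirror the `WorkspaceCleanupResult` fields named in the PR description.

```typescript
// Result shape loosely following the PR's WorkspaceCleanupResult; failedSteps is an
// illustrative addition for tracking which steps failed.
type WorkspaceCleanupResult = {
  success: boolean;
  dbCleaned: boolean;
  nangoCleaned: boolean;
  failedSteps: string[];
};

// Runs each DB cleanup step in isolation, then the Nango deletion; one failure
// doesn't block the remaining steps, and the result reports each area separately.
async function runCleanupSteps(
  steps: Array<{ name: string; run: () => Promise<void> }>,
  deleteNango: () => Promise<void>,
): Promise<WorkspaceCleanupResult> {
  const result: WorkspaceCleanupResult = {
    success: false,
    dbCleaned: true,
    nangoCleaned: false,
    failedSteps: [],
  };
  for (const step of steps) {
    try {
      await step.run();
    } catch {
      result.dbCleaned = false;
      result.failedSteps.push(step.name);
    }
  }
  try {
    await deleteNango();
    result.nangoCleaned = true;
  } catch {
    result.failedSteps.push('nango');
  }
  // Success is computed before returning, so callers and logs see the final value.
  result.success = result.dbCleaned && result.nangoCleaned;
  return result;
}
```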


Tenant ownership guard on workspace delete

Before: The DELETE /workspaces/:teamId route performed cleanup without verifying the requesting tenant owned the workspace.
After: A verifyTenantOwnership check runs after workspace lookup, returning 403 before any destructive operations if the tenant doesn't match.

workspaces.ts
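The guard ordering described above can be illustrated with a small sketch. `verifyTenantOwnership` is named in the PR, but this comparison logic and the `guardedDelete` wrapper are assumptions for illustration only:

```typescript
interface Workspace {
  teamId: string;
  tenantId: string;
}

// Assumed implementation: the check is presumably a tenant-ID comparison.
function verifyTenantOwnership(workspace: Workspace, requestTenantId: string): boolean {
  return workspace.tenantId === requestTenantId;
}

// The guard runs after workspace lookup and before any destructive operation,
// returning an HTTP status code for illustration.
function guardedDelete(workspace: Workspace | null, requestTenantId: string): number {
  if (!workspace) return 404; // workspace not found
  if (!verifyTenantOwnership(workspace, requestTenantId)) return 403; // wrong tenant
  // ...cleanup (e.g. cleanupWorkspaceInstallation) would run here...
  return 200;
}
```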


Contributor

@pullfrog pullfrog bot left a comment


Good refactor — extracting cleanupWorkspaceInstallation into a shared service keeps the uninstall logic DRY across the API route and the new event handlers. The event subscriptions and scope changes look correct. Three issues to address below, one of which is a bug.


deleteWorkspaceInstallation,
findWorkspaceConnectionByTeamId,
revokeSlackToken,
} from '../services';


Circular import. This file imports from ../services (the barrel index.ts), but that barrel re-exports ./workspace-cleanup. Node can resolve this in many cases but it's fragile and can cause undefined at import time depending on evaluation order.

Import directly from the source modules instead:

import { clearWorkspaceConnectionCache, deleteWorkspaceInstallation, findWorkspaceConnectionByTeamId } from './nango';
import { revokeSlackToken } from './client';

Comment on lines +99 to +102
clearWorkspaceConnectionCache(teamId);
logger.info({ teamId, connectionId, result }, 'Workspace cleanup completed');

result.success = result.dbCleaned && result.nangoCleaned;


Bug: result.success is set on line 102, but the log on line 100 captures result before that assignment — so success will always be logged as false, even on a fully successful cleanup. Move the assignment before the log.

Suggested change

Before:
clearWorkspaceConnectionCache(teamId);
logger.info({ teamId, connectionId, result }, 'Workspace cleanup completed');
result.success = result.dbCleaned && result.nangoCleaned;

After:
clearWorkspaceConnectionCache(teamId);
result.success = result.dbCleaned && result.nangoCleaned;
logger.info({ teamId, connectionId, result }, 'Workspace cleanup completed');
return result;

Comment on lines 557 to +562
const workspace = await findWorkspaceConnectionByTeamId(teamId);
if (!workspace) {
return c.json({ error: 'Workspace not found' }, 404);
}

if (workspace.botToken) {
const tokenRevoked = await revokeSlackToken(workspace.botToken);
if (tokenRevoked) {
logger.info({ teamId }, 'Revoked Slack bot token');
} else {
logger.warn({ teamId }, 'Failed to revoke Slack bot token, continuing with uninstall');
}
}

// Delete from PostgreSQL first (recoverable), then Nango (point of no return)
const tenantId = workspace.tenantId;
const result = await cleanupWorkspaceInstallation({ teamId });


Minor: findWorkspaceConnectionByTeamId is now called twice for the API uninstall path — once here for the 404 check, and again inside cleanupWorkspaceInstallation. cleanupWorkspaceInstallation already handles the "no workspace found" case gracefully (returns success: true), so this outer lookup + 404 could be removed. That said, preserving the 404 semantics for the API is reasonable — just calling it out as a potential simplification.

Comment on lines +216 to +220
const tokensEvent = payload.event as
| { tokens?: { oauth?: string[]; bot?: string[] } }
| undefined;
const revokedBotTokens = tokensEvent?.tokens?.bot ?? [];
const revokedOauthTokens = tokensEvent?.tokens?.oauth ?? [];


revokedOauthTokens is extracted and logged but never acted on. If only OAuth (user) tokens are revoked and no bot tokens are revoked, the event is silently handled with no cleanup. This is fine if the system doesn't depend on user tokens — but worth a brief inline note or log line to make this intentional behavior explicit.
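One way to make the oauth-only case explicit, as suggested above. This is an illustrative sketch, not the PR's code: `classifyTokensRevoked` is a hypothetical helper, and the field names follow the quoted diff.

```typescript
// Hypothetical classifier making each tokens_revoked outcome explicit.
function classifyTokensRevoked(
  tokens?: { oauth?: string[]; bot?: string[] },
): 'cleanup' | 'ignored_user_tokens_only' | 'ignored_empty' {
  const bot = tokens?.bot ?? [];
  const oauth = tokens?.oauth ?? [];
  if (bot.length > 0) return 'cleanup';
  // Intentional no-op: the system does not depend on user (OAuth) tokens,
  // so revoking only those requires no workspace cleanup.
  if (oauth.length > 0) return 'ignored_user_tokens_only';
  return 'ignored_empty';
}
```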


@claude claude bot left a comment


PR Review Summary

(5) Total Issues | Risk: Medium

🟠⚠️ Major (2) 🟠⚠️

🟠 1) workspace-cleanup.ts:60-97 Partial failure cleanup proceeds to Nango deletion

Issue: When database cleanup fails in the try/catch block (lines 60-92), dbCleaned is set to false but the function continues to call deleteWorkspaceInstallation(connectionId) at line 94. This deletes OAuth tokens from Nango while database records (channel configs, user mappings, MCP configs, workspace) may still exist.

Why: This creates an inconsistent state that is difficult to recover from. Future cleanup retries will fail because findWorkspaceConnectionByTeamId may still return the workspace record, but the Nango token is already gone. Slack may redeliver app_uninstalled events if the response is slow, and retries will encounter this partial state.

Fix: Consider one of:

  1. Skip Nango deletion when DB fails: Return early after catching the error, before proceeding to Nango
  2. Reverse the order: Delete from Nango first (the "point of no return"), then clean up DB records
  3. Transaction: Wrap DB operations in a transaction for atomicity
  4. Idempotent cleanup: Make each step idempotent so retries can pick up where they left off

Refs:


🟠 2) slack-app-manifest.json Missing changeset for scope removal

Issue: This PR removes bot scopes (channels:join, search:read.public, search:read.files, search:read.users) without a corresponding changeset.

Why: Removing OAuth scopes is a potentially breaking change for existing Slack app installations. Production Slack apps must be manually updated at https://api.slack.com/apps. Per AGENTS.md, a changeset is required for user-facing changes to published packages, and manifest changes affect agents-work-apps.

Fix: Run:

pnpm bump patch --pkg agents-work-apps "Remove unused bot scopes (channels:join, search:read.*) and add app_uninstalled/tokens_revoked event listeners"

Refs:

🟡 Minor (3) 🟡

Inline Comments:

  • 🟡 Minor: dispatcher.ts:205-208 Missing error stack trace in app_uninstalled handler
  • 🟡 Minor: dispatcher.ts:242-245 Missing error stack trace in tokens_revoked handler
  • 🟡 Minor: workspaces.ts:564-569 API returns success:true even on partial failure

💭 Consider (1) 💭

💭 1) workspace-cleanup.ts:9-14 Circular import through barrel file

Issue: workspace-cleanup.ts imports from ../services (the barrel file) which re-exports workspace-cleanup.ts, creating a circular dependency.

Why: While this usually works in practice because the imports (clearWorkspaceConnectionCache, deleteWorkspaceInstallation, etc.) come from other files (nango.ts, client.ts), circular imports through barrels can cause initialization order issues in edge cases.

Fix: Consider importing directly from source files:

import { clearWorkspaceConnectionCache, deleteWorkspaceInstallation, findWorkspaceConnectionByTeamId } from './nango';
import { revokeSlackToken } from './client';

💡 APPROVE WITH SUGGESTIONS

Summary: The refactoring to centralize cleanup logic is a good improvement, but the partial failure handling should be addressed. The main concern is that database cleanup failures don't prevent Nango deletion, which can leave the system in an inconsistent state. Additionally, a changeset is needed for the scope removals in the Slack manifest.

Discarded (3)
Location Issue Reason Discarded
slack-app-manifest.json:74-81 Event subscriptions need verification Valid — handlers exist in dispatcher.ts, informational only
slack-manifest/SKILL.md:56-61 Skill documentation drift Out of PR scope — documentation update is separate concern
nango.ts External Nango call lacks timeout Pre-existing issue, not introduced by this PR
Reviewers (4)
Reviewer Returned Main Findings Consider While You're Here Inline Comments Pending Recs Discarded
pr-review-sre 6 1 0 0 0 0 2
pr-review-errors 5 0 0 0 3 0 0
pr-review-devops 3 1 0 0 0 0 2
pr-review-standards 1 0 1 0 0 0 0
Total 15 2 1 0 3 0 4

Note: SRE finding about partial failure merged with errors finding about same issue (both identified the Nango-proceeds-despite-DB-failure problem).

result.dbCleaned = true;
} catch (error) {
logger.error({ error, teamId, connectionId }, 'Failed to clean up database records');
}


🟠 MAJOR: Partial failure cleanup proceeds to Nango deletion

Issue: When database cleanup fails (caught at lines 60-92), the function still proceeds to delete from Nango (line 94). This creates an inconsistent state: OAuth tokens are deleted but database records remain orphaned.

Why: Future cleanup retries will fail because findWorkspaceConnectionByTeamId may still return the workspace record, but the Nango token is already gone. This makes recovery difficult and can leave stale data in the database.

Fix: Consider one of:

  1. Skip Nango deletion when dbCleaned is false and return early
  2. Reverse the order (delete Nango first since it's the "point of no return")
  3. Wrap DB operations in a transaction for atomicity
// Option 1: Early return on DB failure
} catch (error) {
  logger.error({ error, teamId, connectionId }, 'Failed to clean up database records');
  // Don't proceed to Nango deletion if DB cleanup failed
  return result;
}

Refs:

Comment on lines +205 to +208
.catch((err: unknown) => {
const errorMessage = err instanceof Error ? err.message : String(err);
logger.error({ errorMessage, teamId }, 'Failed to handle app_uninstalled');
})


🟡 Minor: Missing error stack trace in catch handler

Issue: The error handler extracts err.message but discards the stack trace.

Why: For background work running asynchronously, stack traces are critical for diagnosing error origins. The existing handlers for app_mention (line 148) and direct_message (line 181) capture errorStack — this should follow the same pattern.

Fix:

Suggested change

Before:
.catch((err: unknown) => {
const errorMessage = err instanceof Error ? err.message : String(err);
logger.error({ errorMessage, teamId }, 'Failed to handle app_uninstalled');
})

After:
.catch((err: unknown) => {
const errorMessage = err instanceof Error ? err.message : String(err);
const errorStack = err instanceof Error ? err.stack : undefined;
logger.error({ errorMessage, errorStack, teamId }, 'Failed to handle app_uninstalled');
})

Refs:

Comment on lines +242 to +245
.catch((err: unknown) => {
const errorMessage = err instanceof Error ? err.message : String(err);
logger.error({ errorMessage, teamId }, 'Failed to handle tokens_revoked (bot)');
})


🟡 Minor: Missing error stack trace in catch handler

Issue: Same as above — error handler discards the stack trace.

Fix:

Suggested change

Before:
.catch((err: unknown) => {
const errorMessage = err instanceof Error ? err.message : String(err);
logger.error({ errorMessage, teamId }, 'Failed to handle tokens_revoked (bot)');
})

After:
.catch((err: unknown) => {
const errorMessage = err instanceof Error ? err.message : String(err);
const errorStack = err instanceof Error ? err.stack : undefined;
logger.error({ errorMessage, errorStack, teamId }, 'Failed to handle tokens_revoked (bot)');
})

Comment on lines +564 to 569
if (!result.success) {
logger.error({ teamId, result }, 'Workspace uninstall partially failed');
}

clearWorkspaceConnectionCache(teamId);
logger.info({ connectionId, teamId }, 'Deleted workspace installation and cleared cache');
logger.info({ teamId }, 'Workspace uninstalled via API');
return c.json({ success: true });


🟡 Minor: API returns success: true even on partial failure

Issue: When cleanupWorkspaceInstallation returns success: false (partial failure), the error is logged but the API still returns { success: true }.

Why: This misleads API consumers into thinking the uninstall was fully successful when it may have left orphaned data.

Fix: Return the actual result status:

Suggested change

Before:
if (!result.success) {
logger.error({ teamId, result }, 'Workspace uninstall partially failed');
}
clearWorkspaceConnectionCache(teamId);
logger.info({ connectionId, teamId }, 'Deleted workspace installation and cleared cache');
logger.info({ teamId }, 'Workspace uninstalled via API');
return c.json({ success: true });

After:
if (!result.success) {
logger.error({ teamId, result }, 'Workspace uninstall partially failed');
}
logger.info({ teamId }, 'Workspace uninstalled via API');
return c.json({ success: result.success });

@github-actions github-actions bot deleted a comment from claude bot Apr 7, 2026
@github-actions
Contributor

github-actions bot commented Apr 7, 2026

Preview URLs

Use these stable preview aliases for testing this PR:

These point to the same Vercel preview deployment as the bot comment, but they stay stable and easier to find.

Raw Vercel deployment URLs

@itoqa

itoqa bot commented Apr 7, 2026

Ito Test Report ❌

14 test cases ran. 7 failed, 1 additional finding, 6 passed.

Overall, the unified run failed, with 8 of 14 test cases failing (6 passed), driven mainly by regressions in Slack uninstall behavior and webhook cleanup; several security and input-hardening checks passed. The most important issues were: (1) a PR-introduced contract mismatch where the UI in dev mode sends connection IDs in the dev:<teamId> format that the DELETE route's parsing rejects with 400, blocking the desktop, mobile, resilience, and idempotency uninstall flows; (2) PR-introduced webhook cleanup logic that can return 200/success without actually removing installed workspaces for app_uninstalled and bot-token tokens_revoked events, including duplicate-event scenarios; and (3) an existing localhost-development auth flaw where non-admins can be treated as owner and perform admin-only uninstalls. Enterprise-format uninstall, malformed/hostile input handling, and rejection of forged or stale webhooks all behaved as expected.

❌ Failed (7)
Category Summary Screenshot
Adversarial 🟡 Rage-click idempotency uninstall path still hits the same parser rejection and cannot reach a successful uninstall state. ADV-4
Edge 🟡 Mobile uninstall remains blocked by the same backend connectionId parsing failure before responsive behavior can be validated. EDGE-3
Edge 🟡 Refresh/back-forward uninstall resilience cannot be exercised because uninstall fails first with invalid connectionId format. EDGE-4
Edge 🟠 Duplicate event IDs are deduplicated, but the first delivery can still no-op in cleanup and leave installation state behind. EDGE-5
Happy-path 🟡 Slack dashboard uninstall returns 400 because dev:TDEV12345 is rejected as invalid connectionId format. ROUTE-1
Happy-path ⚠️ Signed app_uninstalled acknowledged successfully but cleanup can no-op and leave the workspace installed when workspace resolution fails. ROUTE-3
Happy-path ⚠️ Signed tokens_revoked with bot tokens acknowledged successfully but shares the same no-op cleanup path when workspace resolution fails. ROUTE-4
🟡 Rage-click uninstall cannot reach success state due to identifier mismatch
  • What failed: Multiple confirm attempts still target a connectionId the backend parser rejects (dev:<teamId>), so the scenario cannot validate successful idempotent uninstall behavior.
  • Impact: Idempotency verification for repeated uninstall actions is blocked by a deterministic backend parsing failure. This hides whether duplicate-submit safeguards behave correctly after a successful uninstall path.
  • Steps to reproduce:
    1. Open a connected Slack workspace in Work Apps and open the uninstall dialog.
    2. Trigger repeated confirm actions rapidly (double-click and Enter presses).
    3. Observe uninstall requests still fail with Invalid connectionId format and do not complete uninstall.
  • Stub / mock context: A local Slack dev fixture was enabled by bypassing packages/agents-work-apps/src/slack/services/dev-config.ts so a deterministic workspace could be exercised for repeated uninstall actions. The same flow also relied on the Work Apps route being accessible after bypassing the feature-flag gate in agents-manage-ui/src/app/[tenantId]/work-apps/layout.tsx.
  • Code analysis: The idempotency test repeatedly invokes the same uninstall API, but that endpoint rejects dev-format IDs before cleanup. Because the root parser mismatch is unconditional, repeated UI actions cannot reach the intended success state.
  • Why this is likely a bug: The endpoint rejects the format emitted by dev-mode workspace installation code, so idempotency cannot be evaluated due to an application-level contract break.

Relevant code:

agents-manage-ui/src/features/work-apps/slack/components/workspace-hero.tsx (lines 174-180)

const handleUninstall = async () => {
  if (!workspace?.connectionId) return;

  setUninstalling(true);
  await uninstallWorkspace(workspace.connectionId);
  setShowUninstallDialog(false);
  setUninstalling(false);
};

packages/agents-work-apps/src/slack/services/nango.ts (lines 157-163)

function buildDevWorkspaceConnection(
  devConfig: SlackDevConfig,
  teamId: string
): SlackWorkspaceConnection {
  const connection: SlackWorkspaceConnection = {
    connectionId: `dev:${teamId}`,
    teamId,

packages/agents-work-apps/src/slack/routes/workspaces.ts (lines 547-552)

if (workspaceIdentifier.includes(':')) {
  const teamMatch = workspaceIdentifier.match(/T:([A-Z0-9]+)/);
  if (!teamMatch) {
    return c.json({ error: 'Invalid connectionId format' }, 400);
  }
  teamId = teamMatch[1];
}
🟡 Mobile uninstall blocked by backend identifier parser
  • What failed: The uninstall request is rejected with 400 Invalid connectionId format, so mobile uninstall actionability cannot complete even when the dialog is interactable.
  • Impact: Mobile-specific uninstall verification is blocked by backend identifier parsing, masking true responsive-flow behavior. Teams cannot reliably validate mobile uninstall outcomes in dev QA.
  • Steps to reproduce:
    1. Open the Slack Work Apps page in a mobile viewport with a connected workspace.
    2. Open the workspace uninstall dialog and confirm uninstall.
    3. Observe that the uninstall request fails with Invalid connectionId format before a successful uninstall state is reached.
  • Stub / mock context: This check ran with the same non-production Slack uninstall path and local dev workspace identifier behavior used by desktop uninstall. Route access remained available after the Work Apps feature-flag gate was bypassed in agents-manage-ui/src/app/[tenantId]/work-apps/layout.tsx.
  • Code analysis: The mobile path uses the same uninstall action and API contract as desktop. Because the backend parser rejects the dev-mode connectionId format, the scenario fails for backend reasons independent of viewport.
  • Why this is likely a bug: The same internal identifier mismatch causes deterministic uninstall failure across viewports, indicating a real application logic defect.

Relevant code:

agents-manage-ui/src/features/work-apps/slack/components/workspace-hero.tsx (lines 174-180)

const handleUninstall = async () => {
  if (!workspace?.connectionId) return;

  setUninstalling(true);
  await uninstallWorkspace(workspace.connectionId);
  setShowUninstallDialog(false);
  setUninstalling(false);
};

packages/agents-work-apps/src/slack/services/nango.ts (lines 157-163)

function buildDevWorkspaceConnection(
  devConfig: SlackDevConfig,
  teamId: string
): SlackWorkspaceConnection {
  const connection: SlackWorkspaceConnection = {
    connectionId: `dev:${teamId}`,
    teamId,

packages/agents-work-apps/src/slack/routes/workspaces.ts (lines 547-552)

if (workspaceIdentifier.includes(':')) {
  const teamMatch = workspaceIdentifier.match(/T:([A-Z0-9]+)/);
  if (!teamMatch) {
    return c.json({ error: 'Invalid connectionId format' }, 400);
  }
  teamId = teamMatch[1];
}
🟡 Refresh/back-forward uninstall scenario blocked by parser mismatch
  • What failed: The uninstall API rejects the request first with 400 Invalid connectionId format, preventing the resilience path from reaching a meaningful uninstall state.
  • Impact: Navigation resilience behavior cannot be validated because the base uninstall mutation fails at request parsing. This leaves regressions in refresh/back-forward handling untestable in dev workflows.
  • Steps to reproduce:
    1. Begin uninstall for a connected Slack workspace from the Work Apps dashboard.
    2. Attempt to continue with refresh/back-forward resilience checks.
    3. Observe that uninstall fails first with Invalid connectionId format, preventing resilience validation.
  • Stub / mock context: The scenario depended on the same Slack uninstall call path and local dev workspace identifier format as the other uninstall checks. Work Apps route gating was bypassed in agents-manage-ui/src/app/[tenantId]/work-apps/layout.tsx so the flow could be reached.
  • Code analysis: Route parsing rejects colon-form identifiers unless they match T:<TEAM>; dev-mode connection identifiers are emitted as dev:<TEAM>. The failure occurs before cleanup logic and therefore blocks downstream resilience assertions.
  • Why this is likely a bug: Uninstall cannot proceed because producer and consumer code disagree on valid connectionId format.

Relevant code:

agents-manage-ui/src/features/work-apps/slack/components/workspace-hero.tsx (lines 174-180)

const handleUninstall = async () => {
  if (!workspace?.connectionId) return;

  setUninstalling(true);
  await uninstallWorkspace(workspace.connectionId);
  setShowUninstallDialog(false);
  setUninstalling(false);
};

packages/agents-work-apps/src/slack/services/nango.ts (lines 157-163)

function buildDevWorkspaceConnection(
  devConfig: SlackDevConfig,
  teamId: string
): SlackWorkspaceConnection {
  const connection: SlackWorkspaceConnection = {
    connectionId: `dev:${teamId}`,
    teamId,

packages/agents-work-apps/src/slack/routes/workspaces.ts (lines 547-552)

if (workspaceIdentifier.includes(':')) {
  const teamMatch = workspaceIdentifier.match(/T:([A-Z0-9]+)/);
  if (!teamMatch) {
    return c.json({ error: 'Invalid connectionId format' }, 400);
  }
  teamId = teamMatch[1];
}
🟠 Duplicate event_id deliveries do not double-execute cleanup
  • What failed: Deduplication correctly drops the second delivery, but the first delivery can still no-op in cleanup and leave workspace state installed.
  • Impact: Duplicate-delivery resilience is undermined because idempotency depends on a first cleanup that may not execute real deletion. Operators can receive successful acknowledgments without achieving the expected final state.
  • Steps to reproduce:
    1. Install a Slack workspace for the tenant in local development.
    2. Send the same valid signed app_uninstalled payload twice with the same event_id to POST /work-apps/slack/events.
    3. Poll GET /work-apps/slack/workspaces for up to 60 seconds.
    4. Observe that no cleanup transition is guaranteed even though the first event is accepted.
  • Stub / mock context: Authentication was run with a local non-production bypass identity, and Slack token/provider behavior was driven by a local fixture setup so webhook handling could be tested deterministically without live OAuth dependencies.
  • Code analysis: I reviewed duplicate-event filtering in packages/agents-work-apps/src/slack/dispatcher.ts and traced the first accepted event into cleanupWorkspaceInstallation in packages/agents-work-apps/src/slack/services/workspace-cleanup.ts, plus lookup behavior in packages/agents-work-apps/src/slack/services/nango.ts. Deduplication is present, but it does not protect against the upstream cleanup short-circuit when workspace resolution returns null.
  • Why this is likely a bug: Dedup logic is correct, but the accepted event path can terminate with a false-positive success and no cleanup, so expected idempotent state transition is not guaranteed.

Relevant code:

packages/agents-work-apps/src/slack/dispatcher.ts (lines 67-73)

const eventId = payload.event_id as string | undefined;
if (isDuplicateEvent(eventId)) {
  outcome = 'ignored_duplicate_event';
  span.setAttribute(SLACK_SPAN_KEYS.OUTCOME, outcome);
  logger.info({ eventId }, 'Ignoring duplicate event');
  return { outcome };
}

packages/agents-work-apps/src/slack/services/workspace-cleanup.ts (lines 41-47)

const workspace = await findWorkspaceConnectionByTeamId(teamId);
if (!workspace) {
  logger.warn({ teamId }, 'No workspace found for cleanup');
  clearWorkspaceConnectionCache(teamId);
  result.success = true;
  return result;
}

packages/agents-work-apps/src/slack/services/nango.ts (lines 126-142)

export async function getConnectionAccessToken(connectionId: string): Promise<string | null> {
  if (isSlackDevMode()) {
    const devConfig = loadSlackDevConfig();
    return devConfig?.botToken ?? null;
  }

  try {
    const nango = getSlackNango();
    const integrationId = getSlackIntegrationId();
    const connection = await nango.getConnection(integrationId, connectionId);
    return (
🟡 Slack dashboard uninstall rejects dev connection identifier
  • What failed: The client sends DELETE /work-apps/slack/workspaces/dev%3ATDEV12345, but the API returns 400 Invalid connectionId format instead of uninstalling.
  • Impact: Local/dev Slack workspace uninstall is broken, so users cannot validate or complete uninstall-dependent QA workflows. This blocks confidence in uninstall behavior during development and pre-merge verification.
  • Steps to reproduce:
    1. Open the Slack Work Apps page for a tenant with a connected dev workspace.
    2. Open Workspace options and choose Uninstall.
    3. Confirm uninstall in the dialog.
    4. Observe the DELETE request fail with Invalid connectionId format and workspace remaining connected.
  • Stub / mock context: Work Apps route access was unblocked in agents-manage-ui/src/app/[tenantId]/work-apps/layout.tsx by bypassing the feature-flag notFound() gate so the uninstall flow could be exercised. The uninstall itself used the app's dev-mode Slack workspace identifier flow, which produced dev:<teamId> values.
  • Code analysis: The UI intentionally sends workspace.connectionId; in dev mode that value is built as dev:<teamId>. The uninstall route currently accepts colon identifiers only when they contain T:<TEAM> and rejects dev:<TEAM> before cleanup, creating a deterministic mismatch.
  • Why this is likely a bug: Production code constructs dev:<teamId> in one path but rejects that same format in the uninstall endpoint, so uninstall fails due to an internal contract mismatch rather than test harness behavior.

Relevant code:

agents-manage-ui/src/features/work-apps/slack/components/workspace-hero.tsx (lines 174-180)

const handleUninstall = async () => {
  if (!workspace?.connectionId) return;

  setUninstalling(true);
  await uninstallWorkspace(workspace.connectionId);
  setShowUninstallDialog(false);
  setUninstalling(false);
};

packages/agents-work-apps/src/slack/services/nango.ts (lines 157-163)

function buildDevWorkspaceConnection(
  devConfig: SlackDevConfig,
  teamId: string
): SlackWorkspaceConnection {
  const connection: SlackWorkspaceConnection = {
    connectionId: `dev:${teamId}`,
    teamId,

packages/agents-work-apps/src/slack/routes/workspaces.ts (lines 547-552)

if (workspaceIdentifier.includes(':')) {
  const teamMatch = workspaceIdentifier.match(/T:([A-Z0-9]+)/);
  if (!teamMatch) {
    return c.json({ error: 'Invalid connectionId format' }, 400);
  }
  teamId = teamMatch[1];
}
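One possible fix for the contract mismatch above is to accept both identifier shapes before rejecting. This is a hedged sketch, not code from the PR; `parseTeamId` is a hypothetical helper name:

```typescript
// Accept both the enterprise-style "…T:<TEAM>…" identifier and the
// dev-mode "dev:<TEAM>" identifier built by buildDevWorkspaceConnection.
function parseTeamId(workspaceIdentifier: string): string | null {
  if (!workspaceIdentifier.includes(':')) return workspaceIdentifier;
  // Dev mode: connectionId is constructed as `dev:${teamId}`
  const devMatch = workspaceIdentifier.match(/^dev:([A-Z0-9]+)$/);
  if (devMatch) return devMatch[1];
  // Enterprise-style: identifier contains a T:<TEAM> segment
  const teamMatch = workspaceIdentifier.match(/T:([A-Z0-9]+)/);
  return teamMatch ? teamMatch[1] : null; // null → 400 Invalid connectionId format
}
```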
⚠️ Signed app_uninstalled webhook triggers cleanup
  • What failed: The endpoint acknowledges the uninstall event (200) and dispatches background cleanup, but cleanup can return success without deleting DB/Nango installation state when workspace resolution returns null.
  • Impact: Workspace uninstall events can appear successful while leaving stale installations active. This creates persistent data/state drift and can cause repeated operational failures in downstream Slack flows.
  • Steps to reproduce:
    1. Install a Slack workspace for the tenant in local development.
    2. Send a valid signed app_uninstalled webhook to POST /work-apps/slack/events.
    3. Poll GET /work-apps/slack/workspaces for up to 60 seconds.
    4. Observe that the workspace record can remain installed despite a successful acknowledgment.
  • Stub / mock context: Authentication was run with a local non-production bypass identity, and Slack token/provider behavior was driven by a local fixture setup so webhook handling could be tested deterministically without live OAuth dependencies.
  • Code analysis: I reviewed webhook dispatch in packages/agents-work-apps/src/slack/dispatcher.ts, cleanup orchestration in packages/agents-work-apps/src/slack/services/workspace-cleanup.ts, and workspace lookup/token resolution in packages/agents-work-apps/src/slack/services/nango.ts. app_uninstalled always delegates to cleanupWorkspaceInstallation, but that function exits early with success = true when findWorkspaceConnectionByTeamId returns null; lookup can return null when access token retrieval fails.
  • Why this is likely a bug: The cleanup function marks uninstall as successful when workspace resolution fails, even though no workspace deletion occurs, which contradicts expected uninstall semantics.

Relevant code:

packages/agents-work-apps/src/slack/dispatcher.ts (lines 189-210)

} else if (innerEventType === 'app_uninstalled' && teamId) {
  outcome = 'handled';
  span.setAttribute(SLACK_SPAN_KEYS.OUTCOME, outcome);
  span.updateName('slack.webhook app_uninstalled');
  logger.info({ teamId }, 'Handling event: app_uninstalled');

  const uninstallWork = cleanupWorkspaceInstallation({
    teamId,
    skipTokenRevocation: true,
  })
    .then((result) => {
      logger.info(
        { teamId, success: result.success, dbCleaned: result.dbCleaned },
        'app_uninstalled cleanup completed'
      );
    })
    .catch((err: unknown) => {

packages/agents-work-apps/src/slack/services/workspace-cleanup.ts (lines 41-47)

const workspace = await findWorkspaceConnectionByTeamId(teamId);
if (!workspace) {
  logger.warn({ teamId }, 'No workspace found for cleanup');
  clearWorkspaceConnectionCache(teamId);
  result.success = true;
  return result;
}

packages/agents-work-apps/src/slack/services/nango.ts (lines 208-214)

const dbWorkspace = await findWorkAppSlackWorkspaceBySlackTeamId(runDbClient)(teamId);

if (dbWorkspace?.nangoConnectionId) {
  const botToken = await getConnectionAccessToken(dbWorkspace.nangoConnectionId);

  if (botToken) {
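A hedged sketch of how the cleanup result could distinguish "nothing found" from a real success; the `notFound` field and `cleanupOutcome` helper are illustrative, not part of the current result shape:

```typescript
// Hypothetical: `notFound` is a new field, not in the current CleanupResult.
interface CleanupResult {
  success: boolean;
  dbCleaned: boolean;
  notFound: boolean;
}

function cleanupOutcome(workspaceFound: boolean, dbCleaned: boolean): CleanupResult {
  if (!workspaceFound) {
    // Lookup returned null: possibly already uninstalled, possibly a token
    // resolution failure. Either way no deletion happened, so don't claim success.
    return { success: false, dbCleaned: false, notFound: true };
  }
  return { success: dbCleaned, dbCleaned, notFound: false };
}
```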
⚠️ Signed tokens_revoked(bot) webhook triggers cleanup
  • What failed: The tokens_revoked bot-token branch dispatches cleanup and returns 200, but cleanup can short-circuit as successful without deleting the installation when workspace lookup depends on a missing token.
  • Impact: Bot-token revocation events can fail to actually uninstall affected workspaces, leaving integrations in an inconsistent and risky state. Teams may assume revocation has been enforced when data still persists.
  • Steps to reproduce:
    1. Install a Slack workspace for the tenant in local development.
    2. Send a valid signed tokens_revoked webhook to POST /work-apps/slack/events with a non-empty tokens.bot array.
    3. Poll GET /work-apps/slack/workspaces for up to 60 seconds.
    4. Observe that the workspace record can remain installed after acknowledgment.
  • Stub / mock context: Authentication was run with a local non-production bypass identity, and Slack token/provider behavior was driven by a local fixture setup so webhook handling could be tested deterministically without live OAuth dependencies.
  • Code analysis: I reviewed tokens_revoked dispatch logic in packages/agents-work-apps/src/slack/dispatcher.ts and the shared cleanup/lookup path in packages/agents-work-apps/src/slack/services/workspace-cleanup.ts plus packages/agents-work-apps/src/slack/services/nango.ts. The handler invokes the same cleanup function as app_uninstalled, and that function can report success on a null workspace connection, which is reachable when token resolution fails.
  • Why this is likely a bug: The bot-token revocation path depends on the same cleanup routine that can claim success without actually cleaning persisted installation state.

Relevant code:

packages/agents-work-apps/src/slack/dispatcher.ts (lines 231-248)

if (revokedBotTokens.length > 0) {
  const revokeWork = cleanupWorkspaceInstallation({
    teamId,
    skipTokenRevocation: true,
  })
    .then((result) => {
      logger.info(
        { teamId, success: result.success, dbCleaned: result.dbCleaned },
        'tokens_revoked (bot) cleanup completed'
      );
    })
    .catch((err: unknown) => {
      const errorMessage = err instanceof Error ? err.message : String(err);
      logger.error({ errorMessage, teamId }, 'Failed to handle tokens_revoked (bot)');
    })
    .finally(() => flushTraces());

packages/agents-work-apps/src/slack/services/workspace-cleanup.ts (lines 41-47)

const workspace = await findWorkspaceConnectionByTeamId(teamId);
if (!workspace) {
  logger.warn({ teamId }, 'No workspace found for cleanup');
  clearWorkspaceConnectionCache(teamId);
  result.success = true;
  return result;
}

packages/agents-work-apps/src/slack/services/nango.ts (lines 240-244)

}
  }

  logger.debug({ teamId }, 'PostgreSQL lookup failed, falling back to Nango iteration');
  return await findWorkspaceConnectionByTeamIdFromNango(teamId);
✅ Passed (6)
Category Summary Screenshot
Adversarial Forged-signature Slack webhook was rejected with 401, and no workspace cleanup side effect was observed. ADV-1
Adversarial Correctly signed but stale-timestamp tokens_revoked webhook was rejected with 401, with workspace state unchanged. ADV-2
Adversarial Hostile identifiers returned non-500 responses and hostile workspace query content rendered safely as escaped text. ADV-5
Edge Malformed connectionId containing a colon correctly returned HTTP 400, and workspace state stayed unchanged. EDGE-1
Edge OAuth-only tokens_revoked left the workspace installed after a 200 acknowledgment, as expected when tokens.bot is empty. EDGE-2
Happy-path Not a real app bug. Original timeout/block was environmental; enterprise-format DELETE was re-run successfully with HTTP 200 and success=true. ROUTE-2
ℹ️ Additional Findings (1)

These findings are unrelated to the current changes but were observed during testing.

Category Summary Screenshot
Adversarial 🟠 Non-admin authorization boundary is broken for localhost Slack workspace uninstall flow. ADV-3
🟠 Non-admin authorization boundary is broken for localhost Slack workspace uninstall flow.
  • What failed: The uninstall path is expected to reject non-admin users with forbidden behavior, but localhost development requests can be assigned tenantRole = owner, allowing uninstall.
  • Impact: Non-admin tenant members can execute admin-only uninstall operations in local development, breaking authorization boundaries for destructive actions. This can cause unintended workspace cleanup and makes role-based QA validation unreliable.
  • Steps to reproduce:
    1. Sign in as a non-admin member in the local development environment.
    2. Open the tenant Slack workspace management page.
    3. Attempt uninstall from the UI or send DELETE /work-apps/slack/workspaces/<TEAM_ID> from localhost origin.
    4. Observe that the request can pass admin authorization instead of returning forbidden behavior.
  • Stub / mock context: The test ran with local development auth shortcuts: localhost requests in agents-api/src/middleware/workAppsAuth.ts assign an owner role before downstream authorization, and manage-session login was bypassed for development to keep local execution deterministic. No API response mocks were used for Slack uninstall behavior itself.
  • Code analysis: I verified the protected uninstall route uses requireWorkspaceAdmin, but the development branch in workAppsAuth sets tenantRole to owner for localhost origins before permission checks. Because requireWorkspaceAdmin accepts owner/admin, the authorization gate is bypassed for non-admin members in this flow.
  • Why this is likely a bug: The production code path for localhost development explicitly assigns an admin-equivalent role before route authorization, so non-admin users can satisfy an admin-only permission check.

Relevant code:

agents-api/src/middleware/workAppsAuth.ts (lines 34-52)

if (env.ENVIRONMENT === 'development') {
  const origin = c.req.header('Origin');
  if (origin) {
    const originUrl = new URL(origin);
    if (originUrl.hostname === 'localhost' || originUrl.hostname === '127.0.0.1') {
      if (sessionUser?.id) {
        c.set('userId', sessionUser.id);
        c.set('tenantId', session?.activeOrganizationId || 'default');
        c.set('tenantRole', 'owner');
      } else {
        c.set('userId', 'dev-user');
        c.set('tenantId', 'default');
        c.set('tenantRole', 'owner');
      }

packages/agents-work-apps/src/slack/middleware/permissions.ts (lines 21-23)

export function isOrgAdmin(tenantRole: string | undefined): boolean {
  return tenantRole === OrgRoles.OWNER || tenantRole === OrgRoles.ADMIN;
}

packages/agents-work-apps/src/slack/routes/workspaces.ts (lines 509-516)

createProtectedRoute({
  method: 'delete',
  path: '/{teamId}',
  summary: 'Uninstall Workspace',
  tags: ['Work Apps', 'Slack', 'Workspaces'],
  permission: requireWorkspaceAdmin(),

Commit: fecd644

View Full Run


Tell us how we did: Give Ito Feedback

Contributor

@claude claude bot left a comment


PR Review Summary

(0) Total Issues | Risk: Low

Re-review scope: Focused on 1 commit since last review (ff527cda9 cleanup)

Delta Assessment

The new commit addresses the key concerns from the prior review:

Changeset added: .changeset/combative-beige-rhinoceros.md now documents the scope removal and event listener additions, satisfying the prior Major issue about the missing changelog entry.

Cleanup refactored to step-based approach — The new implementation at workspace-cleanup.ts:60-91 uses a well-structured sequential steps array with individual error handling. This addresses the prior concern about partial failures because:

  • Each step is tried independently and failures are tracked in the failures array
  • The result object accurately reflects which parts succeeded (dbCleaned, nangoCleaned)
  • Failures are explicitly logged with step name, enabling debugging
  • The API route now returns result.success instead of always returning true
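The step-based approach described above can be sketched roughly as follows; the `Step` type and `runSteps` helper are illustrative names, not the actual workspace-cleanup.ts implementation:

```typescript
// Each step runs independently; failures are collected rather than
// aborting the sequence, so the result reflects exactly what succeeded.
type Step = { name: string; run: () => Promise<void> };

async function runSteps(
  steps: Step[]
): Promise<{ success: boolean; failures: string[] }> {
  const failures: string[] = [];
  for (const step of steps) {
    try {
      await step.run();
    } catch {
      failures.push(step.name); // track which step failed, keep going
    }
  }
  return { success: failures.length === 0, failures };
}
```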

🕐 Pending Recommendations (2)

These items from the prior review remain unaddressed:

  • 🟡 dispatcher.ts:205-208 Missing error stack trace in app_uninstalled handler (compare to app_mention handler at line 148)
  • 🟡 dispatcher.ts:242-245 Missing error stack trace in tokens_revoked handler (compare to direct_message handler at line 181)

💭 Consider (1)

💭 1) workspace-cleanup.ts:9-14 Barrel file import pattern

Issue: The file imports from ../services (barrel) which re-exports this file, creating a circular reference.

Why: While Node.js typically handles this, it's fragile and can cause initialization issues. This was noted in the prior pullfrog review.

Fix: Import directly from source modules:

import { clearWorkspaceConnectionCache, deleteWorkspaceInstallation, findWorkspaceConnectionByTeamId } from './nango';
import { revokeSlackToken } from './client';

✅ APPROVE

Summary: The delta commit successfully addresses the major concerns from the previous review. The changeset is now in place, and the cleanup logic has been refactored to a cleaner step-based approach with proper failure tracking. The remaining items (error stack traces in dispatcher catch blocks) are minor consistency improvements that don't block approval.

Discarded (0)

No items discarded in this delta review.

Reviewers (1)
Reviewer Returned Main Findings Consider While You're Here Inline Comments Pending Recs Discarded
orchestrator 3 0 1 0 0 2 0
Total 3 0 1 0 0 2 0

Note: Delta-scoped review — sub-reviewers not dispatched for this small delta (changeset + cleanup refactor only). Issues from prior review carried forward as Pending Recommendations.

@itoqa

itoqa bot commented Apr 7, 2026

Ito Test Report ❌

14 test cases ran. 1 failed, 13 passed.

Across 14 executed test cases, 13 passed and 1 failed. The run is not fully clean: a confirmed high-severity authorization defect, introduced by this PR, allowed a tenant-a owner to delete a tenant-b Slack workspace via DELETE /work-apps/slack/workspaces/{teamId}. Otherwise, uninstall behavior was broadly stable and defensive: valid teamId and encoded connectionId paths succeeded; repeated, rage-click, interrupted, mobile, and stale-tab flows remained controlled; malformed, unknown, and fuzzed identifiers returned the expected 400/404 with healthy list checks; post-uninstall dependent endpoints failed safely without 500s; non-admin uninstall was correctly blocked with 403; and forged unsigned Slack uninstall-related webhooks were rejected with 401.

❌ Failed (1)
Category Summary Screenshot
Adversarial ⚠️ Tenant-a owner can delete tenant-b workspace through uninstall endpoint instead of being blocked. ADV-3
⚠️ Cross-tenant workspace delete succeeds instead of being blocked
  • What failed: Cross-tenant delete is expected to return a 4xx and preserve tenant-b data, but the endpoint accepted tenant-a owner context and deleted tenant-b workspace data.
  • Impact: An admin in one tenant can delete another tenant's Slack workspace integration data when they know a valid identifier. This breaks tenant isolation for a destructive operation.
  • Steps to reproduce:
    1. Seed tenant-a and tenant-b with separate Slack workspace installations.
    2. Authenticate as a tenant-a owner/admin user.
    3. Call DELETE /work-apps/slack/workspaces/.
    4. Observe HTTP 200 success and confirm tenant-b workspace data is removed.
  • Stub / mock context: The run used a localhost auth shortcut that accepted QA identity headers to simulate tenant-a and tenant-b admins, and Slack token cleanup behavior was bypassed to avoid external service dependency during deletion checks.
  • Code analysis: I reviewed the uninstall route, workspace admin middleware, and dev auth middleware. The delete handler resolves any workspace by teamId and immediately runs cleanup; requireWorkspaceAdmin only resolves tenant context when tenantRole is missing; and localhost development auth pre-sets tenantRole to owner, allowing the route to skip workspace-tenant resolution before deletion.
  • Why this is likely a bug: The delete path performs destructive cleanup for any found teamId without enforcing workspace ownership in that handler, so cross-tenant deletion can succeed under a pre-set admin role context.

Relevant code:

packages/agents-work-apps/src/slack/routes/workspaces.ts (lines 547-563)

if (workspaceIdentifier.includes(':')) {
  const teamMatch = workspaceIdentifier.match(/T:([A-Z0-9]+)/);
  if (!teamMatch) {
    return c.json({ error: 'Invalid connectionId format' }, 400);
  }
  teamId = teamMatch[1];
} else {
  teamId = workspaceIdentifier;
}

const workspace = await findWorkspaceConnectionByTeamId(teamId);
if (!workspace) {
  return c.json({ error: 'Workspace not found' }, 404);
}

const result = await cleanupWorkspaceInstallation({ teamId });

packages/agents-work-apps/src/slack/middleware/permissions.ts (lines 87-92)

// Resolve tenant context from teamId if tenantId or tenantRole is missing
// workAppsAuth sets tenantId from session but not tenantRole — we need both
const teamId = c.req.param('teamId') || c.req.param('workspaceId');
if (teamId && !c.get('tenantRole')) {
  await resolveWorkAppTenantContext(c, teamId, userId);
}

agents-api/src/middleware/workAppsAuth.ts (lines 34-52)

if (env.ENVIRONMENT === 'development') {
  const origin = c.req.header('Origin');
  if (origin) {
    try {
      const originUrl = new URL(origin);
      if (originUrl.hostname === 'localhost' || originUrl.hostname === '127.0.0.1') {
        const sessionUser = c.get('user') as { id: string; email: string } | null;
        const session = c.get('session') as { activeOrganizationId?: string } | null;

        if (sessionUser?.id) {
          c.set('userId', sessionUser.id);
          c.set('userEmail', sessionUser.email);
          c.set('tenantId', session?.activeOrganizationId || 'default');
          c.set('tenantRole', 'owner');
✅ Passed (13)
Category Summary Screenshot
Adversarial Non-admin DELETE returned 403 Forbidden and the tenant-a workspace remained present. ADV-2
Adversarial Injection-style workspace identifiers were rejected with controlled 4xx responses. ADV-4
Adversarial Unsigned forged app_uninstalled callback was rejected with 401 and workspace data remained unchanged. ADV-5
Adversarial Unsigned forged tokens_revoked callback was rejected with 401 and workspace data remained unchanged. ADV-6
Edge Malformed connection IDs were rejected with 400 and service health remained stable. EDGE-1
Edge Unknown workspace uninstall returned controlled 404 and list endpoint stayed healthy. EDGE-2
Edge Repeating uninstall on the same identifier returned deterministic controlled responses. EDGE-3
Edge Mobile uninstall dialog controls remained present and actionable, with no code-level indication of a mobile-only uninstall regression. EDGE-5
Logic Post-uninstall channels/users/settings calls returned safe 404/200/200 responses with no 500s. LOGIC-1
Logic Stale-tab uninstall behavior failed safely with no phantom success and converged to empty state after reload. LOGIC-2
Happy-path Plain teamId uninstall path succeeded after valid fixture setup. ROUTE-2
Happy-path Encoded connectionId uninstall path succeeded for a valid identifier. ROUTE-3
Happy-path Uninstall flow stayed consistent after rage-click and interruption, and workspace list returned a stable HTTP 200 empty state. ROUTE-4

Commit: ff527cd

View Full Run



Contributor

@claude claude bot left a comment


PR Review Summary

(0) Total Issues | Risk: Low

Re-review scope: Focused on 1 commit since last review (4acdbc37a verify tenant id)

Delta Assessment

The new commit addresses the critical cross-tenant security vulnerability flagged by Ito testing:

Tenant ownership verification added — Lines 562-564 in workspaces.ts now call verifyTenantOwnership(c, workspace.tenantId) before executing cleanupWorkspaceInstallation(). This ensures the authenticated user's session tenant matches the workspace's tenant before allowing deletion.

if (!verifyTenantOwnership(c, workspace.tenantId)) {
  return c.json({ error: 'Access denied' }, 403);
}

Why this fix is correct:

  • The verifyTenantOwnership helper (lines 55-62) compares sessionTenantId from the authenticated context against the workspace's tenantId
  • This same pattern is already used consistently throughout the file for other workspace operations (GET, PUT, etc.)
  • The fix returns HTTP 403 (Access denied) rather than 404, which is semantically correct — the resource exists but the user lacks permission

🕐 Pending Recommendations (2)

These items from the prior review remain unaddressed:

  • 🟡 dispatcher.ts:205-208 Missing error stack trace in app_uninstalled handler (compare to app_mention handler at line 148 which logs errorStack)
  • 🟡 dispatcher.ts:242-245 Missing error stack trace in tokens_revoked handler (compare to direct_message handler at line 181 which logs errorStack)
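The consistency fix for both pending items amounts to extracting the stack alongside the message in the catch blocks. A minimal sketch, with `formatCaughtError` as a hypothetical helper name:

```typescript
// Extract message and stack together so every dispatcher catch block
// logs the same shape (mirroring the app_mention handler).
function formatCaughtError(err: unknown): { errorMessage: string; errorStack?: string } {
  const errorMessage = err instanceof Error ? err.message : String(err);
  const errorStack = err instanceof Error ? err.stack : undefined;
  return { errorMessage, errorStack };
}
```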

💭 Consider (1)

💭 1) workspace-cleanup.ts:9-14 Barrel file import pattern

Issue: The file imports from ../services (barrel) which re-exports this file, creating a circular reference.

Why: While Node.js typically handles this, it's fragile and can cause initialization issues. This was noted in the prior pullfrog review.

Fix: Import directly from source modules:

import { clearWorkspaceConnectionCache, deleteWorkspaceInstallation, findWorkspaceConnectionByTeamId } from './nango';
import { revokeSlackToken } from './client';

✅ APPROVE

Summary: The delta commit successfully addresses the critical cross-tenant deletion vulnerability identified by Ito testing. The fix follows the existing tenant verification pattern used throughout the codebase. The remaining items (missing error stack traces in dispatcher catch blocks) are minor consistency improvements that don't block approval. 🎉

Discarded (0)

No items discarded in this delta review.

Reviewers (1)
Reviewer Returned Main Findings Consider While You're Here Inline Comments Pending Recs Discarded
orchestrator 3 0 1 0 0 2 0
Total 3 0 1 0 0 2 0

Note: Delta-scoped review — sub-reviewers not dispatched for this small delta (4-line security fix). Issues from prior review carried forward as Pending Recommendations.

@github-actions github-actions bot deleted a comment from claude bot Apr 7, 2026
@miles-kt-inkeep miles-kt-inkeep added this pull request to the merge queue Apr 7, 2026
Merged via the queue into main with commit 50f57fa Apr 7, 2026
26 checks passed
@miles-kt-inkeep miles-kt-inkeep deleted the fix/slack-app-submission branch April 7, 2026 19:36
inkeep bot added a commit that referenced this pull request Apr 7, 2026
…oin permission, document automatic cleanup on uninstall from Slack
@itoqa
Copy link
Copy Markdown

itoqa bot commented Apr 7, 2026

Ito Test Report ❌

19 test cases ran. 6 failed, 13 passed.

Across 19 Slack workspace lifecycle tests, 13 passed and 6 failed. Coverage was strong for baseline install and uninstall happy paths; plain teamId and enterprise-style connectionId DELETE handling; signed lifecycle handling (including app_uninstalled and bot-token tokens_revoked cleanup, and no uninstall for oauth-only tokens_revoked); forged/unauthenticated rejection; malformed payload robustness; and safe post-cleanup follow-up API behavior. The most important issues were: a high-severity false-success uninstall defect where the API reports success even when cleanup fails and the workspace persists (seen in rapid-confirm, refresh/navigation, and mobile flows); a high-severity lifecycle replay/dedup cleanup gap in non-Vercel runtimes, because background work is only registered when waitUntil exists; and a medium cross-tenant authorization flaw where an existing foreign-tenant workspace can return 404 before the 403 ownership guard runs.

❌ Failed (6)
Category Summary Screenshot
Adversarial ⚠️ Replay burst remained non-5xx, but uninstall cleanup convergence failed and the target workspace persisted. ADV-3
Edge 🟠 Cross-tenant uninstall can return 404 before tenant guard executes when token-backed workspace lookup fails. EDGE-3
Edge ⚠️ Duplicate lifecycle events are acknowledged but cleanup does not reliably execute in local/non-Vercel runtime. EDGE-5
Edge ⚠️ Uninstall API reported success but the workspace remained installed after rapid confirm actions. EDGE-6
Edge ⚠️ Uninstall followed by refresh and back/forward did not converge; the workspace still appeared installed. EDGE-7
Edge ⚠️ Mobile uninstall controls were usable, but confirming uninstall did not complete workspace removal. EDGE-8
⚠️ Abusive webhook replay burst stays idempotent
  • What failed: The service stays responsive and returns 200s during replay burst, but cleanup convergence fails and the workspace remains installed.
  • Impact: Operators get successful webhook responses during abuse/retry traffic while uninstall cleanup does not complete, creating persistent stale installations. This breaks idempotent lifecycle guarantees and can leave data behind after uninstall intent.
  • Steps to reproduce:
    1. Send a burst of valid signed app_uninstalled webhook requests with mixed duplicate and unique event_id values.
    2. Confirm responses are non-5xx and service health remains up.
    3. Check final workspace state and observe uninstall cleanup did not converge.
  • Stub / mock context: Localhost auth and tenant checks were bypassed for deterministic local execution, and test workspace fixtures plus local Slack/Nango secrets were used instead of real external integrations.
  • Code analysis: Replay behavior uses the same lifecycle cleanup path as dedup handling. dispatchSlackEvent constructs cleanup promises for uninstall events, but registerBackgroundWork in routes/events.ts is a no-op without waitUntil, and getWaitUntil() resolves to undefined outside Vercel. That explains why burst traffic remains non-5xx while cleanup side effects fail to converge.
  • Why this is likely a bug: The replay scenario exercises a production lifecycle path where acknowledged uninstall events can skip cleanup execution due to missing fallback scheduling.

Relevant code:

packages/agents-work-apps/src/slack/routes/events.ts (lines 153-157)

const registerBackgroundWork = (work: Promise<unknown>) => {
  if (waitUntil) {
    waitUntil(work);
  }
};

packages/agents-work-apps/src/slack/dispatcher.ts (lines 189-210)

} else if (innerEventType === 'app_uninstalled' && teamId) {
  outcome = 'handled';
  span.setAttribute(SLACK_SPAN_KEYS.OUTCOME, outcome);
  span.updateName('slack.webhook app_uninstalled');
  logger.info({ teamId }, 'Handling event: app_uninstalled');

  const uninstallWork = cleanupWorkspaceInstallation({
    teamId,
    skipTokenRevocation: true,
  })

packages/agents-core/src/utils/wait-until.ts (lines 25-29)

    if (!process.env.VERCEL) return undefined;
    try {
      const mod = await import('@vercel/functions');
      return mod.waitUntil;
    } catch (e) {
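One way to close the scheduling gap noted above is a fire-and-forget fallback when no waitUntil hook is available. This is a hedged sketch; `makeRegisterBackgroundWork` is an illustrative name, not the PR's code:

```typescript
type WaitUntilFn = (p: Promise<unknown>) => void;

// When the runtime provides waitUntil (e.g. Vercel), defer to it; otherwise
// let the promise run to completion instead of silently dropping it.
function makeRegisterBackgroundWork(waitUntil?: WaitUntilFn) {
  return (work: Promise<unknown>) => {
    if (waitUntil) {
      waitUntil(work);
    } else {
      void work.catch(() => {}); // fallback: execute anyway, swallow rejections
    }
  };
}
```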
🟠 Cross-tenant uninstall returns 404 before ownership guard
  • What failed: The API returns 404 Workspace not found for an existing tenant B workspace instead of enforcing tenant guard with 403 Access denied.
  • Impact: Authorization failures are masked as not-found responses, which breaks expected tenant-isolation behavior and makes security diagnostics harder. Clients and tests that rely on a distinct 403 path for cross-tenant access are misled.
  • Steps to reproduce:
    1. Create tenant A and tenant B workspaces with distinct ownership.
    2. Authenticate as a tenant A admin session.
    3. Send DELETE /work-apps/slack/workspaces/<TENANT_B_TEAM_ID> for a workspace that exists under tenant B.
    4. Observe a 404 response instead of the expected 403 access denied response.
  • Stub / mock context: The run used local seeded tenant workspace fixtures and a local authenticated admin session to verify cross-tenant uninstall behavior. No mocked API responses, route interception, or bypass edits were applied to this test.
  • Code analysis: I traced the uninstall path and verified the ownership guard exists, but it runs only after findWorkspaceConnectionByTeamId resolves a token-backed workspace object. That resolver returns null when a workspace exists in DB but no bot token is retrievable, so DELETE exits early with 404 and never reaches the tenant check.
  • Why this is likely a bug: The delete route's authorization decision depends on a lookup that can fail for token reasons unrelated to workspace existence, causing the wrong branch (404) for real cross-tenant records.

Relevant code:

packages/agents-work-apps/src/slack/routes/workspaces.ts (lines 557-563)

const workspace = await findWorkspaceConnectionByTeamId(teamId);
if (!workspace) {
  return c.json({ error: 'Workspace not found' }, 404);
}

if (!verifyTenantOwnership(c, workspace.tenantId)) {
  return c.json({ error: 'Access denied' }, 403);
}

packages/agents-work-apps/src/slack/services/nango.ts (lines 208-244)

const dbWorkspace = await findWorkAppSlackWorkspaceBySlackTeamId(runDbClient)(teamId);

if (dbWorkspace?.nangoConnectionId) {
  const botToken = await getConnectionAccessToken(dbWorkspace.nangoConnectionId);

  if (botToken) {
    const connection: SlackWorkspaceConnection = {
      connectionId: dbWorkspace.nangoConnectionId,
      teamId,
      teamName: dbWorkspace.slackTeamName || undefined,
      botToken,
      tenantId: dbWorkspace.tenantId,
    };
    return connection;
  }
}

return await findWorkspaceConnectionByTeamIdFromNango(teamId);
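A hedged sketch of deriving the 404/403 decision from the DB record rather than the token-backed lookup, so a missing bot token cannot mask a cross-tenant request as "not found"; `deleteDecision` and `DbWorkspace` are illustrative names:

```typescript
interface DbWorkspace { tenantId: string }

// Check the ownership guard against the persisted record before any
// token-dependent resolution runs.
function deleteDecision(
  dbWorkspace: DbWorkspace | null,
  sessionTenantId: string
): 200 | 403 | 404 {
  if (!dbWorkspace) return 404; // teamId genuinely unknown
  if (dbWorkspace.tenantId !== sessionTenantId) return 403; // ownership guard first
  return 200; // safe to proceed with cleanup
}
```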
⚠️ Duplicate event_id is deduplicated
  • What failed: The endpoint acknowledges duplicate lifecycle events, but cleanup work can be dropped in non-Vercel runtime so the workspace remains instead of converging to the cleaned-up state.
  • Impact: Workspace uninstall side effects may never execute even after valid lifecycle events are accepted, leaving stale tenant data and inconsistent state. Retry/replay traffic can appear successful while cleanup silently does not happen.
  • Steps to reproduce:
    1. Send a valid signed event_callback payload for app_uninstalled to /work-apps/slack/events.
    2. Replay the same payload with the same event_id and observe both responses are acknowledged.
    3. Check workspace state and observe cleanup does not converge in local/non-Vercel runtime.
  • Stub / mock context: Localhost auth and tenant checks were bypassed for deterministic local execution, and test workspace fixtures plus local Slack/Nango secrets were used instead of real external integrations.
  • Code analysis: I traced webhook execution in packages/agents-work-apps/src/slack/routes/events.ts, event dispatch in packages/agents-work-apps/src/slack/dispatcher.ts, and runtime waitUntil behavior in packages/agents-core/src/utils/wait-until.ts. Cleanup promises are created for lifecycle events, but registration is gated on waitUntil; in non-Vercel runtime getWaitUntil() returns undefined, so those cleanup promises are never scheduled.
  • Why this is likely a bug: Valid lifecycle events are accepted, but cleanup execution depends on an unavailable runtime hook with no fallback execution path.

Relevant code:

packages/agents-work-apps/src/slack/routes/events.ts (lines 153-157)

const registerBackgroundWork = (work: Promise<unknown>) => {
  if (waitUntil) {
    waitUntil(work);
  }
};

packages/agents-core/src/utils/wait-until.ts (lines 22-26)

export async function getWaitUntil(): Promise<WaitUntilFn | undefined> {
  if (_importPromise) return _importPromise;
  _importPromise = (async () => {
    if (!process.env.VERCEL) return undefined;

packages/agents-work-apps/src/slack/dispatcher.ts (lines 195-210)

const uninstallWork = cleanupWorkspaceInstallation({
  teamId,
  skipTokenRevocation: true,
})
  .then((result) => {
    logger.info(
      { teamId, success: result.success, dbCleaned: result.dbCleaned },
      'app_uninstalled cleanup completed'
    );
  })
registerBackgroundWork(uninstallWork);
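
One way the gap could be closed is a fallback execution path when the platform hook is absent. A minimal sketch, assuming a hypothetical makeRegisterBackgroundWork helper (not the repo's API): keep the waitUntil path where it exists, and otherwise fire-and-forget with error logging so cleanup is not silently dropped in long-lived non-Vercel processes.

```typescript
// Hypothetical sketch — makeRegisterBackgroundWork is not the repo's API.
type WaitUntilFn = (work: Promise<unknown>) => void;

function makeRegisterBackgroundWork(waitUntil: WaitUntilFn | undefined) {
  return (work: Promise<unknown>): void => {
    if (waitUntil) {
      // Serverless platform keeps the invocation alive until `work` settles.
      waitUntil(work);
    } else {
      // Fallback for long-lived server processes: let the promise run
      // unawaited, attaching a catch so rejections are logged, not lost.
      work.catch((err) => console.error('background work failed', err));
    }
  };
}
```

In the fallback branch the work still runs because the process outlives the request; the catch handler only prevents unhandled-rejection noise.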
⚠️ Rapid double-confirm uninstall does not destabilize UI
  • What failed: The uninstall path returned a success response, but the workspace remained installed instead of being removed.
  • Impact: Admins can be told uninstall succeeded while the Slack workspace is still active. This leaves stale integrations in place and breaks cleanup-dependent flows.
  • Steps to reproduce:
    1. Open the Slack workspace dashboard with an installed workspace.
    2. Open Workspace options and launch the uninstall dialog.
    3. Trigger the uninstall confirmation rapidly (click, Enter, click).
    4. Reload the dashboard and verify the workspace is still listed.
  • Stub / mock context: Local-only auth and tenant guards were bypassed so uninstall flows could run without external sign-in, and Slack connection cleanup used a local fallback instead of live credentials. A seeded test workspace was used to keep uninstall behavior deterministic.
  • Code analysis: I reviewed the Slack uninstall route and cleanup service. The route logs partial cleanup failure but still returns a success: true response, while the cleanup service explicitly tracks failed deletion steps and marks success false.
  • Why this is likely a bug: Production code can return a successful uninstall response even when critical cleanup operations fail, which directly explains the observed persistent workspace state.

Relevant code:

packages/agents-work-apps/src/slack/routes/workspaces.ts (lines 566-573)

const result = await cleanupWorkspaceInstallation({ teamId });

if (!result.success) {
  logger.error({ teamId, result }, 'Workspace uninstall partially failed');
}

logger.info({ teamId }, 'Workspace uninstalled via API');
return c.json({ success: true });

packages/agents-work-apps/src/slack/services/workspace-cleanup.ts (lines 74-100)

{
  name: 'workspace_row',
  run: () => deleteWorkAppSlackWorkspaceByNangoConnectionId(runDbClient)(connectionId),
},
{
  name: 'nango_connection',
  run: () => deleteWorkspaceInstallation(connectionId),
},

const failures: string[] = [];
for (const step of steps) {
  try {
    await step.run();
  } catch (error) {
    failures.push(step.name);
    logger.error({ error, teamId, connectionId, step: step.name }, 'Cleanup step failed');
  }
}

result.dbCleaned = !failures.some((f) => f !== 'nango_connection');
result.nangoCleaned = !failures.includes('nango_connection');

packages/agents-work-apps/src/slack/services/workspace-cleanup.ts (lines 98-105)

result.success = failures.length === 0;
if (failures.length > 0) {
  logger.error({ teamId, connectionId, failures }, 'Workspace cleanup completed with failures');
} else {
  logger.info({ teamId, connectionId }, 'Workspace cleanup completed');
}

return result;
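
A minimal sketch of one possible fix for the route-level contract, using a hypothetical uninstallResponse helper (not code from this PR): return a non-2xx status when cleanup did not fully converge, so the admin UI can keep the workspace listed and offer a retry instead of reporting false success.

```typescript
// Hypothetical helper: map the cleanup result onto an HTTP response
// instead of returning an unconditional { success: true }.
interface CleanupResult {
  success: boolean;
  dbCleaned: boolean;
  nangoCleaned: boolean;
}

function uninstallResponse(result: CleanupResult): { status: number; body: Record<string, unknown> } {
  if (!result.success) {
    // 502 signals that the uninstall did not fully converge; the body
    // reports which halves of the cleanup completed so the UI can retry.
    return {
      status: 502,
      body: { success: false, dbCleaned: result.dbCleaned, nangoCleaned: result.nangoCleaned },
    };
  }
  return { status: 200, body: { success: true } };
}
```
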
⚠️ Refresh and back/forward during uninstall converges correctly
  • What failed: The workspace remained listed after uninstall attempts, so the UI never converged to the expected removed state.
  • Impact: Admins can land in a persistent inconsistent state where uninstall appears complete but the workspace remains available. This undermines reliability of workspace lifecycle management and follow-up tests.
  • Steps to reproduce:
    1. Open the Slack workspace dashboard with an installed workspace.
    2. Start uninstall from the workspace options menu and confirm the action.
    3. Refresh the page and trigger browser back/forward navigation.
    4. Wait for refetch to settle and verify the workspace still appears in the list.
  • Stub / mock context: Local-only auth and tenant guards were bypassed so uninstall flows could run without external sign-in, and Slack connection cleanup used a local fallback instead of live credentials. A seeded test workspace was used to keep uninstall behavior deterministic.
  • Code analysis: The same uninstall endpoint path is used here. The backend currently returns success even when cleanupWorkspaceInstallation reports failure, which can leave the workspace row undeleted and produces the observed non-converged state after refresh/navigation.
  • Why this is likely a bug: The route-level success response does not enforce cleanup success, so refresh/history operations can reveal a workspace that should have been removed.

Relevant code:

Same snippets quoted under the first uninstall finding above — packages/agents-work-apps/src/slack/routes/workspaces.ts (lines 566-573) and packages/agents-work-apps/src/slack/services/workspace-cleanup.ts (lines 74-100, 98-105).
⚠️ Mobile viewport uninstall flow remains usable
  • What failed: The dialog remained usable on mobile, but confirm uninstall did not complete workspace removal.
  • Impact: Mobile admins can receive a successful uninstall action path without actual removal. This leaves active workspace integrations despite user intent to disconnect.
  • Steps to reproduce:
    1. Set viewport to 390x844 and open the Slack workspace dashboard.
    2. Open uninstall dialog from workspace options and verify Cancel/Uninstall controls are visible.
    3. Cancel once, reopen the dialog, and confirm uninstall.
    4. Check final dashboard state and confirm the workspace was not removed.
  • Stub / mock context: Local-only auth and tenant guards were bypassed so uninstall flows could run without external sign-in, and Slack connection cleanup used a local fallback instead of live credentials. A seeded test workspace was used to keep uninstall behavior deterministic.
  • Code analysis: UI control rendering was not the failure mode; the backend uninstall contract was. The route sends success even after cleanup-step failures, so mobile and desktop flows both inherit the same false-success behavior.
  • Why this is likely a bug: The mobile flow hits the same production uninstall endpoint, and that endpoint can claim success without completing cleanup.

Relevant code:

Same snippets quoted under the first uninstall finding above — packages/agents-work-apps/src/slack/routes/workspaces.ts (lines 566-573) and packages/agents-work-apps/src/slack/services/workspace-cleanup.ts (lines 74-100, 98-105).
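
The failure-classification expressions in workspace-cleanup.ts are compact; a self-contained sketch of their semantics, using a hypothetical classify helper that mirrors the quoted logic: dbCleaned holds only when every failed step is 'nango_connection', and nangoCleaned holds when 'nango_connection' itself did not fail.

```typescript
// Hypothetical helper mirroring the quoted classification logic.
function classify(failures: string[]): { dbCleaned: boolean; nangoCleaned: boolean; success: boolean } {
  return {
    // True iff no failure other than the Nango connection step occurred.
    dbCleaned: !failures.some((f) => f !== 'nango_connection'),
    // True iff the Nango connection step itself did not fail.
    nangoCleaned: !failures.includes('nango_connection'),
    // Overall success requires zero failed steps.
    success: failures.length === 0,
  };
}
```

So a Nango-only failure still counts as a clean database, which is why the route can see success: false while dbCleaned remains true.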
✅ Passed (13)
Category Summary Screenshot
Adversarial Forged lifecycle webhook signatures were rejected with 401 and did not alter workspace state. ADV-1
Adversarial Unauthenticated DELETE to the Slack workspace endpoint returned 401 and left workspace data intact. ADV-2
Adversarial Malformed signed tokens_revoked payloads stayed non-5xx and did not delete control workspace data. ADV-4
Edge DELETE with colon-format connectionId returned 400 and invalid format error as expected. EDGE-1
Edge DELETE for unknown team returned 404 and existing seeded workspace rows remained unaffected. EDGE-2
Edge Signed oauth-only tokens_revoked event acknowledged with 200 and did not remove TEDGE4 workspace. EDGE-4
Logic Install baseline behavior was valid after local Slack config and Work Apps enablement were corrected. LOGIC-1
Logic After cleanup, dependent Slack workspace endpoints failed safely and the dashboard showed no ghost workspace state. LOGIC-2
Happy-path Executed dashboard uninstall flow end-to-end on localhost after local non-production bypass setup; confirm action removed workspace and refresh showed no Slack connections. ROUTE-1
Happy-path Seeded a plain-team workspace fixture, deleted via teamId endpoint, received 200 success, and verified workspace absence via API list and dashboard refresh. ROUTE-2
Happy-path Seeded enterprise-style connectionId fixture, deleted with encoded enterprise ID, received 200 success, and confirmed workspace removal from list/dashboard. ROUTE-3
Happy-path Signed app_uninstalled event returned 200 and asynchronous cleanup removed the TROUTE4 workspace record. ROUTE-4
Happy-path Signed tokens_revoked event with bot token payload returned 200 and cleanup removed TROUTE5 as expected. ROUTE-5

Commit: 4acdbc3
