fix: Make dev server resilient to dependency re-optimization #832
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Merged
Deploying redwood-sdk-docs with Cloudflare Pages

| Field | Value |
| --- | --- |
| Latest commit | 223636f |
| Status | ✅ Deploy successful! |
| Preview URL | https://fd7d989a.redwood-sdk-docs.pages.dev |
| Branch Preview URL | https://optimize-dep-resilience.redwood-sdk-docs.pages.dev |
This addresses two distinct but related sources of instability in the Vite dev server, both of which are triggered by Vite's dependency re-optimization process.
Problem 1: Module-Level State is Discarded on Re-optimization
The framework's runtime relies on long-lived, module-level state for critical features like request context tracking via `AsyncLocalStorage`. However, Vite's dependency re-optimization process, designed for browser-based hot reloading, is fundamentally incompatible with this: when a new dependency is discovered, Vite discards and re-instantiates the entire module graph. This wipes out our module-level state, leading to unpredictable runtime errors and application crashes.
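To make the failure mode concrete, here is a minimal illustration (not rwsdk source) of the kind of module-level state that gets lost; the names are hypothetical:

```ts
import { AsyncLocalStorage } from "node:async_hooks";

// Module-level state: each re-optimization produces a fresh module instance,
// and with it a fresh, empty AsyncLocalStorage, so requests that entered
// through the old instance can no longer see their context.
export const requestContext = new AsyncLocalStorage<{ requestId: string }>();

export function currentRequestId(): string | undefined {
  return requestContext.getStore()?.requestId;
}
```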
Solution: A Virtual State Module

A virtual state module, `rwsdk/__state`, is introduced to act as a centralized, persistent store for framework-level state:

- A plugin (`statePlugin`) marks this virtual module as `external` to Vite's dependency optimizer for the worker environment. This insulates it from the re-optimization and reload process.
- The plugin resolves `rwsdk/__state` to a physical module (the built module in `dist/` for `sdk/src/runtime/state.ts`) that contains the state container and management APIs. In other words, we bypass the dep optimizer for this specific module, so we have a stable path outside of the dep optimization bundles.
- Framework state is registered through an API (`defineRwState(...)`), making the state resilient to reloads (see the sketch below).

This solves the state-loss problem and centralizes state management within the framework.
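A minimal sketch of the approach, assuming a standard Vite plugin shape, an illustrative `dist/` path, and a simple `Map`-based container; the actual `statePlugin` and `defineRwState` implementations in the SDK will differ, and the per-environment (worker-only) wiring is omitted:

```ts
import { fileURLToPath } from "node:url";
import type { Plugin } from "vite";

const VIRTUAL_ID = "rwsdk/__state";

// Sketch of the statePlugin idea: keep the state module out of the dep
// optimizer and pin it to one stable physical file, so its single module
// instance (and the state inside it) survives re-optimization.
export function statePluginSketch(): Plugin {
  return {
    name: "rwsdk:state-sketch",
    config() {
      return {
        // Excluded from optimizeDeps, the module never lands inside an
        // optimized bundle that a re-optimization would discard.
        optimizeDeps: { exclude: [VIRTUAL_ID] },
      };
    },
    resolveId(id) {
      if (id === VIRTUAL_ID) {
        // Illustrative path standing in for the built dist/ module that
        // holds the state container and management APIs.
        return fileURLToPath(new URL("./dist/runtime/state.js", import.meta.url));
      }
    },
  };
}

// Sketch of the registration API named above: values registered through
// defineRwState live in the stable module's container, so they survive
// reloads of the rest of the module graph.
const store = new Map<string, unknown>();

export function defineRwState<T>(key: string, init: () => T): T {
  if (!store.has(key)) store.set(key, init());
  return store.get(key) as T;
}
```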
Problem 2: Race Conditions Cause "Stale Pre-bundle" Errors
In a standard Vite setup, handling stale dependencies is a routine process. When a re-optimization occurs, the browser might request a module with an old version hash. The Vite server correctly throws a "stale pre-bundle" error, which is caught by Vite's client-side script in the browser. This script then automatically retries the request or performs a full page reload, seamlessly recovering from the transient error.
However, our architecture introduces several layers of complexity that make this standard recovery model insufficient. The "client" making these requests is not a browser, but the Cloudflare `CustomModuleRunner` executing server-side within Miniflare. Furthermore, our SSR Bridge architecture means this runner interacts with a virtual module subgraph: when it needs to render an SSR component, it makes a request for a virtual module which, via our plugin, triggers a server-to-server `fetchModule` call from the `worker` environment to the isolated `ssr` environment (see the sketch below).
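A simplified sketch of this cross-environment hop, assuming Vite's environment API exposes `fetchModule` as the description says; the prefix, plugin name, and structure are illustrative, not the actual `ssrBridgePlugin` source:

```ts
import type { Plugin, ViteDevServer } from "vite";

// Hypothetical prefix for virtual modules that must be evaluated in the
// isolated ssr environment rather than the worker environment.
const SSR_BRIDGE_PREFIX = "virtual:rwsdk-ssr:";

export function ssrBridgeSketch(): Plugin {
  let server: ViteDevServer;
  return {
    name: "rwsdk:ssr-bridge-sketch",
    configureServer(s) {
      server = s;
    },
    async load(id) {
      if (!id.startsWith(SSR_BRIDGE_PREFIX)) return;
      const ssrId = id.slice(SSR_BRIDGE_PREFIX.length);
      // The server-to-server hop: the worker environment asks the ssr
      // environment for the module. If the ssr optimizer re-bundles while
      // this request is in flight, the returned code can point at
      // pre-bundles with old version hashes -- the race described below.
      const result = await server.environments.ssr.fetchModule(ssrId);
      if ("code" in result) return result.code;
    },
  };
}
```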
This unique, cross-environment request pattern for virtual modules is at the heart of the instability. When a re-optimization happens in the `ssr` environment, the standard recovery mechanisms are not equipped to handle the resulting state desynchronization. The failure manifests as a perfect storm of three deeper, interconnected issues:

- A stale "ghost node" lingered in the module graph for the `ssr_bridge` module, leading to a request for a dependency with an old, invalid version hash.
- The `full-reload` HMR event triggered by the SSR optimizer was not being propagated to the worker environment. This meant the worker's own caches (especially the `CustomModuleRunner`'s execution cache) were never cleared and continued to use stale modules, creating an infinite error loop.
- The runner prematurely re-imported the failed modules before re-optimization had settled, and with no browser-side Vite client in the loop, the resulting "stale pre-bundle" error had no built-in retry path.

Solution: A Multi-Layered Approach to Synchronization and Stability
A combination of fixes was implemented to address this race condition:
- Manual dependency resolution: the `ssrBridgePlugin` was modified to no longer rely on Vite's internal, faulty resolution for virtual modules. It now manually resolves the correct, up-to-date version hash for any optimized dependency from the SSR optimizer's metadata before fetching it. This bypasses the "ghost node" problem.
- HMR propagation: the `ssrBridgePlugin` now intercepts the `full-reload` HMR event from the SSR environment and propagates it to the worker environment. This ensures the worker's module runner and module graph are correctly invalidated when the SSR environment changes.
- Stale-request handling (`staleDepRetryPlugin`): a new error-handling middleware was introduced. When it catches the inevitable "stale pre-bundle" error from the runner's premature re-import, it does not immediately retry. Instead, it waits for the server to become "stable" by monitoring the `transform` hook. Once a quiet period with no module transformation activity is detected, it signals the client to perform a full reload and gracefully redirects the failed request (see the sketch below).
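A minimal sketch of the "wait for stability" behaviour, under assumed names and values (the quiet period, plugin name, error-matching heuristic, and response body are all illustrative, not the actual `staleDepRetryPlugin` implementation):

```ts
import type { Connect, Plugin } from "vite";

const QUIET_MS = 500; // illustrative quiet period, not the actual value

export function staleDepRetrySketch(): Plugin {
  let lastTransform = Date.now();

  // Resolves once no module transform has happened for QUIET_MS.
  const waitForStability = async () => {
    while (Date.now() - lastTransform < QUIET_MS) {
      await new Promise((resolve) => setTimeout(resolve, 100));
    }
  };

  return {
    name: "rwsdk:stale-dep-retry-sketch",
    transform() {
      // Every transform resets the "server is stable" timer.
      lastTransform = Date.now();
    },
    configureServer(server) {
      // Connect-style error middleware: the four-argument signature marks it
      // as an error handler, so it only runs when a request has failed.
      const onError: Connect.ErrorHandleFunction = (err, _req, res, next) => {
        // Heuristic match for the stale pre-bundle failure; the real
        // plugin's detection may differ.
        if (!/stale pre-bundle|Outdated Optimize Dep/i.test(String(err))) {
          return next(err);
        }
        void waitForStability().then(() => {
          // Once the server is quiet, tell the client to do a full reload
          // and let the failed request resolve instead of erroring out.
          server.ws.send({ type: "full-reload" });
          res.statusCode = 200;
          res.end("// stale pre-bundle: reloading after re-optimization");
        });
      };
      // Register after Vite's internal middlewares.
      return () => server.middlewares.use(onError);
    },
  };
}
```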