
Websocket Prediction Spike #1220


Draft · wants to merge 2 commits into dev
Conversation

ariesninjadev
Contributor

Client-Side Prediction Spike: Fast Input/Visual with Server Correction

This PR contains multiple elements:

  • A test folder at /client-prediction-spike containing a client-side prediction system that meets the spike requirements as specified by the ticket
    • A quick setup guide
  • An implementation guide for hooking the server into Fission/Jolt
  • Findings regarding plausibility and UX

Test Stack

Included is a client-side prediction system that shows how to improve input responsiveness while keeping server authority. The client applies movement physics immediately when inputs are received, while the server computes the same positions independently and corrects the client's predictions when they diverge from its authoritative result. Client and server essentially run the same simulation, but the client's job is to make the game feel responsive, while the server's job is to validate the client's actions.
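The predict-then-reconcile loop described above can be sketched roughly as follows. This is an illustrative sketch, not the spike's actual code: the names (`ClientPredictor`, `PendingInput`, `applyPhysicsStep`) and the trivial additive physics are placeholders for the real physics step.

```typescript
// Illustrative sketch of client-side prediction with server reconciliation.
// All names here are hypothetical, not the spike's real API.
interface Vec2 { x: number; y: number }
interface PendingInput { sequence: number; dx: number; dy: number }

class ClientPredictor {
    position: Vec2 = { x: 0, y: 0 };
    private pending: PendingInput[] = [];

    // Apply an input locally the moment it occurs, before the server confirms it.
    predict(input: PendingInput) {
        this.pending.push(input);
        this.applyPhysicsStep(input);
    }

    // When the server acknowledges state up to `ackSequence`: drop confirmed
    // inputs, reset to the authoritative position, and replay the unconfirmed tail.
    reconcile(serverPos: Vec2, ackSequence: number) {
        this.pending = this.pending.filter(i => i.sequence > ackSequence);
        this.position = { ...serverPos };
        for (const input of this.pending) this.applyPhysicsStep(input);
    }

    // Stand-in for the real movement physics; here movement is simply additive.
    private applyPhysicsStep(input: PendingInput) {
        this.position.x += input.dx;
        this.position.y += input.dy;
    }
}
```

The key property is that unacknowledged inputs are replayed on top of the server's state, so a correction does not discard the player's most recent keystrokes.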

The spike server features:

  • All features from the Client-Server Spike
  • Input sequence tracking for catch-up rules
  • State history buffering
  • Divergence detection
  • Client-trust "Smooth" correction + Client-distrust "Snap" correction
  • Metrics system
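The state history buffering and divergence detection listed above could look roughly like this. This is a hedged sketch under assumed names (`StateHistory`, `RobotState`), not the spike's implementation; divergence here is plain Euclidean distance in pixels.

```typescript
// Illustrative sketch: fixed-capacity state history plus divergence detection.
interface RobotState { sequence: number; x: number; y: number }

class StateHistory {
    private buffer: RobotState[] = [];
    constructor(private capacity = 120) {} // ~2 seconds of history at 60fps

    // Record a state, evicting the oldest entry once the buffer is full.
    push(state: RobotState) {
        this.buffer.push(state);
        if (this.buffer.length > this.capacity) this.buffer.shift();
    }

    // Look up the state recorded for a given input sequence number.
    at(sequence: number): RobotState | undefined {
        return this.buffer.find(s => s.sequence === sequence);
    }
}

// Distance between predicted and authoritative positions, in pixels.
function divergence(a: RobotState, b: RobotState): number {
    return Math.hypot(a.x - b.x, a.y - b.y);
}
```

A divergence above a small threshold would trigger the smooth correction, and above a larger threshold the snap correction.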

The spike client features:

  • All features from the Client-Server Spike
  • Immediate input prediction and visualization
  • Visual debugging indicators (ghost robots, correction sphere)
  • Toggleable prediction

This spike does not feature:

  • Lag compensation (client-side rewind)
  • Rollback networking
  • Proactive cheating prevention
  • Optimizations for low-bandwidth situations

Quick Setup

Prerequisites:

  • Node.js 18+
  • npm or yarn

Installation

cd client-prediction-spike
npm install

Run

npm run dev

The server will start on port 3000 by default.

Independent Tests

To verify the functionality, carry out the following tests:

Prediction Responsiveness

  • Test: Press WASD keys to move the robot with prediction enabled
  • Verify: Movement is immediate (<8ms visual delay), red robot moves instantly on keypress

Correction Behavior

  • Test:
    • Move robot in weird ways (button spam, etc) with prediction enabled
    • Check corrections and ghost robot
  • Verify: Corrections should be small (< 10px average), smooth, and infrequent

Network-Distrust

  • Test:
    • Simulate network delay (throttle the connection through DevTools: Performance > Environment Settings)
    • Test with 2-6 clients, preferably with multiple moving at the same time
  • Verify: Prediction continues to work for the local client, and other clients' movements remain smooth even when delayed

Measured Metrics

All metrics were observed on Windows with 100ms of artificially added ping.

Latency

  • Without Prediction: 100-150ms perceived input delay
  • With Prediction: 0-8ms perceived input delay (immediate)
  • Improvement: 85-95% reduction in perceived input latency

Corrections

  • Normal Movement: 2-5 corrections per second
  • Rapid Movement: 5-15 corrections per second
  • Average Divergence: 2-8 px

Performance

  • Client CPU Overhead: < 5% increase for prediction logic
  • Network Overhead: ~15% increase due to prediction state messages
  • Memory Usage: ~10MB additional for state history and input buffering

Based on these metrics, client-side prediction provides significant UX improvements at little cost. The correction behavior is almost entirely unnoticeable on moderate networks.

Integration into Fission

These are rough code fragments that could serve as a basis for hooking this implementation into Synthesis.

Prediction System Integration

The multiplayer system should be extended to support client-side prediction:

import { PredictionManager } from "@/systems/multiplayer/PredictionManager";
import { InputSystem } from "@/systems/input/InputSystem";

class MultiplayerSystem {
    private predictionManager: PredictionManager;
    private inputSequence: number = 0;

    constructor() {
        this.predictionManager = new PredictionManager();
    }

    public handleInput(input: InputData) {
        this.inputSequence++;

        if (this.predictionManager.isEnabled()) {
            this.predictionManager.applyPrediction(input, this.inputSequence);
        }

        this.sendToServer({
            type: "input",
            data: {
                ...input,
                sequence: this.inputSequence,
                timestamp: Date.now(),
            },
        });
    }

    public handleServerReconciliation(serverState: ServerState) {
        if (this.predictionManager.isEnabled()) {
            this.predictionManager.reconcileWithServer(serverState);
        }
    }
}

Physics Integration with Prediction

Extend the physics system to support prediction:

import { PhysicsSystem } from "@/systems/physics/PhysicsSystem";
import Mechanism from "@/systems/physics/Mechanism";

class PredictivePhysicsSystem extends PhysicsSystem {
    private predictionState: Map<string, RobotState> = new Map();
    private serverState: Map<string, RobotState> = new Map();

    public applyPredictedMovement(robotId: string, input: InputData) {
        const robot = this.robots.get(robotId);
        if (!robot) return;

        this.predictionState.set(robotId, this.getRobotState(robot));

        this.applyMovementPhysics(robot, input, this.deltaTime);
    }

    public reconcileWithServer(robotId: string, serverState: RobotState) {
        const robot = this.robots.get(robotId);
        const predictedState = this.predictionState.get(robotId);

        if (!robot || !predictedState) return;

        const divergence = this.calculateDivergence(
            predictedState,
            serverState
        );

        if (divergence > this.correctionThreshold) {
            this.applyStateCorrection(robot, serverState, divergence);
        }
    }

    private applyStateCorrection(
        robot: MirabufSceneObject,
        targetState: RobotState,
        divergence: number
    ) {
        const body = robot.mechanism?.body;
        if (!body) return;

        if (divergence > this.snapThreshold) {
            body.SetPosition(ThreeVector3_JoltVec3(targetState.position));
            body.SetRotation(ThreeQuaternion_JoltQuat(targetState.rotation));
            body.SetLinearVelocity(ThreeVector3_JoltVec3(targetState.velocity));
        } else {
            this.smoothToTarget(robot, targetState, 0.3);
        }
    }
}

Input System Integration

Modify the input system to support sequence numbering:

import { MultiplayerSystem } from "@/systems/multiplayer/MultiplayerSystem";

class InputSystem {
    private multiplayerSystem?: MultiplayerSystem;
    private lastInputHash: string = "";

    public Update(): void {
        const currentInputs = this.getActiveInputs();
        const inputHash = this.hashInputs(currentInputs);

        if (inputHash !== this.lastInputHash) {
            this.lastInputHash = inputHash;

            if (this.multiplayerSystem?.isConnected()) {
                this.multiplayerSystem.handleInput(currentInputs);
            } else {
                this.processLocalInput(currentInputs);
            }
        }
    }

    private hashInputs(inputs: InputData): string {
        return JSON.stringify(inputs);
    }
}

Global Configuration Changes

Dependencies

Modifications to package.json:

{
    "dependencies": {
        // ...
        "ws": "^8.18.0",
        "uuid": "^9.0.1",
        "express": "^4.18.2"
    }
}

Build Configuration

Modifications to vite.config.ts:

export default defineConfig({
    // ...
    server: {
        proxy: {
            "/game": {
                target: "http://localhost:3000",
                ws: true,
            },
        },
    },
});

Environment Configuration

Add prediction settings to environment configs:

// config/prediction.ts
export const PredictionConfig = {
    enabled: process.env.NODE_ENV === "development",
    bufferSize: 120, // 2 seconds at 60fps
    correctionThreshold: 5, // pixels
    snapThreshold: 20, // pixels
    smoothingFactor: 0.3,
    debugVisuals: true,
    metricsCollection: true,
};

Considerations and Optimizations

Beyond the missing features, these are things that should probably be implemented before predictive multiplayer goes public.

Performance Optimizations

  • Compress prediction state updates using delta compression
  • Only send prediction updates when state changes significantly
  • Adjust buffer size based on network speed
  • Group corrections into single packets
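The "only send updates when state changes significantly" idea above can be sketched as a simple threshold gate on outgoing packets. This is a hypothetical illustration (the `ThresholdedSender` name and 5px default are assumptions, not part of the spike):

```typescript
// Illustrative sketch: suppress state updates whose positional delta
// from the last transmitted state stays under a threshold.
interface State { x: number; y: number }

class ThresholdedSender {
    private lastSent?: State;
    public sent: State[] = []; // stands in for actual network transmission

    constructor(private minDelta = 5) {} // pixels; tune per game feel

    maybeSend(state: State) {
        if (this.lastSent &&
            Math.hypot(state.x - this.lastSent.x, state.y - this.lastSent.y) < this.minDelta) {
            return; // change too small to matter; skip this packet
        }
        this.lastSent = { ...state };
        this.sent.push(this.lastSent);
    }
}
```

Delta compression would build on the same `lastSent` bookkeeping: instead of skipping small changes, it would transmit only the fields that differ from the last acknowledged state.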

Production

  • Validate client predictions server-side more strictly
  • Warn host of "cheating suspected" players
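One cheap form of the stricter server-side validation suggested above is a plausibility check that rejects inputs whose implied speed exceeds the robot's physical maximum. This is a hedged sketch; the function name, the px/s cap, and the input shape are all assumptions for illustration:

```typescript
// Illustrative server-side plausibility check for client inputs.
interface TimedInput { dx: number; dy: number; timestamp: number }

function isPlausible(
    input: TimedInput,
    prev: TimedInput | undefined,
    maxSpeed = 300 // px/s cap; would be tuned per robot in practice
): boolean {
    if (!prev) return true; // nothing to compare against yet
    const dtSeconds = (input.timestamp - prev.timestamp) / 1000;
    if (dtSeconds <= 0) return false; // out-of-order or duplicated timestamp
    const speed = Math.hypot(input.dx, input.dy) / dtSeconds;
    return speed <= maxSpeed;
}
```

Inputs that fail the check could be dropped (forcing a snap correction on that client) and counted toward a "cheating suspected" warning for the host.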

This concludes the Spike PR. Do not merge.

@ariesninjadev ariesninjadev self-assigned this Jul 17, 2025
@ariesninjadev ariesninjadev requested review from a team as code owners July 17, 2025 16:06
@ariesninjadev ariesninjadev added no-merge gameplay Relating to the playability of Synthesis labels Jul 17, 2025
@Autodesk Autodesk deleted a comment from autodesk-chorus bot Jul 17, 2025
@rutmanz
Member

rutmanz commented Jul 17, 2025


looks like there is steady state positional error with predictions on. The metrics tab identifies the error as 6.1px on one and 4.6px on the other. I imagine it would make sense to correct over time for small errors like this

@ariesninjadev
Contributor Author

ariesninjadev commented Jul 17, 2025

looks like there is steady state positional error with predictions on. The metrics tab identifies the error as 6.1px on one and 4.6px on the other. I imagine it would make sense to correct over time for small errors like this

This is intended behavior that conserves network traffic and reduces compute cost. It's a minor benefit, but keep in mind you won't see the ghost during gameplay and it won't actually affect anything. As soon as any motion or collision occurs, the steady state is interrupted, and the server, which owns the authoritative "correct" position, can issue a snap correction if the client's slight offset would result in a meaningfully different collision (all of which occurs in ~3 frames and is not noticeable).

@autodesk-chorus

Chorus detected one or more security issues with this pull request. See the Checks tab for more details.

As a reminder, please follow the secure code review process as part of the Secure Coding Non-Negotiable requirement.

Member

@rutmanz rutmanz left a comment


I think that this model seems like a pretty solid proof of concept, and the websocket style seems to work pretty well. Hard to say how it scales right now, but the concept is proven.

@Dhruv-0-Arora Dhruv-0-Arora self-requested a review July 24, 2025 18:01
@ariesninjadev ariesninjadev marked this pull request as draft July 24, 2025 19:56
Member

@BrandonPacewic BrandonPacewic left a comment


Solid proof of concept, certainly happy with this. Will move to schedule a meeting once we have everything for multiplayer testing done so that we can plan what arch we want to go with.

@BrandonPacewic BrandonPacewic added the testing-spike Something not to be merged but here for reference for future feature development. label Jul 25, 2025