49 changes: 25 additions & 24 deletions apps/typegpu-docs/src/content/docs/fundamentals/utils.mdx
@@ -3,10 +3,10 @@ title: Utilities
description: A list of various utilities provided by TypeGPU.
---

-## *prepareDispatch*
+## *root.createGuardedComputePipeline*

-The `prepareDispatch` function streamlines running simple computations on the GPU.
-Under the hood, it wraps the callback in a `TgpuFn`, creates a compute pipeline, and returns an object with a `dispatchThreads` method that executes the pipeline.
+The `root.createGuardedComputePipeline` method streamlines running simple computations on the GPU.
+Under the hood, it creates a compute pipeline that calls the provided callback only if the current thread ID is within the requested range, and returns an object with a `dispatchThreads` method that executes the pipeline.
Since the pipeline is reused, there’s no additional overhead for subsequent calls.

```ts twoslash
@@ -16,16 +16,17 @@ const root = await tgpu.init();
// ---cut---
const data = root.createMutable(d.arrayOf(d.u32, 8), [0, 1, 2, 3, 4, 5, 6, 7]);

-const doubleUp = root['~unstable'].prepareDispatch((x) => {
-  'use gpu';
-  data.$[x] *= 2;
-});
+const doubleUpPipeline = root['~unstable']
+  .createGuardedComputePipeline((x) => {
+    'use gpu';
+    data.$[x] *= 2;
+  });

-doubleUp.dispatchThreads(8);
-doubleUp.dispatchThreads(8);
-doubleUp.dispatchThreads(4);
+doubleUpPipeline.dispatchThreads(8);
+doubleUpPipeline.dispatchThreads(8);
+doubleUpPipeline.dispatchThreads(4);

-// the command encoder will queue the read after `doubleUp`
+// the command encoder will queue the read after `doubleUpPipeline`
console.log(await data.read()); // [0, 8, 16, 24, 16, 20, 24, 28]
```
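
The guard that gives `createGuardedComputePipeline` its name can be sketched on the CPU. This is a hypothetical model, not TypeGPU's implementation: the workgroup size of 64 is an assumed value, and real dispatches run threads in parallel on the GPU.

```ts
// CPU model of a guarded dispatch: the thread count is rounded up to a
// whole number of workgroups, so the extra threads must be guarded out
// to avoid touching memory beyond the requested range.
function dispatchThreadsSketch(count: number, callback: (x: number) => void) {
  const workgroupSize = 64; // assumed; the real value is an internal detail
  const total = Math.ceil(count / workgroupSize) * workgroupSize;
  for (let id = 0; id < total; id++) {
    if (id < count) { // the guard
      callback(id);
    }
  }
}

const values = [0, 1, 2, 3, 4, 5, 6, 7];
dispatchThreadsSketch(8, (x) => { values[x] *= 2; });
dispatchThreadsSketch(8, (x) => { values[x] *= 2; });
dispatchThreadsSketch(4, (x) => { values[x] *= 2; });
// values is now [0, 8, 16, 24, 16, 20, 24, 28], matching the GPU example
```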

@@ -34,7 +35,7 @@ Remember to mark the callback with the `'use gpu'` directive to let TypeGPU know
:::

The callback can have up to three arguments (dimensions).
-`prepareDispatch` can simplify writing a pipeline, helping reduce serialization overhead when initializing buffers with data.
+`createGuardedComputePipeline` can simplify writing a pipeline, helping reduce serialization overhead when initializing buffers with data.
Buffer initialization commonly uses random number generators.
For that, you can use the [`@typegpu/noise`](TypeGPU/ecosystem/typegpu-noise) library.

Expand All @@ -51,7 +52,7 @@ const waterLevelMutable = root.createMutable(
d.arrayOf(d.arrayOf(d.f32, 512), 1024),
);

-root['~unstable'].prepareDispatch((x, y) => {
+root['~unstable'].createGuardedComputePipeline((x, y) => {
'use gpu';
randf.seed2(d.vec2f(x, y).div(1024));
waterLevelMutable.$[x][y] = 10 + randf.sample();
@@ -62,7 +63,7 @@ root['~unstable'].createGuardedComputePipeline((x, y) => {
console.log(await waterLevelMutable.read());
```

-The result of `prepareDispatch` can have bind groups bound using the `with` method.
+The result of `createGuardedComputePipeline` can have bind groups bound using the `with` method.

```ts twoslash
import tgpu from 'typegpu';
@@ -71,35 +72,35 @@ import * as std from 'typegpu/std';
const root = await tgpu.init();
// ---cut---
const layout = tgpu.bindGroupLayout({
-  buffer: { storage: d.arrayOf(d.u32), access: 'mutable' },
+  values: { storage: d.arrayOf(d.u32), access: 'mutable' },
});
const buffer1 = root
.createBuffer(d.arrayOf(d.u32, 3), [1, 2, 3]).$usage('storage');
const buffer2 = root
.createBuffer(d.arrayOf(d.u32, 4), [2, 4, 8, 16]).$usage('storage');
const bindGroup1 = root.createBindGroup(layout, {
-  buffer: buffer1,
+  values: buffer1,
});
const bindGroup2 = root.createBindGroup(layout, {
-  buffer: buffer2,
+  values: buffer2,
});

-const test = root['~unstable'].prepareDispatch((x) => {
+const doubleUpPipeline = root['~unstable'].createGuardedComputePipeline((x) => {
'use gpu';
-  layout.$.buffer[x] *= 2;
+  layout.$.values[x] *= 2;
});

-test.with(bindGroup1).dispatchThreads(3);
-test.with(bindGroup2).dispatchThreads(4);
+doubleUpPipeline.with(bindGroup1).dispatchThreads(3);
+doubleUpPipeline.with(bindGroup2).dispatchThreads(4);

console.log(await buffer1.read()); // [2, 4, 6];
console.log(await buffer2.read()); // [4, 8, 16, 32];
```

-It is recommended NOT to use `prepareDispatch` for:
+It is recommended NOT to use guarded compute pipelines for:

- More complex compute shaders.
-  When using `prepareDispatch`, it is impossible to change workgroup sizes or to use [slots](/TypeGPU/fundamentals/slots).
+  When using guarded compute pipelines, it is impossible to change workgroup sizes or to effectively utilize workgroup shared memory.
For such cases, a manually created pipeline would be more suitable.

- Small calls.
@@ -129,7 +130,7 @@ import * as d from 'typegpu/data';
const root = await tgpu.init();
// ---cut---
const callCountMutable = root.createMutable(d.u32, 0);
-const compute = root['~unstable'].prepareDispatch(() => {
+const compute = root['~unstable'].createGuardedComputePipeline(() => {
'use gpu';
callCountMutable.$ += 1;
console.log('Call number', callCountMutable.$);
10 changes: 6 additions & 4 deletions apps/typegpu-docs/src/examples/algorithms/matrix-next/index.ts
@@ -60,10 +60,12 @@ function createPipelines() {
state.gpuTime = Number(end - start) / 1_000_000;
};

-const optimized = root['~unstable']
-  .withCompute(computeSharedMemory)
-  .createPipeline();
-const simple = root['~unstable'].withCompute(computeSimple).createPipeline();
+const optimized = root['~unstable'].createComputePipeline({
+  compute: computeSharedMemory,
+});
+const simple = root['~unstable'].createComputePipeline({
+  compute: computeSimple,
+});

return {
'gpu-optimized': hasTimestampQuery
@@ -92,9 +92,9 @@ const subgroupCompute = tgpu['~unstable'].computeFn({
});

const pipelines = {
-  default: root['~unstable'].withCompute(defaultCompute).createPipeline(),
+  default: root['~unstable'].createComputePipeline({ compute: defaultCompute }),
   subgroup: root.enabledFeatures.has('subgroups')
-    ? root['~unstable'].withCompute(subgroupCompute).createPipeline()
+    ? root['~unstable'].createComputePipeline({ compute: subgroupCompute })
: null,
};

@@ -101,8 +101,7 @@ export class Executor {
if (!this.#pipelineCache.has(distribution)) {
const pipeline = this.#root['~unstable']
.with(this.#distributionSlot, distribution)
-        .withCompute(this.#dataMoreWorkersFunc)
-        .createPipeline();
+        .createComputePipeline({ compute: this.#dataMoreWorkersFunc });
this.#pipelineCache.set(distribution, pipeline);
}

@@ -117,8 +116,7 @@ export class Executor {
if (!pipeline) {
pipeline = this.#root['~unstable']
.with(this.#distributionSlot, distribution)
-        .withCompute(this.#dataMoreWorkersFunc as TgpuComputeFn)
-        .createPipeline();
+        .createComputePipeline({ compute: this.#dataMoreWorkersFunc });
this.#pipelineCache.set(distribution, pipeline);
}

@@ -176,10 +176,11 @@ context.configure({
alphaMode: 'premultiplied',
});

-const pipeline = root['~unstable']
-  .withVertex(fullScreenTriangle, {})
-  .withFragment(fragmentFn, { format: presentationFormat })
-  .createPipeline();
+const pipeline = root['~unstable'].createRenderPipeline({
+  vertex: fullScreenTriangle,
+  fragment: fragmentFn,
+  targets: { format: presentationFormat },
+});

if (navigator.mediaDevices.getUserMedia) {
video.srcObject = await navigator.mediaDevices.getUserMedia({
17 changes: 9 additions & 8 deletions apps/typegpu-docs/src/examples/image-processing/blur/index.ts
@@ -172,14 +172,15 @@ const ioBindGroups = [
}),
];

-const computePipeline = root['~unstable']
-  .withCompute(computeFn)
-  .createPipeline();
+const computePipeline = root['~unstable'].createComputePipeline({
+  compute: computeFn,
+});

-const renderPipeline = root['~unstable']
-  .withVertex(fullScreenTriangle, {})
-  .withFragment(renderFragment, { format: presentationFormat })
-  .createPipeline();
+const renderPipeline = root['~unstable'].createRenderPipeline({
+  vertex: fullScreenTriangle,
+  fragment: renderFragment,
+  targets: { format: presentationFormat },
+});

function render() {
settingsUniform.write({
@@ -191,7 +192,7 @@ function render() {

for (const i of indices) {
computePipeline
-      .with(ioLayout, ioBindGroups[i])
+      .with(ioBindGroups[i])
.dispatchWorkgroups(
Math.ceil(srcWidth / settings.blockDim),
Math.ceil(srcHeight / 4),
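The workgroup counts passed to `dispatchWorkgroups` above come from ceiling division: each workgroup covers `blockDim` pixels horizontally and 4 rows vertically, and partial tiles at the image edges still need a workgroup of their own. A sketch with assumed example dimensions (not values taken from the blur example):

```ts
// Ceiling division for workgroup counts; srcWidth, srcHeight, and
// blockDim are assumed example values.
const srcWidth = 640;
const srcHeight = 480;
const blockDim = 128;

const workgroupsX = Math.ceil(srcWidth / blockDim); // 640 / 128 = 5
const workgroupsY = Math.ceil(srcHeight / 4); // 480 / 4 = 120
```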
68 changes: 36 additions & 32 deletions apps/typegpu-docs/src/examples/rendering/3d-fish/index.ts
@@ -98,33 +98,34 @@ function enqueuePresetChanges() {
const buffer0mutable = fishDataBuffers[0].as('mutable');
const buffer1mutable = fishDataBuffers[1].as('mutable');
const seedUniform = root.createUniform(d.f32);
-const randomizeFishPositionsOnGPU = root['~unstable'].prepareDispatch((x) => {
-  'use gpu';
-  randf.seed2(d.vec2f(d.f32(x), seedUniform.$));
-  const data = ModelData({
-    position: d.vec3f(
-      randf.sample() * p.aquariumSize.x - p.aquariumSize.x / 2,
-      randf.sample() * p.aquariumSize.y - p.aquariumSize.y / 2,
-      randf.sample() * p.aquariumSize.z - p.aquariumSize.z / 2,
-    ),
-    direction: d.vec3f(
-      randf.sample() * 0.1 - 0.05,
-      randf.sample() * 0.1 - 0.05,
-      randf.sample() * 0.1 - 0.05,
-    ),
-    scale: p.fishModelScale * (1 + (randf.sample() - 0.5) * 0.8),
-    variant: randf.sample(),
-    applySinWave: 1,
-    applySeaFog: 1,
-    applySeaDesaturation: 1,
-  });
-  buffer0mutable.$[x] = data;
-  buffer1mutable.$[x] = data;
-});
+const randomizeFishPositionsPipeline = root['~unstable']
+  .createGuardedComputePipeline((x) => {
+    'use gpu';
+    randf.seed2(d.vec2f(d.f32(x), seedUniform.$));
+    const data = ModelData({
+      position: d.vec3f(
+        randf.sample() * p.aquariumSize.x - p.aquariumSize.x / 2,
+        randf.sample() * p.aquariumSize.y - p.aquariumSize.y / 2,
+        randf.sample() * p.aquariumSize.z - p.aquariumSize.z / 2,
+      ),
+      direction: d.vec3f(
+        randf.sample() * 0.1 - 0.05,
+        randf.sample() * 0.1 - 0.05,
+        randf.sample() * 0.1 - 0.05,
+      ),
+      scale: p.fishModelScale * (1 + (randf.sample() - 0.5) * 0.8),
+      variant: randf.sample(),
+      applySinWave: 1,
+      applySeaFog: 1,
+      applySeaDesaturation: 1,
+    });
+    buffer0mutable.$[x] = data;
+    buffer1mutable.$[x] = data;
+  });

const randomizeFishPositions = () => {
seedUniform.write((performance.now() % 10000) / 10000);
-  randomizeFishPositionsOnGPU.dispatchThreads(p.fishAmount);
+  randomizeFishPositionsPipeline.dispatchThreads(p.fishAmount);
enqueuePresetChanges();
};

@@ -181,24 +182,27 @@ randomizeFishPositions();

// pipelines

-const renderPipeline = root['~unstable']
-  .withVertex(vertexShader, modelVertexLayout.attrib)
-  .withFragment(fragmentShader, { format: presentationFormat })
-  .withDepthStencil({
-    format: 'depth24plus',
-    depthWriteEnabled: true,
-    depthCompare: 'less',
-  })
-  .withPrimitive({ topology: 'triangle-list' })
-  .createPipeline();
+const renderPipeline = root['~unstable'].createRenderPipeline({
+  attribs: modelVertexLayout.attrib,
+  vertex: vertexShader,
+  fragment: fragmentShader,
+  targets: { format: presentationFormat },
+
+  depthStencil: {
+    format: 'depth24plus',
+    depthWriteEnabled: true,
+    depthCompare: 'less',
+  },
+});

let depthTexture = root.device.createTexture({
size: [canvas.width, canvas.height, 1],
format: 'depth24plus',
usage: GPUTextureUsage.RENDER_ATTACHMENT,
});

-const simulateAction = root['~unstable'].prepareDispatch(simulate);
+const simulatePipeline = root['~unstable']
+  .createGuardedComputePipeline(simulate);

// bind groups

@@ -254,7 +258,7 @@ function frame(timestamp: DOMHighResTimeStamp) {
lastTimestamp = timestamp;
cameraBuffer.write(camera);

-  simulateAction
+  simulatePipeline
.with(computeBindGroups[odd ? 1 : 0])
.dispatchThreads(p.fishAmount);

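The `computeBindGroups[odd ? 1 : 0]` selection above is a double-buffering (ping-pong) pattern: each simulation step reads the last frame's fish data from one buffer and writes the new state to the other, then the roles swap. A CPU sketch of the idea, with an illustrative `+1` update standing in for the real boids simulation:

```ts
// Ping-pong between two buffers: read from one, write to the other,
// then flip which buffer is the source. Values are illustrative.
const buffers: number[][] = [
  [1, 2, 3],
  [0, 0, 0],
];
let odd = false;

function simulateStep() {
  const src = buffers[odd ? 1 : 0];
  const dst = buffers[odd ? 0 : 1];
  for (let i = 0; i < src.length; i++) {
    dst[i] = src[i] + 1; // stand-in for the real update rule
  }
  odd = !odd;
}

simulateStep(); // writes [2, 3, 4] into buffers[1]
simulateStep(); // writes [3, 4, 5] into buffers[0]
```

Writing to a buffer that is simultaneously being read is a hazard on the GPU, which is why the example keeps two buffers rather than updating one in place.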
13 changes: 11 additions & 2 deletions apps/typegpu-docs/src/examples/rendering/3d-fish/render.ts
@@ -11,6 +11,15 @@ import {
} from './schemas.ts';
import { applySinWave, PosAndNormal } from './tgsl-helpers.ts';

+type Varyings = {
+  worldPosition: d.v3f;
+  worldNormal: d.v3f;
+  variant: number;
+  textureUV: d.v2f;
+  applySeaFog: number; // 0/1
+  applySeaDesaturation: number; // 0/1
+};

export const vertexShader = tgpu['~unstable'].vertexFn({
in: { ...ModelVertexInput, instanceIndex: d.builtin.instanceIndex },
out: ModelVertexOutput,
@@ -97,6 +106,7 @@ export const fragmentShader = tgpu['~unstable'].fragmentFn({
in: ModelVertexOutput,
out: d.vec4f,
})((input) => {
+  'use gpu';
// shade the fragment in Phong reflection model
// https://en.wikipedia.org/wiki/Phong_reflection_model
// then apply sea fog and sea desaturation
@@ -137,8 +147,7 @@ export const fragmentShader = tgpu['~unstable'].fragmentFn({

let desaturatedColor = lightedColor;
if (input.applySeaDesaturation === 1) {
-    const desaturationFactor = -std.atan2((distanceFromCamera - 5) / 10, 1) /
-      3;
+    const desaturationFactor = -std.atan2((distanceFromCamera - 5) / 10, 1) / 3;
const hsv = rgbToHsv(desaturatedColor);
hsv.y += desaturationFactor / 2;
hsv.z += desaturationFactor;
17 changes: 9 additions & 8 deletions apps/typegpu-docs/src/examples/rendering/box-raytracing/index.ts
@@ -265,9 +265,13 @@ const fragmentFunction = tgpu['~unstable'].fragmentFn({

// pipeline

-const pipeline = root['~unstable']
-  .withVertex(mainVertex, {})
-  .withFragment(fragmentFunction, {
+const pipeline = root['~unstable'].createRenderPipeline({
+  primitive: {
+    topology: 'triangle-strip',
+  },
+  vertex: mainVertex,
+  fragment: fragmentFunction,
+  targets: {
     format: presentationFormat,
     blend: {
       color: {
@@ -281,11 +285,8 @@ const pipeline = root['~unstable']
       operation: 'add',
     },
   },
-  })
-  .withPrimitive({
-    topology: 'triangle-strip',
-  })
-  .createPipeline();
+  },
+});

// UI
