9.7 细节

Some Details

为了完成对 WebGPU 的介绍,我们将简要地看看一些有用但并未包含在前面章节中的内容。在最后两个小节中,你会发现几个新的示例程序。

To finish this introduction to WebGPU, we'll look briefly at a few useful things to know that didn't make it into earlier sections. You will find several new sample programs in the last two subsections.

9.7.1 丢失设备

Lost Device

如果您开始编写更严肃的应用程序,您应该意识到 WebGPU 设备可能会“丢失”。当这种情况发生时,设备停止工作,您尝试对设备执行的任何操作都将被忽略。使用该设备创建的资源,如缓冲区和管线,将不再有效。通常情况下,这种情况很少见。例如,如果您因为不再需要设备而调用 device.destroy(),它就可能发生。如果用户拔掉外部显示器,它也可能发生。更令人不安的是,WebGPU 规范指出:“如果着色器执行在用户代理确定的合理时间内没有结束,设备可能会丢失。”“用户代理”是运行您程序的网络浏览器。这并没有给出太多明确的预期指导。

属性 device.lost 是一个承诺(promise),在设备丢失时(如果发生的话)解决。它可以用来设置一个在设备丢失时被调用的函数。它的用法大致如下,就放在创建设备之后:

device.lost.then(
    (info) => {
        if ( info.reason !== "destroyed" ) {
            ... // (可能尝试恢复)
        }
    }
);

info.reason 的唯一可能值是 "destroyed"(意味着调用了 device.destroy())和 "unknown"。如果原因不是 "destroyed",您可能尝试创建一个新的设备并重新初始化您的应用程序 —— 冒着同样的问题再次发生的风险。

希望 device.lost 的行为在未来会有更明确的定义。
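
下面是把这个处理函数接入程序启动代码的一种可能写法,仅作示意:其中的 initWebGPU() 和 initApplication() 都是假设的占位函数,代表你自己程序里的初始化工作。

async function initWebGPU() {
    let adapter = await navigator.gpu.requestAdapter();
    let device = await adapter.requestDevice();
    device.lost.then( (info) => {
        console.log("WebGPU 设备丢失: " + info.message);
        if (info.reason !== "destroyed") {
            // 假设的恢复尝试:重新创建设备,并重建所有资源。
            initWebGPU().then( initApplication );
        }
    });
    return device;
}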

If you start writing more serious applications, you should be aware that it's possible for a WebGPU device to become "lost." When that happens, the device stops working, and anything that you try to do with the device will be ignored. Resources such as buffers and pipelines that were created with the device will no longer be valid. Ordinarily, this will be rare. It can happen, for example, if you call device.destroy() because you no longer need the device. It could happen if the user unplugs an external display. More disturbing, the WebGPU specification says, "The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent." The "user agent" is the web browser that is running your program. That does not give much definite guidance about what to expect.

The property device.lost is a promise that resolves if and when the device becomes lost. It can be used to set up a function to be called if the device is lost. It might be used something like this, just after creating the device:

device.lost.then(
    (info) => {
        if ( info.reason !== "destroyed" ) {
            ... // (possibly try to recover)
        }
    }
);

The only possible values of info.reason are "destroyed" (meaning that device.destroy() was called) and "unknown." If the reason is not "destroyed," you might try to create a new device and reinitialize your application—at the risk that the same thing will go wrong again.

Hopefully, the behavior of device.lost will be better defined in the future.
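
Here is one way the handler might be wired into a program's startup code, as a sketch only: the initWebGPU() and initApplication() functions are hypothetical stand-ins for whatever initialization your own program does.

async function initWebGPU() {
    let adapter = await navigator.gpu.requestAdapter();
    let device = await adapter.requestDevice();
    device.lost.then( (info) => {
        console.log("WebGPU device was lost: " + info.message);
        if (info.reason !== "destroyed") {
            // Hypothetical recovery attempt: make a new device and rebuild all resources.
            initWebGPU().then( initApplication );
        }
    });
    return device;
}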

9.7.2 错误处理

Error Handling

关于 WebGPU 错误,首先要记住的是,它们几乎总是会报告在 Web 浏览器的控制台中。WebGPU 的错误消息信息丰富,通常会给您提供解决问题的提示。第二件要知道的事情是,WebGPU 按照严格规定的标准来验证程序。如果一个程序在某个平台上通过了有效性检查,它很可能在所有平台上都能通过。第三件事是,当 WebGPU 发现有效性错误时,它不会自动停止处理。它会将导致问题的对象标记为无效,并尝试继续。尝试使用无效对象会产生更多的错误消息。所以,如果您的程序产生了一连串错误消息,请专注于第一条。

您可以通过给对象加标签来改善 WebGPU 生成的错误消息。您可以通过向对象添加 label 属性,用您选择的文本字符串给几乎所有 WebGPU 对象加上标签。如果 WebGPU 在该对象中发现验证错误,它会把标签包含在错误消息中。例如,如果您的程序使用了几个绑定组,而其中一个引起了问题,给绑定组加上标签可以帮助您追踪错误:

bindGroupA = device.createBindGroup({
    label: "bind group for outlines",
    layout: 
    .
    .
    .

与其依赖 Web 浏览器控制台,也可以让程序自己检查错误。事情的复杂之处在于,错误是由程序的 GPU 端检测到的。要把错误报告传回 JavaScript 端,您可以使用 device.pushErrorScope() 向 GPU 添加一个错误检查。之后,您可以通过调用 device.popErrorScope() 来取回结果。pushErrorScope() 接受一个参数,指明您想要检测的错误类型。参数可以是 "validation"、"out-of-memory" 或 "internal";"validation" 是最常见的。popErrorScope() 返回一个承诺,当相应的 push 之后提交给 GPU 的所有操作都完成时,该承诺才会解决。如果没有检测到错误,承诺返回的值将为 null;否则,它将是一个带有描述错误的 message 属性的对象。

例如,当我开发程序时,我喜欢检查我的着色器代码中的编译错误。我可以通过在尝试编译之前推入 "validation" 错误范围来做到这一点:

device.pushErrorScope("validation");
shader = device.createShaderModule({
    code: shaderSource
});
let error = await device.popErrorScope();
if (error) {
    throw Error("Compilation error in shader: " + error.message);
}

一旦程序工作正常,就可以移除错误检查。

当 WebGPU 遇到未被错误范围捕获的错误时,它会产生一个 "uncapturederror" 事件。您可以向设备添加事件处理程序来响应未捕获的错误:device.onuncapturederror = function(event) { ... }。但是,像往常一样,请记住,通常观察 Web 浏览器控制台就足够了!
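
作为补充,下面是这种处理程序的一个最小示意。事件的 error 属性带有描述问题的 message;把它记录下来(或显示给用户)基本上就是能做的全部了。

device.onuncapturederror = function(event) {
    console.error("未捕获的 WebGPU 错误: " + event.error.message);
};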

The first thing to remember about WebGPU errors is that they will almost always be reported in the Web browser's console. WebGPU error messages are informative and will often give you hints about how to fix the problem. The second thing to know is that WebGPU validates programs according to tightly specified criteria. If a program passes validity checks on one platform, it is likely to do so on every platform. The third thing is that when WebGPU finds a validity error, it does not automatically stop processing. It will mark the object that caused the problem as invalid and will try to continue. Attempts to use the invalid object will produce more error messages. So, if your program produces a series of error messages, concentrate on the first one.

You can improve the error messages generated by WebGPU by labeling your objects. You can label just about any WebGPU object with a text string of your choosing by adding a label property to the object. If WebGPU finds a validation error in the object, it will include the label in the error message. For example, if your program uses several bind groups and one of them causes a problem, adding labels to your bind groups can help you track down the error:

bindGroupA = device.createBindGroup({
    label: "bind group for outlines",
    layout: 
    .
    .
    .

Instead of relying on the Web browser console, it is possible to have a program check for errors. Things are complicated by the fact that errors are detected by the GPU side of the program. To get the error report back to the JavaScript side, you can use device.pushErrorScope() to add an error check to the GPU. Later, you can retrieve the result by calling device.popErrorScope(). pushErrorScope() takes a parameter indicating the type of error that you want to detect. The parameter can be "validation", "out-of-memory", or "internal"; "validation" is the most common. popErrorScope() returns a promise that resolves when all operations submitted to the GPU after the corresponding push have been completed. The value returned by the promise will be null if no error was detected; otherwise, it will be an object with a message property that describes the error.

For example, when I am developing a program, I like to check for compilation errors in my shader code. I can do that by pushing a "validation" error scope before attempting the compilation:

device.pushErrorScope("validation");
shader = device.createShaderModule({
code: shaderSource
});
let error = await device.popErrorScope();
if (error) {
throw Error("Compilation error in shader: " + error.message);
}

The error check could be removed once the program is working.

When WebGPU encounters an error that is not captured by an error scope, it generates an "uncapturederror" event. You can add an event handler to the device to respond to uncaptured errors: device.onuncapturederror = function(event) { ... }. But, as always, remember that watching the Web browser console is usually good enough!
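
For completeness, here is a minimal sketch of such a handler. The event's error property carries a message describing the problem; logging it (or showing it to the user) is about all that can usefully be done.

device.onuncapturederror = function(event) {
    console.error("Uncaptured WebGPU error: " + event.error.message);
};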

9.7.3 限制和特性

Limits and Features

WebGPU 设备受某些“限制”的约束,例如可以附加到渲染管线的最大顶点缓冲区数量,或者计算工作组的最大大小。当您通过调用 adapter.requestDevice() 而不带参数来创建设备时,返回的设备具有一套默认的限制,这些限制保证被每个 WebGPU 实现所支持。例如,默认的工作组最大大小是 256。对于大多数应用程序来说,默认限制就足够了。然而,如果您绝对需要一个大小为 1024 的工作组,您可以尝试请求一个具有该限制的设备:

device = await adapter.requestDevice({
    requiredLimits: {
        maxComputeInvocationsPerWorkgroup: 1024
    }
});

如果 WebGPU 适配器不支持所请求的限制,这会抛出一个异常。如果它在您的 Web 浏览器中成功了,则意味着您编写的程序在其他不支持这个更高限制的平台上运行时可能会失败。

对象 adapter.limits 包含适配器实际支持的限制。(要查看列表,请将该对象写入控制台。)在请求增加限制之前,您应该检查此对象,看看适配器是否支持它。

WebGPU 还定义了一组"特性"(features),它们代表可选的设备能力。例如,特性 "texture-compression-bc" 使得可以使用某种类型的压缩纹理。(本书未涵盖压缩纹理。)除非在创建设备时请求,否则不能使用特性:

device = await adapter.requestDevice({
    requiredFeatures: ["texture-compression-bc"] // 特性名称数组
});

同样,如果特性不可用,这将抛出异常,并且请求特性会限制您的程序可以在哪些设备上运行。布尔值表达式 adapter.features.has(name) 可以用来测试适配器是否支持给定名称的特性。有关可能特性的列表,请参见 WebGPU 文档。
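
下面是一个示意,展示程序在请求更高的限制或可选特性之前,可以如何先检查适配器,在不支持时退回默认值;其中的 1024 只是沿用上文的例子:

let adapter = await navigator.gpu.requestAdapter();
let descriptor = {};  // 要传给 requestDevice() 的设备描述符

if (adapter.limits.maxComputeInvocationsPerWorkgroup >= 1024) {
    descriptor.requiredLimits = { maxComputeInvocationsPerWorkgroup: 1024 };
}
if (adapter.features.has("texture-compression-bc")) {
    descriptor.requiredFeatures = [ "texture-compression-bc" ];
}

let device = await adapter.requestDevice(descriptor);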

A WebGPU device is subject to certain "limits," such as the maximum number of vertex buffers that can be attached to a render pipeline or the maximum size of a compute workgroup. When you create a device by calling adapter.requestDevice() with no parameter, the device that is returned has a default set of limits which are guaranteed to be supported by every WebGPU implementation. For example, the default maximum size for a workgroup is 256. For most applications, the default limits are fine. However, if you absolutely need a workgroup of size 1024, you can try requesting a device with that limit:

device = await adapter.requestDevice({
    requiredLimits: {
        maxComputeInvocationsPerWorkgroup: 1024
    }
});

If the WebGPU adapter doesn't support the requested limit, this will throw an exception. If it succeeds in your Web browser, it means that you are writing a program that might fail elsewhere, when run on a platform that doesn't support the increased limit.

The object adapter.limits contains the actual limits supported by the adapter. (To see a list, write the object to the console.) Before requesting an increased limit, you should check this object to see whether the adapter supports it.

WebGPU also defines a set of "features," which represent optional device capabilities. For example, the feature "texture-compression-bc" makes it possible to use a certain type of compressed texture. (Compressed textures are not covered in this book.) Features cannot be used unless they are requested when the device is created:

device = await adapter.requestDevice({
    requiredFeatures: ["texture-compression-bc"] // array of feature names
});

Again, this will throw an exception if the feature is not available, and a feature request will limit the devices on which your program can run. The boolean-valued expression adapter.features.has(name) can be used to test whether the adapter supports the feature with the given name. For a list of possible features, see the WebGPU documentation.
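
Here is a sketch of how a program might check the adapter before asking for a higher limit or an optional feature, falling back to the defaults when they are not supported; the value 1024 simply continues the example above:

let adapter = await navigator.gpu.requestAdapter();
let descriptor = {};  // device descriptor to be passed to requestDevice()

if (adapter.limits.maxComputeInvocationsPerWorkgroup >= 1024) {
    descriptor.requiredLimits = { maxComputeInvocationsPerWorkgroup: 1024 };
}
if (adapter.features.has("texture-compression-bc")) {
    descriptor.requiredFeatures = [ "texture-compression-bc" ];
}

let device = await adapter.requestDevice(descriptor);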

9.7.4 渲染通道选项

Render Pass Options

渲染通道编码器(render pass encoder)用于向命令编码器添加绘图命令。它指定了一个管线(pipeline)和资源,比如管线所需的绑定组(bind groups)。它还有其他几个选项。这里我们看其中的两个。

视口viewport)是在画布或其他渲染目标中的矩形区域,渲染后的图像就显示在这个区域内。默认的视口是整个渲染目标,但是可以使用渲染通道编码器中的 setViewport() 函数来选择一个更小的视口。标准的 WebGPU NDC归一化设备坐标坐标系,x 和 y 的范围是从负一到正一,深度(depth)的范围是从零到一,然后映射到更小的视口上,不在该视口外进行绘制。如果 passEncoder 是一个渲染通道编码器,函数调用的形式如下:

passEncoder.setViewport( left, top, width, height, depthMin, depthMax );

其中 left、top、width 和 height 以像素坐标给出,depthMin 和 depthMax 的范围是 0 到 1,depthMin 小于 depthMax。通常,depthMin 会是零,depthMax 会是一。例如,当绘制到一个 800x600 像素的画布时,你可以将场景映射到画布的右半部分:

passEncoder.setViewport( 400, 0, 400, 600, 0, 1 );

此外,你可以使用 setScissorRect() 来限制在视口内的一个更小矩形区域内进行绘制,其形式如下:

passEncoder.setScissorRect( left, top, width, height );

同样,left、top、width 和 height 以像素坐标给出。视口和裁剪矩形(scissor rect)的区别在于裁剪矩形不会影响到坐标映射:视口显示整个渲染场景,但是裁剪矩形会阻止场景的一部分被绘制。

示例程序 webgpu/viewport_and_scissor.html 同时使用了视口和裁剪矩形。它是又一个移动圆盘的动画,显示带黑色轮廓的彩色圆盘。程序使用不同的视口,把场景的四个副本绘制到画布的四个象限中。在其中两个视口中还应用了裁剪矩形,但只作用于圆盘的内部,不作用于它们的轮廓。
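
下面的示意展示了这两个调用在渲染通道中的位置:同一场景被绘制到一个 600x600 画布的四个象限中。其中 pipeline、bindGroup、vertexBuffer、vertexCount 等名字假定已在别处建立好,具体尺寸也只是为举例而设:

let passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
passEncoder.setPipeline(pipeline);
passEncoder.setBindGroup(0, bindGroup);
passEncoder.setVertexBuffer(0, vertexBuffer);
let corners = [ [0,0], [300,0], [0,300], [300,300] ];  // 每个象限的左上角
for (let [x,y] of corners) {
    passEncoder.setViewport(x, y, 300, 300, 0, 1);
    passEncoder.setScissorRect(x + 20, y + 20, 260, 260);  // 略微内缩的裁剪矩形
    passEncoder.draw(vertexCount);
}
passEncoder.end();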

A render pass encoder is used to add drawing commands to a command encoder. It specifies a pipeline and resources such as bind groups that are required by the pipeline. It also has several other options. We'll look at two of them here.

The viewport is the rectangular region in a canvas or other render target in which the rendered image is displayed. The default viewport is the entire render target, but the setViewport() function in a render pass encoder can be used to select a smaller viewport. The standard WebGPU NDC coordinate system, with x and y ranging from minus one to one and depth ranging from zero to one, is then mapped onto the smaller viewport, and no drawing takes place outside that viewport. If passEncoder is a render pass encoder, a call to the function takes the form

passEncoder.setViewport( left, top, width, height, depthMin, depthMax );

where left, top, width, and height are given in pixel coordinates, and depthMin and depthMax are in the range 0 to 1, with depthMin less than depthMax. Usually, depthMin will be zero and depthMax will be one. For example, when drawing to an 800-by-600 pixel canvas, you can map the scene to the right half of the canvas using

passEncoder.setViewport( 400, 0, 400, 600, 0, 1 );

In addition, you can restrict drawing to a smaller rectangle within the viewport using setScissorRect(), which has the form

passEncoder.setScissorRect( left, top, width, height );

where again left, top, width, and height are given in pixel coordinates. The difference between viewport and scissor rect is that a scissor rect does not affect the coordinate mapping: The viewport shows the entire rendered scene, but a scissor rect prevents part of the scene from being drawn.

The sample program webgpu/viewport_and_scissor.html uses both viewport and scissor rect. It is yet another moving disk animation, showing colored disks with black outlines. Different viewports are used to draw four copies of the scene to the four quadrants of a canvas. In two of the viewports, a scissor rect is also applied, but just to the disk interiors, not to their outlines.
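
The following sketch shows where the two calls fit in a render pass: the same scene is drawn into the four quadrants of a 600-by-600 canvas. Names such as pipeline, bindGroup, vertexBuffer, and vertexCount are assumed to have been set up elsewhere, and the sizes are made up for the example:

let passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
passEncoder.setPipeline(pipeline);
passEncoder.setBindGroup(0, bindGroup);
passEncoder.setVertexBuffer(0, vertexBuffer);
let corners = [ [0,0], [300,0], [0,300], [300,300] ];  // top-left corner of each quadrant
for (let [x,y] of corners) {
    passEncoder.setViewport(x, y, 300, 300, 0, 1);
    passEncoder.setScissorRect(x + 20, y + 20, 260, 260);  // slightly inset scissor rect
    passEncoder.draw(vertexCount);
}
passEncoder.end();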

9.7.5 渲染管线选项

Render Pipeline Options

管线描述符(pipeline descriptor)与 device.createRenderPipeline() 一起用于创建渲染管线。描述符有许多选项,这些选项影响管线如何渲染原语(primitives)。例如,我们已经看到 multisample 属性如何用于多重采样抗锯齿(9.2.5节),以及 depthStencil 如何用于配置深度测试(9.4.1节)。这里,我们再来看几个渲染管线选项。

颜色混合。默认情况下,由片段着色器输出的颜色会替换片段的当前颜色。但是,这两种颜色可以混合。也就是说,片段的新颜色将是“源”颜色(来自着色器)和“目标”颜色(渲染目标中片段的当前颜色)的某种组合。这通常用于实现半透明颜色,其中源颜色的 alpha 分量决定了透明度的程度。示例程序 webgpu/alpha_blend.html 展示了这一点。

颜色混合的配置嵌套在管线描述符的片段属性中。其功能类似于 WebGL 函数 gl.blendFuncSeparate(),该函数在 7.4.1节 中讨论。以下是半透明性的典型配置:

fragment: {
    module: shader,
    entryPoint: "fragmentMainForDisk",
    targets: [{
        format: navigator.gpu.getPreferredCanvasFormat(),
        blend: { // 配置用于颜色混合的公式。
            color: { // 对 RGB 颜色分量。
                operation: "add",                  // 默认是 "add"。
                srcFactor: "src-alpha",            // 默认是 "one"。
                dstFactor: "one-minus-src-alpha"   // 默认是 "zero"。
            },
            alpha: { // 对 alpha 分量。
                operation: "add",
                srcFactor: "zero",
                dstFactor: "one"
            }
        }
    }]
}

红色、绿色和蓝色颜色分量的混合与 alpha 分量是分开配置的。这里用于颜色属性的值表明,新的 RGB 颜色值是片段着色器输出和当前片段颜色的加权平均值。用于 alpha 的值表明目标的 alpha 分量将保持不变。使用 "add" 操作的通用公式是:

new_color = shader_output*srcFactor + current_color*dstFactor

另一个常见配置是将操作设置为 "add",并将 srcFactor 和 dstFactor 都设置为 "one",这意味着着色器输出直接加到当前颜色上。这可以用来通过多个渲染通道逐步累积目标中的颜色,每个通道为颜色值增加一点。
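
作为参考,这种加法配置在目标的 blend 属性中大致可以写成这样(仅作示意):

blend: {
    color: { operation: "add", srcFactor: "one", dstFactor: "one" },
    alpha: { operation: "add", srcFactor: "one", dstFactor: "one" }
}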

颜色屏蔽。片段目标的 writeMask 属性允许你控制片段着色器输出的哪些颜色分量会被写入渲染目标。(OpenGL 中的相同功能称为 "color masking";7.4.1节讨论了它如何用于红蓝立体图(anaglyph stereo)。)例如,如果你把写入限制为只有红色分量,那么只有当前片段颜色的红色分量可以改变;绿色、蓝色和 alpha 分量将保持不变。以下是在渲染管线描述符中的做法:

fragment: {
    module: shader,
    entryPoint: "fragmentMain",
    targets: [{
        format: navigator.gpu.getPreferredCanvasFormat(),
        writeMask: GPUColorWrite.RED  // 只将红色分量写入目标。
    }]
}

writeMask 属性的其他值包括 GPUColorWrite.GREEN、GPUColorWrite.BLUE 和 GPUColorWrite.ALPHA。你也可以用按位或运算符("|")把这些常量组合起来,以写入多个分量。例如,

writeMask: GPUColorWrite.GREEN | GPUColorWrite.BLUE

默认值是 GPUColorWrite.ALL,这意味着所有四个颜色分量都会被写入。示例程序 webgpu/color_mask.html 允许你试验向红、绿、蓝颜色分量的任意组合写入。注意,如果你只把红色分量写到黑色背景上,你会得到深浅不同的红色,因为绿色和蓝色分量在写入后仍然为零。但如果你写到白色背景上,你会得到深浅不同的蓝绿色,因为绿色和蓝色分量在写入后仍然等于一,而红色分量可以小于一。

深度偏置。当启用深度测试时,以几乎完全相同的深度绘制两个物体可能会出问题,因为其中一个物体可能在某些像素上可见,而另一个物体在其他像素上可见。参见 3.1.4节 的结尾。解决方案是给其中一个物体的深度加上一个小量,即"偏置"。(这在 OpenGL 中称为 "polygon offset";参见 3.4.1节 的结尾。)示例程序 webgpu/polyhedra.html 允许用户查看用白色面和黑色边绘制的多面体。它使用深度偏置来确保边完全可见。该配置是用于绘制面的管线描述符的 depthStencil 属性的一部分:

depthStencil: {  
    depthWriteEnabled: true,
    depthCompare: "less",
    format: "depth24plus",
    depthBias: 1,
    depthBiasSlopeScale: 1.0
}

depthBias 和 depthBiasSlopeScale 属性用于修改管线渲染的每个片段的深度。默认值是零,这不会改变深度。正值会增加片段的深度,使其稍微远离用户。这里显示的 depthBias 和 depthBiasSlopeScale 的值 1 和 1.0 在大多数情况下应该有效。(depthBias 的值乘以深度缓冲区可以表示的两个深度之间的最小正值差异。这本身在许多情况下可能有效,但对于用户几乎从边缘查看的三角形,可能不够。depthBiasSlopeScale 增加了一个额外的偏置,这取决于三角形与视图方向的角度。)请注意,深度偏置似乎只对三角形原语有效,不适用于线或点,因此示例程序中的深度偏置应用于多面体的面,而不是边缘。

面剔除与正面朝向。多面体示例还使用了另外两个管线选项:cullMode 和 frontFace。它们是渲染管线描述符的 primitive 属性中的选项。

程序中的多面体都是封闭对象:内部完全被外部遮挡。没有必要渲染背面的多边形,因为它们位于正面多边形的后面。cullMode 属性可以用来关闭正面或背面三角形的渲染。默认值 "none" 表示不剔除任何三角形。在多面体程序中,我将 cullMode 设置为 "back",以避免渲染那些在最终图像中本来就不可见的背面三角形的开销。

但是,我还必须做另一个改动。通常的约定是:当观察三角形的正面时,它的顶点按逆时针顺序给出,以此来确定哪一面是正面。然而,程序中的多面体模型采用相反的约定:顺时针顺序。因此,我将 primitive 的 frontFace 选项设置为 "cw",以指定顶点按顺时针排序。

primitive: {
    topology: "triangle-list",
    cullMode: "back",  // 其他值包括 "front" 和 "none"。
    frontFace: "cw"    // 另一个值是 "ccw"(逆时针)。
}

需要说明的是,这个改变对场景的外观没有任何影响;它只是为了效率。如果你在好奇:是的,我本可以直接把 cullMode 设置为 "front",但那样会产生误导,而且那样我就没有演示 frontFace 的例子了。

A pipeline descriptor is used with device.createRenderPipeline() to create a render pipeline. The descriptor has a number of options that affect how the pipeline will render primitives. We have seen, for example, how the multisample property is used for multisampling antialiasing (Subsection 9.2.5) and how depthStencil is used to configure the depth test (Subsection 9.4.1). Here, we look at a few more render pipeline options.

Color Blending. By default, the color that is output by a fragment shader replaces the current color of the fragment. But it is possible for the two colors to be blended. That is, the new color of the fragment will be some combination of the "source" color (from the shader) and the "destination" color (the current color of the fragment in the render target). This is often used to implement translucent colors, where the alpha component of the source color determines the degree of transparency. For an example, see the sample program webgpu/alpha_blend.html.

The configuration for color blending is nested inside the fragment property of the pipeline descriptor. The functionality is similar to the WebGL function gl.blendFuncSeparate(), which is discussed in Subsection 7.4.1. Here is the typical configuration for translucency:

fragment: {
    module: shader,
    entryPoint: "fragmentMainForDisk",
    targets: [{
        format: navigator.gpu.getPreferredCanvasFormat(),
        blend: { // Configure the formulas to be used for color blending.
            color: { // For RGB color components.
                operation: "add",                  // "add" is the default.
                srcFactor: "src-alpha",            // The default is "one".
                dstFactor: "one-minus-src-alpha"   // The default is "zero".
            },
            alpha: { // For the alpha component.
                operation: "add",
                srcFactor: "zero",
                dstFactor: "one"
            }
        }
    }]
}

Blending for the red, green, and blue color components is configured separately from the alpha component. The values used here for the color property say that the new RGB color value is a weighted average of the fragment shader output and the current fragment color. The values used for alpha say that the alpha component of the destination will remain unchanged. The general formula, using the "add" operation, is

new_color = shader_output*srcFactor + current_color*dstFactor

Another common configuration is to set the operation to "add" and both srcFactor and dstFactor to "one", meaning that the shader output is simply added to the current color. This might be used to build up the colors in the target by using multiple passes that each add a little to the color value.
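
For reference, that additive configuration would look something like this in the target's blend property (just a sketch):

blend: {
    color: { operation: "add", srcFactor: "one", dstFactor: "one" },
    alpha: { operation: "add", srcFactor: "one", dstFactor: "one" }
}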

Color Masking. The writeMask property of the fragment target lets you control which color components of the fragment shader output will be written to the render target. (The same functionality is called "color masking" in OpenGL; Subsection 7.4.1 discusses how it can be used for anaglyph stereo.) For example, if you restrict writing to the red component, then only the red component of the current fragment color can be changed; the green, blue, and alpha components will be left unchanged. Here is how you would do that in a render pipeline descriptor:

fragment: {
    module: shader,
    entryPoint: "fragmentMain",
    targets: [{
        format: navigator.gpu.getPreferredCanvasFormat(),
        writeMask: GPUColorWrite.RED  // Only write the red component to target.
    }]
}

Other values for the writeMask property include GPUColorWrite.GREEN, GPUColorWrite.BLUE, and GPUColorWrite.ALPHA. You can also combine several of these constants with the or ("|") operator to write several components. For example,

writeMask: GPUColorWrite.GREEN | GPUColorWrite.BLUE

The default value is GPUColorWrite.ALL, which means that all four color components are written. The sample program webgpu/color_mask.html lets you experiment with writing to any combination of the red, green, and blue color components. Note that if you write just the red component to a black background, you will get shades of red, since the green and blue components will still be zero after writing. But if you write to a white background, you will get shades of blue-green, since the green and blue components will still equal one after the write, while the red component can be less than one.

Depth Bias. When the depth test is enabled, drawing two things at almost exactly the same depth can be a problem, because one object might be visible at some pixels while the other object is visible at other pixels. See the end of Subsection 3.1.4. The solution is to add a small amount, or "bias," to the depth of one of the objects. (This is called "polygon offset" in OpenGL; see the end of Subsection 3.4.1.) The sample program webgpu/polyhedra.html lets the users view polyhedra that are drawn with white faces and black edges. It uses depth bias to ensure that the edges are fully visible. The configuration is part of the depthStencil property of the pipeline descriptor that is used for drawing the faces:

depthStencil: {
    depthWriteEnabled: true,
    depthCompare: "less",
    format: "depth24plus",
    depthBias: 1,
    depthBiasSlopeScale: 1.0
}

The depthBias and depthBiasSlopeScale properties are used to modify the depth of each fragment that is rendered by the pipeline. The default values are zero, which leaves the depth unchanged. Positive values will increase the fragment's depth, moving it a bit away from the user. The values 1 and 1.0 for depthBias and depthBiasSlopeScale shown here should work in most cases. (The value of depthBias is multiplied by the smallest positive difference between two depths that can be represented in the depth buffer. That by itself might work in many cases, but for triangles that the user is viewing close to edge-on, it might not be enough. The depthBiasSlopeScale adds an additional bias that depends on the angle that the triangle makes with the view direction.) Note that depth bias seems to work only for triangle primitives, not for lines or points, so the depth bias in the sample program is applied to the faces of the polyhedron, not to the edges.

Face Culling and Front Face. The polyhedra example uses two more pipeline options: cullMode and frontFace. They are options in the primitive property of the render pipeline descriptor.

The polyhedra in the program are all closed objects: The interior is completely hidden by the exterior. There is no need to render back-facing polygons, since they lie behind front-facing polygons. The cullMode property can be used to turn off rendering of either front-facing or back-facing triangles. With the default value, "none", no triangles are culled. In the polyhedra program, I set cullMode to "back", to avoid the expense of rendering back-facing triangles that would not be visible in the final image.

However, I had to make another change. The usual convention is that the front face of a triangle is determined by the rule that when looking at the front face, the vertices are given in counterclockwise order. However, the polyhedron models in the program use the opposite convention: clockwise ordering. So, I set the frontFace option of the primitive to "cw" to specify clockwise vertex ordering.

primitive: {
    topology: "triangle-list",
    cullMode: "back",  // Other values are "front" and "none".
    frontFace: "cw"    // The other value is "ccw" (counterclockwise).
}

Now, that change has no effect on the appearance of the scene; it was done for efficiency only. And if you are wondering, yes, I could have just set cullMode to "front", but that would be misleading, and it would have left me with no example for frontFace.