
6.4 图像纹理

Image Textures

纹理在3D图形中扮演着重要的角色,现代GPU在硬件层面上内置了对图像纹理的支持。在本节中,我们将探讨WebGL API中用于图像纹理的功能。OpenGL 1.1中的图像纹理在第4.3节中有所介绍。那一节的许多内容在现代OpenGL(包括WebGL)中仍然适用。因此,当我们介绍WebGL中的图像纹理时,大部分内容对您来说并不新鲜。然而,有一个特性是OpenGL 1.1之后新增的:纹理单元。

WebGL 1.0和WebGL 2.0之间的一个显著区别是增加了对不同类型的纹理以及不同使用方式的纹理的支持。访问大多数新特性需要使用GLSL ES 3.00编写着色器程序。我们将在本节的大部分时间里坚持使用WebGL 1.0,但将在最后一小节中讨论一些新的WebGL 2.0特性。

Textures play an essential role in 3D graphics, and support for image textures is built into modern GPUs on the hardware level. In this section, we look at the WebGL API for image textures. Image textures in OpenGL 1.1 were covered in Section 4.3. Much of that section is still relevant in modern OpenGL, including WebGL. So, as we cover image textures in WebGL, much of the material will not be new to you. However, there is one feature that is new since OpenGL 1.1: texture units.

One of the significant differences between WebGL 1.0 and WebGL 2.0 is an increase in support for different types of textures and for different ways of using textures. Access to most of the new features requires using GLSL ES 3.00 for the shader programs. We will stick to WebGL 1.0 for most of this section, but will discuss some of the new WebGL 2.0 features in the final subsection.

6.4.1 纹理单元和纹理对象

Texture Units and Texture Objects

纹理单元,也称为纹理映射单元(TMU)或纹理处理单元(TPU),是GPU中用于进行采样的硬件组件。采样是根据图像纹理和纹理坐标计算颜色的过程。将纹理图像映射到表面上是一个相当复杂的操作,因为它不仅仅是返回包含给定纹理坐标的纹理元素(texel)的颜色,还需要应用适当的缩小或放大滤波器,并且在有mipmap可用时可能还要使用mipmap。快速的纹理采样是GPU获得良好性能的关键要求之一。

不应将纹理单元与纹理对象混淆。我们在4.3.7小节中遇到过纹理对象。纹理对象是一个数据结构,它包含图像纹理的颜色数据,可能还包含该纹理的一组mipmap,以及纹理属性的值,如缩小和放大滤波器以及纹理重复模式。纹理单元必须访问纹理对象才能完成其工作。纹理单元是处理器;纹理对象保存被处理的数据。

(顺便说一下,我确实应该更加小心地使用“GPU”和“硬件”这些术语。尽管纹理单元可能确实使用了GPU中的实际硬件组件,但它也可能以更慢的方式在软件中被模拟。即使涉及硬件,拥有八个纹理单元也不一定意味着有八个硬件组件;这些纹理单元可能分时共享数量更少的硬件组件。同样,我之前说过纹理对象存储在GPU的内存中,这在具体情况下可能是、也可能不是字面意义上的事实。不过,把纹理单元看作GPU中的一块硬件、把纹理对象看作GPU中的数据结构,在概念上可能更容易理解。)


在GLSL中,纹理查找是使用采样器变量完成的。采样器变量是着色器程序中的一个变量。在GLSL ES 1.00中,仅有的采样器类型是sampler2D和samplerCube。sampler2D用于在标准纹理图像中进行查找;samplerCube用于在立方体贴图纹理中进行查找(5.3.4小节)。采样器变量的值是对某个纹理单元的引用,它指明了在用这个采样器变量进行纹理查找时会调用哪个纹理单元。采样器变量必须声明为全局统一(uniform)变量。着色器程序为采样器变量赋值是不合法的;其值必须来自JavaScript一方。

在JavaScript方面,可用的纹理单元编号为0、1、2、...,其中最大值取决于实现。可以通过以下表达式的值来确定单元数量:

gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS)

(请再次记住,这里的gl是指向WebGL上下文的JavaScript变量的名称,名称由程序员决定。)

就JavaScript而言,采样器变量的值是一个整数。如果你想让采样器变量使用编号为2的纹理单元,那么你需要将采样器变量的值设置为2。这可以通过使用函数gl.uniform1i(6.1.4小节)来完成。例如,假设着色器程序声明了一个采样器变量:

uniform sampler2D u_texture;

要从JavaScript设置它的值,你需要在着色器程序中获取变量的位置。如果prog是着色器程序,位置是通过调用以下代码获得的:

u_texture_location = gl.getUniformLocation(prog, "u_texture");

然后,你可以通过调用以下代码来告诉采样器变量使用编号为2的纹理单元:

gl.uniform1i(u_texture_location, 2);

请注意,这个整数值在GLSL中是无法访问的。它告诉采样器使用哪个纹理单元,但着色器程序没有办法获知正在使用的单元编号。


要使用图像纹理,你还需要创建一个纹理对象,并将图像加载到纹理对象中。你可能想要设置纹理对象的一些属性,也可能想要为纹理创建一组mipmap。你还需要将纹理对象与纹理单元关联起来。所有这些操作都是在JavaScript方面完成的。

创建纹理对象的命令是gl.createTexture()。OpenGL 1.1中对应的命令是glGenTextures,而WebGL的命令更易于使用:它创建单个纹理对象并返回对它的引用。例如,

textureObj = gl.createTexture();

这仅为对象分配了一些内存。为了使用它,你必须首先通过调用gl.bindTexture来“绑定”纹理对象。例如,

gl.bindTexture(gl.TEXTURE_2D, textureObj);

第一个参数,gl.TEXTURE_2D,是纹理目标。这个目标用于使用普通的纹理图像。对于立方体贴图有不同的目标。

函数gl.texImage2D用于将图像加载到当前绑定的纹理对象中。我们将在下一小节中回到这个问题。但请记住,这个命令和其他命令始终适用于当前绑定的纹理对象。命令中没有提到纹理对象;相反,在调用命令之前,必须先绑定纹理对象。

你还需要告诉纹理单元使用纹理对象。在此之前,你需要通过调用函数gl.activeTexture来使纹理单元“激活”。参数是gl.TEXTURE0gl.TEXTURE1gl.TEXTURE2等常量之一,它们代表可用的纹理单元。(这些常量的值不是0、1、2……)。最初,纹理单元0处于激活状态。例如,要使纹理单元2处于激活状态,请使用

gl.activeTexture(gl.TEXTURE2);

(这个函数本应该叫做activeTextureUnit,或者可能是bindTextureUnit,因为它的工作方式类似于WebGL的各种“bind”函数。)如果你接着调用

gl.bindTexture(gl.TEXTURE_2D, textureObj);

在纹理单元2处于激活状态时绑定一个纹理对象,那么纹理对象textureObj就被绑定到纹理单元2,用于gl.TEXTURE_2D操作。绑定只是告诉纹理单元使用哪个纹理对象。也就是说,当纹理单元2执行TEXTURE_2D查找时,它将使用存储在textureObj中的图像和设置进行操作。一个纹理对象可以同时绑定到多个纹理单元上。然而,一个给定的纹理单元一次只能绑定一个TEXTURE_2D。

在WebGL中使用纹理图像涉及使用纹理对象、纹理单元和采样器变量。三者之间的关系如图中所示:

[插图:采样器变量 → 纹理单元 → 纹理对象(保存纹理图像),以及建立这条链的JavaScript命令]

采样器变量使用纹理单元,该纹理单元使用纹理对象,该对象保存纹理图像。设置此链的JavaScript命令在插图中显示。要将纹理图像应用于原语,您必须设置整个链。当然,您还必须为原语提供纹理坐标,并需要在着色器程序中使用采样器变量来访问纹理。

假设您有几张想要在几个不同的原语上使用的图像。在绘制原语之间,您需要更改要使用的纹理图像。在WebGL中至少有三种不同的方式来管理图像:

  1. 您可以使用单个纹理对象和单个纹理单元。绑定的纹理对象、活动的纹理单元和采样器变量的值可以设置一次,然后不再更改。要更改为新图像,您将使用gl.texImage2D将图像加载到纹理对象中。这基本上是OpenGL 1.0中的操作方式。这是非常低效的,除非您只使用每个图像一次。这就是为什么引入了纹理对象的原因。
  2. 您可以为每个图像使用不同的纹理对象,但只使用单个纹理单元。活动的纹理和采样器变量的值将不需要更改。您将使用gl.bindTexture绑定包含所需图像的纹理对象来切换到新的纹理图像。
  3. 您可以为每个图像使用不同的纹理单元。您将每个图像加载到自己的纹理对象中,并将该对象绑定到其中一个纹理单元。您可以通过更改采样器变量的值来切换到新的纹理图像。

我不知道选项2和3在效率方面如何比较。请注意,只有当您想把多个纹理图像应用到同一个原语上时,才不得不使用多个纹理单元。要做到这一点,您需要在着色器程序中使用多个采样器变量。它们将具有不同的值,以便引用不同的纹理单元,而像素的颜色将以某种方式同时依赖于来自两个图像的采样。下图显示了以简单方式组合两个纹理,来计算带纹理的正方形中像素颜色的效果:

[插图:以两种不同方式组合两个纹理后得到的带纹理正方形]

在左侧的图像中,灰度“砖”图像与“地球”图像相乘;也就是说,像素的红色分量是用砖纹理的红色分量乘以地球纹理的红色分量得到的,绿色和蓝色分量也是如此。在右侧,是从一个“布料”纹理中减去同一个地球纹理。此外,图案发生了扭曲,因为纹理坐标在用于采样纹理之前先用公式 texCoords.y += 0.25*sin(6.28*texCoords.x) 做了修改。这是只有使用可编程着色器才能做到的事情!这些图像取自下面的演示。试试看!

您可能想要查看源代码,了解这些纹理是如何编程的。程序使用了两个纹理单元。两个统一采样器变量u_texture1和u_texture2的值在初始化期间用以下代码设置:

u_texture1_location = gl.getUniformLocation(prog, "u_texture1");
u_texture2_location = gl.getUniformLocation(prog, "u_texture2");
gl.uniform1i(u_texture1_location, 0);
gl.uniform1i(u_texture2_location, 1);

这些值从未更改。程序使用了若干纹理图像,每个图像都有一个纹理对象。在JavaScript方面,这些纹理对象的ID存储在数组textureObjects中。两个弹出菜单允许用户选择将哪些纹理图像应用于原语。这是在绘制例程中实现的:将两个选定的纹理对象绑定到纹理单元0和1,也就是两个采样器变量所使用的单元。相关代码如下:

let tex1Num = Number(document.getElementById("textureChoice1").value);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, textureObjects[tex1Num]);

let tex2Num = Number(document.getElementById("textureChoice2").value);
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, textureObjects[tex2Num]);

将图像放入纹理对象是另一个问题,我们接下来讨论。

A texture unit, also called a texture mapping unit (TMU) or a texture processing unit (TPU), is a hardware component in a GPU that does sampling. Sampling is the process of computing a color from an image texture and texture coordinates. Mapping a texture image to a surface is a fairly complex operation, since it requires more than just returning the color of the texel that contains some given texture coordinates. It also requires applying the appropriate minification or magnification filter, possibly using mipmaps if available. Fast texture sampling is one of the key requirements for good GPU performance.

Texture units are not to be confused with texture objects. We encountered texture objects in Subsection 4.3.7. A texture object is a data structure that contains the color data for an image texture, and possibly for a set of mipmaps for the texture, as well as the values of texture properties such as the minification and magnification filters and the texture repeat mode. A texture unit must access a texture object to do its work. The texture unit is the processor; the texture object holds the data that is processed.

(By the way, I should really be more careful about throwing around the terms "GPU" and "hardware." Although a texture unit probably does use an actual hardware component in the GPU, it could also be emulated, more slowly, in software. And even if there is hardware involved, having eight texture units does not necessarily mean that there are eight hardware components; the texture units might share time on a smaller number of hardware components. Similarly, I said previously that texture objects are stored in memory in the GPU, which might or might not be literally true in a given case. Nevertheless, you will probably find it conceptually easier to think of a texture unit as a piece of hardware and a texture object as a data structure in the GPU.)


In GLSL, texture lookup is done using sampler variables. A sampler variable is a variable in a shader program. In GLSL ES 1.00, the only sampler types are sampler2D and samplerCube. A sampler2D is used to do lookup in a standard texture image; a samplerCube is used to do lookup in a cubemap texture (Subsection 5.3.4). The value of a sampler variable is a reference to a texture unit. The value tells which texture unit is invoked when the sampler variable is used to do texture lookup. Sampler variables must be declared as global uniform variables. It is not legal for a shader program to assign a value to a sampler variable. The value must come from the JavaScript side.

On the JavaScript side, the available texture units are numbered 0, 1, 2, ..., where the maximum value is implementation dependent. The number of units can be determined as the value of the expression

gl.getParameter( gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS )

(Please remember, again, that gl here is the name of a JavaScript variable that refers to the WebGL context, and that the name is up to the programmer.)

As far as JavaScript is concerned, the value of a sampler variable is an integer. If you want a sampler variable to use texture unit number 2, then you set the value of the sampler variable to 2. This can be done using the function gl.uniform1i (Subsection 6.1.4). For example, suppose a shader program declares a sampler variable

uniform sampler2D u_texture;

To set its value from JavaScript, you need the location of the variable in the shader program. If prog is the shader program, the location is obtained by calling

u_texture_location = gl.getUniformLocation( prog, "u_texture" );

Then, you can tell the sampler variable to use texture unit number 2 by calling

gl.uniform1i( u_texture_location, 2 );

Note that the integer value is not accessible in GLSL. The integer tells the sampler which texture unit to use, but there is no way for the shader program to find out the number of the unit that is being used.


To use an image texture, you also need to create a texture object, and you need to load an image into the texture object. You might want to set some properties of the texture object, and you might want to create a set of mipmaps for the texture. And you will have to associate the texture object with a texture unit. All this is done on the JavaScript side.

The command for creating a texture object is gl.createTexture(). The command in OpenGL 1.1 was glGenTextures. The WebGL command is easier to use. It creates a single texture object and returns a reference to it. For example,

textureObj = gl.createTexture();

This just allocates some memory for the object. In order to use it, you must first "bind" the texture object by calling gl.bindTexture. For example,

gl.bindTexture( gl.TEXTURE_2D, textureObj );

The first parameter, gl.TEXTURE_2D, is the texture target. This target is used for working with an ordinary texture image. There is a different target for cubemap textures.

The function gl.texImage2D is used to load an image into the currently bound texture object. We will come back to that in the next subsection. But remember that this command and other commands always apply to the currently bound texture object. The texture object is not mentioned in the command; instead, the texture object must be bound before the command is called.

You also need to tell a texture unit to use the texture object. Before you can do that, you need to make the texture unit "active," which is done by calling the function gl.activeTexture. The parameter is one of the constants gl.TEXTURE0, gl.TEXTURE1, gl.TEXTURE2, ..., which represent the available texture units. (The values of these constants are not 0, 1, 2, ....) Initially, texture unit number 0 is active. To make texture unit number 2 active, for example, use

gl.activeTexture( gl.TEXTURE2 );

(This function should really have been called activeTextureUnit, or maybe bindTextureUnit, since it works similarly to the various WebGL "bind" functions.) If you then call

gl.bindTexture( gl.TEXTURE_2D, textureObj );

to bind a texture object, while texture unit 2 is active, then the texture object textureObj is bound to texture unit number 2 for gl.TEXTURE_2D operations. The binding just tells the texture unit which texture object to use. That is, when texture unit 2 does TEXTURE_2D lookups, it will do so using the image and the settings that are stored in textureObj. A texture object can be bound to several texture units at the same time. However, a given texture unit can have only one bound TEXTURE_2D at a time.

So, working with texture images in WebGL involves working with texture objects, texture units, and sampler variables. The relationship among the three is illustrated in this picture:

[Illustration: sampler variable → texture unit → texture object (holding the texture image), with the JavaScript commands that set up the chain]

A sampler variable uses a texture unit, which uses a texture object, which holds a texture image. The JavaScript commands for setting up this chain are shown in the illustration. To apply a texture image to a primitive, you have to set up the entire chain. Of course, you also have to provide texture coordinates for the primitive, and you need to use the sampler variable in the shader program to access the texture.
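
To summarize, here is a minimal sketch of the whole setup during initialization, assuming a shader program prog that declares a sampler2D named u_texture and an image, image, that has already finished loading; the variable names are just for illustration:

let textureObj = gl.createTexture();          // create the texture object
gl.activeTexture( gl.TEXTURE2 );              // make texture unit 2 the active unit
gl.bindTexture( gl.TEXTURE_2D, textureObj );  // bind the object to unit 2
gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image );
gl.generateMipmap( gl.TEXTURE_2D );           // needed for the default minification filter

let u_texture_location = gl.getUniformLocation( prog, "u_texture" );
gl.uniform1i( u_texture_location, 2 );        // tell the sampler to use texture unit 2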

Suppose that you have several images that you would like to use on several different primitives. Between drawing primitives, you need to change the texture image that will be used. There are at least three different ways to manage the images in WebGL:

  1. You could use a single texture object and a single texture unit. The bound texture object, the active texture unit, and the value of the sampler variable can be set once and never changed. To change to a new image, you would use gl.texImage2D to load the image into the texture object. This is essentially how things were done in OpenGL 1.0. It's very inefficient, except when you are going to use each image just once. That's why texture objects were introduced.
  2. You could use a different texture object for each image, but use just a single texture unit. The active texture and the value of the sampler variable will never have to be changed. You would switch to a new texture image using gl.bindTexture to bind the texture object that contains the desired image.
  3. You could use a different texture unit for each image. You would load each image into its own texture object and bind that object to one of the texture units. You would switch to a new texture image by changing the value of the sampler variable.

I don't know how options 2 and 3 compare in terms of efficiency. Note that you are only forced to use more than one texture unit if you want to apply more than one texture image to the same primitive. To do that, you will need several sampler variables in the shader program. They will have different values so that they refer to different texture units, and the color of a pixel will somehow depend on samples from both images. This picture shows two textures being combined in simple ways to compute the colors of pixels in a textured square:

[Illustration: a textured square showing two textures combined in two different ways]

In the image on the left, a grayscale "brick" image is multiplied by an "Earth" image; that is, the red component of a pixel is computed by multiplying the red component from the brick texture by the red component from the Earth texture, and same for green and blue. On the right, the same Earth texture is subtracted from a "cloth" texture. Furthermore, the pattern is distorted because the texture coordinates were modified before being used to sample the textures, using the formula texCoords.y += 0.25*sin(6.28*texCoords.x). That's the kind of thing that could only be done with programmable shaders! The images are taken from the following demo. Try it out!
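
As an illustration of the idea (this is a sketch, not the actual shader from the demo), a fragment shader that multiplies samples from two textures, after distorting the texture coordinates with the formula mentioned above, could look like this:

precision mediump float;
uniform sampler2D u_texture1;
uniform sampler2D u_texture2;
varying vec2 v_texCoords;
void main() {
    vec2 coords = v_texCoords;
    coords.y += 0.25*sin(6.28*coords.x);   // distort the texture coordinates
    vec4 color1 = texture2D( u_texture1, coords );
    vec4 color2 = texture2D( u_texture2, coords );
    gl_FragColor = vec4( color1.rgb * color2.rgb, 1.0 );  // multiply the two samples
}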

You might want to view the source code to see how the textures are programmed. Two texture units are used. The values of two uniform sampler variables, u_texture1 and u_texture2, are set during initialization with the code

u_texture1_location = gl.getUniformLocation(prog, "u_texture1");
u_texture2_location = gl.getUniformLocation(prog, "u_texture2");
gl.uniform1i(u_texture1_location, 0);
gl.uniform1i(u_texture2_location, 1);

The values are never changed. The program uses several texture images. There is a texture object for each image. On the JavaScript side, the IDs for the texture objects are stored in an array, textureObjects. Two popup menus allow the user to select which texture images are applied to the primitive. This is implemented in the drawing routine by binding the two selected texture objects to texture units 0 and 1, which are the units used by the two sampler variables. The code for that is:

let tex1Num = Number(document.getElementById("textureChoice1").value);
gl.activeTexture( gl.TEXTURE0 );
gl.bindTexture( gl.TEXTURE_2D, textureObjects[tex1Num] );

let tex2Num = Number(document.getElementById("textureChoice2").value);
gl.activeTexture( gl.TEXTURE1 );
gl.bindTexture( gl.TEXTURE_2D, textureObjects[tex2Num] );

Getting images into the texture objects is another question, which we turn to next.

6.4.2 处理图像

Working with Images

可以使用函数gl.texImage2D将图像加载到纹理对象中。对于WebGL的使用,这个函数通常具有以下形式:

gl.texImage2D( target, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image );

目标(target)对于普通纹理是gl.TEXTURE_2D;加载立方体贴图纹理时有其他目标。第二个参数是mipmap级别,主图像为0。尽管可以加载单独的mipmap,但很少这样做。接下来的两个参数给出纹理在纹理对象内部以及在原始图像中的格式。在WebGL 1.0中,这两个格式参数应该具有相同的值。由于网络图像以RGBA格式存储,gl.RGBA可能是最有效的选择,很少需要使用其他格式。但如果不需要alpha分量,可以使用gl.RGB。而通过使用gl.LUMINANCE或gl.LUMINANCE_ALPHA,可以将图像转换为灰度。(亮度是红、绿、蓝的加权平均,近似于颜色的感知亮度。)再往后的参数始终是gl.UNSIGNED_BYTE,表示图像中的颜色用每个颜色分量一个字节来存储。尽管其他值也是可能的,但它们对于网络图像没有实际意义。

调用gl.texImage2D的最后一个参数是图像。通常,image是一个由JavaScript异步加载的DOM图像元素。image也可以是一个<canvas>元素。这意味着你可以使用HTML画布2D图形API在画布上绘图,然后把该画布用作纹理图像的来源。你甚至可以使用一个在网页上不可见的离屏画布。

图像被加载到当前活动纹理单元中绑定到target的那个纹理对象里。没有默认的纹理对象;也就是说,如果在调用gl.texImage2D时没有绑定任何纹理,就会发生错误。活动纹理单元是用gl.activeTexture选择的单元;如果从未调用过gl.activeTexture,则为纹理单元0。纹理对象通过gl.bindTexture绑定到活动纹理单元。这在本节前面已经讨论过。

在WebGL中使用图像的一个复杂之处在于图像是异步加载的。也就是说,加载图像的命令只是启动加载过程。你可以指定一个回调函数,在加载完成后执行。在回调函数被调用之前,图像实际上还不能使用。当加载的图像要用作纹理时,回调函数应把图像载入纹理对象。通常,它还会调用一个渲染函数,用该纹理图像绘制场景。

示例程序webgl/simple-texture.html是在一个三角形上使用单个纹理的示例。这里是一个用于在该程序中加载纹理图像的函数。在调用该函数之前创建了纹理对象。

/**
 * 异步加载纹理图像。第一个参数是要加载图像的url。
 * 第二个参数是要加载图像的纹理对象。当图像加载完成后,
 * 将调用draw()函数来绘制带纹理的三角形。(如果加载过程中出现错误,
 * 则会在页面上显示错误消息,并调用draw()绘制不带纹理的三角形。)
 */
function loadTexture( url, textureObject ) {
    const  img = new Image();  // 一个代表图像的DOM图像元素。
    img.onload = function() { 
        // 这个函数将在图像成功加载后调用。
        // 在将图像加载到纹理对象之前,我们必须将纹理对象绑定到TEXTURE_2D目标。
        gl.bindTexture(gl.TEXTURE_2D, textureObject);
        try {
        gl.texImage2D(gl.TEXTURE_2D,0,gl.RGBA,gl.RGBA,gl.UNSIGNED_BYTE,img);
        gl.generateMipmap(gl.TEXTURE_2D);  // 创建mipmap;你必须要么这样做,
                            // 要么更改缩小滤波器。
        }
        catch (e) { // 可能是安全异常,因为此页面已通过file:// URL加载。
            document.getElementById("headline").innerHTML =
            "Sorry, couldn't load texture.<br>" +
            "Some web browsers won't use images from a local disk";
        }
        draw();  // 绘制画布,带或不带纹理。  
    };
    img.onerror = function() { 
        // 如果加载过程中出现错误,将调用此函数。
        document.getElementById("headline").innerHTML =
                        "<p>Sorry, texture image could not be loaded.</p>";
        draw();  // 绘制不带纹理的图像;三角形将为黑色。
    };
    img.src = url;  // 开始加载图像。
                    // 这必须在设置onload和onerror之后完成。
}

请注意,WebGL 1.0的图像纹理应该是2的幂纹理。也就是说,图像的宽度和高度应该是2的幂,例如128、256或512。实际上,你可以使用非2的幂纹理,但你不能使用这样的纹理的mipmap,这样的纹理支持的唯一纹理重复模式是gl.CLAMP_TO_EDGE。(WebGL 2.0没有这些限制。)

(该函数中使用try..catch语句,是因为当页面试图把本地文件系统中的图像用作纹理时,大多数网络浏览器会抛出安全异常。这意味着,如果你从本书的下载版本中运行使用纹理的程序,这些程序可能无法正常工作。)


有几个参数与纹理对象相关联,包括纹理重复模式以及缩小与放大滤波器。它们可以使用函数gl.texParameteri设置,设置将应用于当前绑定的纹理对象。大多数细节与OpenGL 1.1中相同(4.3.3小节)。例如,可以用下面的语句把最小化滤波器设置为LINEAR:

gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

回想一下,默认的最小化滤波器在没有mipmap的情况下无法工作。要获得一个工作的纹理,你必须更改最小化滤波器或安装一组完整的mipmap。幸运的是,WebGL有一个函数可以为你生成mipmap:

gl.generateMipmap(gl.TEXTURE_2D);

纹理重复模式决定了当纹理坐标超出0.0到1.0的范围时会发生什么。纹理坐标系统中的每个方向都有单独的重复模式。在WebGL中,可能的值是gl.REPEATgl.CLAMP_TO_EDGEgl.MIRRORED_REPEAT。默认值是gl.REPEAT。在OpenGL 1.1中,模式CLAMP_TO_EDGE被称为CLAMP,而MIRRORED_REPEAT是WebGL中的新功能。使用MIRRORED_REPEAT时,纹理图像会重复以覆盖整个平面,但每隔一个图像就会被反射。这可以消除副本之间的可见接缝。要在两个方向上设置纹理使用镜像重复,使用:

gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.MIRRORED_REPEAT);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.MIRRORED_REPEAT);

在WebGL中,纹理坐标通常作为类型为vec2的属性输入到顶点着色器中。它们通过变化变量传递到片段着色器。通常,顶点着色器会将属性的值简单地复制到变化变量中。另一种可能性是在将坐标传递到片段着色器之前,在顶点着色器中对坐标应用仿射纹理变换。在片段着色器中,纹理坐标用于对纹理进行采样。GLSL ES 1.00中用于采样普通纹理的函数是:

texture2D(samplerVariable, textureCoordinates);

其中samplerVariable是代表纹理的类型为sampler2D的统一变量,textureCoordinates是包含纹理坐标的vec2。返回值是一个RGBA颜色,表示为类型vec4的值。作为一个非常简单的例子,这里有一个片段着色器,它简单地使用从纹理中采样的值作为像素的颜色。

precision mediump float;
uniform sampler2D u_texture;
varying vec2 v_texCoords;
void main() {
    vec4 color = texture2D(u_texture, v_texCoords);
    gl_FragColor = color;
}

这个着色器来自示例程序webgl/simple-texture.html

有时纹理会用在gl.POINTS类型的原语上。在这种情况下,很自然地从特殊的片段着色器变量gl_PointCoord中获取像素的纹理坐标。一个点被渲染为一个正方形,gl_PointCoord中的坐标在正方形上从0.0到1.0范围内。所以,使用gl_PointCoord意味着一个纹理副本将被粘贴到点上。如果POINTS原语有多个顶点,你将在每个顶点的位置看到纹理的副本。这是一种将图像或多个图像副本放入场景的简单方法。这种技术有时被称为“点精灵”。

以下演示绘制了一个类型为gl.POINTS的带纹理的原语,以便你可以看到它的效果。在演示中,每个正方形点只绘制出其中的一个圆形切口。

WebGL中纹理图像的像素数据在内存中从图像底部的像素行开始存储,然后逐行向上。当WebGL通过读取图像数据来创建纹理时,它假定图像使用相同的格式。然而,网络浏览器中的图像以相反的顺序存储,从图像顶部的像素行开始向下。这种不匹配的结果是纹理图像会上下颠倒。你可以通过修改纹理坐标来解决这个问题。不过,你也可以让WebGL在“解包”图像时为你把图像翻转过来。要做到这一点,调用

gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, 1);

通常,你可以在初始化过程中这样做。但请注意,对于gl.POINTS原语,gl_PointCoord使用的坐标系已经是倒置的,y坐标从上到下增加。所以,如果你正在为使用在POINTS原语上的图像加载纹理,你可能想要将gl.UNPACK_FLIP_Y_WEBGL设置回其默认值0。

An image can be loaded into a texture object using the function gl.texImage2D. For use with WebGL, this function usually has the form

gl.texImage2D( target, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image );

The target is gl.TEXTURE_2D for ordinary textures; there are other targets for loading cubemap textures. The second parameter is the mipmap level, which is 0 for the main image. Although it is possible to load individual mipmaps, that is rarely done. The next two parameters give the format of the texture inside the texture object and in the original image. In WebGL 1.0, the two format parameters should have the same value. Since web images are stored in RGBA format, gl.RGBA is probably the most efficient choice, and there is rarely a need to use anything else. But you can use gl.RGB if you don't need the alpha component. And by using gl.LUMINANCE or gl.LUMINANCE_ALPHA, you can convert the image to grayscale. (Luminance is a weighted average of red, green, and blue that approximates the perceived brightness of a color.) The next parameter is always going to be gl.UNSIGNED_BYTE, indicating that the colors in the image are stored using one byte for each color component. Although other values are possible, they don't really make sense for web images.

The last parameter in the call to gl.texImage2D is the image. Ordinarily, image will be a DOM image element that has been loaded asynchronously by JavaScript. The image can also be a <canvas> element. This means that you can draw on a canvas, using the HTML canvas 2D graphics API, and then use the canvas as the source for a texture image. You can even do that with an off-screen canvas that is not visible on the web page.
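
As a sketch of that idea, the following hypothetical code draws some text into an off-screen canvas using the 2D API and then loads the result into the currently bound texture object; the size and the drawing commands are just examples:

let offscreen = document.createElement("canvas");  // an off-screen canvas
offscreen.width = 256;     // WebGL 1.0 prefers power-of-two texture sizes
offscreen.height = 256;
let g2d = offscreen.getContext("2d");   // 2D graphics context for drawing
g2d.fillStyle = "white";
g2d.fillRect( 0, 0, 256, 256 );         // fill the canvas with white
g2d.fillStyle = "red";
g2d.font = "bold 48px sans-serif";
g2d.fillText( "Hello!", 40, 140 );      // draw some text
gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, offscreen );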

The image is loaded into the texture object that is currently bound to target in the currently active texture unit. There is no default texture object; that is, if no texture has been bound when gl.texImage2D is called, an error occurs. The active texture unit is the one that has been selected using gl.activeTexture, or is texture unit 0 if gl.activeTexture has never been called. A texture object is bound to the active texture unit by gl.bindTexture. This was discussed earlier in this section.

Using images in WebGL is complicated by the fact that images are loaded asynchronously. That is, the command for loading an image just starts the process of loading the image. You can specify a callback function that will be executed when the loading completes. The image won't actually be available for use until after the callback function is called. When loading an image to use as a texture, the callback function should load the image into a texture object. Often, it will also call a rendering function to draw the scene, with the texture image.

The sample program webgl/simple-texture.html is an example of using a single texture on a triangle. Here is a function that is used to load the texture image in that program. The texture object is created before the function is called.

/**
 *  Loads a texture image asynchronously.  The first parameter is the url
 *  from which the image is to be loaded.  The second parameter is the
 *  texture object into which the image is to be loaded.  When the image
 *  has finished loading, the draw() function will be called to draw the
 *  triangle with the texture.  (Also, if an error occurs during loading,
 *  an error message is displayed on the page, and draw() is called to
 *  draw the triangle without the texture.)
 */
function loadTexture( url, textureObject ) {
    const  img = new Image();  //  A DOM image element to represent the image.
    img.onload = function() { 
        // This function will be called after the image loads successfully.
        // We have to bind the texture object to the TEXTURE_2D target before
        // loading the image into the texture object. 
        gl.bindTexture(gl.TEXTURE_2D, textureObject);
        try {
        gl.texImage2D(gl.TEXTURE_2D,0,gl.RGBA,gl.RGBA,gl.UNSIGNED_BYTE,img);
        gl.generateMipmap(gl.TEXTURE_2D);  // Create mipmaps; you must either
                            // do this or change the minification filter.
        }
        catch (e) { // Probably a security exception, because this page has been
                    // loaded through a file:// URL.
            document.getElementById("headline").innerHTML =
            "Sorry, couldn't load texture.<br>" +
            "Some web browsers won't use images from a local disk";
        }
        draw();  // Draw the canvas, with or without the texture.  
    };
    img.onerror = function() { 
        // This function will be called if an error occurs while loading.
        document.getElementById("headline").innerHTML =
                        "<p>Sorry, texture image could not be loaded.</p>";
        draw();  // Draw without the texture; triangle will be black.
    };
    img.src = url;  // Start loading of the image.
                    // This must be done after setting onload and onerror.
}

Note that image textures for WebGL 1.0 should be power-of-two textures. That is, the width and the height of the image should each be a power of 2, such as 128, 256, or 512. You can, in fact, use non-power-of-two textures, but you can't use mipmaps with such textures, and the only texture repeat mode that is supported by such textures is gl.CLAMP_TO_EDGE. (WebGL 2.0 does not have these restrictions.)

(The try..catch statement is used in this function because most web browsers will throw a security exception when a page attempts to use an image from the local file system as a texture. This means that if you attempt to run a program that uses textures from a downloaded version of this book, the programs that use textures might not work.)


There are several parameters associated with a texture object, including the texture repeat modes and the minification and magnification filters. They can be set using the function gl.texParameteri. The setting applies to the currently bound texture object. Most of the details are the same as in OpenGL 1.1 (Subsection 4.3.3). For example, the minification filter can be set to LINEAR using

gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

Recall that the default minification filter won't work without mipmaps. To get a working texture, you have to change the minification filter or install a full set of mipmaps. Fortunately, WebGL has a function that will generate the mipmaps for you:

gl.generateMipmap( gl.TEXTURE_2D );

The texture repeat modes determine what happens when texture coordinates lie outside the range 0.0 to 1.0. There is a separate repeat mode for each direction in the texture coordinate system. In WebGL, the possible values are gl.REPEAT, gl.CLAMP_TO_EDGE, and gl.MIRRORED_REPEAT. The default is gl.REPEAT. The mode CLAMP_TO_EDGE was called CLAMP in OpenGL 1.1, and MIRRORED_REPEAT is new in WebGL. With MIRRORED_REPEAT, the texture image is repeated to cover the entire plane, but every other copy of the image is reflected. This can eliminate visible seams between the copies. To set a texture to use mirrored repeat in both directions, use

gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.MIRRORED_REPEAT);
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.MIRRORED_REPEAT);

In WebGL, texture coordinates are usually input to the vertex shader as an attribute of type vec2. They are communicated to the fragment shader in a varying variable. Often, the vertex shader will simply copy the value of the attribute into the varying variable. Another possibility is to apply an affine texture transformation to the coordinates in the vertex shader before passing them on to the fragment shader. In the fragment shader, the texture coordinates are used to sample a texture. The GLSL ES 1.00 function for sampling an ordinary texture is

texture2D( samplerVariable, textureCoordinates );

where samplerVariable is the uniform variable of type sampler2D that represents the texture, and textureCoordinates is a vec2 containing the texture coordinates. The return value is an RGBA color, represented as a value of type vec4. As a very minimal example, here is a fragment shader that simply uses the sampled value from the texture as the color of the pixel.

precision mediump float;
uniform sampler2D u_texture;
varying vec2 v_texCoords;
void main() {
vec4 color = texture2D( u_texture, v_texCoords );
gl_FragColor = color;
}

This shader is from the sample program webgl/simple-texture.html.
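
The vertex shader that goes with a fragment shader like this can be very simple. Here is a minimal sketch that just copies the texture coordinate attribute into the varying variable; the attribute names are assumptions, not necessarily the ones used in the sample program:

attribute vec2 a_coords;      // vertex coordinates
attribute vec2 a_texCoords;   // texture coordinates for the vertex
varying vec2 v_texCoords;     // passed on to the fragment shader
void main() {
    v_texCoords = a_texCoords;
    gl_Position = vec4( a_coords, 0.0, 1.0 );
}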

Textures are sometimes used on primitives of type gl.POINTS. In that case, it's natural to get the texture coordinates for a pixel from the special fragment shader variable gl_PointCoord. A point is rendered as a square, and the coordinates in gl_PointCoord range from 0.0 to 1.0 over that square. So, using gl_PointCoord means that one copy of the texture will be pasted onto the point. If the POINTS primitive has more than one vertex, you will see a copy of the texture at the location of each vertex. This is an easy way to put an image, or multiple copies of an image, into a scene. The technique is sometimes referred to as "point sprites."
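
A minimal fragment shader for such a point sprite might look like the following sketch, which simply pastes one copy of the texture onto each point:

precision mediump float;
uniform sampler2D u_texture;
void main() {
    gl_FragColor = texture2D( u_texture, gl_PointCoord );
}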

The following demo draws a single textured primitive of type gl.POINTS, so you can see what it looks like. In the demo, only a circular cutout from each square point is drawn.

The pixel data for a texture image in WebGL is stored in memory starting with the row of pixels at the bottom of the image and working up from there. When WebGL creates the texture by reading the data from an image, it assumes that the image uses the same format. However, images in a web browser are stored in the opposite order, starting with the pixels in the top row of the image and working down. The result of this mismatch is that texture images will appear upside down. You can account for this by modifying your texture coordinates. However, you can also tell WebGL to invert the images for you as it "unpacks" them. To do that, call

gl.pixelStorei( gl.UNPACK_FLIP_Y_WEBGL, 1 );

Generally, you can do this as part of initialization. Note however that for gl.POINTS primitives, the coordinate system used by gl_PointCoord is already upside down, with the y-coordinate increasing from top to bottom. So, if you are loading an image for use on a POINTS primitive, you might want to set gl.UNPACK_FLIP_Y_WEBGL to its default value, 0.

6.4.3 更多制作纹理的方法

More Ways to Make Textures

我们已经看到了如何使用gl.texImage2D从图像或画布元素创建纹理。在WebGL中,还有几种方法可以制作图像纹理。首先,函数

gl.copyTexImage2D( target, mipmapLevel, internalFormat,
                x, y, width, height, border );

这个函数在WebGL中也存在,它在4.3.6小节中介绍过。这个函数把数据从颜色缓冲区(WebGL渲染图像的地方)复制到当前绑定的纹理对象中。数据取自颜色缓冲区中具有指定宽度和高度、左下角位于(x,y)的矩形区域。internalFormat通常是gl.RGBA。对于WebGL,border必须是零。例如,

gl.copyTexImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 0, 0, 256, 256, 0);

这从颜色缓冲区左下角的一个256像素见方的区域中获取纹理数据。(在后面的章节中我们会看到,实际上可以让WebGL使用所谓的“帧缓冲区”把图像直接渲染到纹理对象中,而且这样做更高效。)

也许更有趣的是,能够直接从数字数组中获取纹理数据。这些数字将成为纹理中像素的颜色分量值。用于此的函数是texImage2D的替代版本:

texImage2D( target, mipmapLevel, internalFormat, width, height,
        border, dataFormat, dataType, dataArray )

一个典型的函数调用形式为

gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 16, 16, 
            0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

texImage2D的原始版本相比,这里有三个额外的参数,widthheightborderwidthheight指定了纹理图像的大小。对于WebGL,border必须是零,对于WebGL 1.0,internalFormat和dataFormat必须相同。

这个版本的texImage2D的最后一个参数必须是类型化数组,类型为Uint8Array或Uint16Array,具体取决于纹理的dataFormat。我的例子将使用Uint8Array,纹理格式为gl.RGBA或gl.LUMINANCE。

对于RGBA纹理,每个像素需要四个颜色分量值。这些值在Uint8Array中以无符号字节的形式给出,取值范围从0到255。数组的长度为4*width*height(即图像中像素数量的四倍)。数组中先是底部像素行的数据,然后是其上面一行,依此类推;同一行中的像素从左到右排列。在单个像素的数据中,红色分量在前,然后是绿色,再是蓝色,最后是alpha。

作为从头开始制作纹理数据的示例,让我们制作一个16x16的纹理图像,图像被划分为四个8x8的正方形,分别着上红色、白色和蓝色。代码利用了创建类型化数组时,它最初填充了零的事实。我们只需要改变其中的一些零为255。

let pixels = new Uint8Array(4*16*16);  // 每个像素四个字节

for (let i = 0; i < 16; i++) {
    for (let j = 0; j < 16; j++) {
        let offset = 64*i + 4*j;    // 此像素的数据起始索引
        pixels[offset + 3] = 255;    // 像素的alpha值
        if (i < 8 && j < 8) { // 左下象限是红色
            pixels[offset] = 255;  // 将红色分量设置为最大
        }
        else if (i >= 8 && j >= 8) { // 右上象限是蓝色
            pixels[offset + 2] = 255; // 将蓝色分量设置为最大
        }
        else { // 另外两个象限是白色
            pixels[offset] = 255;     // 将所有分量设置为最大
            pixels[offset + 1] = 255;
            pixels[offset + 2] = 255;
        }
    }
}

texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 16, 16, 
                0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

最后一行是因为在没有mipmap的情况下,默认的最小化滤波器无法工作。纹理使用默认的放大滤波器,它也是gl.LINEAR。这个纹理用在下图中最左边的正方形上。图像来自示例程序webgl/texture-from-pixels.html。

[插图:示例程序webgl/texture-from-pixels.html中绘制的带纹理正方形]

注意最左边正方形中颜色交界处的混合。这种混合是由gl.LINEAR放大滤波器引起的。第二个正方形使用相同的纹理,但放大滤波器为gl.NEAREST,因而消除了混合。在接下来的两个正方形中也能看到同样的效果,它们使用黑白棋盘格图案,一个使用gl.LINEAR作为放大滤波器,另一个使用gl.NEAREST。纹理在正方形上水平和垂直各重复了十次。在这个例子里,纹理是一幅很小的2x2图像,包含两个黑色像素和两个白色像素。

作为另一个例子,考虑图像中最右边的正方形。该正方形上的渐变效果来自一个纹理。纹理大小为256x1像素,颜色沿纹理的长度方向从黑变白。纹理的一个副本被映射到该正方形上。对于这个渐变纹理,我使用gl.LUMINANCE作为纹理格式,这意味着数据由每个像素一个字节组成,给出该像素的灰度值。该纹理可以这样创建:

let pixels = new Uint8Array(256);  // 每个像素一个字节
for (let i = 0; i < 256; i++) {
    pixels[i] = i;  // 像素i的灰度值是i。
}

gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, 256, 1, 
                0, gl.LUMINANCE, gl.UNSIGNED_BYTE, pixels);

有关更多详细信息,请参见示例程序。

We have seen how to create a texture from an image or canvas element using gl.texImage2D. There are several more ways to make an image texture in WebGL. First of all, the function

gl.copyTexImage2D( target, mipmapLevel, internalFormat,
                                    x, y, width, height, border );

which was covered in Subsection 4.3.6 also exists in WebGL. This function copies data from the color buffer (where WebGL renders its images) into the currently bound texture object. The data is taken from the rectangular region in the color buffer with the specified width and height and with its lower left corner at (x,y). The internalFormat is usually gl.RGBA. For WebGL, the border must be zero. For example,

gl.copyTexImage2D( gl.TEXTURE_2D, 0, gl.RGBA, 0, 0, 256, 256, 0);

This takes the texture data from a 256-pixel square in the bottom left corner of the color buffer. (In a later chapter, we will see that it is actually possible, and more efficient, for WebGL to render an image directly to a texture object, using something called a "framebuffer.")

More interesting, perhaps, is the ability to take the texture data directly from an array of numbers. The numbers will become the color component values for the pixels in the texture. The function that is used for this is an alternative version of texImage2D:

texImage2D( target, mipmapLevel, internalFormat, width, height,
                                border, dataFormat, dataType, dataArray )

and a typical function call would have the form

gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 16, 16, 
                                0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

Compared to the original version of texImage2D, there are three extra parameters, width, height, and border. The width and height specify the size of the texture image. For WebGL, the border must be zero, and for WebGL 1.0, the internalFormat and dataFormat must be the same.

The last parameter in this version of texImage2D must be a typed array of type Uint8Array or Uint16Array, depending on the dataFormat of the texture. My examples will use Uint8Array and texture format gl.RGBA or gl.LUMINANCE.

For an RGBA texture, four color component values are needed for each pixel. The values will be given as unsigned bytes, with values ranging from 0 to 255, in a Uint8Array. The length of the array will be 4*width*height (that is, four times the number of pixels in the image). The data for the bottom row of pixels comes first in the array, followed by the row on top of that, and so on, with the pixels in a given row running from left to right. And within the data for one pixel, the red component comes first, followed by the green, then the blue, then the alpha.

As an example of making up texture data from scratch, let's make a 16-by-16 texture image, with the image divided into four 8-by-8 squares that are colored red, white, and blue. The code uses the fact that when a typed array is created, it is initially filled with zeros. We just have to change some of those zeros to 255.

let pixels = new Uint8Array( 4*16*16 );  // four bytes per pixel

for (let i = 0; i < 16; i++) {
    for (let j = 0; j < 16; j++) {
        let offset = 64*i + 4*j ;    // starting index of data for this pixel
        pixels[offset + 3] = 255;    // alpha value for the pixel
        if ( i < 8 && j < 8) { // bottom left quadrant is red
            pixels[offset] = 255;  // set red component to maximum
        }
        else if ( i >= 8 && j >= 8 ) { // top right quadrant is blue
            pixels[offset + 2] = 255; // set blue component to maximum
        }
        else { // the other two quadrants are white
            pixels[offset] = 255;     // set all components to maximum
            pixels[offset + 1] = 255;
            pixels[offset + 2] = 255;
        }
    }
}

texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 16, 16, 
                            0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

The last line is there because the default minification filter won't work without mipmaps. The texture uses the default magnification filter, which is also gl.LINEAR. This texture is used on the leftmost square in the image shown below. The image is from the sample program webgl/texture-from-pixels.html.

[Illustration: the textured squares drawn by the sample program webgl/texture-from-pixels.html]

Note the blending along the edges between colors in the leftmost square. The blending is caused by the gl.LINEAR magnification filter. The second square uses the same texture, but with the gl.NEAREST magnification filter, which eliminates the blending. The same effect can be seen in the next two squares, which use a black/white checkerboard pattern, one with gl.LINEAR as the magnification filter and one using gl.NEAREST. The texture is repeated ten times horizontally and vertically on the square. In this case, the texture is a tiny 2-by-2 image with two black and two white pixels.
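
The checkerboard texture could be created with code along these lines (a sketch, not necessarily the exact code from the sample program):

let pixels = new Uint8Array( 4*2*2 );   // four bytes per pixel, 2-by-2 image
for (let i = 0; i < 4; i++) {
    let c = (i == 0 || i == 3) ? 0 : 255;  // pixels 0 and 3 black, 1 and 2 white
    pixels[4*i] = c;         // red
    pixels[4*i + 1] = c;     // green
    pixels[4*i + 2] = c;     // blue
    pixels[4*i + 3] = 255;   // alpha
}
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 2, 2,
                            0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);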

As another example, consider the rightmost square in the image. The gradient effect on that square comes from a texture. The texture size is 256-by-1 pixels, with the color changing from black to white along the length of the texture. One copy of the texture is mapped to the square. For the gradient texture, I used gl.LUMINANCE as the texture format, which means that the data consists of one byte per pixel, giving the grayscale value for that pixel. The texture can be created using

let pixels = new Uint8Array( 256 );  // One byte per pixel
for ( let i = 0; i < 256; i++ ) {
    pixels[i] = i;  // Grayscale value for pixel number i is i.
}

gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, 256, 1, 
                            0, gl.LUMINANCE, gl.UNSIGNED_BYTE, pixels);

See the sample program for more detail.

6.4.4 立方体贴图纹理

Cubemap Textures

我们在5.3.4小节中遇到过立方体贴图纹理,在那里我们看到了它们在three.js中如何用于天空盒和环境映射。WebGL内置了对立方体贴图纹理的支持。纹理对象除了可以表示普通的图像纹理之外,也可以保存一个立方体贴图纹理。而且两个纹理对象可以同时绑定到同一个纹理单元,一个保存普通纹理,一个保存立方体贴图纹理。这两种纹理绑定到不同的目标:gl.TEXTURE_2D和gl.TEXTURE_CUBE_MAP。通过调用

gl.bindTexture(gl.TEXTURE_CUBE_MAP, texObj);

可以将纹理对象texObj绑定到当前活动纹理单元的立方体贴图目标上。

一个给定的纹理对象可以是常规纹理或立方体贴图,但不能两者都是。一旦它被绑定到一个纹理目标上,它就不能被重新绑定到另一个目标。

立方体贴图由六幅图像组成,每个立方体的每个面一幅。包含立方体贴图的纹理对象有六个图像插槽,由以下常量标识

gl.TEXTURE_CUBE_MAP_NEGATIVE_X
gl.TEXTURE_CUBE_MAP_POSITIVE_X
gl.TEXTURE_CUBE_MAP_NEGATIVE_Y
gl.TEXTURE_CUBE_MAP_POSITIVE_Y
gl.TEXTURE_CUBE_MAP_NEGATIVE_Z
gl.TEXTURE_CUBE_MAP_POSITIVE_Z

这些常量在gl.texImage2D和gl.copyTexImage2D中用作目标,取代gl.TEXTURE_2D的位置。(注意,把图像加载到立方体贴图纹理对象中有六个目标,但把纹理对象绑定到纹理单元只有一个目标,即gl.TEXTURE_CUBE_MAP。)立方体贴图通常存储为一组六幅图像,它们必须分别加载到纹理对象中。当然,WebGL也可以通过渲染这六幅图像来创建立方体贴图。

与网络上的图像一样,这里也有异步图像加载的问题需要处理。下面是一个示例,展示了在我的示例程序webgl/cubemap-fisheye.html中如何创建立方体贴图:

function loadCubemapTexture() {
    const tex = gl.createTexture();
    let imageCt = 0; // 完成加载的图像数量。

    load("cubemap-textures/park/negx.jpg", gl.TEXTURE_CUBE_MAP_NEGATIVE_X);
    load("cubemap-textures/park/posx.jpg", gl.TEXTURE_CUBE_MAP_POSITIVE_X);
    load("cubemap-textures/park/negy.jpg", gl.TEXTURE_CUBE_MAP_NEGATIVE_Y);
    load("cubemap-textures/park/posy.jpg", gl.TEXTURE_CUBE_MAP_POSITIVE_Y);
    load("cubemap-textures/park/negz.jpg", gl.TEXTURE_CUBE_MAP_NEGATIVE_Z);
    load("cubemap-textures/park/posz.jpg", gl.TEXTURE_CUBE_MAP_POSITIVE_Z);

    function load(url, target) {
        let img = new Image();
        img.onload = function() {
            gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);
            try {
                gl.texImage2D(target, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
            }
            catch (e) {
                document.getElementById("headline").innerHTML =
                "无法访问纹理。请注意,一些浏览器" +
                "无法从本地文件使用纹理。";
                return;
            }
            imageCt++;
            if (imageCt === 6) {  // 所有6个图像都已加载
                gl.generateMipmap(gl.TEXTURE_CUBE_MAP);
                document.getElementById("headline").innerHTML =
                "有趣的立方体贴图(鱼眼相机效果)";
                textureObject = tex;
                draw();
            }
        };
        img.onerror = function() {
            document.getElementById("headline").innerHTML =
            "对不起,无法加载纹理";
        };
        img.src = url;
    }
}

立方体贴图的图像必须具有相同的大小。它们必须是正方形。大小应该是2的幂。对于立方体贴图,诸如最小化滤波器之类的纹理参数是使用目标gl.TEXTURE_CUBE_MAP设置的,它们适用于立方体的所有六个面。例如,

gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

同样,gl.generateMipmap将为所有六个面生成mipmap(因此直到所有六个图像都已加载后才应该调用它)。


在着色器程序中,立方体贴图由类型为samplerCube的统一变量表示。在GLSL ES 1.00中,使用函数textureCube对纹理进行采样。例如,

vec4 color = textureCube(u_texture, vector);

第一个参数是表示纹理的samplerCube变量。第二个参数是一个vec3。立方体贴图不是使用常规纹理坐标进行采样的。相反,使用一个3D向量。目标是在纹理中选取一个点。纹理位于立方体的表面上。要使用向量在纹理中选取一个点,从立方体中心沿着向量方向投射一条射线,并检查该射线与立方体的交点。也就是说,如果你把向量的起始点放在立方体的中心,它就指向要采样纹理的立方体上的点。

由于我们在本章中没有进行3D图形处理,我们无法以常规方式使用立方体贴图。示例程序webgl/cubemap-fisheye.html以一种有趣但不一定很实用的方式使用了立方体贴图。该程序使用2D纹理坐标。片段着色器把一对2D纹理坐标变换成一个3D向量,然后用它来采样立方体贴图纹理。效果类似于鱼眼镜头相机拍出的照片。它看起来像这样。

[插图:视场为170度和330度的鱼眼效果图像]

左侧的图片模仿了一个170度视场的鱼眼镜头相机。右侧的视场是330度,以至于圆盘边缘附近的像素实际上显示了位于相机后面的立方体的部分。

对于每幅图片,程序绘制一个纹理坐标范围从0.0到1.0的正方形。在纹理坐标系统中,距离点(0.5,0.5)超过0.5的像素被着色为白色。在半径为0.5的圆盘内,围绕中心的每个圆圈被映射到单位球面上的一个圆圈,球面上的点随后被用作采样立方体贴图纹理的方向向量。出现在圆盘中心的纹理点是立方体与正z轴相交的点,即立方体贴图中“正z”图像的中心。你并不一定需要理解这些,但为了供你参考,下面是完成这项工作的片段着色器:

#ifdef GL_FRAGMENT_PRECISION_HIGH
    precision highp float;
#else
    precision mediump float;
#endif
uniform samplerCube u_texture;  
uniform float u_angle;  // 视场角度
varying vec2 v_texCoords;  
void main() {
    float dist = distance(v_texCoords, vec2(0.5));
    if (dist > 0.5)
        gl_FragColor = vec4(1.0);  // 白色
    else {
        float x, y; // 相对于中心(0.5,0.5)的坐标
        x = v_texCoords.x - 0.5; 
        y = v_texCoords.y - 0.5;
        vec2 circ = normalize(vec2(x, y));  // 在单位圆上
        float phi = radians(u_angle/2.0)*(2.0*dist);  // “纬度”
        vec3 vector = vec3(sin(phi)*circ.x, sin(phi)*circ.y, cos(phi));
        gl_FragColor = textureCube(u_texture, vector);  
    } 
}

We encountered cubemap textures in Subsection 5.3.4, where we saw how they are used in three.js for skyboxes and environment mapping. WebGL has built-in support for cubemap textures. Instead of representing an ordinary image texture, a texture object can hold a cubemap texture. And two texture objects can be bound to the same texture unit simultaneously, one holding an ordinary texture and one holding a cubemap texture. The two textures are bound to different targets, gl.TEXTURE_2D and gl.TEXTURE_CUBE_MAP. A texture object, texObj, is bound to the cubemap target in the currently active texture unit by calling

gl.bindTexture( gl.TEXTURE_CUBE_MAP, texObj );

A given texture object can be either a regular texture or a cubemap texture, not both. Once it has been bound to one texture target, it cannot be rebound to the other target.

A cubemap texture consists of six images, one for each face of the cube. A texture object that holds a cubemap texture has six image slots, identified by the constants

gl.TEXTURE_CUBE_MAP_NEGATIVE_X
gl.TEXTURE_CUBE_MAP_POSITIVE_X
gl.TEXTURE_CUBE_MAP_NEGATIVE_Y
gl.TEXTURE_CUBE_MAP_POSITIVE_Y
gl.TEXTURE_CUBE_MAP_NEGATIVE_Z
gl.TEXTURE_CUBE_MAP_POSITIVE_Z

The constants are used as the targets in gl.texImage2D and gl.copyTexImage2D, in place of gl.TEXTURE_2D. (Note that there are six targets for loading images into a cubemap texture object, but only one target, gl.TEXTURE_CUBE_MAP, for binding the texture object to a texture unit.) A cubemap texture is often stored as a set of six images, which must be loaded separately into a texture object. Of course, it is also possible for WebGL to create a cubemap by rendering the six images.

As usual for images on the web, there is the problem of asynchronous image loading to be dealt with. Here, for example, is a function that creates a cubemap texture in my sample program webgl/cubemap-fisheye.html:

function loadCubemapTexture() {
    const  tex = gl.createTexture();
    let  imageCt = 0; // Number of images that have finished loading.

    load( "cubemap-textures/park/negx.jpg", gl.TEXTURE_CUBE_MAP_NEGATIVE_X );
    load( "cubemap-textures/park/posx.jpg", gl.TEXTURE_CUBE_MAP_POSITIVE_X );
    load( "cubemap-textures/park/negy.jpg", gl.TEXTURE_CUBE_MAP_NEGATIVE_Y );
    load( "cubemap-textures/park/posy.jpg", gl.TEXTURE_CUBE_MAP_POSITIVE_Y );
    load( "cubemap-textures/park/negz.jpg", gl.TEXTURE_CUBE_MAP_NEGATIVE_Z );
    load( "cubemap-textures/park/posz.jpg", gl.TEXTURE_CUBE_MAP_POSITIVE_Z );

    function load(url, target) {
        let  img = new Image();
        img.onload = function() {
            gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);
            try {
                gl.texImage2D(target, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
            }
            catch (e) {
                document.getElementById("headline").innerHTML =
                "Can't access texture.  Note that some browsers" +
                " can't use  a texture from a local file.";
                return;
            }
            imageCt++;
            if (imageCt === 6) {  // all 6 images have been loaded
                gl.generateMipmap( gl.TEXTURE_CUBE_MAP );
                document.getElementById("headline").innerHTML = 
                                    "Funny Cubemap (Fisheye Camera Effect)";
                textureObject = tex;
                draw();
            }
        };
        img.onerror = function() {
            document.getElementById("headline").innerHTML = 
                                            "SORRY, COULDN'T LOAD TEXTURES";
        };
        img.src = url;
    }
}

The images for a cubemap must all be the same size. They must be square. The size should, as usual, be a power of two. For a cubemap texture, texture parameters such as the minification filter are set using the target gl.TEXTURE_CUBE_MAP, and they apply to all six faces of the cube. For example,

gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

Similarly, gl.generateMipmap will generate mipmaps for all six faces (so it should not be called until all six images have been loaded).


In a shader program, a cube map texture is represented by a uniform variable of type samplerCube. In GLSL ES 1.00, the texture is sampled using function textureCube. For example,

vec4 color = textureCube( u_texture, vector );

The first parameter is the samplerCube variable that represents the texture. The second parameter is a vec3. Cube map textures are not sampled using regular texture coordinates. Instead, a 3D vector is used. The goal is to pick out a point in the texture. The texture lies on the surface of a cube. To use a vector to pick out a point in the texture, cast a ray from the center of the cube in the direction given by the vector, and check where that ray intersects the cube. That is, if you put the starting point of the vector at the center of the cube, it points to the point on the cube where the texture is to be sampled.
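
In a typical 3D program, the direction vector would be interpolated from the vertices of a primitive. A minimal fragment shader sketch, with assumed variable names, is:

precision mediump float;
uniform samplerCube u_skybox;
varying vec3 v_direction;   // direction vector, interpolated from the vertices
void main() {
    gl_FragColor = textureCube( u_skybox, normalize(v_direction) );
}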

Since we aren't doing 3D graphics in this chapter, we can't use cube maps in the ordinary way. The sample program webgl/cubemap-fisheye.html uses a cube map in an interesting, if not very useful way. The program uses 2D texture coordinates. The fragment shader transforms a pair of 2D texture coordinates into a 3D vector that is then used to sample the cubemap texture. The effect is something like a photograph produced by a fisheye camera. Here's what it looks like.

[Illustration: fisheye views with 170-degree and 330-degree fields of view]

The picture on the left imitates a fisheye camera with a 170-degree field of view. On the right the field of view is 330-degrees, so that pixels near the edge of the disk actually show parts of the cube that lie behind the camera.

For each picture, the program draws a square with texture coordinates ranging from 0.0 to 1.0. In the texture coordinate system, pixels at a distance greater than 0.5 from the point (0.5,0.5) are colored white. Within the disk of radius 0.5, each circle around the center is mapped to a circle on the unit sphere. That point is then used as the direction vector for sampling the cubemap texture. The point in the texture that appears at the center of the disk is the point where the cube intersects the positive z-axis, that is, the center of the "positive z" image from the cube map. You don't actually need to understand this, but here, for your information, is the fragment shader that does the work:

#ifdef GL_FRAGMENT_PRECISION_HIGH
    precision highp float;
#else
    precision mediump float;
#endif
uniform samplerCube u_texture;  
uniform float u_angle;  // field of view angle
varying vec2 v_texCoords;  
void main() {
float dist =  distance( v_texCoords, vec2(0.5) );
if (dist > 0.5)
    gl_FragColor = vec4(1.0);  // white
else {
    float x,y; // coords relative to a center at (0.5,0.5)
    x = v_texCoords.x - 0.5; 
    y = v_texCoords.y - 0.5;
    vec2 circ = normalize(vec2(x,y));  // on the unit circle
    float phi = radians(u_angle/2.0)*(2.0*dist);  // "latitude"
    vec3 vector = vec3(sin(phi)*circ.x, sin(phi)*circ.y, cos(phi));
    gl_FragColor = textureCube( u_texture, vector );  
    } 
}

6.4.5 计算示例

A Computational Example

GPU可以提供巨大的处理能力。虽然GPU最初是设计用来渲染图像的,但人们很快意识到,同样的能力可以用来进行更通用的编程。并非每个编程任务都能利用典型GPU的高度并行架构,但如果一个任务可以分解为许多可以并行运行的子任务,那么通过将其适应在GPU上运行,就可能显著加速任务。现代GPU已经变得更加计算多样化,但在仅设计用于处理颜色的GPU中,这可能意味着以颜色值的方式表示计算的数据。通常的技巧是将数据表示为纹理中的颜色,并使用纹理查找函数访问数据。

示例程序webgl/webgl-game-of-life.html是这种方法的一个简单示例。该程序实现了约翰·康威(John Conway)著名的生命游戏(它其实并不是一个游戏)。生命游戏棋盘由一个单元格网格组成,每个单元格可以是活的或死的。有一套规则,根据棋盘的当前状态(即当前“一代”)产生新的一代。一旦为每个单元格指定了初始状态,游戏就可以按照规则自行运行,产生一代又一代。规则根据某个单元格及其八个相邻单元格在当前一代中的状态,计算该单元格在下一代中的状态。要应用这些规则,必须查看每个相邻单元格并统计活着的邻居数量。同样的过程应用于每个单元格,因此这是一个高度可并行化的任务,可以很容易地改造成在GPU上运行。

在示例程序中,生命游戏棋盘是一个1024x1024的画布,每个像素代表一个单元格。活细胞被涂成白色,死细胞是黑色的。该程序使用WebGL根据当前棋盘计算下一代棋盘。这项工作是在片段着色器中完成的。要触发计算,就绘制一个覆盖整个画布的正方形,这会导致对画布上的每个像素调用片段着色器。片段着色器需要访问该片段及其八个邻居的当前颜色,但它无法直接查询这些颜色。为了让着色器能获得这些信息,程序使用函数gl.copyTexImage2D()把棋盘复制到一个纹理对象中。然后,片段着色器可以使用GLSL纹理查找函数texture2D()获取所需的信息。

有趣的一点是,片段着色器不仅需要自身的纹理坐标,还需要其邻居的纹理坐标。片段本身的纹理坐标作为变化变量传递到片段着色器,每个坐标的值范围为0到1。它可以通过在其自身的纹理坐标上添加偏移量来获取邻居的纹理坐标。由于纹理是1024x1024像素,因此邻居的纹理坐标需要偏移1.0/1024.0。以下是完整的GLSL ES 1.00片段着色器程序:

#ifdef GL_FRAGMENT_PRECISION_HIGH
    precision highp float;
#else
    precision mediump float;
#endif
varying vec2 v_coords;     // 此单元格的纹理坐标
const float scale = 1.0/1024.0;  // 1.0/画布大小;(在纹理坐标中
                                //   邻近单元格之间的偏移)
uniform sampler2D source;  // 持有前一代的纹理

void main() {
    int alive;  // 这个单元格是活的吗?
    if (texture2D(source,v_coords).r > 0.0)
        alive = 1;
    else
        alive = 0;

    // 计算活着的邻居数量。要检查是否活着,只需测试
    // 颜色的红色分量:活细胞为1.0,死细胞为0.0。

    int neighbors = 0; // 将是一个活邻居的数量

    if (texture2D(source,v_coords+vec2(scale,scale)).r > 0.0)
        neighbors += 1;
    if (texture2D(source,v_coords+vec2(scale,0)).r > 0.0)
        neighbors += 1;
    if (texture2D(source,v_coords+vec2(scale,-scale)).r > 0.0)
        neighbors += 1;

    if (texture2D(source,v_coords+vec2(0,scale)).r > 0.0)
        neighbors += 1;
    if (texture2D(source,v_coords+vec2(0,-scale)).r > 0.0)
        neighbors += 1;

    if (texture2D(source,v_coords+vec2(-scale,scale)).r > 0.0)
        neighbors += 1;
    if (texture2D(source,v_coords+vec2(-scale,0)).r > 0.0)
        neighbors += 1;
    if (texture2D(source,v_coords+vec2(-scale,-scale)).r > 0.0)
        neighbors += 1;

    // 使用生命规则输出此单元格的新颜色。

    float color = 0.0; // 死细胞的颜色
    if (alive == 1) {
        if (neighbors == 2 || neighbors == 3)
            color = 1.0; // 活细胞的颜色;细胞保持活力
    }
    else if ( neighbors == 3 )
        color = 1.0; // 活细胞的颜色;细胞复活

    gl_FragColor = vec4(color, color, color, 1);
}

程序中还有一些其他有趣的点。在创建WebGL图形上下文时,会关闭反锯齿以确保每个像素完全是黑色或白色。反锯齿可能会通过平均附近像素的颜色来模糊颜色。类似地,纹理的放大和缩小滤波器被设置为gl.NEAREST以避免颜色平均。还有在棋盘上设置初始配置的问题——这是通过使用具有不同片段着色器的另一个着色器程序在棋盘上绘制来完成的。

在我的电脑上,示例程序webgl/webgl-game-of-life.html每秒可以轻松计算360代。我强烈建议您试一试,观看它的运行会很有趣。

GPU上的通用编程变得越来越重要。现代GPU可以执行与颜色无关的计算,使用各种数值数据类型。正如我们将看到的,WebGL 2.0已经朝这个方向迈进了一点,但从Web访问GPU的全部计算能力将需要一个新的API。目前正在开发中的WebGPU,已经在一些网络浏览器中作为实验性功能提供,是满足这一需求的尝试。(然而,与WebGL不同,它不基于OpenGL。)

A GPU can offer an immense amount of processing power. Although GPUs were originally designed to apply that power to rendering images, it was quickly realized that the same power could be harnessed to do much more general types of programming. Not every programming task can take advantage of the highly parallel architecture of the typical GPU, but if a task can be broken down into many subtasks that can be run in parallel, then it might be possible to speed up the task significantly by adapting it to run on a GPU. Modern GPUs have become much more computationally versatile, but in GPUs that were designed to work only with colors, that might mean somehow representing the data for a computation as color values. The trick often involves representing the data as colors in a texture, and accessing the data using texture lookup functions.

The sample program webgl/webgl-game-of-life.html is a simple example of this approach. The program implements John Conway's well-known Game of Life (which is not really a game). A Life board consists of a grid of cells that can be either alive or dead. There is a set of rules that takes the current state, or "generation," of the board and produces a new generation. Once some initial state has been assigned to each cell, the game can play itself, producing generation after generation, according to the rules. The rules compute the state of a cell in the next generation from the states of the cell and its eight neighboring cells in the current generation. To apply the rules, you have to look at each neighboring cell and count the number of neighbors that are alive. The same process is applied to every cell, so it is a highly parallelizable task that can be easily adapted to run on a GPU.

In the sample program, the Life board is a 1024-by-1024 canvas, with each pixel representing a cell. Living cells are colored white, and dead cells are black. The program uses WebGL to compute the next generation of the board from the current board. The work is done in a fragment shader. To trigger the computation, a single square is drawn that covers the entire canvas, which causes the fragment shader to be called for every pixel in the canvas. The fragment shader needs access to the current color of the fragment and of its eight neighbors, but it has no way to query those colors directly. To give the shader access to that information, the program copies the board into a texture object, using the function gl.copyTexImage2D(). The fragment shader can then get the information that it needs using the GLSL texture lookup function texture2D().
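
The per-generation step might look roughly like the following sketch, assuming that the texture object that holds the previous generation has already been created and bound, and that drawNextGeneration() draws the square that covers the canvas using the Life shader; the function names here are hypothetical:

function computeNextGeneration() {
    // Copy the current board (the color buffer) into the bound texture object,
    // so that the fragment shader can read the current generation.
    gl.copyTexImage2D( gl.TEXTURE_2D, 0, gl.RGBA, 0, 0, 1024, 1024, 0 );
    // Draw a square covering the canvas; the Life fragment shader computes
    // the next generation, which replaces the image in the color buffer.
    drawNextGeneration();
}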

An interesting point is that the fragment shader needs the texture coordinates not just for itself but for its neighbors. The texture coordinates for the fragment itself are passed into the fragment shader as a varying variable, with values in the range 0 to 1 for each coordinate. It can get the texture coordinates for a neighbor by adding an offset to its own texture coordinates. Since the texture is 1024-by-1024 pixels, the texture coordinates for a neighbor need to be offset by 1.0/1024.0. Here is the complete GLSL ES 1.00 fragment shader program:

#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
varying vec2 v_coords;     // texture coordinates for this cell
const float scale = 1.0/1024.0;  // 1.0 / canvas_size; (offset between 
                                //   neighboring cells, in texture coords)
uniform sampler2D source;  // the texture holding the previous generation

void main() {
    int alive;  // is this cell alive ?
    if (texture2D(source,v_coords).r > 0.0)
        alive = 1;
    else
        alive = 0;

    // Count the living neighbors.  To check for living, just test
    // the red component of the color, which will be 1.0 for a
    // living cell and 0.0 for a dead cell.

    int neighbors = 0; // will be the number of neighbors that are alive

    if (texture2D(source,v_coords+vec2(scale,scale)).r > 0.0)
        neighbors += 1;
    if (texture2D(source,v_coords+vec2(scale,0)).r > 0.0)
        neighbors += 1;
    if (texture2D(source,v_coords+vec2(scale,-scale)).r > 0.0)
        neighbors += 1;

    if (texture2D(source,v_coords+vec2(0,scale)).r > 0.0)
        neighbors += 1;
    if (texture2D(source,v_coords+vec2(0,-scale)).r > 0.0)
        neighbors += 1;

    if (texture2D(source,v_coords+vec2(-scale,scale)).r > 0.0)
        neighbors += 1;
    if (texture2D(source,v_coords+vec2(-scale,0)).r > 0.0)
        neighbors += 1;
    if (texture2D(source,v_coords+vec2(-scale,-scale)).r > 0.0)
        neighbors += 1;

    // Output the new color for this cell, using the rules of Life.

    float color = 0.0; // color for dead cell
    if (alive == 1) {
        if (neighbors == 2 || neighbors == 3)
            color = 1.0; // color for living cell; cell stays alive
    }
    else if ( neighbors == 3 )
        color = 1.0; // color for living cell; cell comes to life

    gl_FragColor = vec4(color, color, color, 1);
}

There are some other points of interest in the program. When the WebGL graphics context is created, anti-aliasing is turned off to make sure that every pixel is either perfectly black or perfectly white. Antialiasing could smear out the colors by averaging the colors of nearby pixels. Similarly, the magnification and minification filters for the texture are set to gl.NEAREST to avoid averaging of colors. Also, there is the issue of setting the initial configuration onto the board—that's done by drawing onto the board using another shader program with a different fragment shader.
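
In outline, that setup might look something like the following sketch. (The canvas id and variable names are assumptions for the example, not the sample program's actual code.)

// Request a WebGL context with antialiasing disabled, so that pixel
// colors are never averaged with their neighbors.
let canvas = document.getElementById("webglcanvas");  // hypothetical canvas id
let gl = canvas.getContext("webgl", { antialias: false });

// For the texture that holds the board, use NEAREST filtering so that
// texture sampling never blends the colors of neighboring texels.
gl.bindTexture(gl.TEXTURE_2D, boardTexture);  // boardTexture: a hypothetical texture object
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);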

On my computer, the sample program webgl/webgl-game-of-life.html can easily compute 360 generations per second. I urge you to try it. It can be fun to watch.

General purpose programming on GPUs has become more and more important. Modern GPUs can do computations that have nothing to do with color, using various numerical data types. WebGL 2.0, as we'll see, has moved a bit in that direction, but accessing the full computational power of GPUs from the Web will require a new API. WebGPU, currently under development and already available as an experimental feature in some web browsers, is an attempt to fulfill that need. (However, unlike WebGL, it is not based on OpenGL.)

6.4.6 WebGL 2.0 中的纹理

Textures in WebGL 2.0

WebGL 2.0中的一个重大变化是大大增加了对纹理的支持。增加了许多新的纹理格式。在OpenGL中,RGBA颜色分量被表示为介于零和一之间的浮点值,但实际上通常被存储为一个字节的无符号整数,值范围在0到255之间,这与大多数屏幕上显示颜色的格式相匹配。事实上,您无法真正控制用于显示的颜色的内部表示方式。有些计算机显示器每个像素只用16位而不是32位,而新的HDR(高动态范围)显示器每个像素可以使用更多的位。但是,在存储纹理数据时,并不真正需要与物理显示上使用的颜色格式相匹配。

WebGL 2.0引入了大量所谓的“有大小”的纹理格式,这使程序员能够控制纹理中数据的表示方式。例如,如果格式是gl.RGBA32F,则纹理包含每个像素的四个32位浮点数,每个RGBA颜色分量一个。格式gl.R32UI表示每个像素一个32位无符号整数。而gl.RG8I意味着每个像素两个8位整数。gl.RGBA8对应于通常的格式,每个颜色分量使用一个8位无符号整数。这些有大小的格式用于纹理的内部格式,在调用gl.texImage2D()等函数时作为internalFormat参数,指定数据在纹理中的实际存储方式。您可以将具有这类内部格式的纹理用作图像纹理进行渲染。但是,用32位表示一个颜色分量,所能编码的颜色远多于视觉上能够区分的颜色。这些数据格式特别适用于计算类应用,在那里您确实需要控制所处理数据的类型。但是,要有效地对存储在纹理中的数据进行计算,我们确实需要能够将数据写入纹理,以及从纹理中读取数据。而为此,我们需要帧缓冲区,它要到第7.4节才会介绍。现在,我们只看看WebGL 2.0 API用于处理纹理的一些方面。

可以使用各种版本的texImage2D()函数从图像或数据数组初始化纹理,或者在没有提供数据源时将其初始化为零。WebGL 2.0还有另一个可能更高效的函数来为纹理分配存储空间并将其初始化为零:

gl.texStorage2D(target, levels, internalFormat, width, height);

第一个参数是gl.TEXTURE_2D或gl.TEXTURE_CUBE_MAP。第二个参数指定应分配的mipmap级别数量;通常,这将是1。width和height给出纹理的大小,当然,internalFormat指定纹理的数据格式。internalFormat必须是有大小的内部格式之一,如gl.RGBA8。

WebGL 2.0支持3D纹理,它们保存一个三维texel网格的数据,相关函数有gl.texImage3D()和gl.texStorage3D()。它还支持深度纹理,用于存储类似深度测试中使用的那种深度值,通常用于阴影映射。它还可以处理压缩纹理,这可以减少需要在CPU和GPU之间传输的数据量。但是,如果您需要这些功能,我将让您自己探索。

着色器程序使用采样器变量从纹理中读取数据。着色器编程语言GLSL ES 3.00引入了许多新的采样器类型来处理WebGL 2.0中的新纹理格式。GLSL ES 1.00只有sampler2DsamplerCube,新语言增加了类型,如用于3D纹理的sampler3D,用于采样值为有符号整数的纹理的isampler2D,以及用于采样深度纹理的sampler2DShadow。例如,对于采样具有32位整数格式的纹理,您可能会声明一个采样器变量,如下所示:

uniform highp isampler2D datatexture;

由于isampler2D变量没有默认精度,因此必须指定精度限定符highp。使用高精度确保您可以精确地读取32位整数。(sampler2D类型具有默认精度lowp,当颜色分量确实是8位整数时,这已经足够,但对于浮点数据纹理来说,这可能不是您想要的。)

GLSL ES 1.00使用函数texture2D()对2D纹理进行采样,使用textureCube()对立方体贴图进行采样。GLSL ES 3.00没有为每种采样器类型提供单独的函数,而是去除了texture2D和textureCube,并用一个重载的函数texture()替换它们,该函数可用于采样任何类型的纹理。所以,上面定义的datatexture可以像下面这样采样:

highp ivec4 data = texture(datatexture, coords);

其中coords是一个包含纹理坐标的vec2。但实际上,您可能想要更直接地访问texel值。有一个新的texelFetch()函数,它从纹理中提取texel值,将纹理视为texel数组。texel使用从0到纹理大小范围内的整数坐标来访问。应用于datatexture,这可能看起来像:

highp ivec4 data = texelFetch(datatexture, ivec2(i,j), 0);

其中i的范围是0到纹理宽度减一,j的范围是0到高度减一。第三个参数(这里是0)指定了要访问的mipmap级别。(对于整数纹理,您可能不会使用mipmap。)

(示例程序webgl/texelFetch-MonaLisa-webgl2.html是使用texelFetch()的一个相当奇特的例子,尽管它使用的是普通图像纹理而不是数据纹理。)

关于WebGL 2.0纹理还有更多可以讨论的内容,但这将使我们远远超出这本入门教科书所需的范围。

One of the major changes in WebGL 2.0 is greatly increased support for textures. A large number of new texture formats have been added. RGBA color components in OpenGL are represented as floating point values in the range zero to one, but in practice are often stored as one-byte unsigned integers, with values in the range 0 to 255, which matches the format that is used for displaying colors on most screens. In fact, you don't really have control over how colors are represented internally for use on displays. There have been computer displays that used only 16 bits per pixel instead of 32, and new HDR (High Dynamic Range) displays can use even more bits per pixel. But when storing data in a texture, it's not really necessary to match the color format that is used on a physical display.

WebGL 2.0 introduced a large number of so-called "sized" texture formats, which give the programmer control over how the data in the texture is represented. For example, if the format is gl.RGBA32F, then the texture contains four 32-bit floating point numbers for each pixel, one for each of the four RGBA color components. The format gl.R32UI indicates one 32-bit unsigned integer per pixel. And gl.RG8I means two 8-bit integers per pixel. And gl.RGBA8 corresponds to the usual format, using one 8-bit unsigned integer for each color component. These sized formats are used for the internal format of a texture, the internalFormat parameter in a call to a function like gl.texImage2D(), which specifies how the data is actually stored in the texture. You can use textures with sized internal formats as image textures for rendering. But 32 bits for a color component encodes far more different colors than could ever be distinguished visually. These data formats are particularly useful for computational applications, where you really need to control what kind of data you are working with. But to effectively compute with data stored in textures, we really need to be able to write data to textures, as well as read from textures. And for that, we need framebuffers, which won't be covered until Section 7.4. For now, we will just look at a few aspects of the WebGL 2.0 API for working with textures.
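
As an illustration, allocating a texture with a sized integer format in WebGL 2.0 might look like the following sketch. (The texture size and variable name are invented for the example.)

// Allocate a 256-by-256 texture that holds one 32-bit unsigned integer per
// pixel. For the sized internal format gl.R32UI, the matching format/type
// parameters are gl.RED_INTEGER and gl.UNSIGNED_INT; passing null for the
// data leaves the texture filled with zeros.
let dataTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, dataTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32UI, 256, 256, 0,
              gl.RED_INTEGER, gl.UNSIGNED_INT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);  // integer textures
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);  //   cannot be filtered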

Various versions of the texImage2D() function can be used to initialize a texture from an image or from an array of data—or to zero, when no data source is provided. WebGL 2.0 has another, potentially more efficient, function for allocating the storage for a texture and initializing it to zero:

gl.texStorage2D( target, levels, internalFormat, width, height );

The first parameter is gl.TEXTURE_2D or gl.TEXTURE_CUBE_MAP. The second parameter specifies the number of mipmap levels that should be allocated; generally, this will be 1. The width and height give the size of the texture, and of course the internalFormat specifies the data format for the texture. The internalFormat must be one of the sized internal formats, such as gl.RGBA8.
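
For example, a call that allocates storage for a 1024-by-1024 RGBA8 texture with a single mipmap level might look like this sketch (the variable name tex is hypothetical):

gl.bindTexture(gl.TEXTURE_2D, tex);  // tex: a previously created texture object
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA8, 1024, 1024);  // contents start out all zeros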

WebGL 2.0 has support for 3D textures, which hold data for a 3D grid of texels, with functions gl.texImage3D() and gl.texStorage3D(). It has depth textures, which store depth values like those used in the depth test and are commonly used for shadow mapping. And it can work with compressed textures, which can decrease the amount of data that needs to be transferred between the CPU and the GPU. However, I will leave you to explore these capabilities on your own if you need them.
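
Just to give the flavor, allocating storage for a small 3D texture might look like the following sketch (the sizes and variable name are invented for the example):

// Allocate a 64-by-64-by-64 3D texture using the ordinary RGBA8 format.
let tex3D = gl.createTexture();
gl.bindTexture(gl.TEXTURE_3D, tex3D);
gl.texStorage3D(gl.TEXTURE_3D, 1, gl.RGBA8, 64, 64, 64);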

Shader programs use sampler variables to read data from textures. The shader programming language GLSL ES 3.00 introduces a number of new sampler types to deal with the new texture formats in WebGL 2.0. Where GLSL ES 1.00 had only sampler2D and samplerCube, the newer language adds types such as sampler3D for 3D textures, isampler2D for sampling textures whose values are signed integers, and sampler2DShadow for sampling depth textures. For example, for sampling a texture with a 32-bit integer format, you might declare a sampler variable such as

uniform highp isampler2D datatexture;

The precision qualifier, highp, must be specified because isampler2D variables do not have a default precision. Using high precision ensures that you can read 32-bit integers exactly. (The sampler2D type has default precision lowp, which is sufficient when color components are really 8-bit integers but which might not be what you want for floating point data textures.)

GLSL ES 1.00 uses the function texture2D() to sample a 2D texture and textureCube() for sampling a cubemap texture. Rather than have a separate function for each sampler type, GLSL ES 3.00 removes texture2D and textureCube and replaces them with a single overloaded function texture(), which can be used to sample any kind of texture. So, the datatexture defined above might be sampled using

highp ivec4 data = texture( datatexture, coords );

where coords is a vec2 holding the texture coordinates. But in fact, you might want to access texel values more directly. There is a new texelFetch() function that fetches texel values from a texture, treating the texture as an array of texels. Texels are accessed using integer coordinates that range from 0 up to the size of the texture. Applied to datatexture, this could look like

highp ivec4 data = texelFetch( datatexture, ivec2(i,j), 0 );

where i ranges from 0 to the width of the texture minus one, and j ranges from 0 to the height minus one. The third parameter, 0 here, specifies the mipmap level that is being accessed. (For integer textures, you are not likely to be using mipmaps.)
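
In a fragment shader, a natural source of integer texel coordinates is gl_FragCoord. The following GLSL ES 3.00 fragment shader is a minimal sketch, not taken from any sample program; it assumes that the texture is the same size as the drawing buffer and that the output is written to an integer color buffer:

#version 300 es
precision highp float;
uniform highp isampler2D datatexture;  // hypothetical integer data texture
out highp ivec4 fragData;              // assumes an integer color buffer
void main() {
    ivec2 coords = ivec2(gl_FragCoord.xy);          // integer texel coordinates
    fragData = texelFetch(datatexture, coords, 0);  // third parameter is the mipmap level
}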

(The sample program webgl/texelFetch-MonaLisa-webgl2.html is a rather fanciful example of using texelFetch(), though with an ordinary image texture rather than a data texture.)

There is a lot more that could be said about WebGL 2.0 textures, but it would take us well beyond what I need for this introductory textbook.