
7.5: WebGL Extensions


    WebGL is designed to run on a wide variety of devices, including mobile devices that have relatively limited graphical capabilities. Because of this, only a minimal set of features is required of all WebGL implementations. However, WebGL has a mechanism for activating additional, optional features. The optional features are defined in WebGL extensions. A web page that requires a WebGL extension is not guaranteed to work in every implementation of WebGL. However, in many cases, it is fairly easy to write a page that can work with or without the extension, though perhaps with some missing feature when the extension is not available. There are several dozen extensions whose definitions have been standardized. These standard extensions are documented at https://www.khronos.org/registry/webgl/extensions/.

    Standard OpenGL also has an extension mechanism. Historically, many features from extensions in one version of OpenGL have become required features in later versions. The same is likely to be true for WebGL extensions. In fact, some of the WebGL 1.0 extensions have been incorporated as required features in WebGL 2.0.

    This section covers the WebGL extension mechanism, and it discusses a few of the standard extensions.

    Anisotropic Filtering

    We start with a simple extension that can improve the appearance of textures in some scenes. The standard filtering methods for sampling an image texture give poor results when the texture is viewed at an oblique angle. In that case, a pixel on the surface corresponds to a trapezoidal region in the texture, and the standard minification and magnification filter rules such as gl.LINEAR don’t handle that case very well. (Filtering was covered in Subsection 4.3.2.) A better result can be obtained, at the cost of additional computation, using something called anisotropic filtering, which samples the texture taking the trapezoidal shape into account. Many GPUs can do anisotropic filtering. It is not a required feature in WebGL implementations, but it is commonly available as an extension.

    The sample program webgl/anisotropic-filtering.html shows how to use anisotropic filtering in WebGL. It shows a large plane textured with a brick image that can be viewed from a sharp, oblique angle. If the extension is available, then the user can turn anisotropic filtering on and off. If it is not available, the program will still draw the scene, but only using standard filtering. Here are two images from the program. Anisotropic filtering is used in the image on the right. On the left, without anisotropic filtering, the texture is blurred even at moderate distances from the viewer:

    Figure 120

    Each WebGL extension has a name. The function gl.getExtension(name) is used to activate an extension, where name is a string containing the name of the extension. The return value of the function is null if the extension is not available, and you should always check the return value before attempting to use the extension. If the return value is not null, then it is a JavaScript object. The object might contain, for example, constants that are meant to be passed to WebGL API functions in order to make use of the functionality of the extension.

    The name of the anisotropic filtering extension is “EXT_texture_filter_anisotropic”. (Names are not case-sensitive.) To test for the availability of the extension and to activate it, a program can use a statement such as

    anisotropyExtension = gl.getExtension("EXT_texture_filter_anisotropic");

    If anisotropyExtension is null, then the extension is not available. If it is not null, then the object has a property named TEXTURE_MAX_ANISOTROPY_EXT that can be used as a parameter to gl.texParameteri to set the level, or amount, of anisotropic filtering that will be applied to the texture. For example, after creating and binding a texture, a program might say

    gl.texParameteri(gl.TEXTURE_2D, anisotropyExtension.TEXTURE_MAX_ANISOTROPY_EXT, 16);

    The third parameter is the anisotropic filtering level. Setting the level to 1 will turn off anisotropic filtering. Higher values give better results. There is an implementation-dependent maximum level, but asking for a level greater than the maximum is not an error—you will simply get the maximum level. To find out the maximum, you can use

    max = gl.getParameter( anisotropyExtension.MAX_TEXTURE_MAX_ANISOTROPY_EXT );

    It is recommended to use gl.LINEAR_MIPMAP_LINEAR as the minification filter and gl.LINEAR as the magnification filter when using anisotropic filtering. A texture would typically be configured using code similar to the following:

    gl.bindTexture(gl.TEXTURE_2D, texture);  // texture is the texture object being configured
    gl.generateMipmap(gl.TEXTURE_2D); // Need mipmaps for the minification filter!
    gl.texParameteri(gl.TEXTURE_2D,gl.TEXTURE_MIN_FILTER,gl.LINEAR_MIPMAP_LINEAR);
    gl.texParameteri(gl.TEXTURE_2D,gl.TEXTURE_MAG_FILTER,gl.LINEAR);
    if (anisotropyExtension != null) {
            // turn on anisotropic filtering only if it is available.
        max = gl.getParameter(anisotropyExtension.MAX_TEXTURE_MAX_ANISOTROPY_EXT);
        gl.texParameteri(gl.TEXTURE_2D, anisotropyExtension.TEXTURE_MAX_ANISOTROPY_EXT, max);
    }

    If the extension is not available, the texture might not look as good as it could, but it will still work (and only a very observant user is likely to notice).

    Floating-Point Colors

    As a second example, we consider a pair of extensions that are named “OES_texture_float” and “WEBGL_color_buffer_float”. The first of these makes it possible to use textures in which color component values are floating-point numbers, instead of eight-bit integers. The second makes it possible to render to such a texture by using it as the color buffer in a framebuffer.

    Why would someone want to do this? Eight-bit integers are fine for representing colors visually, but they don’t have enough precision for doing accurate calculations. For applications that do significant numerical processing with color components, floating-point values are essential.

    As an example, consider finding the average color value of an image, which requires adding up the color values from a large number of pixels. This is something that can be sped up by using the parallel processing power of a GPU. My technique for doing so uses two framebuffers, with two textures serving as color buffers. I assume that the image width and height are powers of two. Start by drawing the image to the first texture. Think of the image as divided in half, horizontally and vertically, giving four equal-size rectangles. As a first step, compute a half-size image that is the average of those four rectangles. That is, the color of a pixel in the half-size image is the average of the colors of four pixels in the original. The averaged image can be computed by drawing a half-size rectangle to the second framebuffer, using multiple samples from the image in the first texture. Here is a fragment shader that does the work:

    precision mediump float;
    varying vec2 v_coords; // Texture coordinates, same as object coords.
    uniform sampler2D texture; // A texture containing the original image.
    uniform float offset; // Size of square in texture coordinate space.
    void main() {
        vec4 a = texture2D(texture, v_coords);
        vec4 b = texture2D(texture, v_coords + vec2(offset,0));
        vec4 c = texture2D(texture, v_coords + vec2(0,offset));
        vec4 d = texture2D(texture, v_coords + vec2(offset,offset));
        gl_FragColor = (a + b + c + d)/4.0; // Color is average of four samples.
    }

    In this first pass, the square with vertices at (0,0) and (0.5,0.5) is rendered, and offset is 0.5. The drawing is done in a coordinate system in which the square with vertices (0,0) and (1,1) covers the entire drawing area. In that coordinate system, the square with vertices at (0,0) and (0.5,0.5) covers the lower left quarter of the drawing area. The first sample in the fragment shader comes from that quarter of the texture image, and the other three samples come from corresponding points in the other three quarters of the image.

    In a second pass, the roles of the two framebuffers are swapped, and a square with vertices at (0,0) and (0.25,0.25) is drawn, using the same fragment shader with offset equal to 0.25. Since the framebuffers were swapped, the second pass is sampling the half-sized image that was produced in the first pass. The result is a quarter-sized image that is the average of four rectangles that cover the half-sized image—and therefore of 16 rectangles that cover the original image. This can be repeated, with smaller and smaller squares, until the resulting image is small enough that its colors can be efficiently read back into the CPU and averaged there. The result is a color value that is the average of all the pixels from the original image. We expect that, because a lot of the work is done in parallel by the GPU, we can get the answer much faster using this technique than if we had simply done all the computations on the CPU.
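
    Here, for concreteness, is a minimal JavaScript sketch of the ping-pong loop just described. It is not code from the sample program; the names fbA, fbB, texA, texB, prog, and drawSquare() are assumptions, where drawSquare(s) is presumed to draw the square with corners (0,0) and (s,s) in the coordinate system described above, texA initially holds the image to be averaged, and the shader's sampler uniform uses texture unit 0:

    function computeAverage(gl, prog, fbA, fbB, texA, texB, imageSize) {
        gl.useProgram(prog);
        var offsetLoc = gl.getUniformLocation(prog, "offset");
        var offset = 0.5;      // size of the sampled square, in texture coordinates, for this pass
        var sourceTex = texA;  // texture holding the image that is being averaged in this pass
        var destFB = fbB;      // framebuffer whose attached texture receives the averaged result
        while (offset * imageSize >= 4) {  // stop once a 4-by-4 pixel image has been produced
            gl.bindFramebuffer(gl.FRAMEBUFFER, destFB);
            gl.bindTexture(gl.TEXTURE_2D, sourceTex);
            gl.uniform1f(offsetLoc, offset);
            drawSquare(offset);  // draw the square with corners (0,0) and (offset,offset)
            if (sourceTex === texA) {  // swap the roles of the two framebuffers
                sourceTex = texB;  destFB = fbA;
            }
            else {
                sourceTex = texA;  destFB = fbB;
            }
            offset = offset / 2;
        }
    }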

    The point here is that for an accurate result, we want the color components to be represented as floating point values in the GPU, not as eight-bit integers.

    I use this technique in the sample program webgl/image-evolver.html. In that program, the problem is to find the average difference in color between two images. I start by drawing the two images to two textures. I then render a difference image, in which the color of a pixel is the absolute value of the difference between the colors of the same pixel in the two textures. This is done with another special-purpose shader program. I then apply the above averaging process to the difference image.
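
    The difference image can be computed with a very short fragment shader. The following is only a sketch of the idea, not the shader from the sample program; the sampler names texture1 and texture2 and the varying v_coords are assumptions:

    precision mediump float;
    varying vec2 v_coords;       // Texture coordinates for this pixel.
    uniform sampler2D texture1;  // A texture holding the first image.
    uniform sampler2D texture2;  // A texture holding the second image.
    void main() {
        vec4 a = texture2D(texture1, v_coords);
        vec4 b = texture2D(texture2, v_coords);
        gl_FragColor = abs(a - b);  // Each color component is an absolute difference.
    }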

    The actual point of the sample program is to try to “evolve” an approximation to a given image, using a “genetic algorithm.” (It was inspired by two students from my Fall, 2015 class, Felix Taschbach and Pieter Schaap, who worked on a similar program for their final project, though they didn’t use the GPU.) In the program, the average difference between the original image and an approximation is used as a measure of how good the approximation is. I used a very simple grayscale image as the goal, with approximations made from triangles. You don’t need to know anything about the genetic algorithm, especially since the program has no practical purpose. However, the source code is heavily commented if you want to try to understand it. Here is a screenshot from one run of the program, showing the original image and the best approximation produced after running the genetic algorithm for several hundred generations:

    Figure 121

    But what interests us here is how the program uses the WebGL floating-point color extensions. The program attempts to activate the extensions during initialization using the following code:

    var EXTcbf = gl.getExtension("WEBGL_color_buffer_float");
    var EXTtf = gl.getExtension("OES_texture_float");
    if (!EXTtf) {
        throw "This program requires the WebGL extension OES_texture_float, " +
            "which is not available in this browser.";
    }

    The program requires the extensions, so an exception is thrown if they can't be activated. The extension objects, EXTcbf and EXTtf, don't have any properties that are needed in this program; however, it is still necessary to call gl.getExtension to activate the extensions. (I only test that OES_texture_float is available. Historically, that extension included the functionality that I need from WEBGL_color_buffer_float, and a browser that supports OES_texture_float can probably run the program whether or not it says that it supports the other extension. Officially, new programs are supposed to activate both extensions explicitly.)

    The program creates two floating-point textures that are attached to framebuffers for use as color buffers. (See Subsection 7.4.2.) Here is the code that creates one of those textures:

    tex1 = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex1);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 256, 0, gl.RGBA, gl.FLOAT, null);

    The parameter gl.FLOAT in the last line specifies that the data type for the color components in the texture is float. That data type would be an error if the extensions had not been activated.
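
    Before the texture can be used as a drawing surface, it still has to be attached to a framebuffer as its color buffer. That part is ordinary framebuffer setup (see Subsection 7.4.2); as a sketch, assuming the variable name fb1 for the framebuffer:

    fb1 = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb1);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                               gl.TEXTURE_2D, tex1, 0);  // use tex1 as the color buffer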

    When the GPU does the averaging computation with these textures, it is doing floating-point calculations. The program computes a series of smaller and smaller averaged images, stopping with a 4-by-4 pixel image. It then reads the 16 pixel colors back from the texture using the following code:

    var data = new Float32Array( 4*4*4 ); // 16 pixels, 4 numbers per pixel
    gl.readPixels(0,0,4,4,gl.RGBA,gl.FLOAT,data);

    The call to gl.readPixels reads the color data for the 16 pixels into the array, data. Again, the gl.FLOAT parameter specifies the data type, and that parameter value is legal in gl.readPixels only because the extensions have been activated.
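
    The final average can then be computed on the CPU from those 16 values. A minimal sketch:

    var r = 0, g = 0, b = 0;
    for (var i = 0; i < 16; i++) {  // add up the color components of the 16 pixels
        r += data[4*i];             // red component of pixel number i
        g += data[4*i + 1];         // green component
        b += data[4*i + 2];         // blue component
    }
    var averageColor = [ r/16, g/16, b/16 ];  // average RGB color of the image that was averaged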

    Deferred Shading

    I will discuss one more WebGL extension, one that is useful for an important rendering technique called deferred shading. I don’t have a sample program for deferred rendering, and I will only discuss it in general terms.

    Deferred shading is used as an optimization when rendering complex scenes, and it is often used to speed up rendering in video games. It is most closely associated with lighting, since it can be used to render scenes with large numbers of light sources, but it can also be useful for other effects.

    Recall that the number of lights that can be represented in OpenGL or in a WebGL shader is limited. But scenes with many lights can be rendered using a multi-pass algorithm: Each pass computes the contribution of one light, or a small number of lights, and the results of the passes are added together to give the complete scene. The problem is that, if the rendering in each pass is done in the normal way, then there are a lot of things that have to be recomputed, in exactly the same way, in each pass. For example, assuming that per-pixel lighting is used, that includes computing material properties and a unit normal vector for each pixel in the image. Deferred shading aims to avoid the duplicated effort.
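
    As an illustration of the multi-pass idea (without deferred shading), the passes might be structured something like the following sketch, in which lights, setLightUniforms(), and drawScene() are assumed names:

    gl.clearColor(0, 0, 0, 1);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.ONE, gl.ONE);    // add each pass's output to the colors already drawn
    gl.depthFunc(gl.LEQUAL);         // allow later passes to redraw the same visible surfaces
    for (var i = 0; i < lights.length; i++) {
        setLightUniforms(lights[i]); // set the uniform variables for this pass's light
        drawScene();                 // draw the entire scene, lit by just that light
    }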

    In deferred shading, a first pass is used to compute material properties, normal vectors, and whatever other data is needed, for each pixel in the image. All of that data is saved, to be used in additional passes that will compute lighting and possibly other effects. For a given pixel, only the data for the object that is actually visible at the pixel is saved, since data for hidden surfaces is not needed to render the scene. The first pass uses the geometry and attributes of objects in the scene. Everything that the later passes need to know about geometry and attributes is in the saved data.

    The saved data can be stored in texture objects. (Floating point textures are ideal for this, since the data will be used in further calculations.) In this case, the values in the textures don’t necessarily represent images. For example, the RGB color components in one texture might represent the x, y, and z coordinates of a normal vector. And if a depth value is needed in later passes, it might be stored in the alpha color component of the same texture. Another texture might hold a diffuse color, while a third holds a specular color in its RGB components and a shininess value in its alpha component. Shader programs are free to interpret data in a texture however they like.

    A WebGL shader can write data to a texture, using a framebuffer. But standard WebGL can only write to one color buffer at a time. Now, it would be possible to use a separate pass for each texture that we need to compute, but that would involve a lot of redundant calculations, which is what we are trying to avoid. What we need is a WebGL extension that makes it possible for a shader to write to several color buffers simultaneously. The extension that we need is named “WEBGL_draw_buffers”. When that extension is activated, it becomes possible to attach several textures (or renderbuffers) to a framebuffer as color buffers, and it becomes possible for a shader to write data to all of the attached buffers simultaneously. The extension is relatively complicated to use. It must be activated, as usual, with a statement of the form

    EXTdb = gl.getExtension("WEBGL_draw_buffers");

    Assuming that the extension is available, the maximum number of color buffers that can be used in a shader is given by EXTdb.MAX_DRAW_BUFFERS_WEBGL, which will be at least four. With the extension in place, you can attach multiple textures as color buffers for a framebuffer, using code of the form

    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, 
        EXTdb.COLOR_ATTACHMENT0_WEBGL, gl.TEXTURE_2D, texture1, 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, 
        EXTdb.COLOR_ATTACHMENT1_WEBGL, gl.TEXTURE_2D, texture2, 0);

    and so on, using constants such as EXTdb.COLOR_ATTACHMENT1_WEBGL from the extension object to specify the attachment points.

    Usually in a fragment shader, the color that is output to the color buffer is specified by assigning a value to the special variable gl_FragColor. That changes when multiple color buffers are used. In that case, instead of gl_FragColor, the fragment shader has a special variable gl_FragData, which is an array of vec4, one for each possible color buffer. Colors are output to the color buffers by assigning values to gl_FragData[0], gl_FragData[1], \( \dots \). Because this is a change in the legal syntax of the shader, the extension must also be activated in the fragment shader source code by adding the line

    #extension GL_EXT_draw_buffers : require

    to the beginning of the code. Suppose, for example, that we want to store a normal vector, a diffuse color, a specular color, and object coordinates in the color buffers. Let’s say that these values are input to the fragment shader as varying variables or uniform variables, except for the diffuse color, which is sampled from a texture. Then the fragment shader might take the form

    #extension GL_EXT_draw_buffers : require
    precision highp float;
    varying vec3 v_normal, v_objectCoords;
    varying vec2 v_texCoords;
    uniform vec3 u_specular;
    uniform float u_shininess;
    uniform sampler2D texture;
    void main() {
        gl_FragData[0] = vec4( normalize(v_normal), 0 );
        gl_FragData[1] = vec4( v_objectCoords, 1 );
        gl_FragData[2] = texture2D( texture, v_texCoords );
        gl_FragData[3] = vec4( u_specular, u_shininess );
    }

    The final requirement for using the extension is to specify the correspondence between the indices that are used in gl_FragData and the color buffers that have been attached to the framebuffer. It seems like the correspondence should be automatic, but it's not. You have to specify it using the JavaScript function EXTdb.drawBuffersWEBGL from the extension object. This function takes an array as its parameter, and the values in the array are chosen from the constants EXTdb.COLOR_ATTACHMENT0_WEBGL, EXTdb.COLOR_ATTACHMENT1_WEBGL, \( \dots \)

    These are the same constants that are used to specify the color buffer attachment points in a framebuffer. For example, if for some reason you wanted a fragment shader to output to the color buffers that are attached at attachment points 2 and 3, you would call

    EXTdb.drawBuffersWEBGL( [ 
        EXTdb.COLOR_ATTACHMENT2_WEBGL, 
        EXTdb.COLOR_ATTACHMENT3_WEBGL
    ] );

    With all that setup, you are ready to do the first pass for deferred shading. For the subsequent passes, you would use a different shader, with a single color buffer (and with the blend function set to do additive blending). For those passes, you want to run the fragment shader once for each pixel in the image. The fragment shader will use the pixel data that was saved in the first pass, together with other information such as light properties, to compute the output color for the pixel. You can trigger a call to the fragment shader for each pixel simply by drawing a single rectangle that covers the image.
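
    As an illustration only (there is no sample program for this), a lighting pass for diffuse reflection might use a fragment shader along the following lines, where the samplers hold data that was saved in the first pass and all of the names are assumptions:

    precision highp float;
    varying vec2 v_coords;       // Texture coordinates for this pixel of the image.
    uniform sampler2D normals;   // xyz components hold the unit normal saved in the first pass.
    uniform sampler2D coords;    // xyz components hold the saved surface coordinates.
    uniform sampler2D diffuse;   // Holds the saved diffuse color.
    uniform vec3 lightPosition;  // Light position, in the same coordinate system as the saved coordinates.
    uniform vec3 lightColor;     // Color (intensity) of that light.
    void main() {
        vec3 N = normalize( texture2D(normals, v_coords).xyz );
        vec3 P = texture2D(coords, v_coords).xyz;
        vec3 D = texture2D(diffuse, v_coords).rgb;
        vec3 L = normalize( lightPosition - P );  // unit vector from the surface to the light
        float factor = max( dot(N, L), 0.0 );     // basic diffuse lighting equation
        // (A fuller version would add a specular term, using the saved specular color
        // and shininess; the results of all the passes are summed by additive blending.)
        gl_FragColor = vec4( lightColor * D * factor, 1.0 );
    }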

    The theory behind deferred shading is not all that complicated, but there are a lot of details to get right in the implementation. Deferred shading is just one of many tricks that are used by video game programmers to improve the rendering speed for their games.


    This page titled 7.5: WebGL Extensions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by David J. Eck via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
