Engineering LibreTexts

2.6: HTML Canvas Graphics

    Most modern web browsers support a 2D graphics API that can be used to create images on a web page. The API is implemented using JavaScript, the client-side programming language for the web. I won’t cover the JavaScript language in this section. To understand the material presented here, you don’t need to know much about it. Even if you know nothing about it at all, you can learn something about its 2D graphics API and see how it is similar to, and how it differs from, the Java API presented in the previous section. (For a short review of JavaScript, see Section A.3 in Appendix A.)

    The 2D Graphics Context

    The visible content of a web page is made up of “elements” such as headlines and paragraphs. The content is specified using the HTML language. A “canvas” is an HTML element. It appears on the page as a blank rectangular area which can be used as a drawing surface by what I am calling the “HTML canvas” graphics API. In the source code of a web page, a canvas element is created with code of the form

    <canvas id="theCanvas" width="800" height="600"></canvas>

    The width and height give the size of the drawing area, in pixels. The id is an identifier that can be used to refer to the canvas in JavaScript.

    To draw on a canvas, you need a graphics context. A graphics context is an object that contains functions for drawing shapes. It also contains variables that record the current graphics state, including things like the current drawing color, transform, and font. Here, I will generally use graphics as the name of the variable that refers to the graphics context, but the variable name is, of course, up to the programmer. This graphics context plays the same role in the canvas API that a variable of type Graphics2D plays in Java. A typical starting point is

    canvas = document.getElementById("theCanvas");
    graphics = canvas.getContext("2d");

    The first line gets a reference to the canvas element on the web page, using its id. The second line creates the graphics context for that canvas element. (This code will produce an error in a web browser that doesn’t support canvas, so you might add some error checking such as putting these commands inside a try...catch statement.)

    Typically, you will store the canvas graphics context in a global variable and use the same graphics context throughout your program. This is in contrast to Java, where you typically get a new Graphics2D context each time the paintComponent() method is called, and that new context is in its initial state with default color and stroke properties and with no applied transform. When a graphics context is global, changes made to the state in one function call will carry over to subsequent function calls, unless you do something to limit their effect. This can actually lead to a fairly common type of bug: For example, if you apply a 30-degree rotation in a function, those rotations will accumulate each time the function is called, unless you do something to undo the previous rotation before applying the next rotation.
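
    As a minimal sketch of that bug and one way to avoid it: the function names below are my own, and the graphics object is a stand-in that just records calls, so the sketch can run outside a browser. The save() and restore() functions used in the fix are real context functions, discussed later in this section.

```javascript
// A stand-in for a graphics context that records the calls made to it,
// so the sketch can run without a browser.
function makeRecorder() {
    var calls = [];
    return {
        calls: calls,
        save: function() { calls.push("save"); },
        rotate: function(a) { calls.push("rotate"); },
        restore: function() { calls.push("restore"); }
    };
}

// Buggy version: each call adds another 30-degree rotation to the
// graphics state, so repeated calls pile up rotations.
function drawRotatedBuggy(graphics) {
    graphics.rotate(Math.PI/6);
    // ... draw something ...
}

// Fixed version: save() and restore() bracket the change, so the
// rotation does not carry over into the next call.
function drawRotatedFixed(graphics) {
    graphics.save();
    graphics.rotate(Math.PI/6);
    // ... draw something ...
    graphics.restore();
}
```

    Calling drawRotatedBuggy twice leaves a 60-degree rotation in the graphics state; the fixed version leaves the state exactly as it found it.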

    The rest of this section will be mostly concerned with describing what you can do with a canvas graphics context. But here, for the record, is the complete source code for a very minimal web page that uses canvas graphics:

    <!DOCTYPE html>
    <html>
    <head>
    <title>Canvas Graphics</title>
    <script>
        var canvas;    // DOM object corresponding to the canvas
        var graphics;  // 2D graphics context for drawing on the canvas
    
        function draw() {
                // draw on the canvas, using the graphics context
            graphics.fillText("Hello World", 10, 20);
        }
    
        function init() {
            canvas = document.getElementById("theCanvas");
            graphics = canvas.getContext("2d");
            draw();  // draw something on the canvas
        }
    </script>
    </head>
    <body onload="init()">
    <canvas id="theCanvas" width="640" height="480"></canvas>
    </body>
    </html>
    

    For a more complete, though still minimal, example, look at the sample page canvas2d/GraphicsStarter.html. (You should look at the page in a browser, but you should also read the source code.) This example shows how to draw some basic shapes using canvas graphics, and you can use it as a basis for your own experimentation. There are also three more advanced “starter” examples: canvas2d/GraphicsPlusStarter.html adds some utility functions for drawing shapes and setting up a coordinate system; canvas2d/AnimationStarter.html adds animation and includes a simple hierarchical modeling example; and canvas2d/EventsStarter.html shows how to respond to keyboard and mouse events.

    Shapes

    The default coordinate system on a canvas is the usual: The unit of measure is one pixel; (0,0) is at the upper left corner; the x-coordinate increases to the right; and the y-coordinate increases downward. The ranges of x and y values are given by the width and height properties of the <canvas> element. The term “pixel” here for the unit of measure is not really correct. Probably, I should say something like “one nominal pixel.” The unit of measure is one pixel at typical desktop resolution with no magnification. If you apply a magnification to a browser window, the unit of measure gets stretched. And on a high-resolution screen, one unit in the default coordinate system might correspond to several actual pixels on the display device.

    The canvas API supports only a very limited set of basic shapes. In fact, the only basic shapes are rectangles and text. Other shapes must be created as paths. Shapes can be stroked and filled. That includes text: When you stroke a string of text, a pen is dragged along the outlines of the characters; when you fill a string, the insides of the characters are filled. It only really makes sense to stroke text when the characters are rather large. Here are the functions for drawing rectangles and text, where graphics refers to the object that represents the graphics context:

    • graphics.fillRect(x,y,w,h) — draws a filled rectangle with corner at (x,y), with width w and with height h. If the width or the height is less than or equal to zero, nothing is drawn.
    • graphics.strokeRect(x,y,w,h) — strokes the outline of the same rectangle.
    • graphics.clearRect(x,y,w,h) — clears the rectangle by filling it with fully transparent pixels, allowing the background of the canvas to show. The background is determined by the properties of the web page on which the canvas appears. It might be a background color, an image, or even another canvas.
    • graphics.fillText(str,x,y) — fills the characters in the string str. The left end of the baseline of the string is positioned at the point (x,y).
    • graphics.strokeText(str,x,y) — strokes the outlines of the characters in the string.
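
    As a sketch of how these functions combine, here is a hypothetical helper (the name drawLabeledBox is my own invention) that fills a rectangle, strokes its outline, and labels it. The graphics parameter is assumed to be a canvas 2D context, or anything else with the same three functions:

```javascript
// Hypothetical helper: a filled rectangle with a stroked outline and a
// text label.  "graphics" is assumed to be a canvas 2D context (or any
// object providing fillRect, strokeRect, and fillText).
function drawLabeledBox(graphics, x, y, w, h, label) {
    graphics.fillRect(x, y, w, h);           // fill the interior
    graphics.strokeRect(x, y, w, h);         // stroke the outline
    graphics.fillText(label, x + 5, y + 15); // baseline near the top-left corner
}
```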

    A path can be created using functions in the graphics context. The context keeps track of a “current path.” In the current version of the API, paths are not represented by objects, and there is no way to work with more than one path at a time or to keep a copy of a path for later reuse. Paths can contain lines, Bezier curves, and circular arcs. Here are the most common functions for working with paths:

    • graphics.beginPath() — start a new path. Any previous path is discarded, and the current path in the graphics context is now empty. Note that the graphics context also keeps track of the current point, the last point in the current path. After calling graphics.beginPath(), the current point is undefined.
    • graphics.moveTo(x,y) — move the current point to (x,y), without adding anything to the path. This can be used for the starting point of the path or to start a new, disconnected segment of the path.
    • graphics.lineTo(x,y) — add the line segment starting at current point and ending at (x,y) to the path, and move the current point to (x,y).
    • graphics.bezierCurveTo(cx1,cy1,cx2,cy2,x,y) — add a cubic Bezier curve to the path. The curve starts at the current point and ends at (x,y). The points (cx1,cy1) and (cx2,cy2) are the two control points for the curve. (Bezier curves and their control points were discussed in Subsection 2.2.3.)
    • graphics.quadraticCurveTo(cx,cy,x,y) — adds a quadratic Bezier curve from the current point to (x,y), with control point (cx,cy).
    • graphics.arc(x,y,r,startAngle,endAngle) — adds an arc of the circle with center (x,y) and radius r. The next two parameters give the starting and ending angle of the arc. They are measured in radians. The arc extends in the positive direction from the start angle to the end angle. (The positive rotation direction is from the positive x-axis towards the positive y-axis; this is clockwise in the default coordinate system.) An optional fifth parameter can be set to true to get an arc that extends in the negative direction. After drawing the arc, the current point is at the end of the arc. If there is a current point before graphics.arc is called, then before the arc is drawn, a line is added to the path that extends from the current point to the starting point of the arc. (Recall that immediately after graphics.beginPath(), there is no current point.)
    • graphics.closePath() — adds to the path a line from the current point back to the starting point of the current segment of the curve. (Recall that you start a new segment of the curve every time you use moveTo.)

    Creating a curve with these commands does not draw anything. To get something visible to appear in the image, you must fill or stroke the path.

    The commands graphics.fill() and graphics.stroke() are used to fill and to stroke the current path. If you fill a path that has not been closed, the fill algorithm acts as though a final line segment had been added to close the path. When you stroke a shape, it’s the center of the virtual pen that moves along the path. So, for high-precision canvas drawing, it’s common to use paths that pass through the centers of pixels rather than through their corners. For example, to draw a line that extends from the pixel with coordinates (100,200) to the pixel with coordinates (300,200), you would actually stroke the geometric line with endpoints (100.5,200.5) and (300.5,200.5). We should look at some examples. It takes four steps to draw a line:

    graphics.beginPath();          // start a new path
    graphics.moveTo(100.5,200.5);  // starting point of the new path
    graphics.lineTo(300.5,200.5);  // add a line to the point (300.5,200.5)
    graphics.stroke();             // draw the line
    

    Remember that the line remains as part of the current path until the next time you call graphics.beginPath(). Here’s how to draw a filled, regular octagon centered at (200,400) and with radius 100:

    graphics.beginPath();
    graphics.moveTo(300,400);
    for (var i = 1; i < 8; i++) {
        var angle = (2*Math.PI)/8 * i;
        var x = 200 + 100*Math.cos(angle);
        var y = 400 + 100*Math.sin(angle);
        graphics.lineTo(x,y);
    }
    graphics.closePath();
    graphics.fill();
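
    The octagon code generalizes directly to any regular polygon. Here is a sketch along those lines (the function names are my own); the vertex computation is separated out so it can be checked without a canvas:

```javascript
// Compute the vertices of a regular polygon with the given center,
// radius, and number of sides.  The first vertex is at angle zero.
function regularPolygonVertices(cx, cy, r, sides) {
    var vertices = [];
    for (var i = 0; i < sides; i++) {
        var angle = (2*Math.PI/sides) * i;
        vertices.push([cx + r*Math.cos(angle), cy + r*Math.sin(angle)]);
    }
    return vertices;
}

// Build the polygon as a path and fill it.  "graphics" is assumed to
// be a canvas 2D graphics context.
function fillRegularPolygon(graphics, cx, cy, r, sides) {
    var v = regularPolygonVertices(cx, cy, r, sides);
    graphics.beginPath();
    graphics.moveTo(v[0][0], v[0][1]);
    for (var i = 1; i < v.length; i++) {
        graphics.lineTo(v[i][0], v[i][1]);
    }
    graphics.closePath();
    graphics.fill();
}
```

    Calling fillRegularPolygon(graphics, 200, 400, 100, 8) reproduces the octagon above.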
    

    The function graphics.arc() can be used to draw a circle, with a start angle of 0 and an end angle of 2*Math.PI. Here’s a filled circle with radius 100, centered at (200,300):

    graphics.beginPath();
    graphics.arc( 200, 300, 100, 0, 2*Math.PI );
    graphics.fill();
    

    To draw just the outline of the circle, use graphics.stroke() in place of graphics.fill(). You can apply both operations to the same path. If you look at the details of graphics.arc(), you can see how to draw a wedge of a circle:

    graphics.beginPath();
    graphics.moveTo(200,300); // Move current point to center of the circle.
    graphics.arc(200,300,100,0,Math.PI/4); // Arc, plus line from current point.
    graphics.lineTo(200,300); // Line from end of arc back to center of circle.
    graphics.fill(); // Fill the wedge.
    

    There is no way to draw an oval that is not a circle, except by using transforms. We will cover that later in this section. But JavaScript has the interesting property that it is possible to add new functions and properties to an existing object. The sample program canvas2d/GraphicsPlusStarter.html shows how to add functions to a graphics context for drawing lines, ovals, and other shapes that are not built into the API.

    Stroke and Fill

    Attributes such as line width that affect the visual appearance of strokes and fills are stored as properties of the graphics context. For example, the value of graphics.lineWidth is a number that represents the width that will be used for strokes. (The width is given in pixels for the default coordinate system, but it is subject to transforms.) You can change the line width by assigning a value to this property:

    graphics.lineWidth = 2.5; // Change the current width.

    The change affects subsequent strokes. You can also read the current value:

    saveWidth = graphics.lineWidth; // Save current width.

    The property graphics.lineCap controls the appearance of the endpoints of a stroke. It can be set to “round”, “square”, or “butt”. The quotation marks are part of the value. For example,

    graphics.lineCap = "round";

    Similarly, graphics.lineJoin controls the appearance of the point where one segment of a stroke joins another segment; its possible values are “round”, “bevel”, or “miter”. (Line endpoints and joins were discussed in Subsection 2.2.1.)

    Note that the values for graphics.lineCap and graphics.lineJoin are strings. This is a somewhat unusual aspect of the API. Several other properties of the graphics context take values that are strings, including the properties that control the colors used for drawing and the font that is used for drawing text.

    Color is controlled by the values of the properties graphics.fillStyle and graphics.strokeStyle. The graphics context maintains separate styles for filling and for stroking. A solid color for stroking or filling is specified as a string. Valid color strings are ones that can be used in CSS, the language that is used to specify colors and other style properties of elements on web pages. Many solid colors can be specified by their names, such as “red”, “black”, and “beige”. An RGB color can be specified as a string of the form “rgb(r,g,b)”, where the parentheses contain three numbers in the range 0 to 255 giving the red, green, and blue components of the color. Hexadecimal color codes are also supported, in the form “#XXYYZZ” where XX, YY, and ZZ are two-digit hexadecimal numbers giving the RGB color components. For example,

    graphics.fillStyle = "rgb(200,200,255)"; // light blue
    graphics.strokeStyle = "#0070A0"; // a darker, greenish blue
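
    Since the styles are strings, it can be convenient to build them from numeric components. Here is one possible helper (the function name and the clamping behavior are my own choices, not part of the API):

```javascript
// Build a CSS color string of the form "rgb(r,g,b)" from numeric
// components, rounding each one and clamping it to the range 0..255.
function rgbString(r, g, b) {
    function clamp(x) {
        return Math.min(255, Math.max(0, Math.round(x)));
    }
    return "rgb(" + clamp(r) + "," + clamp(g) + "," + clamp(b) + ")";
}
```

    It could then be used as, for example, graphics.fillStyle = rgbString(200,200,255);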

    The style can actually be more complicated than a simple solid color: Gradients and patterns are also supported. As an example, a gradient can be created with a series of steps such as

    var lineargradient = graphics.createLinearGradient(420,420,550,200);
    lineargradient.addColorStop(0,"red");
    lineargradient.addColorStop(0.5,"yellow");
    lineargradient.addColorStop(1,"green");
    graphics.fillStyle = lineargradient; // Use a gradient fill!
    

    The first line creates a linear gradient that will vary in color along the line segment from the point (420,420) to the point (550,200). Colors for the gradient are specified by the addColorStop function: the first parameter gives the fraction of the distance from the initial point to the final point where that color is applied, and the second is a string that specifies the color itself. A color stop at 0 specifies the color at the initial point; a color stop at 1 specifies the color at the final point. Once a gradient has been created, it can be used both as a fill style and as a stroke style in the graphics context.

    Finally, I note that the font that is used for drawing text is the value of the property graphics.font. The value is a string that could be used to specify a font in CSS. As such, it can be fairly complicated, but the simplest versions include a font-size (such as 20px or 150%) and a font-family (such as serif, sans-serif, monospace, or the name of any font that is accessible to the web page). You can add italic or bold or both to the front of the string. Some examples:

    graphics.font = "2cm monospace"; // the size is in centimeters
    graphics.font = "bold 18px sans-serif";
    graphics.font = "italic 150% serif"; // size is 150% of the usual size

    The default is “10px sans-serif,” which is usually too small. Note that text, like all drawing, is subject to coordinate transforms. Applying a scaling operation changes the size of the text, and a negative scaling factor can produce mirror-image text.

    Transforms

    A graphics context has three basic functions for modifying the current transform by scaling, rotation, and translation. There are also functions for composing the current transform with an arbitrary transform and for completely replacing the current transform:

    • graphics.scale(sx,sy) — scale by sx in the x-direction and sy in the y-direction.
    • graphics.rotate(angle) — rotate by angle radians about the origin. A positive rotation is clockwise in the default coordinate system.
    • graphics.translate(tx,ty) — translate by tx in the x-direction and ty in the y-direction.
    • graphics.transform(a,b,c,d,e,f) — apply the transformation x1 = a*x + c*y + e, and y1 = b*x + d*y + f.
    • graphics.setTransform(a,b,c,d,e,f) — discard the current transformation, and set the current transformation to be x1 = a*x + c*y + e, and y1 = b*x + d*y + f.

    Note that there is no shear transform, but you can apply a shear as a general transform. For example, for a horizontal shear with shear factor 0.5, use

    graphics.transform(1, 0, 0.5, 1, 0, 0)
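
    The effect of such a call on a point can be checked by coding the matrix equations directly. The helper below is my own, for illustration; it applies the same (a,b,c,d,e,f) transformation that graphics.transform and graphics.setTransform use:

```javascript
// Apply the canvas transform (a,b,c,d,e,f) to the point (x,y):
//   x1 = a*x + c*y + e,   y1 = b*x + d*y + f
function applyTransform(a, b, c, d, e, f, x, y) {
    return [ a*x + c*y + e, b*x + d*y + f ];
}
```

    For the shear above, applyTransform(1, 0, 0.5, 1, 0, 0, 0, 2) gives [1,2]: each point is pushed to the right in proportion to its y-coordinate, which is exactly a horizontal shear.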

    To implement hierarchical modeling, as discussed in Section 2.4, you need to be able to save the current transformation so that you can restore it later. Unfortunately, no way is provided to read the current transformation from a canvas graphics context. However, the graphics context itself keeps a stack of transformations and provides methods for pushing and popping the current transformation. In fact, these methods do more than save and restore the current transformation. They actually save and restore almost the entire state of the graphics context, including properties such as current colors, line width, and font (but not the current path):

    • graphics.save() — push a copy of the current state of the graphics context, including the current transformation, onto the stack.
    • graphics.restore() — remove the top item from the stack, containing a saved state of the graphics context, and restore the graphics context to that state.

    Using these methods, the basic setup for drawing an object with a modeling transform becomes:

    graphics.save();           // save a copy of the current state
    graphics.translate(a,b);   // apply modeling transformations
    graphics.rotate(r);
    graphics.scale(s,s);
        .
        .  // Draw the object!
        .
    graphics.restore();        // restore the saved state
    

    Note that if drawing the object includes any changes to attributes such as drawing color, those changes will be also undone by the call to graphics.restore(). In hierarchical graphics, this is usually what you want, and it eliminates the need to have extra statements for saving and restoring things like color.

    To draw a hierarchical model, you need to traverse a scene graph, either procedurally or as a data structure. It’s pretty much the same as in Java. In fact, you should see that the basic concepts that you learned about transformations and modeling carry over to the canvas graphics API. Those concepts apply very widely and even carry over to 3D graphics APIs, with just a little added complexity. The demo program c2/cart-and-windmills.html from Section 2.4 implements hierarchical modeling using the 2D canvas API.

    Now that we know how to do transformations, we can see how to draw an oval using the canvas API. Suppose that we want an oval with center at (x,y), with horizontal radius r1 and with vertical radius r2. The idea is to draw a circle of radius 1 with center at (0,0), then transform it. The circle needs to be scaled by a factor of r1 horizontally and r2 vertically. It should then be translated to move its center from (0,0) to (x,y). We can use graphics.save() and graphics.restore() to make sure that the transformations only affect the circle. Recalling that the order of transforms in the code is the opposite of the order in which they are applied to objects, this becomes:

    graphics.save();
    graphics.translate( x, y );
    graphics.scale( r1, r2 );
    graphics.beginPath();
    graphics.arc( 0, 0, 1, 0, 2*Math.PI );  // a circle of radius 1
    graphics.restore();
    graphics.stroke();
    

    Note that the current path is not affected by the calls to graphics.save() and graphics.restore(). So, in the example, the oval-shaped path is not discarded when graphics.restore() is called. When graphics.stroke() is called at the end, it is the oval-shaped path that is stroked. On the other hand, the line width that is used for the stroke is not affected by the scale transform that was applied to the oval. Note that if the order of the last two commands were reversed, then the line width would be subject to the scaling.

    There is an interesting point here about transforms and paths. In the HTML canvas API, the points that are used to create a path are transformed by the current transformation before they are saved. That is, they are saved in pixel coordinates. Later, when the path is stroked or filled, the current transform has no effect on the path (although it can affect, for example, the line width when the path is stroked). In particular, you can’t make a path and then apply different transformations. For example, you can’t make an oval-shaped path, and then use it to draw several ovals in different positions. Every time you draw the oval, it will be in the same place, even if different translation transforms are applied to the graphics context.
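
    The practical workaround is simply to re-issue the path commands each time a transformed copy is needed. Wrapping the commands in a function, as in this sketch (the function name is mine; the body follows the oval example above), makes that painless:

```javascript
// Since a path can't be stored and re-transformed, wrap the path
// commands in a function and call it once per oval.  "graphics" is
// assumed to be a canvas 2D context.
function strokeOval(graphics, x, y, r1, r2) {
    graphics.save();
    graphics.translate(x, y);   // move the center to (x,y)
    graphics.scale(r1, r2);     // stretch the unit circle
    graphics.beginPath();
    graphics.arc(0, 0, 1, 0, 2*Math.PI);
    graphics.restore();         // so the line width is not scaled
    graphics.stroke();
}

// Each call rebuilds the path, so the ovals can go anywhere:
//    strokeOval(graphics, 100, 100, 50, 30);
//    strokeOval(graphics, 300, 100, 50, 30);
```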

    The situation is different in Java, where the coordinates that are stored in the path are the actual numbers that are used to specify the path, that is, the object coordinates. When the path is stroked or filled, the transformation that is in effect at that time is applied to the path. The path can be reused many times to draw copies with different transformations. This comment is offered as an example of how APIs that look very similar can have subtle differences.

    Auxiliary Canvases

    In Subsection 2.5.5, we looked at the sample program java2d/JavaPixelManipulation.java, which uses a BufferedImage both to implement an off-screen canvas and to allow direct manipulation of the colors of individual pixels. The same ideas can be applied in HTML canvas graphics, although the way it’s done is a little different. The sample web application canvas2d/SimplePaintProgram.html does pretty much the same thing as the Java program (except for the image filters).

    The on-line version of this section has a live demo version of the program that has the same functionality. You can try it out to see how the various drawing tools work. Don’t forget to try the “Smudge” tool! (It has to be applied to shapes that you have already drawn.)

    For JavaScript, a web page is represented as a data structure, defined by a standard called the DOM, or Document Object Model. For an off-screen canvas, we can use a <canvas> that is not part of that data structure and therefore is not part of the page. In JavaScript, a <canvas> can be created with the function call document.createElement("canvas"). There is a way to add this kind of dynamically created canvas to the DOM for the web page, but it can be used as an off-screen canvas without doing so. To use it, you have to set its width and height properties, and you need a graphics context for drawing on it. Here, for example, is some code that creates a 640-by-480 canvas, gets a graphics context for the canvas, and fills the whole canvas with white:

    OSC = document.createElement("canvas"); // off-screen canvas
    
    OSC.width = 640; // Size of OSC must be set explicitly.
    OSC.height = 480;
    
    OSG = OSC.getContext("2d"); // Graphics context for drawing on OSC.
    
    OSG.fillStyle = "white"; // Use the context to fill OSC with white.
    OSG.fillRect(0,0,OSC.width,OSC.height);
    

    The sample program lets the user drag the mouse on the canvas to draw some shapes. The off-screen canvas holds the official copy of the picture, but it is not seen by the user. There is also an on-screen canvas that the user sees. The off-screen canvas is copied to the on-screen canvas whenever the picture is modified. While the user is dragging the mouse to draw a line, oval, or rectangle, the new shape is actually drawn on-screen, over the contents of the off-screen canvas. It is only added to the off-screen canvas when the user finishes the drag operation. For the other tools, changes are made directly to the off-screen canvas, and the result is then copied to the screen. This is an exact imitation of the Java program.
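
    The copying step relies on the fact that drawImage, which is covered at the end of this section, accepts a <canvas> element as its image source. Here is a minimal sketch of the repaint logic (the function and parameter names are my own, and drawShape stands in for the program's shape-drawing code):

```javascript
// Repaint the visible canvas: first copy the off-screen canvas, then,
// if the user is in the middle of dragging out a shape, draw the
// in-progress shape on top of the copy.  "graphics" is the on-screen
// context and "OSC" the off-screen canvas; a <canvas> element is a
// legal source for drawImage.
function repaint(graphics, OSC, dragging, drawShape) {
    graphics.drawImage(OSC, 0, 0);
    if (dragging) {
        drawShape(graphics);
    }
}
```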

    (The demo version mentioned above actually uses a somewhat different technique to accomplish the same thing. It uses two on-screen canvases, one located exactly on top of the other. The lower canvas holds the actual image. The upper canvas is completely transparent, except when the user is drawing a line, oval, or rectangle. While the user is dragging the mouse to draw such a shape, the new shape is drawn on the upper canvas, where it hides the part of the lower canvas that is beneath the shape. When the user releases the mouse, the shape is added to the lower canvas and the upper canvas is cleared to make it completely transparent again. Again, the other tools operate directly on the lower canvas.)

    Pixel Manipulation

    The “Smudge” tool in the sample program and demo is implemented by computing with the color component values of pixels in the image. The implementation requires some way to read the colors of pixels in a canvas. That can be done with the function graphics.getImageData(x,y,w,h), where graphics is a 2D graphics context for the canvas. The function reads the colors of a rectangle of pixels, where (x,y) is the upper left corner of the rectangle, w is its width, and h is its height. The parameters are always expressed in pixel coordinates. Consider, for example

    colors = graphics.getImageData(0,0,20,10)

    This returns the color data for a 20-by-10 rectangle in the upper left corner of the canvas. The return value, colors, is an object with properties colors.width, colors.height, and colors.data. The width and height give the number of columns and rows of pixels in the returned data. (According to the documentation, on a high-resolution screen, they might not be the same as the width and height in the function call. The data can be for real, physical pixels on the display device, not the “nominal” pixels that are used in the pixel coordinate system on the canvas. There might be several device pixels for each nominal pixel. I’m not sure whether this can really happen.)

    The value of colors.data is an array, with four array elements for each pixel. The four elements contain the red, green, blue, and alpha color components of the pixel, given as integers in the range 0 to 255. For a pixel that lies outside the canvas, the four component values will all be zero. The array is of type Uint8ClampedArray, whose elements are 8-bit unsigned integers limited to the range 0 to 255. This is one of JavaScript’s typed array datatypes, which can only hold values of a specific numerical type. As an example, suppose that you just want to read the RGB color of one pixel, at coordinates (x,y). You can set

    pixel = graphics.getImageData(x,y,1,1);

    Then the RGB color components for the pixel are R = pixel.data[0], G = pixel.data[1], and B = pixel.data[2].

    The function graphics.putImageData(imageData,x,y) is used to copy the colors from an image data object into a canvas, placing it into a rectangle in the canvas with upper left corner at (x,y). The imageData object can be one that was returned by a call to graphics.getImageData, possibly with its color data modified. Or you can create a blank image data object by calling graphics.createImageData(w,h) and fill it with data.

    Let’s consider the “Smudge” tool in the sample program. When the user clicks the mouse with this tool, I use OSG.getImageData to get the color data from a 9-by-9 square of pixels surrounding the mouse location. OSG is the graphics context for the canvas that contains the image. Since I want to do real-number arithmetic with color values, I copy the color components into another typed array, one of type Float32Array, which can hold 32-bit floating point numbers. Here is the function that I call to do this:

    function grabSmudgeData(x, y) {  // (x,y) gives mouse location
        var colors = OSG.getImageData(x-5,y-5,9,9);
        if (smudgeColorArray == null) {
                // Make image data & array the first time this function is called.
            smudgeImageData = OSG.createImageData(9,9);
            smudgeColorArray = new Float32Array(colors.data.length);
        }
        for (var i = 0; i < colors.data.length; i++) {                  
                // Copy the color component data into the Float32Array.
            smudgeColorArray[i] = colors.data[i];
        }
    }

    The floating point array, smudgeColorArray, will be used for computing new color values for the image as the mouse moves. The color values from this array will be copied into the image data object, smudgeImageData, which will then be used to put the color values into the image. This is done in another function, which is called for each point that is visited as the user drags the Smudge tool over the canvas:

    function swapSmudgeData(x, y) { // (x,y) is new mouse location
        var colors = OSG.getImageData(x-5,y-5,9,9);  // get color data from image
        for (var i = 0; i < smudgeColorArray.length; i += 4) {
            // The color data for one pixel is in the next four array locations.
            if (smudgeColorArray[i+3] && colors.data[i+3]) {
                    // alpha-components are non-zero; both pixels are in the canvas
                for (var j = i; j < i+3; j++) { // compute new RGB values
                    var newSmudge = smudgeColorArray[j]*0.8 + colors.data[j]*0.2;
                    var newImage  = smudgeColorArray[j]*0.2 + colors.data[j]*0.8;
                    smudgeImageData.data[j] = newImage;
                    smudgeColorArray[j] = newSmudge;
                }
                smudgeImageData.data[i+3] = 255;  // alpha component
            }
            else {
                    // one of the alpha components is zero; set the output
                    // color to all zeros, "transparent black", which will have
                    // no effect on the color of the pixel in the canvas.
                for (var j = i; j <= i+3; j++) {
                    smudgeImageData.data[j] = 0;
                }
            }
        }
        OSG.putImageData(smudgeImageData,x-5,y-5); // copy new colors into canvas
    }
    

    In this function, a new color is computed for each pixel in a 9-by-9 square of pixels around the mouse location. The color is replaced by a weighted average of the current color of the pixel and the color of the corresponding pixel in the smudgeColorArray. At the same time, the color in smudgeColorArray is replaced by a similar weighted average.
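
    The heart of the computation is just a pair of weighted averages per color component. Pulled out as a pure function (my own refactoring, not code from the program), the 80/20 mixing is easy to check:

```javascript
// One smudge step for a single color component: the image color moves
// 20% of the way toward the smudge color, and the smudge color moves
// 20% of the way toward the image color, matching the two weighted
// averages computed in swapSmudgeData.
function smudgeStep(smudge, image) {
    return {
        newSmudge: smudge*0.8 + image*0.2,
        newImage:  smudge*0.2 + image*0.8
    };
}
```

    For example, with a smudge value of 100 and an image value of 200, the new values are 120 and 180: each color has moved a fifth of the way toward the other.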

    It would be worthwhile to try to understand this example to see how pixel-by-pixel processing of color data can be done. See the source code of the example for more details.

    Images

    For another example of pixel manipulation, we can look at image filters that modify an image by replacing the color of each pixel with a weighted average of the color of that pixel and the 8 pixels that surround it. Depending on the weighting factors that are used, the result can be as simple as a slightly blurred version of the image, or it can be something more interesting.

    The on-line version of this section includes an interactive demo that lets you apply several different image filters to a variety of images.

    The filtering operation in the demo uses the image data functions getImageData, createImageData, and putImageData that were discussed above. Color data from the entire image is obtained with a call to getImageData. The results of the averaging computation are placed in a new image data object, and the resulting image data is copied back to the canvas using putImageData.
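    As a concrete sketch of the averaging computation, here is a simple blur filter written as a pure function on a flat RGBA pixel array, like the data property of an ImageData object. (The function name blurPixels and the equal weights of 1/9 are my own choices for illustration; the demo uses a variety of weighting factors, and a real filter would also process the edge pixels, which are simply left unchanged here.)

    ```javascript
    // Blur an image by replacing each interior pixel's color with the
    // average of its own color and the colors of its 8 neighbors.
    // The data parameter is a flat array of RGBA values, 4 per pixel,
    // in row-major order, as in an ImageData object.
    function blurPixels(data, width, height) {
        var output = new Uint8ClampedArray(data); // copy; edges keep old colors
        for (var row = 1; row < height - 1; row++) {
            for (var col = 1; col < width - 1; col++) {
                for (var c = 0; c < 3; c++) { // red, green, blue components
                    var sum = 0;
                    for (var dr = -1; dr <= 1; dr++) {
                        for (var dc = -1; dc <= 1; dc++) {
                            sum += data[ 4*((row+dr)*width + (col+dc)) + c ];
                        }
                    }
                    output[ 4*(row*width + col) + c ] = sum / 9;
                }
            }
        }
        return output;
    }
    ```

    In the demo's setting, a function like this would be applied to the array from getImageData, the result would be stored in an object made with createImageData, and that object would be copied to the canvas with putImageData.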

    The remaining question is, where do the original images come from, and how do they get onto the canvas in the first place? An image on a web page is specified by an element in the web page source such as

    <img src="pic.jpg" width="400" height="300" id="mypic">

    The src attribute specifies the URL from which the image is loaded. The optional id can be used to reference the image in JavaScript. In the script,

    image = document.getElementById("mypic");

    gets a reference to the object that represents the image in the document structure. Once you have such an object, you can use it to draw the image on a canvas. If graphics is a graphics context for the canvas, then

    graphics.drawImage(image, x, y);

    draws the image with its upper left corner at (x,y). Both the point (x,y) and the image itself are transformed by any transformation in effect in the graphics context. This will draw the image using its natural width and height (scaled by the transformation, if any). You can also specify the width and height of the rectangle in which the image is drawn:

    graphics.drawImage(image, x, y, width, height);

    With this version of drawImage, the image is scaled to fit the specified rectangle.
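    Note that this version of drawImage can distort the image if the rectangle does not have the same aspect ratio as the image. A small helper function can compute a destination size that fills as much of a given rectangle as possible without distortion. (This is a sketch; the function name scaleToFit is my own.)

    ```javascript
    // Compute a width and height for drawImage so that an image fills as
    // much of a maxWidth-by-maxHeight rectangle as possible while keeping
    // its original aspect ratio.
    function scaleToFit(imageWidth, imageHeight, maxWidth, maxHeight) {
        var scale = Math.min(maxWidth / imageWidth, maxHeight / imageHeight);
        return { width: imageWidth * scale, height: imageHeight * scale };
    }
    ```

    One might then draw the image with something like graphics.drawImage(image, x, y, size.width, size.height), where size is the object returned by scaleToFit.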

    Now, suppose that the image you want to draw onto the canvas is not part of the web page. In that case, it is possible to load the image dynamically. This is much like making an off-screen canvas, but you are making an “off-screen image.” Use the document object to create an img element:

    newImage = document.createElement("img");

    An img element needs a src attribute that specifies the URL from which it is to be loaded. For example,

    newImage.src = "pic2.jpg";

    As soon as you assign a value to the src attribute, the browser starts loading the image. The loading is done asynchronously; that is, the computer continues to execute the script without waiting for the load to complete. This means that you can’t simply draw the image on the line after the above assignment statement: The image is very likely not done loading at that time. You want to draw the image after it has finished loading. For that to happen, you need to assign a function to the image’s onload property before setting the src. That function will be called when the image has been fully loaded. Putting this together, here is a simple JavaScript function for loading an image from a specified URL and drawing it on a canvas after it has loaded:

    function loadAndDraw( imageURL, x, y ) {
        var image = document.createElement("img");
        image.onload = doneLoading;
        image.src = imageURL;
        function doneLoading() {
            graphics.drawImage(image, x, y);
        }
    }

    A similar technique is used to load the images in the filter demo.

    There is one last mystery to clear up. When discussing the use of an off-screen canvas in the SimplePaintProgram example earlier in this section, I noted that the contents of the off-screen canvas have to be copied to the main canvas, but I didn’t say how that can be done. In fact, it is done using drawImage. In addition to drawing an image onto a canvas, drawImage can be used to draw the contents of one canvas into another canvas. In the sample program, the command

    graphics.drawImage( OSC, 0, 0 );

    is used to draw the off-screen canvas to the main canvas. Here, graphics is a graphics context for drawing on the main canvas, and OSC is the object that represents the off-screen canvas.