# 2.3: Application: Programming with Sets


On a computer, all data are represented, ultimately, as strings of zeros and ones. At times, computers need to work with sets. How can sets be represented as strings of zeros and ones?

A set is determined by its elements. Given a set \(A\) and an entity \(x\), the fundamental question is, does \(x\) belong to \(A\) or not? If we know the answer to this question for each possible \(x\), then we know the set. For a given \(x\), the answer to the question, “Is \(x\) a member of \(A\),” is either yes or no. The answer can be encoded by letting 1 stand for yes and 0 stand for no. The answer, then, is a single **bit**, that is, a value that can be either zero or one. To represent the set \(A\) as a string of zeros and ones, we could use one bit for each possible member of \(A\). If a possible member \(x\) is in the set, then the corresponding bit has the value one. If \(x\) is not in the set, then the corresponding bit has the value zero.

Now, in cases where the number of possible elements of the set is very large or infinite, it is not practical to represent the set in this way. It would require too many bits, perhaps an infinite number. In such cases, some other representation for the set can be used. However, suppose we are only interested in subsets of some specified small set. Since this set plays the role of a universal set, let’s call it \(U\). To represent a subset of \(U\), we just need one bit for each member of \(U\). If the number of members of \(U\) is \(n\), then a subset of \(U\) is represented by a string of \(n\) zeros and ones. Furthermore, every string of \(n\) zeros and ones determines a subset of \(U\), namely that subset that contains exactly the elements of \(U\) that correspond to ones in the string. A string of \(n\) zeros and ones is called an \(n\)**-bit binary number**. So, we see that if \(U\) is a set with \(n\) elements, then the subsets of \(U\) correspond to \(n\)-bit binary numbers.

To make things more definite, let \(U\) be the set \(\{0, 1, 2, \dots, 31\}\). This set consists of the 32 integers between 0 and 31, inclusive. Then each subset of \(U\) can be represented by a 32-bit binary number. We use 32 bits because most computer languages can work directly with 32-bit numbers. For example, the programming languages Java, C, and C++ have a data type named *int*. A value of type *int* is a 32-bit binary number. Before we get a definite correspondence between subsets of \(U\) and 32-bit numbers, we have to decide which bit in the number will correspond to each member of \(U\). Following tradition, we assume that the bits are numbered from right to left. That is, the rightmost bit corresponds to the element 0 in \(U\), the second bit from the right corresponds to 1, the third bit from the right to 2, and so on. For example, the 32-bit number

\[10000000000000000000001001110110 \nonumber\]

corresponds to the subset \(\{1, 2, 4, 5, 6, 9, 31\}\). Since the leftmost bit of the number is 1, the number 31 is in the set; since the next bit is 0, the number 30 is not in the set; and so on.

| Hex. | Binary | Hex. | Binary |
| --- | --- | --- | --- |
| 0 | \(0000_{2}\) | 8 | \(1000_{2}\) |
| 1 | \(0001_{2}\) | 9 | \(1001_{2}\) |
| 2 | \(0010_{2}\) | A | \(1010_{2}\) |
| 3 | \(0011_{2}\) | B | \(1011_{2}\) |
| 4 | \(0100_{2}\) | C | \(1100_{2}\) |
| 5 | \(0101_{2}\) | D | \(1101_{2}\) |
| 6 | \(0110_{2}\) | E | \(1110_{2}\) |
| 7 | \(0111_{2}\) | F | \(1111_{2}\) |

*Figure 2.3: The 16 hexadecimal digits and the corresponding binary numbers. Each hexadecimal digit corresponds to a 4-bit binary number. Longer binary numbers can be written using two or more hexadecimal digits. For example, \(101000011111_{2} = 0xA1F\).*
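As a quick check, the 32-bit number for the subset \(\{1, 2, 4, 5, 6, 9, 31\}\) can be built up in Java by or-ing together one bit per element. This is a minimal sketch; the class and variable names are my own, not part of any standard API:

```java
// Build the 32-bit representation of {1, 2, 4, 5, 6, 9, 31}.
public class SetAsBits {
    public static void main(String[] args) {
        int[] elements = {1, 2, 4, 5, 6, 9, 31};
        int bits = 0;
        for (int k : elements) {
            bits |= 1 << k;   // turn on the bit in position k
        }
        System.out.println(Integer.toHexString(bits));   // prints 80000276
    }
}
```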

From now on, I will write binary numbers with a subscript of 2 to avoid confusion with ordinary numbers. Furthermore, I will often leave out leading zeros. For example, \(1101_{2}\) is the binary number that would be written out in full as

\[00000000000000000000000000001101 \nonumber\]

and which corresponds to the set {0, 2, 3}. On the other hand, 1101 represents the ordinary number one thousand one hundred and one.

Even with this notation, it can be very annoying to write out long binary numbers, and almost impossible to read them. So binary numbers are never written out as sequences of zeros and ones in computer programs. An alternative is to use **hexadecimal numbers**. Hexadecimal numbers are written using the sixteen symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. These symbols are known as the hexadecimal digits. Each hexadecimal digit corresponds to a 4-bit binary number, as shown in Figure 2.3. To represent a longer binary number, several hexadecimal digits can be strung together. For example, the hexadecimal number C7 represents the binary number \(11000111_{2}\). In Java and many related languages, a hexadecimal number is written with the prefix “0x”. Thus, the hexadecimal number C7 would appear in the program as 0xC7. I will follow the same convention here. Any 32-bit binary number can be written using eight hexadecimal digits (or fewer if leading zeros are omitted). Thus, subsets of {0, 1, 2, . . . , 31} correspond to 8-digit hexadecimal numbers. For example, the subset {1, 2, 4, 5, 6, 9, 31} corresponds to \(0x80000276\), which represents the binary number \(10000000000000000000001001110110_{2}\). Similarly, \(0xFF\) corresponds to {0, 1, 2, 3, 4, 5, 6, 7} and \(0x1101\) corresponds to the binary number \(0001000100000001_{2}\) and to the set {0, 8, 12}.
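These notations can be checked directly in Java: hexadecimal literals are written with the 0x prefix described above, and (in Java 7 and later) binary literals can be written with a 0b prefix. The standard methods Integer.toBinaryString and Integer.toHexString convert between the two; note that toHexString prints lowercase digits:

```java
// Converting between hexadecimal and binary notation in Java.
public class HexDemo {
    public static void main(String[] args) {
        System.out.println(Integer.toBinaryString(0xC7));    // prints 11000111
        System.out.println(Integer.toBinaryString(0xA1F));   // prints 101000011111
        System.out.println(Integer.toHexString(0b11000111)); // prints c7
    }
}
```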

Now, if you have worked with binary numbers or with hexadecimal numbers, you know that they have another, more common interpretation. They represent ordinary integers. Just as 342 represents the integer \(3 ·10^{2} + 4 · 10^{1} + 2 · 10^{0}\), the binary number \(1101_{2}\) represents the integer \(1·2^{3} +1·2^{2} +0·2^{1} +1·2^{0},\) or 13. When used in this way, binary numbers are known as base-2 numbers, just as ordinary numbers are base-10 numbers. Hexadecimal numbers can be interpreted as base-16 numbers. For example, \(0x3C7\) represents the integer \(3 · 16^{2} + 12 · 16^{1} + 7 · 16^{0},\) or 967. So, does \(1101_{2}\) really represent the integer 13, or does it represent the set {0, 2, 3} ? The answer is that to a person, \(1101_{2}\) can represent either. Both are valid interpretations, and the only real question is which interpretation is useful in a given circumstance. On the other hand, to the computer, \(1101_{2}\) doesn’t represent *anything*. It’s just a string of bits, and the computer manipulates the bits according to its program, without regard to their interpretation.

Of course, we still have to answer the question of whether it is ever useful to interpret strings of bits in a computer as representing sets.

If all we could do with sets were to “represent” them, it wouldn’t be very useful. We need to be able to compute with sets. That is, we need to be able to perform set operations such as union and complement. Many programming languages provide operators that perform set operations. In Java and related languages, the operators that perform union, intersection, and complement are written as | , &, and ~. For example, if \(x\) and \(y\) are 32-bit integers representing two subsets, \(X\) and \(Y\) , of {0, 1, 2, . . . , 31}, then \(x | y\) is a 32-bit integer that represents the set \(X ∪ Y\) . Similarly, \(x \,\&\, y\) represents the set \(X∩Y\) , and \(\sim x\) represents the complement, \(\overline{X}\).
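For instance, using the sets \(\{1,3,4,6\}\) and \(\{0,1,2,4\}\) that appear later in this section, a short Java program confirms that | , &, and ~ behave as union, intersection, and complement (binary literals require Java 7 or later):

```java
// The bitwise operators as set operations on subsets of {0, 1, ..., 31}.
public class BitwiseSets {
    public static void main(String[] args) {
        int x = 0b1011010;   // represents the set {1, 3, 4, 6}
        int y = 0b0010111;   // represents the set {0, 1, 2, 4}
        System.out.println(Integer.toBinaryString(x & y)); // intersection {1, 4}: prints 10010
        System.out.println(Integer.toBinaryString(x | y)); // union {0,1,2,3,4,6}: prints 1011111
        System.out.println(Integer.toHexString(~x));       // complement: prints ffffffa5
    }
}
```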

The operators | , &, and ~ are called **bitwise logical operators** because of the way they operate on the individual bits of the numbers to which they are applied. If 0 and 1 are interpreted as the logical values *false* and *true*, then the bitwise logical operators perform the logical operations \(∨, ∧,\) and \(¬\) on individual bits. To see why this is true, let’s look at the computations that these operators have to perform.

Let \(k\) be one of the members of \(\{0,1,2,\dots,31\}\). In the binary numbers \(x\), \(y\), \(x | y\), \(x \,\&\, y\), and \(\sim x\), the number \(k\) corresponds to the bit in position \(k\).

That is, \(k\) is in the set represented by a binary number if and only if the bit in position \(k\) in that binary number is 1. Considered as sets, \(x \,\&\, y\) is the intersection of \(x\) and \(y\), so \(k\) is a member of the set represented by \(x \,\&\, y\) if and only if \(k\) is a member of both of the sets represented by \(x\) and \(y\). That is, bit \(k\) is 1 in the binary number \(x \,\&\, y\) if and only if bit \(k\) is 1 in \(x\) and bit \(k\) is 1 in \(y\). When we interpret 1 as true and 0 as false, we see that bit \(k\) of \(x \,\&\, y\) is computed by applying the logical “and” operator, \(∧\), to bit \(k\) of \(x\) and bit \(k\) of \(y\). Similarly, bit \(k\) of \(x | y\) is computed by applying the logical “or” operator, \(∨\), to bit \(k\) of \(x\) and bit \(k\) of \(y\). And bit \(k\) of \(\sim x\) is computed by applying the logical “not” operator, \(¬\), to bit \(k\) of \(x\). In each case, the logical operator is applied to each bit position separately. (Of course, this discussion is just a translation to the language of bits of the definitions of the set operations in terms of logical operators: \(A∩B = \{x \mid x ∈ A∧x ∈ B\}\), \(A ∪ B = \{x \mid x ∈ A ∨ x ∈ B\}\), and \(\overline{A} = \{x ∈ U \mid ¬(x ∈ A)\}\).)

For example, consider the binary numbers \(1011010_{2}\) and \(10111_{2}\), which represent the sets {1,3,4,6} and {0,1,2,4}. Then \(1011010_{2}\) & \(10111_{2}\) is \(10010_{2}\). This binary number represents the set {1,4}, which is the intersection {1, 3, 4, 6} ∩ {0, 1, 2, 4}. It’s easier to see what’s going on if we write out the computation in columns, the way you probably first learned to do addition:

\[\begin{array}{rl}
1011010 & \{6, 4, 3, 1\} \\
\&\ \ 0010111 & \cap\ \{4, 2, 1, 0\} \\
\hline
0010010 & \{4, 1\}
\end{array} \nonumber\]

Note that in each column in the binary numbers, the bit in the bottom row is computed as the logical “and” of the two bits that lie above it in the column. I’ve written out the sets that correspond to the binary numbers to show how the bits in the numbers correspond to the presence or absence of elements in the sets. Similarly, we can see how the union of two sets is computed as a bitwise “or” of the corresponding binary numbers.

\[\begin{array}{rl}
1011010 & \{6, 4, 3, 1\} \\
|\ \ 0010111 & \cup\ \{4, 2, 1, 0\} \\
\hline
1011111 & \{6, 4, 3, 2, 1, 0\}
\end{array} \nonumber\]

The complement of a set is computed using a bitwise “not” operation. Since we are working with 32-bit binary numbers, the complement is taken with respect to the universal set {0, 1, 2, . . . , 31}. So, for example,

\[\sim 1011010_{2} = 11111111111111111111111110100101_{2} \nonumber\]

Of course, we can apply the operators &, | , and ~ to numbers written in hexadecimal form, or even in ordinary, base-10 form. When doing such calculations by hand, it is probably best to translate the numbers into binary form. For example,

\[\begin{aligned}
0xAB7 \ \&\ 0x168E &= 101010110111_{2} \ \&\ 1011010001110_{2} \\
&= 1010000110_{2} \\
&= 0x286
\end{aligned} \nonumber\]
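The same calculation can be verified by machine; in Java:

```java
// Check the hexadecimal calculation 0xAB7 & 0x168E by machine.
public class HexAnd {
    public static void main(String[] args) {
        int result = 0xAB7 & 0x168E;
        System.out.println(Integer.toHexString(result));   // prints 286
    }
}
```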

When computing with sets, it is sometimes necessary to work with individual elements. Typical operations include adding an element to a set, removing an element from a set, and testing whether an element is in a set. However, instead of working with an element itself, it’s convenient to work with the set that contains that element as its only member. For example, testing whether \(5 ∈ A\) is the same as testing whether \(\{5\} ∩ A \ne ∅\). The set \(\{5\}\) is represented by the binary number \(100000_{2}\) or by the hexadecimal number 0x20. Suppose that the set \(A\) is represented by the number \(x\). Then, testing whether \(5 ∈ A\) is equivalent to testing whether \(0x20 \,\&\, x \ne 0\). Similarly, the set \(A ∪ \{5\}\), which is obtained by adding 5 to \(A\), can be computed as \(x | 0x20\). The set \(A \setminus \{5\}\), which is the set obtained by removing 5 from \(A\) if it occurs in \(A\), is represented by \(x \,\&\, \sim 0x20\).
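These three operations look like this in Java, applied to the set \(\{1, 2, 4, 5, 6, 9, 31\}\) from earlier in the section, which is represented by 0x80000276:

```java
// Testing, adding, and removing the element 5, whose singleton
// set {5} is represented by 0x20.
public class ElementOps {
    public static void main(String[] args) {
        int x = 0x80000276;                  // represents {1, 2, 4, 5, 6, 9, 31}
        boolean hasFive = (0x20 & x) != 0;   // test whether 5 is in the set
        int withFive = x | 0x20;             // add 5 (no change: 5 is already present)
        int withoutFive = x & ~0x20;         // remove 5
        System.out.println(hasFive);                          // prints true
        System.out.println(Integer.toHexString(withFive));    // prints 80000276
        System.out.println(Integer.toHexString(withoutFive)); // prints 80000256
    }
}
```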

The sets {0}, {1}, {2}, {3}, {4}, {5}, {6}, . . . , {31} are represented by the hexadecimal numbers 0x1, 0x2, 0x4, 0x8, 0x10, 0x20, . . . , 0x80000000. In typical computer applications, some of these numbers are given names, and these names are thought of as names for the possible elements of a set (although, properly speaking, they are names for sets containing those elements). Suppose, for example, that \(a, b, c\), and \(d\) are names for four of the numbers from the above list. Then \(a | c\) is the set that contains the two elements corresponding to the numbers \(a\) and \(c\). If \(x\) is a set, then \(x \,\&\, \sim d\) is the set obtained by removing \(d\) from \(x\). And we can test whether \(b\) is in \(x\) by testing if \(x \,\&\, b \ne 0\).
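A small sketch of this naming convention in Java, using hypothetical names \(a, b, c, d\) for four of the numbers in the list above (the names are illustrative, not from any real API):

```java
// Hypothetical named flags: each name stands for a singleton set.
public class NamedFlags {
    public static void main(String[] args) {
        final int a = 0x1, b = 0x2, c = 0x4, d = 0x8;
        int x = a | c;        // the set containing the elements named a and c
        x = x & ~d;           // remove d from x (here d was not present)
        System.out.println((x & b) != 0);   // is b in x? prints false
    }
}
```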

Here is an actual example, which is used in the Macintosh operating system. Characters can be printed or displayed on the screen in various sizes and styles. A **font** is a collection of pictures of characters in a particular size and style. On the Macintosh, a basic font can be modified by specifying any of the following style attributes: *bold*, *italic*, *underline*, *outline*, *shadow*, *condense*, and *extend*. The style of a font is a subset of this set of attributes. A style set can be specified by or-ing together individual attributes. For example, an underlined, bold, italic font has style set *underline* | *bold* | *italic*. For a plain font, with none of the style attributes set, the style set is the empty set, which is represented by the number zero.

The Java programming language uses a similar scheme to specify style attributes for fonts, but currently there are only two basic attributes, **Font.BOLD** and **Font.ITALIC**. A more interesting example in Java is provided by event types. An event in Java represents some kind of user action, such as pressing a key on the keyboard. Events are associated with “components” such as windows, push buttons, and scroll bars. Components can be set to ignore a given type of event. We then say that that event type is disabled for that component. If a component is set to process events of a given type, then that event type is said to be enabled. Each component keeps track of the set of event types that are currently enabled. It will ignore any event whose type is not in that set. Each event type has an associated constant with a name such as **AWTEvent.MOUSE_EVENT_MASK**. These constants represent the possible elements of a set of event types. A set of event types can be specified by or-ing together a number of such constants. If \(c\) is a component and \(x\) is a number representing a set of event types, then the command “*c.enableEvents(x)*” enables the events in the set \(x\) for the component \(c\). If \(y\) represents the set of event types that were already enabled for \(c\), then the effect of this command is to replace \(y\) with the union, \(y|x\). Another command, “*c.disableEvents(x)*”, will disable the event types in \(x\) for the component \(c\). It does this by replacing the current set, \(y\), with \(y \,\&\, \sim x\).
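The bookkeeping described here can be sketched in a few lines of Java. The constants and method names below are simplified stand-ins for the real AWT API (in actual AWT, enableEvents is a protected method of Component, and the mask constants are long values):

```java
// A sketch of enable/disable bookkeeping with hypothetical event masks.
public class EventMaskDemo {
    static final int MOUSE_EVENT_MASK = 0x10;   // hypothetical values
    static final int KEY_EVENT_MASK   = 0x8;

    static int enabled = 0;   // the set of enabled event types, initially empty

    static void enableEvents(int x)  { enabled = enabled | x;  }  // union
    static void disableEvents(int x) { enabled = enabled & ~x; }  // set difference

    public static void main(String[] args) {
        enableEvents(MOUSE_EVENT_MASK | KEY_EVENT_MASK);
        disableEvents(KEY_EVENT_MASK);
        System.out.println((enabled & MOUSE_EVENT_MASK) != 0);  // prints true
        System.out.println((enabled & KEY_EVENT_MASK) != 0);    // prints false
    }
}
```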

## Exercises

- Suppose that the numbers \(x\) and \(y\) represent the sets \(A\) and \(B\). Show that the set \(A \setminus B\) is represented by \(x \,\&\, (\sim y)\).
- Write each of the following binary numbers in hexadecimal:

a) \(10110110_{2}\)

b) \(10_{2}\)

c) \(111100001111_{2}\)

d) \(101001_{2}\)

- Write each of the following hexadecimal numbers in binary:

a) 0x123

b) 0xFADE

c) 0x137F

d) 0xFF11

- Give the value of each of the following expressions as a hexadecimal number:

a) 0x73 | 0x56A

b) ~0x3FF0A2FF

c) (0x44 | 0x95) & 0xE7

d) 0x5C35A7 & 0xFF00

e) 0x5C35A7 & ~0xFF00

f) ~(0x1234 & 0x4321)

- Find a calculator (or a calculator program on a computer) that can work with hexadecimal numbers. Write a short report explaining how to work with hexadecimal numbers on that calculator. You should explain, in particular, how the calculator can be used to do the previous problem.
- This question assumes that you know how to add binary numbers. Suppose \(x\) and \(y\) are binary numbers. Under what circumstances will the binary numbers \(x+y\) and \(x|y\) be the same?
- In addition to hexadecimal numbers, the programming languages Java, C, and C++ support octal numbers. Look up and report on octal numbers in Java, C, or C++. Explain what octal numbers are, how they are written, and how they are used.
- In the UNIX (or Linux) operating system, every file has an associated set of permissions, which determine who can use the file and how it can be used. The set of permissions for a given file is represented by a nine-bit binary number. This number is sometimes written as an octal number. Research and report on the UNIX system of permissions. What set of permissions is represented by the octal number 752? by the octal number 622? Explain what is done by the UNIX commands “chmod g+rw filename” and “chmod o-w filename” in terms of sets. (Hint: Look at the man page for the chmod command. To see the page, use the UNIX command “man chmod”. If you don’t know what this means, you probably don’t know enough about UNIX to do this exercise.)
- Java, C, and C++ each have a boolean data type that has the values *true* and *false*. The usual logical and, or, and not operators on boolean values are represented by the operators &&, ||, and !. C and C++ allow integer values to be used in places where boolean values are expected. In this case, the integer zero represents the boolean value false while any non-zero integer represents the boolean value true. This means that if \(x\) and \(y\) are integers, then both \(x \,\&\, y\) and \(x \,\&\&\, y\) are valid expressions, and both can be considered to represent boolean values. Do the expressions \(x \,\&\, y\) and \(x \,\&\&\, y\) always represent the same boolean value, for any integers \(x\) and \(y\)? Do the expressions \(x|y\) and \(x||y\) always represent the same boolean values? Explain your answers.
- Suppose that you, as a programmer, want to write a subroutine that will open a window on the computer’s screen. The window can have any of the following options: a close box, a zoom box, a resize box, a minimize box, a vertical scroll bar, a horizontal scroll bar. Design a scheme whereby the options for the window can be specified by a single parameter to the subroutine. The parameter should represent a set of options. How would you use your subroutine to open a window that has a close box and both scroll bars and no other options? Inside your subroutine, how would you determine which options have been specified for the window?