5.4: Discussion and Exercises


    Hash tables and hash codes represent an enormous and active field of research that is just touched upon in this chapter. The online Bibliography on Hashing [10] contains nearly 2000 entries.

    A variety of different hash table implementations exist. The one described in Section 5.1 is known as hashing with chaining (each array entry contains a chain (List) of elements). Hashing with chaining dates back to an internal IBM memorandum authored by H. P. Luhn and dated January 1953. This memorandum also seems to be one of the earliest references to linked lists.
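
    To make the idea concrete, the following is a minimal Java sketch of hashing with chaining; it is illustrative only and is not the ChainedHashTable implementation of Section 5.1 (the class and method names are invented for this sketch, and resizing is omitted):

        import java.util.ArrayList;
        import java.util.List;

        // A minimal sketch of hashing with chaining: each array entry holds
        // the list (chain) of all elements that hash to that entry.
        class ChainingSketch<T> {
            List<T>[] table;

            @SuppressWarnings("unchecked")
            ChainingSketch(int capacity) {
                table = new List[capacity];
                for (int i = 0; i < capacity; i++)
                    table[i] = new ArrayList<>();
            }

            void add(T x) {
                table[Math.floorMod(x.hashCode(), table.length)].add(x);
            }

            boolean find(T x) {
                return table[Math.floorMod(x.hashCode(), table.length)].contains(x);
            }
        }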

    An alternative to hashing with chaining is open addressing, in which all data is stored directly in an array. Open addressing schemes include the LinearHashTable structure of Section 5.2. This idea was also proposed, independently, by a group at IBM in the 1950s. Open addressing schemes must deal with the problem of collision resolution: the case where two values hash to the same array location. Different strategies exist for collision resolution; these provide different performance guarantees and often require more sophisticated hash functions than the ones described here.
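
    As a sketch of how open addressing resolves collisions, the following linear-probing search is written in the style of the LinearHashTable of Section 5.2, but packaged as a standalone method for illustration: \(\mathtt{t}\) is the table, \(\mathtt{del}\) is the marker left behind by \(\mathtt{remove(x)}\), and \(\mathtt{i0}\) plays the role of \(\mathtt{hash(x)}\):

        // A sketch of linear probing's collision resolution: starting at i0,
        // probe successive entries (wrapping around) until x or a null entry
        // is found.
        static <T> T probeFind(T[] t, Object del, int i0, T x) {
            int i = i0;
            while (t[i] != null) {                   // null ends the probe sequence
                if (t[i] != del && x.equals(t[i])) return t[i]; // found x
                i = (i == t.length - 1) ? 0 : i + 1; // move to the next entry
            }
            return null; // x is not in the table
        }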

    Yet another category of hash table implementations consists of the so-called perfect hashing methods, in which \(\mathtt{find(x)}\) operations take \(O(1)\) time in the worst case. For static data sets, this can be accomplished by finding perfect hash functions for the data; these are functions that map each piece of data to a unique array location. For data that changes over time, perfect hashing methods include FKS two-level hash tables [31,24] and cuckoo hashing [57].
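
    To see where cuckoo hashing's worst-case \(O(1)\) bound comes from, note that every element, if present, is stored at one of exactly two locations, so a search probes at most two array entries. The sketch below illustrates only the search; the hash functions, constants, and table size here are invented for this illustration, and the insertion algorithm that maintains the two-location invariant is not shown:

        // h1 and h2 stand in for two independent hash functions; the
        // multipliers and the table size of 2^16 are illustrative only.
        static int h1(Object x) { return Math.floorMod(x.hashCode() * 0x9E3779B1, 1 << 16); }
        static int h2(Object x) { return Math.floorMod(x.hashCode() * 0x85EBCA77, 1 << 16); }

        // find probes at most two entries, hence O(1) in the worst case.
        static <T> T cuckooFind(T x, T[] t1, T[] t2) { // t1, t2 each of length 2^16
            if (t1[h1(x)] != null && x.equals(t1[h1(x)])) return t1[h1(x)];
            if (t2[h2(x)] != null && x.equals(t2[h2(x)])) return t2[h2(x)];
            return null; // insertion (not shown) maintains the two-location invariant
        }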

    The hash functions presented in this chapter are probably among the most practical methods currently known that can be proven to work well for any set of data. Other provably good methods date back to the pioneering work of Carter and Wegman, who introduced the notion of universal hashing and described several hash functions for different scenarios [14]. Tabulation hashing, described in Section 5.2.3, is due to Carter and Wegman [14], but its analysis, when applied to linear probing (and several other hash table schemes), is due to Pătrașcu and Thorup [60].
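
    For illustration, a tabulation hash in the spirit of Section 5.2.3 can be written in a few lines of Java: a 32-bit key is split into four bytes, each byte indexes its own table of random values, and the four lookups are combined with XOR. The details below (class name, use of \(\mathtt{Random}\)) are this sketch's choices, not code from this book:

        import java.util.Random;

        // A sketch of tabulation hashing for 32-bit keys: each byte of the
        // key indexes its own table of random values, combined with XOR.
        class TabulationSketch {
            static final int[][] tab = new int[4][256];
            static {
                Random rand = new Random();
                for (int i = 0; i < 4; i++)
                    for (int j = 0; j < 256; j++)
                        tab[i][j] = rand.nextInt();
            }

            // Returns a d-bit hash code (1 <= d <= 32).
            static int hash(int x, int d) {
                int h = tab[0][x & 0xff]
                      ^ tab[1][(x >>> 8) & 0xff]
                      ^ tab[2][(x >>> 16) & 0xff]
                      ^ tab[3][(x >>> 24) & 0xff];
                return h >>> (32 - d);
            }
        }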

    The idea of multiplicative hashing is very old and seems to be part of the hashing folklore [48, Section 6.4]. However, the idea of choosing the multiplier \(\mathtt{z}\) to be a random odd number, as well as the analysis in Section 5.1.1, is due to Dietzfelbinger et al. [23]. This version of multiplicative hashing is one of the simplest, but its collision probability of \(2/2^{\mathtt{d}}\) is a factor of two larger than what one could expect with a random function from \(\{0,\ldots,2^{\mathtt{w}}-1\}\) to \(\{0,\ldots,2^{\mathtt{d}}-1\}\). The multiply-add hashing method uses the function

    \[ h(\mathtt{x}) = ((\mathtt{z}\mathtt{x}+b) \bmod 2^{\mathtt{2w}}) \operatorname{div} 2^{\mathtt{2w}-\mathtt{d}} \nonumber\]

    where \(\mathtt{z}\) and \(\mathtt{b}\) are each randomly chosen from \(\{0,\ldots,2^{\mathtt{2w}}-1\}\). Multiply-add hashing has a collision probability of only \(1/2^{\mathtt{d}}\) [21], but requires \(2\mathtt{w}\)-bit precision arithmetic.
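
    For concreteness, here is how multiply-add hashing might be realized in Java with \(\mathtt{w}=32\): Java's 64-bit \(\mathtt{long}\) arithmetic supplies the required \(2\mathtt{w}\)-bit precision, and overflow implements the \(\bmod 2^{2\mathtt{w}}\) for free. This is an illustrative sketch, not code from this book:

        import java.util.Random;

        // A sketch of multiply-add hashing with w = 32: long arithmetic gives
        // the needed 2w-bit precision, and overflow in * and + implements the
        // mod 2^(2w) implicitly.
        class MultiplyAddSketch {
            static final Random rand = new Random();
            static final long z = rand.nextLong(); // random 2w-bit multiplier
            static final long b = rand.nextLong(); // random 2w-bit offset

            // Returns ((z*x + b) mod 2^64) div 2^(64-d), a d-bit code (1 <= d <= 32).
            static int hash(int x, int d) {
                return (int) ((z * (x & 0xffffffffL) + b) >>> (64 - d));
            }
        }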

    There are a number of methods of obtaining hash codes from fixed-length sequences of \(\mathtt{w}\)-bit integers. One particularly fast method [11] is the function

    \[\begin{array}{l}
    h(\mathtt{x}_0,\ldots,\mathtt{x}_{r-1}) \\
    \quad=\left(\sum_{i=0}^{r / 2-1} ( (\mathtt{x}_{2 i}+\mathtt{a}_{2 i} ) \bmod 2^\mathtt{w} ) ( (\mathtt{x}_{2 i+1}+\mathtt{a}_{2i+1} ) \bmod 2^\mathtt{w} )\right) \bmod 2^{2\mathtt{w}}
    \end{array}\nonumber\]

    where \(r\) is even and \(\mathtt{a}_0,\ldots,\mathtt{a}_{r-1}\) are randomly chosen from \(\{0,\ldots,2^{\mathtt{w}}-1\}\). This yields a \(2\mathtt{w}\)-bit hash code that has collision probability \(1/2^{\mathtt{w}}\). This can be reduced to a \(\mathtt{w}\)-bit hash code using multiplicative (or multiply-add) hashing. This method is fast because it requires only \(r/2\) \(2\mathtt{w}\)-bit multiplications, whereas the method described in Section 5.3.2 requires \(r\) multiplications. (The \(\bmod\) operations occur implicitly by using \(\mathtt{w}\)- and \(2\mathtt{w}\)-bit arithmetic for the additions and multiplications, respectively.)
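
    A Java sketch of this function, with \(\mathtt{w}=32\) so that \(\mathtt{int}\) overflow supplies the \(\bmod 2^{\mathtt{w}}\) and \(\mathtt{long}\) overflow supplies the \(\bmod 2^{2\mathtt{w}}\), might look as follows (the method name and the assumption that \(\mathtt{a}\) was filled with random values once, up front, are this sketch's, not the book's):

        // A sketch of the pairwise-multiply hash above with w = 32: int
        // addition wraps mod 2^w and long arithmetic wraps mod 2^(2w), so
        // both mod operations are implicit. Assumes x.length is even and
        // a holds x.length random ints chosen in advance.
        static long hashBlock(int[] x, int[] a) {
            long sum = 0; // accumulated mod 2^64 via overflow
            for (int i = 0; i < x.length; i += 2) {
                long u = (x[i] + a[i]) & 0xffffffffL;         // (x_{2i}+a_{2i}) mod 2^w
                long v = (x[i + 1] + a[i + 1]) & 0xffffffffL; // (x_{2i+1}+a_{2i+1}) mod 2^w
                sum += u * v; // one 2w-bit multiplication per pair of elements
            }
            return sum; // a 2w-bit code; reduce to w bits with multiplicative hashing
        }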

    The method from Section 5.3.3 of using polynomials over prime fields to hash variable-length arrays and strings is due to Dietzfelbinger et al. [22]. Because it uses the \(\bmod\) operator, which relies on a costly machine instruction, it is, unfortunately, not very fast. Some variants of this method choose the prime \(\mathtt{p}\) to be one of the form \(2^{\mathtt{w}}-1\), in which case the \(\bmod\) operator can be replaced with addition (\(\mathtt{+}\)) and bitwise-and (\(\mathtt{\text{&}}\)) operations [47, Section 3.6]. Another option is to apply one of the fast methods for fixed-length strings to blocks of length \(c\) for some constant \(c>1\) and then apply the prime field method to the resulting sequence of \(\lceil r/c\rceil\) hash codes.
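
    As an illustration of the prime-field approach, the following Java sketch hashes a string by evaluating a polynomial at a random point \(\mathtt{z}\) modulo the prime \(p=2^{31}-1\), using Horner's rule. It is a simplified variant of the Section 5.3.3 method, not the book's implementation, and it pays for one \(\%\) operation per character, which is exactly the cost discussed above:

        // A sketch of polynomial hashing over the prime field with
        // p = 2^31 - 1, evaluated by Horner's rule.
        static final long p = (1L << 31) - 1; // a prime of the form 2^w - 1
        static final long z = 1 + (long) (Math.random() * (p - 1)); // random point

        static long hashString(String s) {
            long h = 0;
            for (int i = 0; i < s.length(); i++)
                h = (h * z + s.charAt(i)) % p; // h*z + x_i never overflows a long
            return h;
        }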

    Exercise \(\PageIndex{1}\)

    A certain university assigns each of its students a student number the first time the student registers for any course. These numbers are sequential integers that started at 0 many years ago and are now in the millions. Suppose we have a class of one hundred first-year students and we want to assign them hash codes based on their student numbers. Does it make more sense to use the first two digits or the last two digits of their student numbers? Justify your answer.

    Exercise \(\PageIndex{2}\)

    Consider the hashing scheme in Section 5.1.1, and suppose \(\mathtt{n}=2^{\mathtt{d}}\) and \(\mathtt{d}\le \mathtt{w}/2\).

    1. Show that, for any choice of the multiplier, \(\mathtt{z}\), there exist \(\mathtt{n}\) values that all have the same hash code. (Hint: This is easy, and doesn't require any number theory.)
    2. Given the multiplier, \(\mathtt{z}\), describe \(\mathtt{n}\) values that all have the same hash code. (Hint: This is harder, and requires some basic number theory.)

    Exercise \(\PageIndex{3}\)

    Prove that the bound \(2/2^{\mathtt{d}}\) in Lemma 5.1.1 is the best possible bound by showing that, if \(\mathtt{x}=2^{\mathtt{w}-\mathtt{d}-2}\) and \(\mathtt{y}=3\mathtt{x}\), then \(\Pr\{\mathtt{hash(x)}=\mathtt{hash(y)}\}=2/2^{\mathtt{d}}\). (Hint: Look at the binary representations of \(\mathtt{z}\mathtt{x}\) and \(3\mathtt{z}\mathtt{x}\) and use the fact that \(3\mathtt{z}\mathtt{x} = \mathtt{z}\mathtt{x}+2\mathtt{z}\mathtt{x}\).)

    Exercise \(\PageIndex{4}\)

    Reprove Lemma 5.2.1 using the full version of Stirling's Approximation given in Section 1.3.2.

    Exercise \(\PageIndex{5}\)

    Consider the following simplified version of the code for adding an element \(\mathtt{x}\) to a LinearHashTable, which simply stores \(\mathtt{x}\) in the first \(\mathtt{null}\) array entry it finds. Explain why this could be very slow by giving an example of a sequence of \(O(\mathtt{n})\) \(\mathtt{add(x)}\), \(\mathtt{remove(x)}\), and \(\mathtt{find(x)}\) operations that would take on the order of \(\mathtt{n}^2\) time to execute.

        boolean addSlow(T x) {
            if (2*(q+1) > t.length) resize(); // max 50% occupancy
            int i = hash(x);
            while (t[i] != null) {
                if (t[i] != del && x.equals(t[i])) return false;
                i = (i == t.length-1) ? 0 : i + 1; // increment i
            }
            t[i] = x;
            n++; q++;
            return true;
        }
    

    Exercise \(\PageIndex{6}\)

    Early versions of the Java \(\mathtt{hashCode()}\) method for the String class worked by not using all of the characters found in long strings. For example, for a sixteen-character string, the hash code was computed using only the eight even-indexed characters. Explain why this was a very bad idea by giving an example of a large set of strings that all have the same hash code.

    Exercise \(\PageIndex{7}\)

    Suppose you have an object made up of two \(\mathtt{w}\)-bit integers, \(\mathtt{x}\) and \(\mathtt{y}\). Show why \(\mathtt{x}\oplus\mathtt{y}\) does not make a good hash code for your object. Give an example of a large set of objects that would all have hash code 0.

    Exercise \(\PageIndex{8}\)

    Suppose you have an object made up of two \(\mathtt{w}\)-bit integers, \(\mathtt{x}\) and \(\mathtt{y}\). Show why \(\mathtt{x}+\mathtt{y}\) does not make a good hash code for your object. Give an example of a large set of objects that would all have the same hash code.

    Exercise \(\PageIndex{9}\)

    Suppose you have an object made up of two \(\mathtt{w}\)-bit integers, \(\mathtt{x}\) and \(\mathtt{y}\). Suppose that the hash code for your object is defined by some deterministic function \(h(\mathtt{x},\mathtt{y})\) that produces a single \(\mathtt{w}\)-bit integer. Prove that there exists a large set of objects that have the same hash code.

    Exercise \(\PageIndex{10}\)

    Let \(p=2^{\mathtt{w}}-1\) for some positive integer \(\mathtt{w}\). Explain why, for a positive integer \(x\),

    \[ (x\bmod 2^{\mathtt{w}}) + (x\operatorname{div} 2^{\mathtt{w}}) \equiv x \bmod (2^{\mathtt{w}}-1) \enspace . \nonumber\]

    (This gives an algorithm for computing \(x \bmod (2^{\mathtt{w}}-1)\) by repeatedly setting

    \[ \mathtt{x = (x\text{&}((1\text{<<}w)-1)) + (x\text{>>>}w)} \nonumber\]

    until \(\mathtt{x} \le 2^{\mathtt{w}}-1\).)
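
    For concreteness, the reduction above can be packaged as the following Java sketch (the method name and parameter types are invented for illustration; \(\mathtt{x}\) is assumed non-negative and \(\mathtt{w} < 32\)):

        // A sketch of the reduction above: fold the high bits of x onto the
        // low bits until x <= 2^w - 1. Note that the result may be p itself,
        // which is congruent to 0 mod p.
        static int modMersenne(int x, int w) {
            int p = (1 << w) - 1;          // p = 2^w - 1
            while (x > p)
                x = (x & p) + (x >>> w);   // (x mod 2^w) + (x div 2^w)
            return x;
        }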

    Exercise \(\PageIndex{11}\)

    Find some commonly used hash table implementation, such as the Java Collections Framework HashMap or the HashTable or LinearHashTable implementations in this book, and design a program that stores integers in this data structure so that there are integers, \(\mathtt{x}\), for which \(\mathtt{find(x)}\) takes linear time. That is, find a set of \(\mathtt{n}\) integers for which there are \(c\mathtt{n}\) elements, for some constant \(c>0\), that all hash to the same table location.

    Depending on how good the implementation is, you may be able to do this just by inspecting the code for the implementation, or you may have to write some code that does trial insertions and searches, timing how long it takes to add and find particular values. (This can be, and has been, used to launch denial of service attacks on web servers [17].)


    This page titled 5.4: Discussion and Exercises is shared under a CC BY license and was authored, remixed, and/or curated by Pat Morin (Athabasca University Press).