This post is inspired by Pradeep Mutalik’s Quanta article on coin puzzles.

It is the usual setup. You have eight coins, one of which is heavier than the others. You are also given a balance scale. How many weighings do you need to identify the bad coin?

Let’s start with the information-theoretic perspective on this. The uncertainty here is the position/label of the faulty coin. There are eight choices, and your prior probability distribution over which coin is faulty is uniform: $$\text{Prob}(\text{coin $i$ is heavier}) = \frac{1}{8}$$ That is three bits of uncertainty, since $2^3 = 8$.

A measurement on the scale gives you three possible outcomes: left heavy, balanced, right heavy. Each measurement thus gives you $\log(3)/\log(2) \approx 1.58$ bits of information.

You should be able to detect the faulty coin in $\lceil 3/1.58 \rceil = 2$ separate weighings.
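As a quick sanity check, here is the counting argument above as a few lines of Python:

```python
import math

n_coins = 8
uncertainty_bits = math.log2(n_coins)   # 3 bits of uncertainty
bits_per_weighing = math.log2(3)        # ~1.58 bits per weighing

# The information bound: round the ratio up to a whole number of weighings
weighings = math.ceil(uncertainty_bits / bits_per_weighing)
print(weighings)  # 2
```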

The way to actually do it is as follows:

  • Weigh coins {1, 2, 3} against {4, 5, 6}
    • If they are balanced, the heavy coin is in {7, 8}. Put them on a scale to identify the heavy one.
    • If they are unbalanced, you know which side has the heavier coin. Say it is {1, 2, 3}; weigh 1 against 2.
      • If they balance, 3 is heavier.
      • If they don’t balance, we have our answer.
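The decision tree above can be sketched in Python. The `weigh` helper is hypothetical: it stands in for the physical scale and returns 1, 0, or -1 for left-heavy, balanced, or right-heavy (coins are 0-indexed here).

```python
def weigh(weights, left, right):
    """Compare the total weight of coin indices in `left` vs `right`.
    Returns 1 if the left pan is heavier, -1 if the right is, 0 if balanced."""
    diff = sum(weights[i] for i in left) - sum(weights[i] for i in right)
    return (diff > 0) - (diff < 0)

def find_heavy_adaptive(weights):
    """Identify the heavy coin among 8 coins in two weighings."""
    first = weigh(weights, [0, 1, 2], [3, 4, 5])
    if first == 0:                                   # heavy coin is in {6, 7}
        return 6 if weigh(weights, [6], [7]) == 1 else 7
    group = [0, 1, 2] if first == 1 else [3, 4, 5]   # the heavier triple
    second = weigh(weights, [group[0]], [group[1]])
    if second == 0:
        return group[2]
    return group[0] if second == 1 else group[1]

# Check the protocol against all 8 possible heavy-coin positions
for pos in range(8):
    weights = [1.0] * 8
    weights[pos] = 1.1
    assert find_heavy_adaptive(weights) == pos
```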

Non-adaptive protocol?

The protocol I have described above is adaptive, meaning that the outcome of one weighing (a measurement) informs what to weigh next.

One thing I have wondered is whether there is a non-adaptive way of identifying the heavier coin. That is, fix the sets of coins you are going to measure in advance, perform the weighings without a complicated decision tree, and then use the measurement data to identify the bad coin.


Error Correction

The error correction view on this problem goes like the following.

You are given 8 bits of data. One of the bits is corrupted. Identify which one. In error-correction theory, this is done using parity-check operations.

In our case, the difference is that, while a parity check gives you a binary syndrome, the scale we are using gives us a ternary syndrome. How do you reconcile the two bases? One way is to construct a new family of ternary syndromes. A parity check only gives you a boolean output; maybe we can do better.

Here is a suggestion. Given $N$ bits of data, representing $N$ coins, construct a set of syndrome vectors $M^{(i)} \in \{0, 1, -1\}^N$ that give the following ternary syndrome. $$S_i(\mathbf{x}) = \sum_{k=1}^N M^{(i)}_k \cdot x_k \mod 3$$

The $1$’s in the syndrome vector $M^{(i)}$ correspond to the coins on the left pan of the scale, the $-1$’s correspond to the coins that go on the right pan, and the zeros correspond to coins that we don’t use.

To make it consistent with a sensible weighing operation, we also demand that the number of $1$’s equals the number of $-1$’s. That is, it only makes sense to weigh an equal number of coins on both pans. Otherwise, the measurement is silly.

$$\sum_{k} M^{(i)}_k = 0$$

We now need to find a set of vectors $M^{(i)}$ such that two syndrome measurements can uniquely identify the location of the error (or the heavy coin).

The two vectors (among many others) that work are $$ M^{(1)} = (0, 0, 1, 1, 1, -1, -1, -1) \quad M^{(2)} = (1, -1, 0, 1, -1, 0, 1, -1) $$

Both of these correspond to weighing two sets of three coins at a time.

We can check how the syndromes we get correspond to the position of the faulty bit (or the heavier coin).

| Error Position $\to$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Syndrome 1 | 0 | 0 | 1 | 1 | 1 | 2 | 2 | 2 |
| Syndrome 2 | 1 | 2 | 0 | 1 | 2 | 0 | 1 | 2 |

Each pair of syndromes uniquely identifies the faulty bit.
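A short Python sketch can reproduce this table and confirm the uniqueness claim. With a single heavy coin at position $k$, the syndrome of weighing $i$ is just $M^{(i)}_k \bmod 3$ (so $-1$ is reported as $2$):

```python
M1 = (0, 0, 1, 1, 1, -1, -1, -1)
M2 = (1, -1, 0, 1, -1, 0, 1, -1)

# Sanity check: each weighing puts an equal number of coins on both pans
assert sum(M1) == 0 and sum(M2) == 0

# Syndrome pair when the heavy coin is at position k (Python maps -1 % 3 to 2)
syndromes = [(M1[k] % 3, M2[k] % 3) for k in range(8)]
print(syndromes)
# [(0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]

# All 8 pairs are distinct, so the two weighings pinpoint the heavy coin
assert len(set(syndromes)) == 8
```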

So, there you go, you have a way to find the heavier coin using non-adaptive measurements.