As a follow-up to the previous post, I am recording here some details about the (numerous, varied, and deep) revisions that a bunch of my puzzles went through during the production process. The puzzles are listed in chronological order of when I wrote them.
Spoiler warning: Unlike the last post, these will completely spoil all of the puzzles below.
Surprisingly, the gist of the puzzle mostly survived revisions: the idea of an encoding-based puzzle whose key step was to look at seven-segment displays, sandwiched between the Braille and resistor-color encodings. One issue I ran into while constructing was that ABCDEF6 is actually too big, as a base-16 number, to store in an eight-bit value, which is part of why I ended up 1337-ing the seven-segment display bits.
However, in the first draft I apparently had the bright idea that the resistor colors should follow the electronic color code, with the last band acting as a 10^n multiplier. Unsurprisingly, nobody actually does this in puzzle hunts, and it made it impossible to extract the hex correctly because the multiplier even “rounded off” some of the hexadecimal numbers. So the first testsolve session went up in flames, oops.
I over-corrected a bit after this: in addition to making the colors simple digits, I made the clue phrase for the next step a lot more explicit, something like “resist > 8d int > hex > SSD > hex”. The next test-solves returned a few comments that the puzzle felt a bit like “here is what to do, now do it”. So over time, the clue phrase became less explicit until it was just “int=>hex=>SSD”. That version got better reviews, since it made things a little more rewarding.
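For the curious, the first part of the “int=>hex=>SSD” pipeline can be sketched in a few lines. This is not the puzzle's actual code, just an illustration using the standard resistor color digits; the example bands are invented.

```python
# Standard resistor color code digits (no multiplier band, per the
# revision described above).
RESISTOR_DIGITS = {
    "black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
    "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9,
}

def colors_to_hex(colors):
    # Read each band as a decimal digit, then convert the resulting
    # integer to hexadecimal (the "int => hex" step of the clue phrase).
    n = int("".join(str(RESISTOR_DIGITS[c]) for c in colors))
    return format(n, "X")

# e.g. brown-green-black -> 150 -> "96"
```

The remaining step would map each hex digit to its seven-segment pattern and then 1337-ify the result.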
The other thing that happened is that the puzzle (measured by number of cells) got shorter with each version. There was a lot of concern that the puzzle was quite tedious if you didn't know enough spreadsheet/coding tricks to automate transcribing the colors and bits and so on. Earlier versions had 150-ish cells; the final version ended up with just 56.
We also agreed that it was better to make the 1337-ing of the LED segments consistent; it used to be that either 4 or A could be used, for example, but it was changed to always be 4. This meant that only 16 color patterns ever appeared, which turned out not to be a big deal.
The title of the puzzle used to be “I See The Light”, but we agreed “A Bit of Light” was better. The flavortext wasn’t really used in any solve, but we kept it because it was a nice confirmation. The extraction used to include a crossword-type thing, which was quickly deleted because it was completely un-thematic and didn’t add anything.
One thing I’m grateful for was having everything automated at the beginning; I basically had a Python script the whole time for building prototypes and editing them.
The basic idea behind the puzzle was: solve some math problems, insert the proposing country into the problem in some way, get a new country, and extract somehow.
The first idea for how to insert the country was by using the calling code for the country as a way to convert the input to a number. It was immediately pointed out that this was totally unthematic, so I dropped that idea pretty quickly.
It’s at this point I got the idea to use host countries of recent IMOs, say via the year they hosted the IMO or the number of that IMO. I thought it would be cute if we used a different mechanism for each subject, although this was in part necessary because it’s not at all easy to write a reasonable-looking geometry question that gives a function from R to R. So I chose the “three collinear points” gimmick for geometry, then noticed it lined up with the columns of the IMO timeline, and used that for the rest.
Anyways, this gave a way to change each old host year into a new year. My original idea for extraction was that you could take these new years, check which of them had a real IMO problem attached to that year-number pair, and find that you get each of P1 through P6 exactly once. The true proposing countries of these six problems could then be diagonalized to extract a six-letter answer.
This extraction crashed and burned during test-solving, for a few reasons. First, the transformation I was specifying was a bijection from a set to itself, and people usually expect such a bijection to give a cycle, which I didn’t use. Second, there were just too many possibilities for something this far-fetched to be reasonable, e.g. whether to use the old year or the new year, the old country or the new country or the true country, etc. And third, data-mining the past shortlists was a royal pain (I did try to plant the relevant data on my website, but that was after the fact). We basically agreed that to make this mechanism work, we would need a really explicit shell, which would be sort of clunky and not too elegant.
The good news was the feedback about the first part of the puzzle was mostly positive, although the puzzle was possibly too difficult (I remember hearing in feedback that the extraction felt “three steps too long”, and that even without the extraction this would still probably be one of the hardest puzzles in the Hunt.)
As I thought about this a bit more, I started thinking about the cycle idea that was pointed out (before this, I hadn’t known that bijections in puzzles are often cycles), and thinking about what I could do if instead the extraction led to a single cycle. It certainly occurred to me that there was one meme that I could exploit for this: that A/C/G/N is almost exactly the same as A/C/G/T. If I abbreviated “Number Theory” as “NT”, then I could use the resulting cycle to spell out a short phrase using DNA codons, the answer. This wasn’t too thematic, but then again it’s something that most people who have done math contests extensively have noticed at some point or other, and maybe it would be okay to use this idea now.
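The decoding step here amounts to reading the cycle's subject letters three at a time through the standard DNA codon table, with number theory contributing a T rather than an N. A minimal sketch (only a handful of the 64 codons are shown, and the strand is just an example, not the puzzle's actual data):

```python
# A few entries from the standard DNA codon table
# (codon -> amino-acid single-letter code).
CODON_TABLE = {
    "ATG": "M", "TGG": "W", "GCA": "A",
    "TGC": "C", "GGT": "G", "ACA": "T",
}

def translate(strand: str) -> str:
    # Read the strand three letters at a time and look up each codon.
    return "".join(CODON_TABLE[strand[i:i + 3]]
                   for i in range(0, len(strand), 3))
```

With the full table, a single cycle of subject letters can spell out a short phrase in amino-acid letters.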
In building the cycle, there was an obvious bit of extra information: the shortlist numbers themselves are never used (e.g. A1 vs A2). So it was natural to have them be increasing in the correct order of the cycle, i.e. A1 came before A2 in the cycle, A2 came before A3, etc. This provided a strong confirmation that the cycle was the right thing to look at, that the number of the problems is unimportant, and it also gave a lot of error-correction to fix a few mistakes in the math problems.
Anyways, this involved redoing a lot of the math problems, which was “fun” :P
One other thing that happened during test-solving: one group failed to realize that combinatorics used years rather than the IMO number (as algebra did) until after they had finished the entire puzzle (error-correction works, hooray). I fixed this by rewriting one of the problems (C3) so that the input obviously had to be a year, rather than a small number, to feasibly work.
The flavor-text is littered with clues (“stranded”, “timeline”, “official”). The first couple pages of the PDF file were also added to clue more heavily what the solver was expected to do (originally, the proposing countries were listed alphabetically).
The author’s notes for this puzzle give a summary of some of the thoughts behind this puzzle. In short, Conway’s Game of Life has appeared before, but it never seemed to be used beyond simply “run one generation”. As I mentioned in my last post, I thought the elaborate list of names of still lifes was a silly/fun source of puzzle material, and wanted to use it.
The basic premise of the puzzle (i.e. what I started with) was as follows:
- Solvers realize the silly images are Conway’s game of life structures.
- Solvers run a set of specified life seeds and match it to pictures.
- The life simulations have escaping gliders giving semaphore, so the matching now gives a grid of letters.
The presentation of the seeds for the simulation went through some revisions. Initially, I specified the coordinates as plain text, but I changed this to a pictorial grid with the numbers typed in, since there was some confusion about which way the axes were indexed. This was quite important for this puzzle: a tiny error makes the simulation completely useless, so it was imperative that the right way to input things be completely unambiguous. I learned this the hard way! (I also originally didn’t think to specify the orientation of the gliders, since I thought teams would probably be using a scriptable simulator and just try all 4^4 = 256 ways. Ha, ha, ha. We quickly reversed this decision.)
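For teams without a dedicated simulator, the Life rules themselves take only a few lines to code. Here is a minimal sketch (not from the puzzle itself) of one generation step, storing only the live cells:

```python
from collections import Counter

def step(cells):
    """Advance a set of live (x, y) cells by one Life generation."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 neighbors,
    # or has 2 neighbors and was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# Sanity check: a glider shifts one cell diagonally every 4 generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(x + 1, y + 1) for (x, y) in glider}
```

This set-based representation sidesteps grid-size and axis-orientation questions entirely, which is exactly the kind of ambiguity the pictorial grids were meant to eliminate.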
Since the order of the seeds didn’t matter, we sorted them alphabetically (which gave a strong confirmation of the semaphore being correct).
There was some discussion the whole way through about presenting the seeds in a standard Life format like RLE, because test-solving feedback was mixed on how much work it was to plug the seeds into a Life simulator: some teams did the “right” thing and downloaded Golly or similar, but others used various clunkier browser apps, and the latter group did not have a good time running the simulations. We eventually kept the human-readable format rather than a machine-readable one, because figuring out how to aim the gliders toward the center was at least a little interesting, and because machine formats might carry “too much” information (e.g. solvers might start reading into the grid size, the starting location, why every scenario had exactly four gliders, etc.).
For me the most difficult part of setting this puzzle was what to do after obtaining the grid of letters. I had an idea that you could try and place the 16 things into a grid, and then run more Life so that the gliders collided further with each other. This turned out to be infeasible and the idea didn’t go anywhere.
The better idea I had was something like the following: when the semaphore is matched onto the 4×4 grid, the solver gets a grid of letters in which the cells with capital letters are tinted. I was hoping that the solver could read out the letters T-FIVE, which would helpfully tell the solver to run the simulation five steps and end up with the correct answer.
There was one problem with this: I could not get any of my simulations to produce two orthogonal gliders, no matter how hard I tried, even with my gaming PC running glider collisions overnight.
Alas, this meant I could not spell the letter R using this method. I tried to work around this by having one letter from each word replaced by an X, which the solver would have to fill in themselves, hoping that was good enough.
In test-solving, this went up in flames. The random X’s actually convinced the solvers that semaphore did not work, and even after this, the message T-FIVE was apparently not noticeable. We thought about a fix where we just told the solvers explicitly to mirror the image once they had it, which would at least work, but was not really elegant, in my opinion.
For the next week, I played around with all sorts of terrible alternative mechanisms to try and get something clean. Almost nothing I tried worked; the fundamental problem of being unable to construct the letter R, when my assigned answer was FLYER, made most things I did quite unnatural. I didn’t sleep well those nights, because all I could think about was how I was going to work my way out of this one.
(One approach would have been to just give up on having 4-glider syntheses, which I had decided on from the start. But simulations with more than four gliders were annoying, and the directions would become a lot more confusing.)
It wasn’t until I had the idea of using a word square that I finally made any progress. Word squares have a lot of redundancy, so if I could specify the letters in the “living” cells, the dead cells could also have any letter I wanted. A-ha! I went and scraped Google for a compilation of word squares, and started searching through it for ones that might work. I eventually found one that seemed like it could work.
So eventually, we managed to get the 6×6 word square that made it into the final version. I deliberately opted for a symmetric word square, because I wanted to keep the number of simulations under 20 (because of earlier feedback about the tediousness of using a bad simulator). We added a “t+1” clue because “+1” was apparently not enough during testsolving.
This was one of the few puzzles I wrote that have blanks in them, in the image where the still-lifes were first identified. Technically, I think some people could probably solve this puzzle without that image, if glider synthesis were clued a bit differently. However, I thought this was a case where it’s more fun to break in by gradually figuring out the (in some cases, ludicrous) names of the still-lifes, and then being told explicitly to build the gliders with the resulting message.
To be honest, this puzzle was on the verge of not making it. In short, I had the idea of a Vigenere cipher using Unicode with a foreign language key… and that was pretty much it, lol.
The first draft was based on the idea that you have some famous song (think Taylor Swift) except it had been translated into some language L and back, and you needed to figure out the language L. This didn’t work at all, because there isn’t any good way to do this other than brute-force languages in Google Translate.
The second idea was that each key would be the name of a KPop band and the plaintext would be the lyrics of one of their songs. There are a lot of KPop bands whose names can be read as a number, like Iz*One, TWICE, GOT7, etc. Well, it turns out this had been done already.
So I kind of gave up and just made a maximally uninspired extraction where you take the first letters of the song titles. We decided to get a test-solve of this anyway, and it naturally got pretty low ratings, but by luck Brian Chen testsolved it and mentioned Google Translate Sings as possibly thematic puzzle material.
I had never heard of Google Translate Sings, but when I did, I realized I now had puzzle data: there was a canonical episode number for all the songs. After doing a cursory search for “puzzle hunt” and “google translate” to confirm this hadn’t been done already, I went for it.
Anyways, at this point, the parameters I had to play with were:
- Choose a language for the key
- Choose a language for the plaintext
- Choose the content of the key
- Get an episode number for the plaintext
I had the idea that we could run a “do it again”, in which the key itself was a Unicode character, while the episode number from the plaintext was what you subtracted from. That left the languages free to vary, and I wanted to “use up” this information, so I rigged the first letters to spell “FANDOM” and “NUMBER”, to make sure the episode numbers were hinted specifically.
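A Vigenère cipher over Unicode, as described above, might look like the following sketch. The exact arithmetic (wraparound, direction of the shift) is my assumption, not necessarily the puzzle's:

```python
def unicode_vigenere(text: str, key: str, decrypt: bool = False) -> str:
    # Shift each character's Unicode code point by the code point of
    # the corresponding key character, wrapping around the code space.
    sign = -1 if decrypt else 1
    return "".join(
        chr((ord(ch) + sign * ord(key[i % len(key)])) % 0x110000)
        for i, ch in enumerate(text)
    )
```

Note that under this scheme a key character of U+0000 (NULL) shifts by nothing, so it leaves the plaintext untouched.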
One small wrench in this plan was that there was a Genius listing for Google Translate Sings that was (a) incomplete and (b) full of wrong numbers. I had to carefully select songs that didn’t appear on Genius, so that anyone who found the Genius page would know immediately that it wasn’t what they were looking for.
The other wrench is that the Malinda Wiki is not that frequently updated, so some of the more recent songs I needed were not there. So, I am the proud author of the Never Gonna Give You Up page.
It was also a bit constraining what Unicode characters I could use for the keys, because I needed to make sure the translation of the key back into English was recognizable (it wasn’t always). One other thing I did here was specifically use NULL (unicode point 0) as one of the characters, as a de-confirmation of several wrong approaches like trying to index by the Unicode number somehow.
We gave the first key to the solver because we wanted the solver to first have confirmation that they understood the Unicode Vigenere correctly before going on to try cryptanalysis.
This was pretty straightforward. I saw the answer PRODUCT TESTING in the pool, looked at my abacus, and was like, “huh, the abacus has a grid”. So the idea of a mapping between Braille and the abacus was born.
Well, actually, to be pedantic, I own a soroban specifically (rather than the Chinese suanpan). Anyways, in keeping with the Japanese roots of the soroban, I thought it would be appropriate to use Japanese Braille rather than standard Braille, which was nice.
I was a bit worried that the cluephrase seihin tesuto wasn’t unambiguous enough, though test-solvers seemed to get it fine. (There was a suggestion at some point of adding an enumeration 7 7, but by coincidence this is exactly the number of rods, which probably would have led to some red herrings.)
During test-solving, most people figured out Hanabi and the letter encoding pretty quickly (and there is a lot of confirmation that both are correct). As I mentioned in the previous post, there used to be only four scenarios, but we added a fifth, easy one to make the rules more well-defined after the first test-solve group struggled. In addition, the plans were moved to the bottom and de-emphasized, since the first test-solving group tried to use them too early; we also added “starts” to make it clear that the first player starts.
Quoting further from the Author’s Notes for the original inspiration:
After playing maybe too many hours of Hanabi during quarantine, I knew I wanted to write a Hanabi puzzle for the Mystery Hunt as well, especially after seeing a Hanabi duck konundrum in the inaugural teammate hunt.
When I played Hanabi at summer math camps as a teenager, the only conventions we would have were play left, discard right, (forward) finesses, and the “fake finesses” (a worse form of the bluff). Plays and saves weren’t even clearly differentiated. Thus our win rate was quite low, and it was not until I played with a more well-developed convention set that I started appreciating the complexity (and fun) of Hanabi.
So, I wanted to develop a puzzle that would showcase at least some of that complexity. Since it’s so difficult to use “conventions” as a puzzle mechanic, I settled on the idea of using the notion of pace for a logic-only puzzle (without regard for efficiency, a measure of how much information is communicated by clues). The idea that the number of discards is limited, and that clues can be used not just to indicate cards but also to stall for time, gave me a mechanic that I thought would be new and interesting even to people who may have casually played Hanabi before. It was a lot of fun during test-solving to watch groups slowly realize (and then prove) that no cards could be discarded in the second scenario.
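The post doesn't spell out a formula, but pace, as the Hanabi community usually defines it, is a simple arithmetic quantity:

```python
def pace(score: int, cards_in_deck: int, num_players: int,
         max_score: int = 25) -> int:
    # Pace: the number of discards the team can still afford while
    # reaching max_score. Playing a card never changes pace (score goes
    # up by one, deck goes down by one), while each discard lowers it
    # by one. Pace 0 means nobody may discard again.
    return score + cards_in_deck + num_players - max_score
```

For example, a standard 5-suit game starts at pace 17 with two players (deck of 40 after dealing) and pace 13 with three (deck of 35), which is why "no cards could be discarded" is a provable, not just heuristic, statement in a constructed scenario.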
For extraction, choosing the plans was a fun exercise in Nutrimatic: if I used a “random” string, it would be too easy to figure out which letters overlapped just by looking at which cards were still left to play. Various strings appeared in one version or another, often ones that just barely didn’t work:
TRUST MY LOGIC
A GENIUS LEVEL IQ
AN EXPLOSION FINESSE
CRISIS SAVING SKILLS
I also originally considered using tap code for the encoding of the letters, but this seemed like an un-thematic complication, so it was quickly scrapped.
I didn’t really want to use the crossword cluephrase FINAL PROPOSAL, but the current answer has the unfortunate property that it is almost uniquely determined by its first two letters, so this was necessary to prevent solvers from short-circuiting the puzzle prematurely.
The title of the puzzle was suggested by Danny Bulmash.
One of the things that was hotly debated was the “Chances: m of n!” notation. Technically, the number m is not needed at all, and it seems like it would be more straightforward to say “n left” or similar. However, I wanted to include it anyway, for a few reasons:
- I felt it was thematic.
- Writing n! communicated clearly that it was the deck order, rather than say turns left.
- It emphasizes that the team is super unlikely to win (which I think is important for telling solvers the route is unique). For example, I think test-solve groups used the fact that 2 in 720 is not reduced.
- It helps check-sum that the solution is right.
Incidentally, I’m almost more proud of the solution write-up than the puzzle itself — it took a lot of time for me to put together.
This was also a pretty quick puzzle. One of the remaining puzzle answers was GOOD FAT, which motivated me to look at some nutrition fact containers and think about how the numbers there could be made into a puzzle. Well, numbers correspond to letters, so those bold numbers could be put to good use…
I was reminded of the 9-4-4 rule I learned in grade school, which meant there was a natural way to link the numbers “hidden in plain sight”. I originally wanted calories to be the unknown number to compute, since it was the biggest number, and if it was missing, everyone would think to compute it right away. However, it was hard to get that to work with A1Z26, since the calorie count was usually going to be more than 100, making it awkward to map calorie numbers naturally to letters, even if I allowed scaling by 10 or 100 or whatever.
That’s when I had the idea to make protein the hidden number instead, because for unknown reasons, the percent on protein is usually missing. That worked much better with A1Z26 and completed the puzzle.
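The arithmetic here is just the 9-4-4 rule solved for protein, then fed to A1Z26. A sketch, with an invented example label (none of these numbers are from the puzzle):

```python
def hidden_protein_grams(calories: int, fat_g: int, carb_g: int) -> int:
    # 9-4-4 rule: each gram of fat contributes ~9 kcal; each gram of
    # carbohydrate or protein contributes ~4 kcal. Solve for protein.
    return (calories - 9 * fat_g - 4 * carb_g) // 4

def a1z26(n: int) -> str:
    # Map 1 -> A, 2 -> B, ..., 26 -> Z.
    return chr(ord("A") + n - 1)

# Invented example: 200 kcal, 8 g fat, 12 g carbs
# => (200 - 72 - 48) / 4 = 20 g protein => "T".
```

Protein grams land comfortably in the 1 to 26 range, which is exactly why protein worked better than calories as the hidden number.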
Similar to Le Chiffre, I included one example food, a HERRING, so that solvers knew what was going on. This also gave me one extra degree of freedom, letting the total number of calories of all the foods equal 2000; not strictly necessary, but a nice artistic touch.
Originally, I just had a pretty dorky pie chart with food images pasted onto it, but during post-prodding I had a bit more fun making an interactive HTML pie chart with hover effects over the image.
8. Divided Is Us, written with Jakob Weisblat
As written in the author’s notes:
This was the last puzzle to be written for the 2021 MIT Mystery Hunt. At a meeting in late December, the editors announced there was exactly one answer remaining, and it was a Teamwork Time puzzle. As it happened, I had been thinking earlier that day about how we didn’t have our obligatory Baba puzzle yet, and the idea for the puzzle was born.
(I should say now that actually there was one other puzzle that was written after this: Better Bridges, which was a last-minute backup puzzle.)
To look into whether this was feasible, I did some research on open-source Baba Is You libraries. I settled on using baba-is-auto, although the library is still a work-in-progress as far as I can tell, and I needed to do some editing of the library to make it work.
Anyways, yeah, the premise was co-operative Baba puzzles, where there were different blocks for the various players. The obvious options for the win condition would be “win if anyone reaches the goal”, or “win if everyone reaches the goal”, but the intermediate option of having YOU reach the goal turned out to work the best. (This was not immediately obvious to teams during test-solving even a few levels in, so we had to rewrite the intro level to make it more blatant.)
Meanwhile, I had the thought that it would be cool if the game slowly revealed that everyone was seeing parts of the same world, and say, around level 4 or so, you would suddenly realize this when someone moved their icon onto another player’s screen. Accordingly, we hid the “other player” viewports until after this revelation (at which point some people wondered if they had simply not noticed those viewports all along).
I’m pretty happy with the extraction: I think it’s a good example of how you can embed encodings in unexpected places. Here, I took advantage of the fact that the game was secretly one world in order to hide semaphore and Braille (conveniently, flags are not too conspicuous), and it was a lot of fun to watch people realize this around level 6 or 7, as they were reaching the final level. However, I also couldn’t resist using the names of the various icons; it was convenient that many of them were four-letter words. Anyways, the answer SECOND GRAND ALLIANCE is pretty long, so there was enough space to use one mechanism per word.
(I did sort of break one puzzle design rule here: you couldn’t finish the puzzle without solving every Baba level. This went okay in test-solving though.)
There were two design choices I wish I had done differently:
- We tried to set the game so that each person got assigned a player, which wouldn’t change if they refreshed the page. This ended up causing issues when websockets got disconnected or the server went down or whatever, causing certain player slots to get locked out. It wasn’t too hard to work around (e.g. using private windows) but it did annoy a bunch of the teams.
- We didn’t show the total number of levels, which I regret a bit. I think it would have been more fun for people to realize they were getting to the end. Also, in one case a team managed to prove to themselves the penultimate level was not solvable, and was unsure what to do from there. Had we put a more obvious level indicator, this could have been a bit clearer.
Finally, while I’m here, I’d like to thank Arvi Teikari for the permission to use the game concept and art for this puzzle.
I was pretty pleasantly surprised by how many puzzles I was able to produce over the span of the hunt-writing process. The larger puzzles here on average took over my life for a week, and Shortlist and Escape From Life took even longer. It helped a lot that I was taking time away from school this year; I think otherwise there isn’t any way I would have been able to find enough time to put these puzzles together.
On the flip side though, I think I’ve just about used up all my puzzle ideas for the foreseeable future. (At least, I say that now.) But I don’t have any regrets about this. I’m not sure I’ll ever have a chance to write for the Mystery Hunt again, so I’m glad I was able to make the most of the chance I got, and I couldn’t have asked for a better team to write the Mystery Hunt with.