Puzzled Pint & Social Interactions

We bill Puzzled Pint as a friendly, social, puzzle-solving event that’s geared toward newbies (as well as experienced puzzlers, especially if they’re happy to handicap themselves with a few pints). The truth is that Puzzled Pint is slightly less social than we make it out to be. Attendees are social within their team and when interacting with Game Control, but teams rarely talk to one another. Over the years, we’ve employed the occasional “icebreaker” puzzle, but those have been few and far between, dating back to when it was just Portland and a single Seattle site.

This past month, during Wil Zambole’s A-Team set, we ran puzzles that required cross-team data gathering and sharing. Wil’s idea was to give each team a dossier on one of the four A-Team members. Each puzzle had a few clues that could only be solved by looking up a personal fact from one of the dossiers: Hannibal’s favorite author, Face’s favorite TV show, Murdock’s favorite band, B.A.’s favorite flower. Teams had to send a runner out to collect the relevant data.

In the week leading up to the event, the social and interaction angle of these puzzles became quite polarizing behind the scenes. GC in different cities seemed to either love or hate the idea. We ended up allowing regional options — the GC table had a full set of cards, cards hidden around the bar, that sort of thing. Some of the major complaints included:

  • Unfair to early/late teams because there may not be another team present with the correct card.
  • Interruptions during critical solve moments.
  • Accidentally overhearing solve mechanisms from other teams.
  • Artificial bottlenecks that unfairly affect solve times and standings.

On the other hand, many in GC thought that a few minor interactions would bring in something new and novel to the night, possibly bringing us back to our more social roots.

What GC thinks is one thing, but how players feel about the event is what matters most. We added a Question of the Month to capture player sentiment: rate the interaction puzzles on a scale from 1 to 10, with room for comments.

At the time of writing, we have data from 13 of 16 locations (no QotM records from Boston & Pittsburgh, no data from DC). I tabulated the data in two ways. First, the raw scores: only 40% of respondents wrote in a numeric score, while many more filled in a comment. Because comments so outnumbered numeric scores, I also attempted to rate each comment as positive, negative, or neutral. They broke out like this:

  • Positive
    • If the team gave a score greater than 5
    • If the team gave no score, but a positive comment
    • If the team gave a low score but specifically indicated in the comment that they liked the concept but not the execution
  • Neutral
    • If the team gave a score equal to 5
    • If the team gave no score and no comment or an irrelevant comment (“I like turtles”)
  • Negative
    • If the team gave a score less than 5
    • If the team gave an entirely negative comment
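The bucketing rules above can be sketched as a small function. This is a hypothetical illustration of the logic, not how the tabulation was actually done (it was judged by hand); the `comment_sentiment` value stands in for that hand judgment.

```python
def classify(score, comment_sentiment):
    """Bucket a response as 'positive', 'neutral', or 'negative'.

    score: the team's 1..10 rating, or None if left blank.
    comment_sentiment: 'positive', 'negative', or 'neutral'
    (hand-judged; 'neutral' also covers blank or irrelevant comments).
    """
    if score is not None:
        # A low score is forgiven if the comment liked the concept.
        if score > 5 or (score < 5 and comment_sentiment == "positive"):
            return "positive"
        if score < 5:
            return "negative"
        return "neutral"  # score == 5
    # No numeric score: fall back on the comment alone.
    return comment_sentiment
```

For example, a team that scored the puzzles a 3 but wrote “loved the idea, hated the execution” still counts as positive, matching the third rule above.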

Globally, we had 236 responses. For the numeric 1..10 answers we had an average of 4.7 and a median of 5. This indicates that the positives and negatives were split pretty evenly. There weren’t outliers skewing the average out of proportion. The results looked like this:


Graphing the positive/neutral/negative breakdown of the write-in comments shows that a lot of people didn’t mind either way, and that there was a fairly even split between the likes and dislikes.


And if we break it down by city, the numbers look like this:

City                  Average   Plusses   Neutrals   Minuses
Austin                    5.7        10         4          5
Bay Area, Peninsula       5.2        13         7          7
Bay Area, SF              3.7         9         5         10
Brooklyn                  7           3         7          0
Chicago                   5           3        20          3
London, Bubble            –           4        10          1
London, Squeak            –           3         6          2
Montreal                  3.8         1         2          3
Phoenix                   6.1         2         1          2
Portland                  5.2         3        17          2
Seattle, City             4           7        15          7
Seattle, Eastside         3.8         5         6         13
Toronto                   5           2        12          1

I’m not sure what city-by-city conclusions to draw from that, but thought it would be fun for each city to see how their responses compared to others.


What do you think of puzzles requiring social interaction? Have you run into any at a previous event, such as DASH or an earlier Puzzled Pint, that you particularly enjoyed? Although we want to aim for no more than one or two “icebreaker” months a year at Puzzled Pint, how might we improve the player experience for this style of puzzle?

This post was written by:

Brian Enigma

Brian likes to build with bits and atoms when he's not reverse-engineering. Read about his latest cool projects at netninja.com.

