Mathew Sanders

Making Master: a game of inductive logic for iPhone

Inspiration

Summer, 2012

Four years ago I was finishing up an interview with a small software development company.

Typical for New York tech spaces, it was located in the refurbished space of a former Jim Henson muppet workshop, with a built-in cocktail bar in the back. I was invited to stick around for a drink, where some of the employees asked if I wanted to play a game of Zendo with them.

I don’t really think of myself as a ‘game person’ and I’d never heard of Zendo. It turned out to be the most fun I’ve ever had playing a game.

There are some subtleties in the gameplay, but the basics are that for each round, one person takes the role of the master, and everyone else takes the role of the students.

The master creates a secret rule that can be demonstrated with a set of shapes². The rule could be anything from very specific (“the peak of a small yellow pyramid is pointing towards the side of a red pyramid, but they are not touching”) to deceptively broad (“no blue pyramids”).

Students then attempt to figure out the rule by building scenarios with their own shapes and the master indicates if their example demonstrates the secret rule or not.

When a student makes a guess at the rule and the guess is wrong, the master needs to build a scenario that demonstrates the real rule but also shows how the student’s guess is wrong.

Playing as the master, your role is to choose an aesthetically pleasing rule³ (hard to guess, but simple to state), and, when building scenarios that demonstrate the rule, to try to obfuscate what that rule is.

Playing as the student, your role is to build scenarios to help you figure out the rule first, but not give away too much to the student who has the next turn.

For me, this is the perfect game, matching logic with the flexibility of human language. I left that night having lost every round, but in love with Zendo. That night strongly influenced my decision to join that company, but also left me wondering if it would be possible to create a digital game that combined some of the elements of a Zendo game.

🔷

Early ideas

Sep 16, 2014

At the time I had a pretty basic knowledge of iOS development. I’d been using Quartz Composer to explore animations and transitions, and had made some simple apps⁴, but in an age before storyboards, auto layout, and ARC, even these simple apps were stretching my abilities.

One hot, rainy summer afternoon I started sketching out what a UI might look like.

The pyramid pieces in Zendo allow the possibility of spatial rules in three-dimensional space, but I’d already decided that anything I attempted would be simplified to two dimensions.

I’d recently been reading Tufte’s Envisioning Information and, inspired by one of the illustrations, decided that I’d try making gameplay based around a set of stylized flags on a grid.

I still had my heart set on spatial rules (e.g. any red flag is above a blue flag), so I satisfied myself by exploring how players in the student role might construct scenarios, how the master might give feedback, and how digital gameplay might play out either as a multiplayer game or as a single player playing against the computer.

A big challenge I encountered in sketching out these experiences was the switch between input of making-an-example and guessing-the-rule. Guessing the rule in particular was tricky because I wasn’t sure if I wanted the flexibility of natural language input (easy for humans, harder for computers), or a more abstract and constrained way to communicate the rules (hard for humans, easier for computers).

🔺🔶🔹

Re-inspiration

Early 2016

I’d started meeting once a week with Rachel Hsiung, who interned at ustwo the year before and was learning how to turn her designs into a working web-app.

I didn’t know a whole lot about web programming⁵ and was barely a step ahead of Rachel herself, but agreed to spend some time once a week learning with her (btw, Rachel introduced me to Vue, which I’ve found easy to learn and which I think has a lot of potential for bringing data into the design process).

At the same time I was invited to a Designer Dinner organized by Francisco Hui, kindly hosted by the design team at Handy.

Seeing Rachel work on her own idea, and seeing Francisco’s work on the Design Related podcast and the designer dinners, gave me a lot of energy to work on something more concrete of my own, and I started thinking back to the idea of a Zendo game.

🔴

Playgrounds

Feb 27, 2016

I’d had lunch with Natasha Murashev a couple of times while she’d been visiting New York, and had been reading her posts about learning Swift at the same time that I was exploring how to make animations and transitions in Swift, and using Xcode storyboards to make prototypes of designs for usability testing⁶.

Something Natasha wrote about that I hadn’t spent much time exploring was the Xcode Playgrounds feature: basically a window split into panes, with a space to enter code and the output of each line of code mirrored in an adjacent panel.

The code you write in a Playground is continuously being evaluated, so you don’t have to compile and run your code after changes to see the output. This makes exploring ideas a lot faster.
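
For example, a few throwaway lines like these show their values in the results pane the moment they’re typed (the values in the comments are just what the sidebar would display):

```swift
// Typed into a playground, each line's value appears immediately in the
// results pane, with no build-and-run step.
let colors = ["red", "blue", "green", "yellow"]
let shapes = ["square", "circle", "triangle"]
let combinations = colors.count * shapes.count   // 12
colors.map { $0.uppercased() }                   // ["RED", "BLUE", "GREEN", "YELLOW"]
```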

Before I even started looking at the basic UI for the app, this is where I began: figuring out how to represent in code the details of the items used in rules, and correspondingly how the rules themselves could be represented.

I started with a super simple case where items could only be a single type of shape (a square), with the option of being either red, blue, green, or yellow.

For the rules, I removed any aspect of spatiality and instead started with the simplest still-interesting form: counting items, which allows for rules like “two yellow squares” or “no blue squares”.

Next I explored how to make the rules more complex by allowing a single conjunction so that rules like “one red square and two blue squares” could also be represented.
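
As a rough sketch of what this looked like (the names here are simplified stand-ins rather than my actual playground types):

```swift
// Simplified sketch: colored squares only, plus counting rules and a single
// conjunction. The real playground types grew well beyond this.
enum Color { case red, blue, green, yellow }

struct Item {
    var color: Color   // at this stage every item is a square of one color
}

indirect enum Rule {
    case count(Int, Color)   // "two yellow squares"
    case absent(Color)       // "no blue squares"
    case and(Rule, Rule)     // "one red square and two blue squares"

    func isSatisfied(by items: [Item]) -> Bool {
        switch self {
        case .count(let n, let color):
            return items.filter { $0.color == color }.count == n
        case .absent(let color):
            return !items.contains { $0.color == color }
        case .and(let left, let right):
            return left.isSatisfied(by: items) && right.isSatisfied(by: items)
        }
    }
}

// Rule.and(.count(1, .red), .count(2, .blue))
//     .isSatisfied(by: [Item(color: .red), Item(color: .blue), Item(color: .blue)])   // true
```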

Then, using NSLinguisticTagger, I hacked together a parser that would take a sentence and return a rule in the structure that I’d created.

While everything was pretty fragile and only worked correctly in specific situations, this gave me the confidence that I was heading in a direction that would probably work.

But I kept working in the Playground, refining what attributes an item could have, the ways that items could be combined to create rules, how to programmatically create rules at random, and how to compare rules with each other to check if they matched (for example, “more yellow shapes than blue shapes” is functionally equivalent to “fewer blue shapes than yellow shapes”).
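
One way to handle that equivalence check is to normalize comparisons into a canonical orientation; here’s a rough sketch with a hypothetical Comparison type standing in for my real rule types:

```swift
// "more yellow than blue" and "fewer blue than yellow" normalize to the same
// canonical value, so functional equivalence becomes plain equality.
struct Comparison: Equatable {
    var more: String   // attribute that must appear more often
    var less: String   // attribute that must appear less often
}

func moreRule(_ a: String, than b: String) -> Comparison {
    return Comparison(more: a, less: b)
}

func fewerRule(_ a: String, than b: String) -> Comparison {
    return Comparison(more: b, less: a)   // flip into the canonical orientation
}

moreRule("yellow", than: "blue") == fewerRule("blue", than: "yellow")   // true
```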

In the end, my playground doubled to around 600 lines before I took the next step of translating this code into the context of an iPhone app.

Iteration 1: Showing rules

Mar 5, 2016

I’m pretty sure that my rule engine and scenario generator are working, but it’s a bit slow (for me) seeing the results in text format.

The right way to do this is probably to write tests, but instead, I’ve made a prototype that I can interact with and see the results visually.

The prototype has three main areas. At the top a label shows the current rule. The middle has a grid to show an example scenario for the rule. And finally at the bottom, I can tap a button to update the rule with another randomly generated example.

To spend as little time as possible, I took a lot of shortcuts here.

Triangles are harder to draw, so I just drew them as squares with rounded corners, marked with a ‘T’ character.

I’m also ignoring the size of the shapes (small and large are both presented as the same size), and using UIKit’s default colors for red, yellow, green, and blue.

Even with these shortcuts, with this visual representation of the rules I’m getting a lot more feedback from the rule engine, and some logic bugs that I’d not noticed before are coming up.

A good example of this is the scenario for the rule “an even number of green triangles” shown above.

My scenario generator returned an example with zero green triangles as a valid example, which was correct by my definition of even, but is a bug for gameplay.
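
A minimal sketch of the kind of fix this needs, using hypothetical names: treat zero matches as a failure even though zero is technically an even number.

```swift
// Hypothetical sketch: "an even number of green triangles" should not be
// satisfied by a scenario with no green triangles at all.
struct Item { var color: String; var shape: String }

func satisfiesEvenRule(color: String, shape: String, items: [Item]) -> Bool {
    let count = items.filter { $0.color == color && $0.shape == shape }.count
    // Require at least one match so an empty board can't satisfy the rule.
    return count > 0 && count % 2 == 0
}

satisfiesEvenRule(color: "green", shape: "triangle", items: [])   // false
```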

⚫️⚪️

Interface ideas

Mar 6, 2016

Quartz recently launched an iPhone app with a conversational interface.

I found this approach really interesting, and realized that a conversational UI for my game could be a good way to show the history of experiments that a student makes, and an elegant way to swap between a touch input for creating an experiment, and keyboard input for making a guess for the rule.

So I’ve borrowed heavily from Quartz’s approach of prompting for the next step with a pre-set number of answers and the use of emoji to represent options, and sketched out some ideas for what a conversational UI for this game might look like.

👀

Iteration 2: Refining the scenario view

Mar 6, 2016

I fixed the logic errors I saw from this first prototype and spent some more time refining how shapes were rendered in the scenario view.

I figured out how to draw a triangle, and how to show shapes as either small or large, which I need in order to check rules that take size into account.
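
One straightforward way to draw a triangle is with a three-point UIBezierPath; a rough sketch (TriangleView is a simplified, hypothetical stand-in, not necessarily how my shape views actually work):

```swift
import UIKit

// Simplified sketch: a view that draws an upward-pointing triangle filling
// its bounds with a solid color.
final class TriangleView: UIView {
    var fillColor: UIColor = .red {
        didSet { setNeedsDisplay() }
    }

    override func draw(_ rect: CGRect) {
        let path = UIBezierPath()
        path.move(to: CGPoint(x: bounds.midX, y: bounds.minY))    // apex
        path.addLine(to: CGPoint(x: bounds.maxX, y: bounds.maxY)) // bottom right
        path.addLine(to: CGPoint(x: bounds.minX, y: bounds.maxY)) // bottom left
        path.close()
        fillColor.setFill()
        path.fill()
    }
}
```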

Like before, this update allowed me to more easily find and correct some flaws in my logic for how scenarios were generated from rules.

⚫️⚪️

Iteration 3: Exploring scenarios

Mar 6, 2016

My eyes are getting sore looking at UIKit’s colors so I dropped in the colors from my UI exploration.

I’ve also added a second button at the bottom, so now I can either update the rule, or update the scenario for the current rule.

This is allowing me to refresh scenarios for a particular type of rule as many times as I need until I’m satisfied that all the examples make sense.

⚫️⚪️

Iteration 4: Migrate to a table view

Mar 13, 2016

As a next step, I made an initial attempt to switch to a table view so that as a new scenario was generated it would be appended as a new row in the table.

As these early screen shots show, this was not an immediate success.

I went back to learn more about how auto layout and table views work together, and made some major improvements to my scenario view, including scattering the items randomly around the scenario rather than just filling the grid from the top-left position.
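
One simple way to scatter the items is to shuffle the grid’s cell coordinates and hand one to each item; a rough sketch (the GridPosition type and generic Item are placeholders, not my actual code):

```swift
// Simplified sketch: place each item in a randomly chosen, non-overlapping
// grid cell instead of filling the grid from the top left.
struct GridPosition: Hashable {
    let row: Int
    let column: Int
}

func scatter<Item>(_ items: [Item], rows: Int, columns: Int) -> [GridPosition: Item] {
    var cells = (0..<rows).flatMap { row in
        (0..<columns).map { GridPosition(row: row, column: $0) }
    }
    cells.shuffle()

    var placement: [GridPosition: Item] = [:]
    for (item, cell) in zip(items, cells) {
        placement[cell] = item
    }
    return placement
}
```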

This iteration also gave me my first taste of what actual gameplay might feel like. I could run the prototype, tap the update button to get a new scenario, and try to guess what the rule might be (the rule itself was printed in the console as a log message).

⚫️⚪️

Iteration 5: Showing options

Mar 19, 2016

Finally I had something that was starting to look like my UI concepts. For my next iteration I added in buttons to represent four possible next steps, although only the 👀 option to show a new scenario example was enabled.

Because I love animations I also took the time at this step to add some initial touches like animating how the buttons were presented, and how a new scenario was added to the screen.

🔺

Iteration 6: Showing help

Mar 20, 2016

Until now I’ve been using the prototype on my phone, but unless my phone was plugged into my laptop there wasn’t an easy way to check what the current rule was.

I enabled the ❔ button so that when tapped it would show some help text, and also the current rule.

This let me use the prototype on the subway and experiment with guessing what the rule might be from the example scenarios. So far my impression is that the rules are maybe a little too difficult to make the gameplay satisfying.

At the same time I worked on how the scenario views were brought on screen, with a little animation of each shape appearing.

🔹

Iteration 7: Scenario builder

Mar 28, 2016

In this iteration I’ve added a way to create an experiment scenario, and the master responds with ⚪️ if the scenario you made satisfies the rule, or ⚫️ if it doesn’t.

As a first step, I updated the scenario view so that tapping on the grid fills that spot with a randomly picked shape. Because I didn’t need any UI to pick the shape, this was a quick way to explore this new mode for the scenario view, and how the master can display its response.
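
A rough sketch of that quick hack (GridView and its string-based contents are hypothetical simplifications, not my actual view code):

```swift
import UIKit

// Simplified sketch: tapping anywhere on the grid drops a randomly picked
// shape into the tapped cell. Drawing of the contents is omitted here.
final class GridView: UIView {
    let rows = 4
    let columns = 4
    var contents = [[String?]](repeating: [String?](repeating: nil, count: 4), count: 4)

    override init(frame: CGRect) {
        super.init(frame: frame)
        addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap)))
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    @objc private func handleTap(_ tap: UITapGestureRecognizer) {
        let point = tap.location(in: self)
        let column = min(columns - 1, Int(point.x / (bounds.width / CGFloat(columns))))
        let row = min(rows - 1, Int(point.y / (bounds.height / CGFloat(rows))))

        // Fill the tapped slot with a random color/shape combination.
        let colors = ["red", "yellow", "green", "blue"]
        let shapes = ["square", "circle", "triangle"]
        contents[row][column] = "\(colors.randomElement()!) \(shapes.randomElement()!)"
        setNeedsDisplay()   // redraw to show the newly added shape
    }
}
```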

I sketched some ideas for how to have more control over the shape you’re adding when making a scenario.

The easiest option seemed to be a scrollable area where people could select from all the possible shape combinations. I built this into the app, and also made the scenario-creation mode extend to the full edges of the screen.

🔶

Iteration 8: Targeting 60fps

Apr 3, 2016

Until now I’ve not been worrying about performance, and running in the simulator everything seems pretty smooth. But as I’ve been using the prototype on my device more often, I’ve noticed that animations and scrolling in the table view are pretty choppy.

I ran the app with the Core Animation profiler and saw performance drop to 10-20 fps. For the smoothest animations, the ideal rate is 60 fps.

I spent some time learning about what slows down the frame rate and made some changes:

  • Switch the table cell height from UITableViewAutomaticDimension to a fixed height depending on the type of cell.
  • Set the opaque property of all cells: cell.contentView.opaque = true
  • For UI elements that have a mask applied (all the shapes) or a corner radius applied (like buttons and the message bubbles), set those elements to rasterize (see the sketch below):
    element.layer.rasterizationScale = UIScreen.mainScreen().scale
    element.layer.shouldRasterize = true
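
A rough sketch of that last rasterization tweak, applied inside a hypothetical message-bubble cell:

```swift
import UIKit

// Simplified sketch: rasterize a rounded view once so Core Animation can
// reuse the cached bitmap while scrolling instead of re-rendering it every
// frame. BubbleCell and bubbleView are hypothetical placeholders.
final class BubbleCell: UITableViewCell {
    @IBOutlet var bubbleView: UIView!

    override func awakeFromNib() {
        super.awakeFromNib()
        bubbleView.layer.cornerRadius = 12
        bubbleView.layer.shouldRasterize = true
        // Match the screen scale so the cached bitmap isn't blurry on Retina.
        bubbleView.layer.rasterizationScale = UIScreen.main.scale
    }
}
```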

I also compared animating with CGAffineTransform against CATransform3D; there didn’t seem to be any difference, so I stuck with the 2D transforms.

With these changes combined I’ve got nice smooth scrolling and animations that stay in the high 50-60 fps range. There are probably further areas to optimize, but for now I’m happy with the updated performance.

🔷

Iteration 9: Audio feedback

Apr 3, 2016

From the experience of making Ally-oop I’ve become a lot more attuned to how sound can transform the experience.

So in this latest iteration I’ve added in some sounds for when shapes appear, and also in the scenario builder when you pick a new shape, add a shape, or remove an existing shape.
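
A rough sketch of how that sound feedback might be wired up with AVAudioPlayer (the file names here are hypothetical placeholders for the real assets):

```swift
import AVFoundation

// Simplified sketch: preload a few short sounds and replay them on demand.
final class SoundPlayer {
    private var players: [String: AVAudioPlayer] = [:]

    func preload(_ names: [String]) {
        for name in names {
            guard let url = Bundle.main.url(forResource: name, withExtension: "caf"),
                  let player = try? AVAudioPlayer(contentsOf: url) else { continue }
            player.prepareToPlay()
            players[name] = player
        }
    }

    func play(_ name: String) {
        players[name]?.currentTime = 0   // rewind so rapid taps restart the sound
        players[name]?.play()
    }
}

// let sounds = SoundPlayer()
// sounds.preload(["appear", "pick", "remove"])   // hypothetical asset names
// sounds.play("appear")                          // when a shape animates in
```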

Something I’d like to explore in a future iteration is slight variations of the sounds depending on the shape, size, or color.

🔴

Iteration 10: Guessing the rule

Apr 24, 2016

Mid-April had some spring weekends that were too nice to ignore, so I didn’t spend much time working on Master.

But today I went to Underline and spent a few hours working on a parser that takes a guess someone writes in a text field and converts it to a rule, so that it can be compared with the secret rule.

Working with natural language can be complex because ambiguity often means that even a simple sentence can be interpreted in multiple ways (she fed her cat food) and because languages have so many ways that an idea or instruction can be formulated.

Luckily the world of Master is a lot simpler than the real world (two sizes, three types of shapes, and four possible colors), which means the set of things that can be said is a lot smaller and easier to create rules around.

I’d already created a simple parser that could figure out the most simple rules (two red squares) but failed with non-exact counts (odd number of triangles) and comparative rules (more squares than red circles).

The first thing I tried was experimenting with using a regular expression to check if the sentence matched a structure for a particular rule.

Here’s an example that checks to see if a guess is in the format of more A than B.
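
In rough, simplified form (the pattern and single-word vocabulary here are illustrative rather than the exact code):

```swift
import Foundation

// Simplified sketch: does the guess look like "more <attribute> than <attribute>"?
let attribute = "(red|yellow|green|blue|square|circle|triangle)s?"
let regex = try! NSRegularExpression(pattern: "^more \(attribute) than \(attribute)$",
                                     options: [.caseInsensitive])

func comparativeGuess(in guess: String) -> (more: String, less: String)? {
    let range = NSRange(guess.startIndex..., in: guess)
    guard let match = regex.firstMatch(in: guess, options: [], range: range),
          let moreRange = Range(match.range(at: 1), in: guess),
          let lessRange = Range(match.range(at: 2), in: guess) else { return nil }
    return (String(guess[moreRange]), String(guess[lessRange]))
}

comparativeGuess(in: "more squares than circles")   // ("square", "circle")
```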

It works, but relies on the input being a regular string.

I’m already using NSLinguisticTagger to apply stemming to words (for example, substituting smaller with small, or circles with circle). The output of this is an array of words, and converting back into a string felt ugly, so I decided to try doing some pattern matching on an Array instead.
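
A rough sketch of the array-matching idea (the Token type and template here are simplified stand-ins for what I actually built):

```swift
// Simplified sketch: match an array of stemmed words against a rule template,
// capturing the attribute words along the way.
enum Token {
    case literal(String)   // must match this exact word
    case attribute         // matches any known color/shape/size word
}

let vocabulary: Set<String> = ["red", "yellow", "green", "blue",
                               "square", "circle", "triangle", "small", "large"]

func match(_ words: [String], against template: [Token]) -> [String]? {
    guard words.count == template.count else { return nil }
    var captures: [String] = []
    for (word, token) in zip(words, template) {
        switch token {
        case .literal(let expected) where word == expected:
            continue
        case .attribute where vocabulary.contains(word):
            captures.append(word)
        default:
            return nil
        }
    }
    return captures
}

match(["more", "yellow", "than", "blue"],
      against: [.literal("more"), .attribute, .literal("than"), .attribute])   // ["yellow", "blue"]
```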

It’s probably a bad idea to re-invent functionality like this when regular expressions are a well-tested and stable solution, but this is as much a learning experience as anything, so I’m keeping it for now.

☔️

To allow people to guess the rule in the prototype, I enabled the 📝 button so that it activates a text field, and when a rule is guessed Master responds to let you know whether you guessed right.

Here’s a demo clip showing an incorrect guess first, followed by a correct guess:

Something I’m not satisfied with in this UI is the way the keyboard is dismissed after you’ve guessed the rule. In the future I’ll probably explore a flow where the keyboard stays up so that players can make multiple guesses in a row and then manually exit out of this text-entry mode.

🍋

Iteration 11: Standardizing input models

May 1, 2016

In iOS, events (like taps, gestures, or keyboard input) start at a particular object. If that object can’t handle the event, it’s passed to the next responder (in most cases the containing view) until an object that can handle it is found.

For events like a touch, the first object that receives this event is the object directly below the touch coordinates. In the case of text entry where events occur on a virtual keyboard, some object must be assigned to be the first responder.

When you tap on a text field to give it focus, it sets itself as the application’s current first responder and displays its input view (in most cases a virtual keyboard). Optionally, an input assistant view (UIKit’s inputAccessoryView) can also be associated with a responder, and this view can continue to be displayed even when the responder doesn’t have focus.

A common pattern for the input assistant view is to have next, previous, and done buttons that jump focus between different text fields. Alternatively, the text field can live within the input assistant itself (which is sort of meta), which conveniently keeps an in-progress message pinned to the bottom of the screen when the keyboard isn’t shown (or when a hardware keyboard is in use).
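
A rough sketch of the text-entry side of this input model, using a hypothetical GuessField that supplies its own accessory toolbar:

```swift
import UIKit

// Simplified sketch: a text field whose accessory toolbar rides above the
// keyboard whenever the field is the first responder.
final class GuessField: UITextField {
    override init(frame: CGRect) {
        super.init(frame: frame)
        let bar = UIToolbar(frame: CGRect(x: 0, y: 0, width: 320, height: 44))
        bar.items = [
            UIBarButtonItem(barButtonSystemItem: .flexibleSpace, target: nil, action: nil),
            UIBarButtonItem(barButtonSystemItem: .done, target: self, action: #selector(finish))
        ]
        inputAccessoryView = bar   // shown attached to the top of the keyboard
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    @objc private func finish() {
        resignFirstResponder()   // give up first responder; the keyboard slides away
    }
}
```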

After adding the feature to guess the rule, I noticed that the input model used to enter a rule guess was very different from the input model used to enter a scenario, so I decided to spend a few hours this afternoon exploring whether scenario input could be made more like the input for guessing a rule, and include an input assistant.

The input mode for entering a scenario has no assistant view. Ideally I want to include an assistant view so that the scenario input can be temporarily minimized to give more of the viewport back to reviewing earlier scenarios.

My first stab simply flipped the shape picker and the scenario grid so that the shape picker could become the input assistant and appear pinned to the bottom of the screen when scenario input wasn’t in ‘focus’:

In these explorations I also made the concession of decreasing the grid from five rows to four. I’ll need to review my scenario-generation code to make sure it doesn’t try to create an example scenario with more shapes than can fit on the grid.

Something felt wrong about having a scrollable view in the input assistant, so I tried preventing scrolling by breaking shape selection into separate modes for shape and color:

As well as feeling wrong, this approach also didn’t account for how to switch between the ‘small shape’ and ‘large shape’ modes.

It was obvious after this version that it wasn’t the scrolling in the assistant view that was strange; the approach felt strange because a major part of the input was happening in the assistant view, with the input content being created in the input view, which is the opposite of the input model when entering a guess.

So, taking the input model for entering text literally, I tried an approach where I create a keyboard with every item size, color, and shape permutation present as keys, and items appear in a list as if they were characters being typed out:

A scenario is limited to the finite number of shapes that can fit on a grid. Either I need some way to show this limit in the assistant view, or I could move away from the model of showing scenarios in a grid. Currently the grid has no essential purpose other than aesthetics, but I have plans for future rules like “three red shapes in a row” where the grid is useful, so for now I want to find a solution that keeps it.

Returning to the model of a grid, I tried using the grid as the assistant view, with the assumption that shapes would appear in the top-left position first and fill the remaining slots in left-to-right order:

This approach is starting to feel right, but I wish the input assistant didn’t take up so much of the viewport.

Wanting to incorporate a delete key into the keyboard I looked for inspiration from the emoji keyboard and grouped items by their shape:

This approach uses less vertical space when the scenario input has focus, but it’s the vertical space when the scenario input doesn’t have focus that I’m trying to optimize.

The iteration that I stopped on just compressed the shape keys so that there was room for a delete key in the side margin:

I’m not a huge fan of the delete key being so close to a shape key, but this option feels like the best compromise of the options so far, and at least the majority of the viewport is used to review existing messages when the scenario input doesn’t have focus.

Ideally, the next slot in the grid to be filled would be shown with a blinking ‘cursor’ that points in the direction new shapes will be added.

👀

Follow me at @permakittens for updates on progress and next steps.

🎈

Extras

  1. Confirmation bias is the tendency to view the world in a way that supports your pre-existing assumptions. The New York Times had a great feature on how this applies to problem solving.

  2. Most people probably play Zendo with plastic pyramids as the pieces, but any uniform set of shapes (like legos) would do.

  3. You can learn more about Zendo, and how to create good rules here: http://www.koryheath.com/zendo/tips-for-the-master/

  4. The first app I made was a Wikipedia reader that attempted an improved reading experience with a minimalist search-based navigation and beautiful typography. It was rejected for not adding enough functionality :(

  5. I do have some web dev experience but it feels like a lifetime ago. In my first job after graduating I was a ColdFusion developer for the University of Auckland Business School. I made a bunch of web apps for the department, and also the first version of a CMS to replace use of MS FrontPage.

  6. Many people have been turned off by bad experiences with early incarnations of Xcode Storyboards. Storyboards currently have a great set of features, speed, and reliability, and in many cases I prefer jumping directly from a paper sketch to Storyboards in my design process so I can get concepts onto my device as quickly as possible.