Mathew Sanders

Ally-oop: An experiment in learning by building

When teams often spend the first two weeks of a project orienting themselves within their problem space, how can a team structure its work with a bias toward action, focusing on learning through building rather than learning and then building?

Ally-oop was an internal project where we experimented with our approach towards product design.

In this project we made the intentional decision to move away from upfront planning and artifacts-as-deliverables, toward just-in-time decision making and a shift in perspective to the product itself as the deliverable.

In three weeks, our team of three generalist designers (Pia, Rimar, and myself) went from a literal blank slate to internally releasing an iOS app that makes music-making collaborative, fun, and accessible.


Setting goals

Unlike most projects, where a client briefs us with an expected goal, our team had to choose its own direction.

The first step we took was to identify the outcomes that we wanted by the end of the project. As a group we’d already come together around the high-level idea of doing something involving sound or music. Here’s what we agreed on:

  • We wanted something tangible that we could actually use, even if it was hacked together, rather than a deck of concepts.
  • We wanted something that made music making fun.
  • We wanted something that made music collaborative.
  • We didn’t want something that was a professional tool.

These outcomes became our mantras, keeping our decision making on direction: fun, collaborative, non-professional.

Finally we discussed personal or professional goals that we hoped to achieve from this project.

The personal and professional goals we identified for ourselves.

Pitching ideas

Next we pitched ideas to each other and picked the single idea that we thought offered the best opportunities for our project and personal goals.

On this first day we also held a meta-discussion about the structure of the first two weeks (the next 9 days): whether we wanted to time-box explorations of three ideas (with 3 days for each timebox) or focus on a single concept, and whether we would build a native iOS prototype (which I had more experience with) or an HTML5 prototype (which Pia had more experience with).

The first concept that I presented to our team was the idea of recording the soundscape of an acoustically interesting location (e.g. Washington Square Park) and allowing people to experience that soundscape through an iPhone app.

The second concept that I presented focused on the idea of making music collaboratively. Individuals could have their own digital instruments (where interaction could be based on physical instruments, or be more abstract) and individual contributions combined on some hub (e.g. an AppleTV).

Mapping our concepts onto a 4x4 attribute grid to help us review and choose a concept.

Planning the first iteration

On the second day we met to figure out our first step.

Our concept was quite complex: multiple instruments, instrument design, streaming & merging of instruments, and visualization of the merged tracks were all components of the initial concept that was pitched.

A traditional approach would assume this initial concept was the final outcome, and an early step would be a top-down sketch of a high-level view of the entire experience to see how everything fitted together holistically, before breaking this design into component parts for detailed exploration and refined design.

We wanted to avoid this approach because we could easily spend the entire time of the project simply on this activity and have nothing tangible to show for it.

Instead of this top-down approach, we decided on a bottom-up approach: build something simple, and decide what to do next one step at a time.

To figure out what our first step would be, we sketched ideas for a first iteration that captured the essence of our concept but would also be simple to build.

This exercise stripped away many components, and what remained was a simple key-based instrument (like a piano), except that instead of playing a note, each key would play a short music file.

The team divided up the work: Rimar would create or collect four music files, Pia would start looking at how to lay out the four keys, and I would figure out how to play a music file on an iOS device.

The project was starting to feel quite different from other projects. It was interesting that we were already prioritizing the sourcing of tangible content (music files), even if it was only to be used as a placeholder, and (for the time being) not worrying about the big picture of what we were building.

Sketches that the team worked on collaboratively to explore the first step that we wanted to build.

Here’s the sketch that we agreed on: simply a screen divided into four buttons.

The first iteration

The simplicity of our initial concept meant that we had a working prototype before lunch. Rimar had sourced some sound samples from an earlier project, Pia had reused a color palette from our internal brand guidelines, and I had cobbled together a simple prototype that played the samples.

A screenshot of the first prototype that we built. It only took a couple of hours to put together, and although incredibly simple, having something that we could actually use shifted our usual team dynamic and what we prioritized spending our time on.

With something tangible in our hands to react to, three things were obvious:

  1. there was a short, but obvious delay between tapping a key and hearing the sample;
  2. there was no visual feedback when tapping a key; and
  3. the sound files that we’d sourced from an earlier project just didn’t sound good.

Rimar went off to create some more interesting samples to use. We decided that we wanted to quickly explore both some more traditional instrument sounds (like piano keys) and also more quirky ideas (like snippets from Dolly Parton songs).

Pia and I quickly added a highlight to show when a key was tapped, and investigated the delay more.

What we quickly discovered was that, by default, a button event is triggered when the touch ends; we changed the prototype to play the sample as soon as the touch was first detected.

This was an eye-opening moment for me, because on previous projects where design and development sprints are staggered, I’d been frustrated when small but important details (like when a sample should be triggered) were overlooked in documentation and, through no fault of the engineering teams, implemented inconsistently.

The alternative to this was obvious: don’t try to document to precision how something should work; instead, shorten the time it takes to complete a design/development cycle and add what you’d overlooked in the next iteration.

This felt like an important step in the project because, despite the simplicity of the prototype, it was magical to have something running on all our phones; it felt like we’d built both momentum and ownership of what we were building.

It also set the expectation for how we’d work on this project: build something, use it, react to what we’d made, and figure out the next step to take.

The first collaboration

Another benefit of our prototype-first approach, which wasn’t obvious until we’d started, was that the prototype could be used to bootstrap the exploration of future features and to get continuous, ongoing feedback from a wider group of people.

Once our prototype was updated with custom-built samples, and we’d fixed the latency problem between hitting a key and hearing a sample, we set up our first collaboration.

Although we ultimately wanted to merge the output from each device and play it from a common device (e.g. an Apple TV), we could simulate this experience simply by turning each phone’s volume up to max and playing together, which is what we did for our first collaboration with some people we recruited from outside the project team.

The feedback from this first collaboration was so useful that we made it a regular part of our day: an early-morning team jam using the latest prototype, plus ongoing evaluations from people within the studio.

Another benefit of this approach was the use of our prototype across a range of device sizes, from an iPhone 4 to the latest iPads. This constant use of the prototype across different devices helped us shift our mindset away from designing for a fixed viewport to thinking about adaptive layouts.

Rimar, Pia, Ashley, and Andrea after our first collaboration.

Team tools & habits

We started each day with the habit of a morning jam session where we collaborated with the latest prototype. Then with a fresh experience in mind we discussed ideas about the direction we wanted to go in the future, and the features that would be needed to support that direction.

Every idea was added to a post-it note and placed on our backlog, which was simply a big piece of paper with ideas loosely ordered: easier ideas at the top, harder ideas near the bottom.

Then we made two decisions. First: what was the next major feature we would work on adding today? As the team prototyper I had some idea of how long a feature might take, but I tried an approach where we chose the next feature with the understanding that it might take the entire rest of the project for me to implement it.

Secondly, we talked about what sort of mini changes we could make that would only take a short amount of time to complete (for example updating or adding sound files, or simple UI layout updates).

The rest of the day was spent working towards those major changes, and we ended the day with another jam session trying out any new features that we’d added.

In addition to our ‘backlog and next step’, I also kept a team diary, which again was just a big sheet of paper where we noted the decisions or progress we’d made that day.

Our team board where we kept an analog team diary of achievements, a simple backlog, and pinned concepts and research to share with the wider studio.

A detail of our team board showing a section of our backlog, and Dolly Parton who took on a role of something between patron saint and mascot for our project.

What’s the most interesting thing we can do next?

We started each morning with a collaboration using the latest iteration of the prototype and asking ourselves “what’s the most interesting thing we can do next?”.

In addition, I asked the team to frame this question with the assumption that whatever we decided to do next could potentially take up all the remaining time we had left on the project, so the next step we took could also be the last.

This was partly because, as the team prototyper, I couldn’t accurately estimate how long it would take to build something, but also because I wanted to encourage the team to make tough decisions about what was most important to work on next.

What can’t be seen from screen captures of many early iterations of the prototype is that the next step we took often wasn’t a large visual change, but was instead focused on how the prototype behaved rather than what it looked like.

Areas that we explored in these early prototypes included:

  • further decreasing the latency between tapping a key and hearing the sample play (which took us not only into a technical exploration of iOS’s Core Audio framework and a number of third-party libraries, but also into the physiology of hearing);
  • the ability to shift the pitch of a sample depending on the area of the key that you tapped; and
  • polyphonic playback, letting us play multiple samples at different pitches at the same time.
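The pitch-shifting mentioned above can be approximated by varying playback rate: shifting by n semitones multiplies the rate by 2^(n/12). A minimal sketch of that relationship (the helper name is illustrative, not from the app’s actual code):

```swift
import Foundation

/// Playback-rate multiplier that shifts a sample's pitch by `semitones`.
/// Each semitone is a factor of 2^(1/12), so 12 semitones doubles the rate.
/// (Hypothetical helper; the prototype's real audio code isn't shown here.)
func playbackRate(forSemitones semitones: Double) -> Double {
    return pow(2.0, semitones / 12.0)
}
```

Note that resampling this way also changes the sample’s duration; shifting pitch without changing tempo requires a time-stretching algorithm on top.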

Reacting to technology

An early step that we took to explore the possibility of multiple instruments being merged into a single collaboration was an exploration of iOS’s Multipeer Connectivity framework, which allows apps to create a shared peer-to-peer network rather than using a client-server model.

We explored this as an option first because we didn’t have access to a server environment, but we soon realized that this model could be a benefit and unlock the ability for collaborations to occur in places where internet access wasn’t available (like on our regular daily commute in the New York subway system).

Although we initially envisioned a collaboration where our instruments merged in real time, we soon discovered that latency from wireless protocols would make this impossible (with some investigation we learned that our brains are hardwired to perceive even very small delays in aural input).

Instead of treating this limitation as a blocker, we simply used the constraint as an input to our solution and shifted the experience from a real-time collaboration to a near-real-time collaboration based on creating short loops of samples.

Looping was never identified as a feature in our initial concepts, and in a traditional top-down approach a lot would have been invested in the design of a system that assumed real-time collaboration was feasible.

By focusing on our outcomes (fun, collaborative, not professional) rather than a specific output, our bottom-up approach of building the prototype one step at a time allowed us to adapt and change direction as we encountered constraints that were unknown when we started the project.

Screen captures of iterations where we explored adding more keys, simple loop record and playback mechanisms, explored polyphonic pitch-shifting of samples and worked on improving latency.

Screen captures where we moved away from continuous pitch-shifting on a key to an octave of specific pitches on each key, and where we jumped from having all samples visible at once to grouping similar samples into ‘sound packs’ and showing a single pack at a time. An initial game mechanic we explored around this was the random assignment of a sound pack to a player, with no other person able to use the same sound pack in a collaboration.

Screen captures where we explored snapping tap events to a beat to help people stay on time, and a debug screen that allowed us to explore different parameters for the beat-rounding.
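The beat-rounding described above can be sketched as quantizing a tap’s timestamp to the nearest subdivision of a beat. Names and parameters here are illustrative assumptions, not the app’s actual code (the debug screen exposed similar knobs):

```swift
import Foundation

/// Snap a tap timestamp (in seconds) to the nearest beat subdivision.
/// `subdivisionsPerBeat` of 1 snaps to whole beats, 2 to half-beats, etc.
/// (Illustrative sketch of the beat-rounding explored in the prototype.)
func snapToBeat(_ time: TimeInterval, bpm: Double, subdivisionsPerBeat: Double = 1) -> TimeInterval {
    let gridLength = 60.0 / bpm / subdivisionsPerBeat
    return (time / gridLength).rounded() * gridLength
}
```

Tuning `bpm` and `subdivisionsPerBeat` trades forgiveness (loose taps still land on the beat) against expressiveness (deliberate off-beat taps survive).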

A field trip to Chinatown

Because of our bias towards action, we didn’t invest a lot of upfront time at the start of the project with either general research of existing experiences, or benchmarking potential competitors.

That didn’t mean the team discounted the benefits of research; we simply deferred it until a time when we felt it could add the most value.

That time came when as a team we started to question our goal of making the collaborative music experience as fun as possible.

We started to discuss possible game mechanics that could be involved around enjoying music, and to understand this better we took a mid-day trip to a Chinatown arcade to investigate and play video games associated with music.

On this trip we found that many music-based games were so challenging that, while they were fun, they were also not entirely satisfying.

We left without a clear idea of what we wanted to do, but with a clearer idea of experiences that we wanted to avoid.

Rimar playing solo on Dance Dance Revolution because we couldn’t figure out how to start a multi-player game.

Our score when playing against each other on beginner mode.

Limiting loops

In our exploration of switching to a loop-based structure for making music, we’d added a temporary constraint limiting the number of loops at any given time to 5, in a first-in-first-out queue.

We’d added this constraint because our simple loop playback engine was starting to struggle with larger numbers of loops, and for debugging it was more convenient to remove the earliest loop than to enforce a hard stop at 5 loops per collaboration.
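That first-in-first-out behavior is simple to model. A minimal sketch of the constraint (the type and names are illustrative, not taken from the app’s code):

```swift
/// A fixed-capacity first-in-first-out queue of loops: adding a loop beyond
/// capacity evicts the oldest one. (Illustrative sketch of the 5-loop limit.)
struct LoopQueue<Loop> {
    private(set) var loops: [Loop] = []
    let capacity: Int

    init(capacity: Int = 5) {
        self.capacity = capacity
    }

    /// Appends a loop; returns the evicted oldest loop if over capacity.
    @discardableResult
    mutating func add(_ loop: Loop) -> Loop? {
        loops.append(loop)
        guard loops.count > capacity else { return nil }
        return loops.removeFirst()
    }
}
```

Returning the evicted loop makes it easy to drive UI feedback, such as showing which loop will disappear next.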

In our collaborations with this prototype we soon decided that this temporary constraint was actually an interesting game mechanic, one that combined elements of the childhood game telephone with the philosophy of creative aikido as taught by the KaosPilots.

Having decided to embrace this constraint and explore it as a feature we explored more deliberate visual feedback around how many loops were currently playing, who had added each loop, the sound pack each loop used, and which loop would be removed next.

We also used the prototype to simulate more complex game mechanics (like enforcing a turn-by-turn game play) by role playing with different rules using the prototype without having to actually build out those features.

Screen capture of the app with no loops added.

Screen capture showing the app after three loops had been added. Each loop is represented by an icon with the same color of the sound pack that the loop is using, and the initial of the person who added the loop (in early iterations we took the device name, so in this case it was ‘i’ for ‘iPhone Simulator’).

From ‘Noisy’ to ‘Ally-Oop’

With the addition of a loop-based game mechanic the team decided that we’d reached a point where we felt that the most interesting thing we could do next would be to take a pause on our iterative approach, and take a holistic look at the entire experience.

One aspect that we decided we wanted to address was the name of our app. We’d been calling our prototype by the placeholder name ‘noisy’, but decided it was time to choose a real name.

We spent a few hours one afternoon fueled on Halloween candy on a free-association activity to explore possible names that could fit the app.

One branch of thinking led us from ‘loops’ & ‘collaboration’ to basketball, which led us to ‘allyoop’, which then led to the possibility of a color theme within the app inspired from the different basketball team colors.

Pia explored the team color direction which turned out to be a dead end, but we decided to stick with the basketball inspired name.

A section of our free-association showing tap tap as a possible contender for our app name.

Another section of our free-association showing cloop as a possible name (from combining ‘collaborative loop’), and alyoop as our final favorite.

Visual design explorations pinned on our team board with notes from a feedback session.

Switching to a vertical layout

One concept that Pia explored was a screen layout optimized for a vertical form-factor.

From the visual designs, the vertical layout looked like a promising direction, but we all had some concern about four columns being presented in a narrower screen width.

Luckily the prototype made this easy to evaluate. We made a quick iteration where we didn’t change any functionality, but instead adapted the layout for the vertical orientation.

Being able to try the new layout in a new orientation gave us the confidence to move forward with exploring this direction further.

The first iteration that switched to a vertical layout. There were no functional changes from the previous iteration, this was a quick change to see how we felt about a portrait form-factor.

An iteration where we explored adding a visual feedback for a loop length, a control to manually choose a sound pack instead of having one randomly assigned, and the ability to delete individual loops from the current track.

Iterations where we explored different ideas for the icons used, and for the presentation of the individual loops and the loop queue.

New flows

Until this point, the app had been a single-screen experience.

The detection and connection of other nearby devices running the allyoop app had been happening automagically, but we knew that at some point we wanted to add specific UI so that people had more control over this step of the experience.

A recent addition, toggling between a mode where a person could either add a new loop or delete a single loop, got us thinking about additional screens or modes that could make sense to explore.

An iteration that added an initial step allowing people to connect with nearby devices or play solo (for now this just laid out the screen; the elements were non-functional), and a capture-loop mode that allowed people to preview a loop before committing it to the shared track. This iteration also removed all sound packs to start fresh with three new packs that Rimar had been curating.

An iteration that added a loading screen using the confetti brand Pia was working on, and also explored an option to choose from adding 2, 4, or 8 second loops.

An iteration that included more new sound packs from Rimar, and a switch in our philosophy where we decided that a sound pack didn’t need a fixed number of keys, but could have anywhere between 1 and 16 keys arranged in whatever layout made the most sense for that pack.

An iteration that removed the automagical connection to nearby devices in favor of a more deliberate action of either hosting or joining a collaborative session. This iteration also added an option to create a 1-second loop.

An iteration that added in an updated color palette that Pia was creating, and explored an animation to show the tempo of individual loops in the current track.

An iteration that added an animated transition between the loading screen and the options screen, and explored using marco/polo instead of host/join to manage connecting to a collaborative session.

Final steps

Originally our sound packs had been structured so that each pack had the same number of sounds. As Rimar worked on curating the sound packs we realized that it would be a lot more interesting (and easier) if different sound packs had different numbers of sounds, and if the keys were organized in whatever grid made most sense for that sound pack.

So, one of our final steps was curating all the sounds we had created into sensible sound packs: figuring out how the sounds should be arranged in the grid, what each pack should be called, and how the packs should be arranged in relation to each other.

Like many of our activities in this project, we took a very analog approach to this, drawing possible sound combinations on sheets of paper and rearranging them until the order felt correct.

Finally, Pia and I spent an afternoon creating a landing page for the prototype where people in the studio could sideload the prototype onto their device.

Sound pack curation was another analog process where we grouped related sound packs, figured out their mapping to keys in the app, gave the pack a name, and moved them around until the arrangement felt right.

The landing page that Pia and I built for an internal distribution of the app to studio members.

The final iteration

We’d been adding various animations as part of our daily ‘quick changes’ but for the final version we added a lot of refinements.

Compared to the first iteration, we came a long way in a short time.

The final iteration had the following features:

  • 12 packs, each containing between 2 and 20 samples created and curated for the app.
  • A collaborative game-mechanic where a song is built up of 5 individual loops.
  • Ability to connect with nearby people and sync loops playback with each other.
  • An option to record loops either ½, 1, 2, or 4 seconds long.
  • Beat-rounding to lower the barrier of creating loops that play in sync.
  • Visual feedback of the loop you’re creating and the ability to preview it before committing to the shared loop with other people.
  • A custom-built loop engine with low-latency coordination of audio and visual events.
  • Ability to delete individual loops, or the entire song.
  • Visual feedback with subtle animations to lead people through the onboarding experience of creating their first loop.

This was our final iteration. We were done, but not finished: we had a large backlog with a mixture of UI, content, and interaction refinements, some known bugs, and several ambitious major features.

The first iteration.

The final iteration.


The process of making something is an amazing way to learn.
Designers are typically perfectionists, and we’ve trained ourselves to gather as much upfront information about a topic as possible before we actually start building something. While this is a sensible approach, it limits a lot of opportunities for learning, and the opportunities that come with that learning.
Framing our next step as ‘the next thing could be the last thing’ helped focus prioritization.
However, this perspective probably also biased our decisions toward solutions that were quicker or easier to build, and this ultimately shaped the direction we went in.
For example, we de-prioritized some early concepts of a non-key based instruments, and non-sample based interactions for playing instruments. Both of these could have taken our app in a completely different direction.
Framing decisions as questions (“what’s the most interesting thing we can do next?”, “how can we make music more collaborative?”) is a powerful technique.
Framing projects as questions was a powerful design tool in that it kept us focused on outcomes rather than outputs, and encouraged a mindset where we focused on “what’s the best we can do in this time?”, rather than “how long will it take to do this?”.
Using ‘technology as a medium’ takes you in directions you’d probably never imagine on your own.
The tools that we use to make digital products have advanced rapidly in the last decade, and the ease with which proofs-of-concept can be made has greatly improved. We need to learn to take better advantage of the speed that these tools allow as part of our design process.
Not everyone is motivated by the same things. Some people get excited by seeing the progress we’ve made, others get excited about the possibility of where we might be going.
It’s important to understand what motivates a team, and to make sure that the energy from this motivation is directed in a useful direction.
Self-organizing still requires structure.
We took a naive, or perhaps too literal, approach to self-organizing teams and didn’t set up specific expectations for regular team habits and routines (like a regular morning standup).
We worked in a public space, but could have given more specific updates to the studio about our process and progress.
When working on an internal project, your stakeholders aren’t just your internal team, but also the wider company that you’re working in. Internal projects are great opportunities to explore ideas that we’re not ready to use on client projects, but if the wider company doesn’t have an opportunity to learn from the project then a huge learning opportunity is lost.