When teams often spend the first two weeks of a project simply orienting themselves within the problem space, how can a team structure its work with a bias towards action, focusing on learning through building rather than learning and then building?
Allyoop was an internal project where we experimented with our approach to product design.
In this project we made the intentional decision to move away from upfront planning and artifacts-as-deliverables, towards just-in-time decision making and a view of the product itself as the deliverable.
In three weeks, our team of three generalist designers (Pia, Rimar, and I) went from a literal blank slate to internally releasing an iOS app that makes music making collaborative, fun, and accessible.
Unlike most projects, where a client briefs us with an expected goal, our team had to choose its own direction.
The first step we took was to identify the outcomes we wanted from the project. As a group we'd already come together around the high-level idea of doing something involving sound or music, and we agreed on three outcomes that became the mantras guiding our decisions on direction: fun, collaborative, non-professional.
Finally, we discussed the personal and professional goals we each hoped to achieve through the project.
Next we pitched ideas to each other and picked the single idea that we thought offered the best opportunities for both our project and personal goals.
On this first day we also had a meta-discussion about the structure of the first two weeks (the next 9 days): whether to time-box explorations of three ideas (3 days each) or focus on a single concept, and whether to build a native iOS prototype (which I had more experience with) or an HTML5 prototype (which Pia had more experience with).
On the second day we met to figure out our first step.
Our concept was quite complex: multiple instruments, instrument design, streaming & merging of instruments, and visualization of the merged tracks were all components of the initial concept that was pitched.
A traditional approach would treat this initial concept as the final outcome: an early step would be a top-down sketch of a high-level view of the entire experience to see how everything fit together holistically, followed by breaking that design into component parts for detailed exploration and refinement.
We wanted to avoid this approach because we could easily have spent the entire project on this activity alone and had nothing tangible to show for it.
Instead, we decided on a bottom-up approach where we'd build something simple and decide what to do next one step at a time.
To figure out our first step, we sketched ideas for a first iteration that would both capture the essence of our concept and be simple to build.
This exercise stripped away many components; what remained was a simple key-based instrument (like a piano), except that instead of playing a note, each key would play a short music file.
The team divided up the work: Rimar would create or collect four music files, Pia would start looking at how to lay out the four keys, and I would figure out how to play a music file on an iOS device.
The project was starting to feel quite different from other projects. It was interesting that we were already prioritizing the sourcing of tangible content (music files), even if it was only to be used as a placeholder, and (for the time being) not worrying about the big picture of what we were building.
The simplicity of our initial concept meant that we had a working prototype before lunch. Rimar had sourced some sound samples from an earlier project, Pia had reused a color palette from our internal brand guidelines, and I had cobbled together a simple prototype that played the samples.
With something tangible in our hands to react to, three things were immediately obvious: the samples needed to be more interesting, the keys needed visual feedback when tapped, and there was a noticeable delay between tapping a key and hearing its sample.
Rimar went off to create some more interesting samples to use. We decided that we wanted to quickly explore both some more traditional instrument sounds (like piano keys) and also more quirky ideas (like snippets from Dolly Parton songs).
Pia and I quickly added a highlight to show when a key was tapped, and investigated the delay more.
We quickly discovered that the default trigger for a button event fires when the touch ends, so we changed the prototype to play the sample as soon as the touch was first detected.
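The change itself is tiny. Here's a minimal sketch of the before/after behavior (in Python for brevity, with illustrative names; in the real UIKit app this is the difference between wiring a button action to `.touchUpInside`, which fires on release, and `.touchDown`, which fires on first contact):

```python
# Minimal model of the trigger fix, sketched in Python for brevity (the app
# itself was native iOS). A key configured to trigger on "ended" only plays
# when the finger lifts; triggering on "began" plays the instant the touch
# is detected.

class Key:
    def __init__(self, sample, trigger_on="ended"):
        self.sample = sample
        self.trigger_on = trigger_on  # the default button behavior: fire on release
        self.played = []

    def handle(self, phase):
        # Play the sample only when the touch reaches the configured phase.
        if phase == self.trigger_on:
            self.played.append(self.sample)

slow_key = Key("kick.wav")                      # triggers on release
slow_key.handle("began")                        # nothing plays yet
slow_key.handle("ended")                        # plays, but feels delayed

fast_key = Key("kick.wav", trigger_on="began")
fast_key.handle("began")                        # plays immediately on touch-down
```

The perceived latency disappears not because the audio got faster, but because playback starts half a gesture earlier.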
This was an eye-opening moment for me because on previous projects, where design and development sprints are staggered, I'd been frustrated when small but important details (like when a sample should be triggered) were overlooked in documentation and, through no fault of the engineering teams, implemented inconsistently.
The alternative was obvious: don't try to document to precision how something should work; instead, shorten the time it takes to complete a design/development cycle and add what you'd overlooked in the next iteration.
This felt like an important step in the project because, despite the simplicity of the prototype, it was magical to have something running on all our phones, and it felt like we'd built both momentum and ownership of what we were making.
It also set the expectation for how we’d work on this project: build something, use it, react to what we’d made, and figure out the next step to take.
Another benefit of our prototype-first approach, one that wasn't obvious until we'd started, was that the prototype could be used to bootstrap the exploration of future features and to get continuous, ongoing feedback from a wider group of people.
Once our prototype was updated with custom-built samples, and we'd fixed the latency between hitting a key and hearing a sample, we set up our first collaboration.
Although we ultimately wanted to merge the output from each device and play it from a common device (e.g. an Apple TV), we could simulate this experience simply by turning the volume up to max and playing together, which we did for our first collaboration with some people recruited from outside the project team.
The feedback from this first collaboration was so useful that we made it a regular part of our day: an early-morning team jam using the latest prototype, plus ongoing evaluations from people within the studio.
Another benefit of this approach was the use of our prototype across a range of device sizes, from an iPhone 4 to the latest iPads. This constant use of the prototype across different devices helped us shift our mindset away from designing for a fixed viewport to thinking about adaptive layouts.
We started each day with the habit of a morning jam session where we collaborated with the latest prototype. Then with a fresh experience in mind we discussed ideas about the direction we wanted to go in the future, and the features that would be needed to support that direction.
Every idea went onto a post-it note and into our backlog, which was simply a big piece of paper with ideas loosely ordered: easier ideas at the top, harder ideas near the bottom.
Then we made two decisions. First, what was the next major feature we would start adding today? As the team prototyper I had some idea of how long a feature might take, but we prioritized that next feature with the understanding that it might take the entire rest of the project to implement.
Second, we talked about what mini changes we could make that would only take a short amount of time to complete (for example, updating or adding sound files, or simple UI layout tweaks).
The rest of the day was spent working towards those major changes, and we ended the day with another jam session trying out any new features that we’d added.
In addition to our 'backlog and next step' I also kept a team diary, which again was just a big sheet of paper where we noted the decisions and progress we'd made each day.
We started each morning with a collaboration using the latest iteration of the prototype and asking ourselves “what’s the most interesting thing we can do next?”.
In addition, I asked the team to frame this question with the assumption that whatever we decided to do next could potentially take all the time we had left on the project, so each next step could also be our last.
This was partially because as the team prototyper I didn’t have the ability to accurately estimate how long it would take to build something, but also because I wanted to encourage the team to make tough decisions about prioritizing what was the most important thing to work on next.
What can't be seen from screen captures of many early iterations of the prototype is that the next step we took often wasn't a large visual change; instead it focused on how the prototype behaved rather than how it looked.
Areas that we explored in these early prototypes included peer-to-peer connectivity, audio latency, and loop-based collaboration.
An early step we took to explore merging multiple instruments into a single collaboration was an investigation of iOS's MultipeerConnectivity framework, which allows apps to create a shared peer-to-peer network rather than using a client-server model.
We explored this option first because we didn't have access to a server environment, but we soon realized that this model could be a benefit in its own right, unlocking collaborations in places where internet access wasn't available (like our regular daily commute on the New York subway).
Although we initially envisioned a collaboration where our instruments merged in real time, we soon discovered that latency from wireless protocols would make this impossible (with some investigation we learned that our brains are hardwired to perceive even very small delays in aural input).
Instead of treating this limitation as a blocker, we used it as an input into our solution and shifted the experience from real-time collaboration to near-real-time collaboration based on creating short loops of samples.
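One hedged sketch of why loop-based collaboration sidesteps latency (the names and numbers here are illustrative, not taken from the project): rather than mixing live audio streams, each device can schedule an incoming loop to start at the next bar boundary, so a few hundred milliseconds of network delay never lands mid-phrase.

```python
import math

def next_bar_start(now, bar_length):
    """Round the current transport time up to the next bar boundary."""
    return math.ceil(now / bar_length) * bar_length

# A loop that arrives 0.3 s into a 2 s bar simply waits until the bar
# turns over, so every device starts it in sync at t = 6.0 s.
next_bar_start(4.3, 2.0)  # -> 6.0
```

Because every device quantizes to the same grid, the collaboration stays musically coherent even though no two devices hear a new loop at exactly the same moment.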
Looping was never identified as a feature in our initial concepts, and in a traditional top-down approach a lot would have been invested in the design of a system that assumed real-time collaboration was feasible.
By focusing on our outcomes (fun, collaborative, non-professional) rather than a specific output, our bottom-up approach of building the prototype one step at a time allowed us to adapt and change direction as we encountered constraints that were unknown when we started the project.
Because of our bias towards action, we didn't invest a lot of upfront time in either general research of existing experiences or benchmarking potential competitors.
That didn't mean the team discounted the benefits of research; we simply deferred it until we felt it could add the most value.
That time came when, as a team, we started to question our goal of making the collaborative music experience as fun as possible.
We started to discuss possible game mechanics that could be involved around enjoying music, and to understand this better we took a mid-day trip to a Chinatown arcade to investigate and play video games associated with music.
On this trip we found that many music-based games were so challenging that, while they were fun, they were not entirely satisfying.
We left without a clear idea of what we wanted to do, but with a clearer idea of the experiences we wanted to avoid.
In our exploration of a loop-based structure for making music, we'd added a temporary constraint that limited the number of loops playing at any given time to five, held in a first-in, first-out queue.
We'd added this constraint because our simple loop-playback engine was starting to struggle with larger numbers of loops, and for debugging it was more convenient to remove the earliest loop than to enforce a hard limit of five loops per collaboration.
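The behavior of that constraint can be sketched as a capped first-in-first-out queue (a hypothetical reconstruction; the prototype's internals aren't described here):

```python
from collections import deque

class LoopQueue:
    """Holds at most `capacity` loops; adding one more evicts the oldest."""

    def __init__(self, capacity=5):
        # A deque with maxlen silently drops the oldest entry when full.
        self.loops = deque(maxlen=capacity)

    def add(self, loop):
        # Report which loop (if any) is about to drop out, telephone-style.
        evicted = self.loops[0] if len(self.loops) == self.loops.maxlen else None
        self.loops.append(loop)
        return evicted

queue = LoopQueue()
for i in range(1, 6):
    queue.add(f"loop{i}")
evicted = queue.add("loop6")  # the first loop drops out to make room
```

The eviction, originally a debugging convenience, is exactly what made the collaboration feel like a game of telephone: the oldest contribution quietly disappears as new ones arrive.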
In our collaborations with this prototype we soon decided that this temporary constraint was actually an interesting game mechanic, one that combined elements of the childhood game telephone with the philosophy of creative aikido as taught by the KaosPilots.
Having decided to embrace this constraint and explore it as a feature, we added more deliberate visual feedback around how many loops were currently playing, who had added each loop, the sound pack each loop used, and which loop would be removed next.
We also used the prototype to simulate more complex game mechanics (like enforcing turn-by-turn gameplay) by role-playing different rules with the prototype, without having to actually build out those features.
With the addition of a loop-based game mechanic, the team decided we'd reached a point where the most interesting thing we could do next was to pause our iterative approach and take a holistic look at the entire experience.
One aspect we decided to address was the name of our app. We'd been calling the prototype by the placeholder name 'noisy' but decided it was time to choose a real name.
We spent a few hours one afternoon, fueled by Halloween candy, on a free-association exercise to explore possible names for the app.
One branch of thinking led us from ‘loops’ & ‘collaboration’ to basketball, which led us to ‘allyoop’, which then led to the possibility of a color theme within the app inspired from the different basketball team colors.
Pia explored the team color direction which turned out to be a dead end, but we decided to stick with the basketball inspired name.
One concept that Pia explored was a screen layout optimized for a vertical form factor.
From the visual designs, the vertical layout looked like a promising direction, but we all had some concern about presenting four columns in the narrower screen width.
Luckily the prototype made this easy to evaluate. We made a quick iteration where we didn’t change any functionality, but instead adapted the layout for the vertical orientation.
Being able to try the new layout in a new orientation gave us the confidence to move forward with exploring this direction further.
Until recently the app had been an experience that occurred on a single screen.
The detection and connection of other nearby devices running the allyoop app had been happening automagically, but we knew that at some point we wanted to add a dedicated UI so that people had more control over this step of the experience.
A recent addition, a toggle between a mode for adding a new loop and a mode for deleting a single loop, got us thinking about additional screens or modes that could make sense to explore.
Originally our sound packs had been structured so that each pack had the same number of sounds. As Rimar worked on curating the sound packs, we realized it would be a lot more interesting (and easier) if different packs had different numbers of sounds, and if the keys were organized in whatever grid made the most sense for that pack.
So, one of our final steps was curating all the sounds we had created into sensible sound packs: figuring out how the sounds should be arranged in the grid, what each pack should be called, and how the packs should be arranged in relation to each other.
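The looser pack structure is straightforward to model (a sketch with made-up pack contents; the real packs and names came from Rimar's curation): each pack carries its own sounds and its own column count, and the row count simply follows.

```python
import math

class SoundPack:
    """A pack of sounds laid out in whatever grid suits it."""

    def __init__(self, name, sounds, columns):
        self.name = name
        self.sounds = sounds
        self.columns = columns

    @property
    def rows(self):
        # Round up so a partial last row still gets displayed.
        return math.ceil(len(self.sounds) / self.columns)

drums = SoundPack("Drums", ["kick", "snare", "hat", "clap"], columns=2)  # 2x2 grid
vocal = SoundPack("Vocals", ["ooh", "aah", "hey"], columns=3)            # one row of 3
```

Letting the grid shape vary per pack meant the curation question became purely editorial: which sounds belong together, and in what arrangement do they feel most playable.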
Like many of our activities in this project, we took a very analog approach, drawing possible sound combinations on sheets of paper and rearranging them until the order felt correct.
Finally, Pia and I spent an afternoon creating a landing page for the prototype where people in the studio could sideload the prototype onto their device.
We’d been adding various animations as part of our daily ‘quick changes’ but for the final version we added a lot of refinements.
Compared to the first iteration we came a long way in a short time:
The final iteration had the following features:
This was our final iteration. We were done, but not finished: we still had a large backlog with a mixture of UI, content, and interaction refinements, some known bugs, and several ambitious major features.