Project Neon

One of our clients at Made by Fire was a sports company. They were building running insoles that could be paired to a companion app, providing real-time insights into a runner’s technique. For privacy purposes, in this case study I’ll refer to this product as Neon.

I worked alongside our creative director to produce UI for the companion app. This project was challenging due to the complexity of the running data and the rapid nature of delivery. We worked in tandem with another team that was building the hardware, so our deadlines were tight.

My role

UX/UI Designer, User researcher

Business impact

Rapid delivery of interface designs that met the business criteria.

Product impact

Explorations into how to show complex data in clear and meaningful ways, validating design assumptions with users.


Currently, if a runner wants to find out more about their running technique, they need to go into a lab for analysis. This presents a few problems:

  • This is an expensive process that’s only realistically accessible for professionals with funding.
  • It’s not convenient, and doesn’t reflect how they would be running in their usual circumstances.
  • If you wanted to keep a close eye on your progress you’d need to do this frequently. As described above, this isn’t practical.

Our client wanted to build a system that provided a decent amount of running analysis data without needing to go to a lab to get it. They wanted their product to give runners real-time feedback on their technique at an accuracy not possible with existing tools.


Target users

Our client had already done extensive market research and knew exactly who their product was aimed at:

🏃 Avid runners

People who run frequently and invest a significant amount of time and money into the sport. We assumed that the tech’s price point was high enough to only draw people already invested in running.

👟 Kitted up

To use the insoles, the runner needed to have a mobile device, running headphones, and running shoes. We assumed that anyone using Neon already had these things.

🧠 Science minded

People who have a keen interest in the science behind their legs and how their feet affect their running times. We assumed that the data was complex enough that only those with existing knowledge would want to investigate.

💪 Motivated to improve

Neon was designed to help runners improve over a long period of time by tracking their running metrics precisely. We assumed that runners who bought the system would use it frequently and be dedicated enough to keep their targets and settings up to date.

Information architecture

The client asked us to look at the three different stages of a ‘run’.

  • Performance. What happens to the runner during their run and how the app responds to their movements.
  • Analysis. Once the run is finished, what information does the runner want to see? How should it be presented?
  • Improvement. How can the runner set a realistic target that they can aim for over their following sessions?


We spent time sorting the client’s feature requests into those categories, and this formed the overall structure of the app.

We included a fourth category for neutral tasks like user settings and device pairing.

Taking the time to do this gave us a clear structure that helped us know where we could insert new features in the future.


Once we had the categories sorted, we needed to think about the secondary hierarchy. The amount of data Neon provided was dense and we wanted to show it clearly and with context.

As someone dives deeper into a session, they can see finer details about a particular aspect of a run. They could filter by route or individual session, then dive into either their technique or physical effort.

Filtering views in this way would allow us to design screens for those specific queries.
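To make the idea concrete, here's a minimal sketch of that secondary hierarchy as filterable data. The field names and values are hypothetical, not from the actual Neon app: each session is tagged with a route and holds separate technique and effort metrics, so a screen can answer one specific query.

```python
# Hypothetical session records: route tag plus separate
# technique and effort metric groups (illustrative values only).
sessions = [
    {"route": "riverside", "technique": {"pronation_deg": 6.2}, "effort": {"avg_hr": 152}},
    {"route": "riverside", "technique": {"pronation_deg": 7.9}, "effort": {"avg_hr": 160}},
    {"route": "park loop", "technique": {"pronation_deg": 5.4}, "effort": {"avg_hr": 148}},
]

def by_route(sessions, route):
    """All sessions recorded on a given route."""
    return [s for s in sessions if s["route"] == route]

# e.g. a 'technique on the riverside route' screen would query:
riverside = by_route(sessions, "riverside")
print([s["technique"]["pronation_deg"] for s in riverside])  # [6.2, 7.9]
```

Each screen in the app then maps to one such query: a route, a session, and one aspect (technique or effort).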

Representing data

Watch the feet in this video. Pay attention to which parts of the sole hit the treadmill first, and how the rest of the foot rolls into contact. See how the weight transfers from the heel to the ball of the foot as the runner moves forward.

Now how would you represent this information graphically?

This data can probably be plotted and interpreted easily by gait analysts, but we wanted to abstract this information into a format the everyday athlete could understand.

We started by analysing some existing graph types, and seeing how they’d look using the data we were getting from the insoles. Throughout our explorations we were looking for the following:

  • Which graph types show a clear change in a runner’s stance over time, and how drastic that change is.
  • How closely the data being shown maps to the runner’s mental model of the movement of their feet.
  • Whether or not the graph could be nicely ported into a mobile view. This included any interactions.


Throughout the project we designed several graphs for a variety of scenarios. The graph below demonstrates how a runner’s pronation - how the foot rolls from one side to the other as it hits the ground - changes over the course of a run.

The blue dashed lines show a target someone has set. When their foot pronates correctly, the darker coloured lines fall within the goal zone. Along the bottom axis is distance into the run. You can see that the runner’s pronation falters as they get further into a run and they begin to miss their target.
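The underlying check is simple to sketch. The function and data below are a hypothetical illustration, assuming each stride is recorded as a distance into the run and a pronation angle, and the goal zone is a pair of angle bounds:

```python
# Hypothetical sketch of the pronation-target check behind the graph:
# each stride is a (distance_km, pronation_deg) pair, and the goal
# zone is a [target_low, target_high] range of angles.

def pronation_summary(strides, target_low, target_high):
    """Return the fraction of strides landing inside the target
    zone, plus the strides that missed it."""
    in_zone = [s for s in strides if target_low <= s[1] <= target_high]
    misses = [s for s in strides if not (target_low <= s[1] <= target_high)]
    return len(in_zone) / len(strides), misses

# Example: pronation drifts outside a 4-8 degree zone late in the run.
strides = [(0.5, 5.2), (1.0, 6.1), (1.5, 7.8), (2.0, 8.9), (2.5, 9.4)]
share, misses = pronation_summary(strides, target_low=4.0, target_high=8.0)
print(round(share, 2))         # 0.6
print([d for d, _ in misses])  # [2.0, 2.5]
```

The misses cluster at the end of the run, which is exactly the "pronation falters with distance" pattern the graph was designed to surface.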


This next graph shows a runner’s speed over the course of a run. The pink line is the session being analysed and the blue area shows the average over previous sessions. The app could automatically generate insights on where the runner had performed better or worse than average.
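A rough sketch of how such insights could be generated, assuming speeds are sampled per kilometre (the data, function name, and threshold are all hypothetical): compare this session's speed at each kilometre against the mean of previous sessions, and report the deltas big enough to be worth mentioning.

```python
# Hypothetical insight generation: flag kilometres where this run's
# speed deviates from the average of past sessions by more than a
# threshold (all figures illustrative).

def speed_insights(session, history, threshold=0.3):
    """session: speeds (km/h) per kilometre for this run.
    history: previous sessions, each the same shape.
    Returns (km, delta) pairs for notable deviations."""
    insights = []
    for km, speed in enumerate(session, start=1):
        avg = sum(run[km - 1] for run in history) / len(history)
        delta = speed - avg
        if abs(delta) > threshold:
            insights.append((km, round(delta, 2)))
    return insights

history = [[10.0, 10.2, 9.8], [10.4, 10.0, 9.6]]
session = [10.8, 10.1, 9.1]
print(speed_insights(session, history))  # [(1, 0.6), (3, -0.6)]
```

Each flagged pair maps directly to a sentence in the UI, e.g. "You were 0.6 km/h faster than usual in your first kilometre."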



Once we'd nailed down the structure and learned more about the data, we were able to start sketching out individual screens. Depending on the runner's context, they would want to view the data from their running sessions in different ways. Rapid wireframing like this on paper helped us to converge on ideas much more quickly.


UI Design

The biggest challenge we faced during the UI design was showing enough relevant and useful data without making the interface too dense. We also had to balance the colours used for the various metrics (for example, the different heart rate zones) without clashing with the core branding.




User testing

Over the course of the project, we conducted user testing sessions in our offices. We recruited athletes that fit the persona description and asked them to take part in a variety of tests.


User interviews

Surprisingly, none of the design team members were professional runners! We’d made some assumptions about runners that we wanted to validate, and we did this by asking runners questions about their habits and lifestyles. Anything new we discovered was added into our personas.


Card sorting

We initially referenced other running products to see what runners wanted to measure. However, we thought that asking the professional runners to create their ‘ideal feedback’ would show us what would be most suitable.


Prototype testing

Our developer team had built interfaces for some of the core features. We asked the runners to perform simple tasks (e.g. "How would you start a running session?") to see how they used and reacted to the design.


Usability report

After testing, we compiled a report. This included detailed descriptions as well as quotes and highlights, making it easy for anyone to scan.

The report also included suggestions for next steps in the design iteration, and gave our clients an idea of how their product was being perceived by their target market.


Although this project was ultimately picked up by another agency, our work formed the foundation of the product as it exists today. I’m proud that our small team was able to deliver this amount of work given the timeline and resource constraints, and I picked up a lot of skills. This project also taught me the importance of creating a strong product architecture; adding in new feature requests would have been impossible if we hadn’t thought about the overall structure before starting on the interface designs.