I Strapped Sensors to My Legs to See If I Run Like Kipchoge

Brian Stever

2023 · Expo, React Native, BLE, ESP32, motion data

Abstract. Gaitr was an unfinished hardware/software system intended to compare a runner's live movement against a reference gait modeled from Eliud Kipchoge footage. The idea made it further than a sketch: there is a full Expo/React Native app with BLE device pairing, live graphs, a performance report flow, and a bundled kipchoge_gait_data.json reference file. It did not become a polished product, but it did become a very efficient way to learn that “hardware plus biomechanics plus mobile plus wireless” is not actually one project.

1. Research Question

I'd started running that summer, badly but consistently, and watched enough Kipchoge footage to develop opinions about stride mechanics I had absolutely no right to hold. The original question was simple to state and unwise to pursue: could I measure my running form in real time and compare it to Eliud Kipchoge's? This sounds like the sort of idea that should end with a notebook sketch and a quiet laugh. Instead it became my first serious hardware project.

The appeal was obvious. Running form is difficult to assess while running, and expert coaching is expensive. If a wearable system could capture enough body motion to say, in real time, “your stride is asymmetric” or “your hips are collapsing,” then the phone in your pocket becomes a very opinionated coach.

The problem is that this requires simultaneous work in electronics, Bluetooth communication, biomechanics, mobile development, and motion analysis. I had very little prior experience in any of those domains, which in retrospect explains why the idea felt so clean and why my confidence was so wildly miscalibrated.

2. Proposed System

The plan was to strap seven BLE-enabled ESP32 sensor nodes to key body locations, stream accelerometer data to a React Native app, and compare that stream against a reference representation of Kipchoge's gait extracted from video and modeled in Blender. The app would then calculate a similarity score and deliver live feedback.

I called that score the Kipchoge Similarity Index. It is difficult to overstate how much confidence is contained in the act of naming a metric before you have fully built the pipeline that produces it.
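The post never spells out how the Kipchoge Similarity Index was computed, so here is a minimal sketch of one plausible version: sample joint angles at matching points in the stride cycle, take the mean absolute deviation from the reference, and map that onto a 0–100 score. The field names, the three joints, and the 30-degree scale are all illustrative assumptions, not the app's actual formula.

```typescript
// Hypothetical "Kipchoge Similarity Index": joint angles (degrees) sampled at
// matching phases of the stride cycle, scored by mean absolute deviation.
type StrideSample = { hip: number; knee: number; ankle: number };

function similarityIndex(live: StrideSample[], reference: StrideSample[]): number {
  const n = Math.min(live.length, reference.length);
  if (n === 0) return 0;
  let totalDeviation = 0;
  for (let i = 0; i < n; i++) {
    totalDeviation +=
      Math.abs(live[i].hip - reference[i].hip) +
      Math.abs(live[i].knee - reference[i].knee) +
      Math.abs(live[i].ankle - reference[i].ankle);
  }
  const meanDeviation = totalDeviation / (n * 3); // degrees per joint per sample
  // 0 degrees off on average -> 100; 30 or more degrees off -> 0 (assumed scale).
  return Math.max(0, 100 * (1 - meanDeviation / 30));
}
```

Even this toy version makes the naming problem visible: the score is only meaningful once you have decided which joints, which stride phases, and what counts as "close enough."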

The part I'm still proudest of is the reference data. I imported race footage of Kipchoge into Blender, stabilized the video, ran motion tracking on his body, and exported the joint positions as a dataset that could be compared against in real time. I had never used Blender for anything before this. The learning curve was vertical, but by the end I had a usable representation of what the best marathon runner on earth looks like mid-stride, stored as JSON on a phone. That felt like a real thing.
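The actual schema of kipchoge_gait_data.json is not documented here, so this is a guess at its rough shape: one record per video frame carrying the 2D joint positions that Blender's motion tracker can produce, plus a small loader that drops empty frames and sorts by time. Every field name is an assumption.

```typescript
// Assumed shape of the Blender-derived reference data (illustrative only).
interface JointFrame {
  t: number;                                // seconds into the stride
  joints: Record<string, [number, number]>; // joint name -> (x, y) in image space
}

function parseGaitData(json: string): JointFrame[] {
  const frames = JSON.parse(json) as JointFrame[];
  // Keep only frames that actually carry tracked joints, in time order.
  return frames
    .filter((f) => f.joints && Object.keys(f.joints).length > 0)
    .sort((a, b) => a.t - b.t);
}
```

Whatever the real schema was, some loader like this has to sit between the bundled JSON and the comparison code, because motion-tracking exports are rarely clean end to end.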

[Figure omitted. Left panel, "Live Sensor Feed": sensor placements at the pelvis, left/right hips, left/right shins, and left/right feet. Right panel, "Reference Model": the reference skeleton annotated with joint angles (171°, 158°, 163°).]

Figure 1. Conceptual visualization of the wearable sensor layout and reference skeleton approach. The left figure represents live instrumented motion; the right figure represents the reference model derived from video analysis.

3. What Actually Got Built

I tend to remember Gaitr as a wild concept with a few sensors attached. Going back through the code says otherwise. There is a device connection flow, a BLE manager, dedicated screens for real-time analysis and performance reporting, charting libraries, and a serialized reference dataset for Kipchoge. In other words: enough structure that the app had started becoming a system instead of a mood board.

The hardware side meant buying soldering equipment, a pile of ESP32 boards, MPU6050 accelerometers, a glue gun, velcro straps, and 3D printing cases to hold everything together. My desk looked like a RadioShack had a minor incident. Getting the accelerometers reading on the ESP32s went surprisingly well. That part clicked fast. Connecting them to the phone over Bluetooth was another story. BLE pairing, reconnection, and data throughput were all harder than I expected, and most of the early debugging sessions ended with me staring at a serial monitor wondering why the phone refused to acknowledge a device that was clearly broadcasting.
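Reconnection turned out to need a policy rather than a retry loop. As a sketch of the kind of logic a BLE manager ends up carrying, here is a capped exponential backoff for reconnect attempts; the base delay and cap are made-up values, not what the app actually used, and the real react-native-ble-plx wiring around it is omitted.

```typescript
// Capped exponential backoff for BLE reconnect attempts (illustrative values).
// attempt 0 -> 500 ms, attempt 1 -> 1 s, ... capped at 8 s.
function reconnectDelayMs(attempt: number, baseMs = 500, capMs = 8000): number {
  const delay = baseMs * Math.pow(2, attempt);
  return Math.min(delay, capMs);
}
```

The point of the cap is practical: a sensor node that drops out mid-run usually comes back within seconds, so the app should keep trying at a steady ceiling instead of backing off forever.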

Rather than building a voice-enabled feedback system, I went with something simpler: beeps. Different tones signaled different states: connected, reading, out of range. It was crude, but it meant I could get audible feedback while actually running without needing to look at a screen. The beeps were enough to know the system was alive. Whether they were enough to actually coach you was a different question.
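The whole audio layer reduces to a lookup from connection state to tone. This sketch shows that mapping; the frequencies are invented for illustration, and the actual tone playback (via whatever audio API the app used) is left out.

```typescript
// State-to-tone mapping for audible feedback while running (frequencies assumed).
type SensorState = "connected" | "reading" | "outOfRange";

const toneHz: Record<SensorState, number> = {
  connected: 880,  // high beep: node paired
  reading: 440,    // mid beep: data flowing
  outOfRange: 220, // low beep: signal lost
};

function toneFor(state: SensorState): number {
  return toneHz[state];
}
```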

Table 1. System modules.

Module                     | Purpose
BLE device manager         | Discover, pair, and track sensor nodes from the mobile app
Kipchoge reference data    | Provide a machine-readable comparison target derived from study footage
Real-time analysis screens | Visualize live motion streams and session state while running
Performance report flow    | Compress a messy movement comparison into one alarming but motivating summary

The stack itself is revealing. The app uses Expo/React Native, react-native-ble-plx for device communication, and multiple charting libraries because apparently I was not content with one graphing problem at a time. That all tracks. When you are building a prototype that tries to make invisible body motion legible, visualization stops being decoration and becomes half the product.

4. What Actually Happened

I did not finish the full system. I did, however, get far enough to validate the premise that sensor data could be captured and moved through the early stages of the stack. That matters. A large project often dies in the gap between idea and first believable signal. Gaitr at least made it across that gap.

More importantly, the project changed what I considered buildable. Before this, microcontrollers and wearable sensors felt like a separate category of engineering reserved for people with actual hardware benches and reasonable plans. After this, they felt difficult but accessible, which is much more useful.

The remaining gap is where things get honest. I don't know biomechanics. Comparing my gait to Kipchoge's sounds straightforward until you start thinking about all the things that actually differ: his height versus mine, cadence differences, stride length, hip drop, the fact that his body has been optimized over decades of elite training and mine has been optimized for sitting in a desk chair. A beep that says “you don't match” is not useful when the mismatch is partly anatomical. The feedback system needed to be smarter than what I had, and at the time I didn't have the tools to make it smarter.
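One standard way to take anatomy partly out of the comparison is to make gait metrics dimensionless before comparing them, for example stride length divided by leg length. This sketch shows that idea with invented field names; it is nowhere near a biomechanics model, just the shape of the normalization the beeps were missing.

```typescript
// Dimensionless gait comparison: normalize stride by leg length before
// comparing runners of different heights (field names are illustrative).
interface RunnerProfile {
  legLengthM: number;    // hip-to-floor, meters
  strideLengthM: number; // meters per stride
  cadenceSpm: number;    // steps per minute
}

function normalizedGap(me: RunnerProfile, ref: RunnerProfile) {
  const myRatio = me.strideLengthM / me.legLengthM;
  const refRatio = ref.strideLengthM / ref.legLengthM;
  return {
    // Positive: the reference runner strides longer relative to their legs.
    strideRatioGap: refRatio - myRatio,
    cadenceGap: ref.cadenceSpm - me.cadenceSpm,
  };
}
```

Even this crude normalization changes the feedback from "you don't match Kipchoge" to "your stride is short for your legs," which is at least a sentence a coach might say.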

5. Reflection

Gaitr is unfinished. I don't think it's a failure, but I also don't want to oversell it. What I got out of it was less a product and more a set of skills I didn't have before: soldering, sensor handling, Bluetooth constraints, and how to think about movement as data instead of just video. I was making consistent progress the whole time I was working on it, and for these projects, the learnings along the way are the point.

I do plan on coming back to this. The piece I was missing, the ability to interpret noisy biomechanical data in a way that accounts for individual differences, is now something AI can actually help with. When I started Gaitr, I had beeps and a JSON file. Now there are models that can reason about movement patterns, normalize for body proportions, and generate feedback that goes beyond “you don't match.” The hardware works. The reference data exists. The gap that stopped me has gotten meaningfully smaller.

Also, if you ever need a reminder that you are not Eliud Kipchoge, I can recommend several cheaper options than building a multi-sensor motion analysis system from scratch.