Comparing Tracking Methods


Category: Experimentation, TouchDesigner

Exploring Motion Tracking Methods

To determine the best approach for my project, I researched various motion tracking techniques and their applications in interactive installations:

| Tracking Method | Technology Used | Pros | Cons |
| --- | --- | --- | --- |
| Webcam-based tracking | Difference tracking (CV) | Simple to set up; requires no extra hardware | Inaccurate; highly dependent on lighting |
| AI-powered tracking | Mediapipe (Google) | Recognizes body parts and gestures; cross-platform | Requires ML integration; sometimes slow |
| Kinect tracking | Depth sensors (Microsoft) | Accurate body recognition; 3D depth tracking | Requires Windows; setup complexity |

I also studied existing interactive installations for inspiration, focusing on how artists and designers create compelling experiences using body movement.

Key Insights & Benchmarks

  • TouchDesigner is widely used for interactive installations, often in combination with depth sensors or AI-powered tracking.
  • Lighting conditions heavily affect motion tracking; natural light changes can disrupt webcam-based tracking.
  • The choice of tracking method should match the intended interaction – for full-body movement, Kinect is superior, while for hand gestures, AI tracking might be more effective.



3. Comparing Three Webcam Body-Tracking Techniques

1. Difference-based tracking (CV movement detection)

2. AI-based body tracking → Mediapipe

3. Body Track CHOP-based tracking

Exploring Kinect’s Potential

Given these challenges, I’ve started researching Kinect as an alternative. The Kinect’s built-in infrared sensor and body-tracking capabilities make it far superior for my project’s needs, especially in a low-light setting with a projector.

However, Kinect’s body tracking is only supported on Windows, which means I’ll need access to a Windows machine to experiment further.

Next Steps

  • Continue exploring TouchDesigner’s body tracking features with the webcam.
  • Test Kinect-based tracking to compare accuracy and feasibility for my final installation.
  • Investigate projection mapping techniques that complement motion tracking.

Reflection

This phase of experimentation was eye-opening. While the webcam allowed for quick tests, its limitations reaffirmed that I need to explore other technologies, get hold of a Windows-based computer, and possibly an external sensor such as the Kinect. The coming weeks will focus on bridging this gap and, hopefully, building a prototype capable of functioning in real-world conditions.

Experiment 2: Movement-Based Tracking (Webcam)

Goal: Explore motion tracking without additional hardware.

  • Implemented real-time movement detection.
  • Adjusted thresholds to capture only significant movements (a minimal sketch follows below).
  • Findings: Highly sensitive to light changes, producing unstable results.
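
Since the detailed breakdown is still to come, here is a minimal sketch of what this kind of difference tracking looks like in plain Python with OpenCV (the actual experiment ran inside TouchDesigner). The camera index, the pixel threshold of 25, and the 2% motion cutoff are illustrative assumptions, not my final settings.

```python
# Minimal frame-differencing sketch in OpenCV; all values are illustrative.
import cv2

cap = cv2.VideoCapture(0)                      # assumed default webcam
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)        # per-pixel change vs. last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    motion = cv2.countNonZero(mask) / mask.size
    if motion > 0.02:                          # ignore minor flicker/noise
        print(f"significant movement: {motion:.3f}")
    prev_gray = gray
    cv2.imshow("motion mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The threshold line is exactly where the lighting sensitivity bites: a passing cloud or a projector flicker changes enough pixels to cross any fixed threshold, which matches the unstable results above.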

(Detailed breakdown to be added in a separate post)

Experiment 3: Body Tracking with AI (Mediapipe)

Goal: Test AI-based tracking for detecting body parts.

  • Integrated Google’s Mediapipe with TouchDesigner.
  • Experimented with pose estimation for interaction control (see the sketch after this list).
  • Findings: More stable than webcam tracking but required optimization.
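
For anyone who wants to try the Mediapipe side on its own, a minimal standalone sketch of the pose-estimation step is below; my actual setup fed this through TouchDesigner, and the choice of the right wrist as the tracked landmark is just an illustrative assumption.

```python
# Standalone Mediapipe pose-estimation sketch (the TouchDesigner
# integration is covered in the separate post). Landmark choice is
# illustrative, not my project's actual control mapping.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)

with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mediapipe expects RGB; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            wrist = results.pose_landmarks.landmark[
                mp_pose.PoseLandmark.RIGHT_WRIST]
            # Normalized 0..1 coordinates, usable as control signals.
            print(f"right wrist: x={wrist.x:.2f} y={wrist.y:.2f}")

cap.release()
```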

(Detailed breakdown to be added in a separate post)

Experiment 4: Kinect-Based Tracking

Goal: Test depth-based tracking for accurate body recognition.

  • Used Microsoft Kinect v2 with TouchDesigner.
  • Mapped depth data to interactive visuals (see the sketch after this list).
  • Findings: Provided the best accuracy but required extra hardware setup.
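
Inside TouchDesigner, the Kinect CHOP exposes each tracked joint as a channel, so mapping skeleton data to visuals is largely a matter of wiring channels to parameters. A hedged sketch of doing that from an Execute DAT is below; the operator names ('kinect1', 'level1') and the channel name follow TouchDesigner's default Kinect CHOP naming, but they are assumptions rather than names from my actual network.

```python
# TouchDesigner Execute DAT sketch: read a Kinect CHOP joint each frame
# and drive a TOP parameter. 'kinect1', 'level1', and the channel name
# are assumed defaults, not confirmed names from my network.
def onFrameStart(frame):
    kinect = op('kinect1')                   # Kinect CHOP
    ty = kinect.chan('p1/hand_l:ty')         # player 1, left-hand height
    if ty is not None:
        # Raise the hand to fade a visual layer in, lower it to fade out.
        op('level1').par.opacity = max(0.0, min(1.0, (ty.eval() + 1) / 2))
    return
```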

(Detailed breakdown to be added in a separate post)


4. Challenges & Adaptations

While experimenting, I faced multiple challenges that shaped my learning process:

Technical Challenges

  • Webcam tracking was unreliable due to sensitivity to lighting changes.
  • AI tracking required high processing power, slowing down real-time interactions.
  • Kinect setup was complex, as it required Windows and external drivers.

Problem-Solving Approach

  • Switched to Kinect tracking for accuracy after the webcam methods proved unreliable.
  • Adjusted frame rate and smoothing parameters for real-time AI tracking (see the smoothing sketch after this list).
  • Developed a hybrid solution where different tracking methods are used depending on the environment.
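
To make the smoothing adjustment concrete: the idea boils down to low-pass filtering the raw tracking values before they drive any visuals. A minimal exponential-moving-average sketch is below; the alpha of 0.2 is an assumed starting point, not a tuned value.

```python
# Minimal exponential-moving-average smoother for jittery tracking values.
# alpha is an assumption: lower means smoother output but more lag.
class EMASmoother:
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.value = None

    def update(self, sample):
        if self.value is None:
            self.value = sample       # first sample initializes the filter
        else:
            self.value = self.alpha * sample + (1 - self.alpha) * self.value
        return self.value

smoother = EMASmoother()
for raw in [0.50, 0.52, 0.90, 0.51, 0.49]:    # jitter spike at 0.90
    print(f"raw={raw:.2f}  smoothed={smoother.update(raw):.2f}")
```

The same trade-off applies inside TouchDesigner itself, where a Filter or Lag CHOP plays this role.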



5. Current Prototype & Interaction

Final Setup

  • Projection-based interactive installation.
  • Users trigger visual changes with body movement.
  • Smooth real-time tracking with Kinect and Mediapipe.

User Interaction

  • Movement reveals hidden layers in the visuals (sketched below).
  • Users can manipulate generative elements by changing their position.
  • Encourages exploration and playfulness, aligning with my research question.
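
As a concrete illustration of the "movement reveals hidden layers" behaviour in the list above: the smoothed motion signal can drive a cross-fade between the base and hidden visuals. A sketch using a CHOP Execute DAT follows; the 'motion' channel and the Cross TOP name ('cross1') are hypothetical examples, not my actual operator names.

```python
# TouchDesigner CHOP Execute DAT sketch: when the smoothed 'motion'
# channel changes, cross-fade the hidden layer in. The channel and the
# Cross TOP name ('cross1') are hypothetical examples.
def onValueChange(channel, sampleIndex, val, prev):
    if channel.name == 'motion':
        # More movement reveals more of the hidden layer (clamped to 0..1).
        op('cross1').par.cross = max(0.0, min(1.0, val))
    return
```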



6. Future Improvements & Next Steps

  • Improve tracking accuracy by refining detection thresholds.
  • Incorporate gesture-based controls for more intuitive interactions (a first sketch follows below).
  • Explore real-world applications, such as interactive museum exhibits.
  • Continue refining the prototype after submission, possibly incorporating machine learning for adaptive interactions.
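
As a first sketch of what gesture-based control could look like (referenced in the list above), a simple rule over Mediapipe pose landmarks already goes a long way. The "raised hand" definition here is an illustrative assumption, not an implemented feature of the prototype.

```python
# Hypothetical gesture check on Mediapipe pose landmarks: treat a hand
# raised above the shoulder as a trigger. The gesture rule itself is an
# illustrative assumption, not part of the current prototype.
import mediapipe as mp

mp_pose = mp.solutions.pose

def hand_raised(landmarks):
    """True if the right wrist is above the right shoulder.

    Mediapipe's normalized y-axis grows downward, so 'above' means smaller y.
    """
    wrist = landmarks[mp_pose.PoseLandmark.RIGHT_WRIST]
    shoulder = landmarks[mp_pose.PoseLandmark.RIGHT_SHOULDER]
    return wrist.y < shoulder.y
```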



7. Conclusion & Reflections

Key Takeaways

  • TouchDesigner is a powerful tool but has a steep learning curve.
  • Different tracking methods have different strengths and weaknesses.
  • Motion tracking opens up new possibilities for interactive experiences, but choosing the right method is crucial.

What I Learned

  • The importance of iterative development—many ideas didn’t work at first but evolved through experimentation.
  • How to critically assess tracking technologies and choose the best fit for an interactive installation.
  • That real-time interaction is highly dependent on environmental factors, such as lighting and user positioning.



Next Steps

I will now document each experiment in separate, more detailed blog posts, providing a deeper breakdown of each prototype and technical approach. If you have any questions, feedback, or suggestions, feel free to share!