hohm.studio

The Science Behind hohm.studio

Understanding the technology, research, and methodology that powers our AI-driven yoga and posture guidance

How Our AI Works

hohm.studio uses cutting-edge computer vision technology to analyze your body position in real-time. Here's a non-technical overview of how it works:

1. Camera Captures Your Movement

Your webcam captures video at up to 30 frames per second. This video stream stays entirely on your device and is never uploaded or stored anywhere.

2. AI Detects Body Landmarks

Google's MediaPipe Pose model identifies 33 key points on your body: joints like shoulders, elbows, wrists, hips, knees, and ankles, plus points on your face and feet. This creates a "skeleton" overlay of your pose.

3. Angles Are Calculated

Our software calculates the angles between your joints. For example, the angle at your elbow when your arm is bent, or the angle at your hip in a standing pose. These angles are the mathematical foundation of pose comparison.

4. Comparison with Reference

Your current angles are compared against reference angles from correctly performed poses. The closer your angles match the reference, the higher your pose match score.
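As a rough sketch of this comparison step (the actual scoring used by hohm.studio is proprietary and, as described in the Technical Deep Dive, uses per-joint weighting and exponential scoring; the joint names and the 90-degree normalization constant below are invented for illustration):

```typescript
// Illustrative pose-match score: mean absolute deviation between the
// user's joint angles and the reference angles, mapped to a 0-100 score.
// A deviation of 0 degrees scores 100; 90 degrees or more scores 0.
function matchScore(
  current: Record<string, number>,   // user's joint angles, degrees
  reference: Record<string, number>  // reference pose angles, degrees
): number {
  const joints = Object.keys(reference);
  const meanDeviation =
    joints.reduce((sum, j) => sum + Math.abs(current[j] - reference[j]), 0) /
    joints.length;
  return Math.max(0, 100 * (1 - meanDeviation / 90));
}
```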

5. Real-Time Feedback

Based on the comparison, you receive instant visual and audio feedback to help you adjust your position. The entire process happens in milliseconds, creating a responsive, interactive experience.

MediaPipe Pose

Google's state-of-the-art pose detection ML model, optimized for real-time performance in browsers.

WebGL Acceleration

GPU-accelerated processing ensures smooth, responsive pose detection without draining your battery.

Local Processing

All AI inference runs locally in your browser. Zero data is sent to external servers.

Angle Analysis

Proprietary algorithms calculate and compare joint angles for accurate pose matching.

Technical Deep Dive

For those interested in the technical details, here's how the pose detection pipeline works under the hood:

MediaPipe Pose Landmarker

We use MediaPipe's Pose Landmarker task, which employs a two-stage ML pipeline:

  • BlazePose Detector: A lightweight neural network that locates the general region of a person in the frame
  • BlazePose Tracker: A more detailed model that precisely locates 33 body landmarks with sub-pixel accuracy

Landmark Coordinates

Each landmark is returned with normalized x, y coordinates (0-1 range relative to image size) plus a z coordinate representing depth relative to the hip midpoint. We also receive visibility and presence scores for each landmark.
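In the browser, each detected landmark arrives roughly in this shape (a sketch modeled on MediaPipe's JavaScript Tasks API; treat the field names as illustrative rather than authoritative):

```typescript
// Sketch of one pose-detection landmark as described above.
interface Landmark {
  x: number;          // 0-1, normalized to image width
  y: number;          // 0-1, normalized to image height
  z: number;          // depth relative to the hip midpoint
  visibility: number; // 0-1 likelihood the point is visible in the frame
}

// One frame yields 33 landmarks; index 0 is the nose in MediaPipe's ordering.
type PoseFrame = Landmark[];

const nose: Landmark = { x: 0.5, y: 0.3, z: -0.1, visibility: 0.99 };
```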

Angle Calculation

Joint angles are calculated using vector mathematics. For a joint J connected to points A and B, we compute:

angle = arccos((JA · JB) / (|JA| × |JB|))

This yields the angle in radians, which we convert to degrees and compare against reference poses stored in our database.
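The formula can be implemented directly. A minimal sketch in two dimensions for readability (the landmarks also carry a z coordinate, omitted here):

```typescript
interface Point { x: number; y: number; }

// Angle at joint J formed by points A and B, in degrees.
function jointAngle(a: Point, j: Point, b: Point): number {
  const ja = { x: a.x - j.x, y: a.y - j.y }; // vector J -> A
  const jb = { x: b.x - j.x, y: b.y - j.y }; // vector J -> B
  const dot = ja.x * jb.x + ja.y * jb.y;
  const mag = Math.hypot(ja.x, ja.y) * Math.hypot(jb.x, jb.y);
  // Clamp to [-1, 1] to guard against floating-point drift before arccos.
  const cos = Math.min(1, Math.max(-1, dot / mag));
  return (Math.acos(cos) * 180) / Math.PI;
}
```

For a fully bent right angle at the elbow this returns 90; for a straight arm, 180.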

Pose Matching Algorithm

Our matching algorithm weights different joints based on the specific pose. For example, in Tree Pose, the angle of the lifted leg at the hip is weighted more heavily than arm position. We use an exponential scoring function that provides more granular feedback in the "close but not quite" range where small adjustments matter most.
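One way such a weighted, exponential score can be sketched (the joint names, weights, and decay constant here are invented for illustration; the actual algorithm is proprietary):

```typescript
// Weighted exponential pose score: each joint's angular deviation decays
// its contribution exponentially, so near-misses are scored with fine
// granularity while large errors saturate toward zero.
function weightedScore(
  deviations: Record<string, number>, // |current - reference| per joint, degrees
  weights: Record<string, number>,    // per-pose importance of each joint
  k = 0.05                            // decay rate: higher = stricter (assumed value)
): number {
  let total = 0;
  let weightSum = 0;
  for (const joint of Object.keys(deviations)) {
    const w = weights[joint] ?? 1;
    total += w * Math.exp(-k * deviations[joint]);
    weightSum += w;
  }
  return (100 * total) / weightSum; // 100 = perfect match
}
```

With this shape, weighting the lifted-leg hip angle more heavily in Tree Pose is just a larger weight for that joint.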

Accuracy & Limitations

We believe in transparency about what our technology can and cannot do. Here's an honest assessment:

  • 33 body landmarks tracked
  • ~90% landmark accuracy under optimal conditions
  • 30 frames per second
  • 100% local processing (nothing uploaded)

Known Limitations

  • Lighting: Low light or strong backlighting can reduce detection accuracy
  • Clothing: Very loose or flowing clothing may obscure body landmarks
  • Occlusion: If parts of your body are hidden from the camera, those landmarks cannot be tracked
  • Camera angle: Extreme angles (very high or low) may reduce accuracy
  • Multiple people: The system tracks one person at a time
  • 3D estimation: Depth (z-axis) estimation is less accurate than x/y position

For best results, we recommend:

  • Practicing in a well-lit space without strong backlighting
  • Wearing fitted clothing that doesn't obscure your joints
  • Keeping your full body visible to the camera
  • Placing the camera at a moderate height, avoiding extreme high or low angles
  • Having only one person in the frame

Research & Citations

The benefits of yoga and good posture are supported by extensive peer-reviewed research, including a 2024 systematic review in Frontiers in Psychiatry on yoga and stress reduction, posture research from the Mayo Clinic and NIH, and studies on flexibility and balance in the Journal of Physical Therapy Science.

Our Methodology

Reference Pose Creation

Our reference poses are created by capturing landmark data from correctly performed poses, validated against traditional yoga alignment principles. Each pose includes multiple acceptable ranges to accommodate natural variation in body proportions and flexibility levels.
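A reference pose with acceptable ranges can be sketched as plain data (the pose name, joints, and ranges below are illustrative, not our actual reference set):

```typescript
// Illustrative reference pose: each joint maps to an acceptable angle range
// to accommodate natural variation in proportions and flexibility.
interface AngleRange { min: number; max: number; } // degrees

const treePoseReference: Record<string, AngleRange> = {
  standingKnee: { min: 170, max: 180 }, // standing leg nearly straight
  liftedHip:    { min: 30,  max: 70 },  // lifted leg turned out at the hip
};

function withinRange(angle: number, range: AngleRange): boolean {
  return angle >= range.min && angle <= range.max;
}
```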

Session Design

Yoga sessions are designed following established sequencing principles:

  • Gradual warm-up before challenging poses
  • Bilateral symmetry: poses performed on both sides
  • Counter-poses to balance the body
  • Appropriate rest periods between holds
  • Cool-down and integration at session end

Difficulty Progression

Poses are categorized by difficulty level based on balance requirements, flexibility demands, and strength needs. Our algorithms suggest appropriate progressions based on your performance history.

Safety First

We prioritize safety by:

  • Never encouraging users to push beyond their comfortable range
  • Providing clear contraindications for each pose
  • Including modification options for different ability levels
  • Reminding users to consult healthcare providers when appropriate

Frequently Asked Questions

How does AI pose detection work?
hohm.studio uses Google's MediaPipe Pose technology, which employs a machine learning model trained on millions of images to detect 33 body landmarks in real-time. The AI identifies key points like shoulders, elbows, hips, and knees, then calculates joint angles to compare your pose against reference positions. All processing happens locally in your browser - your video never leaves your device.
How accurate is the pose detection?
MediaPipe Pose achieves high accuracy for typical webcam conditions. Under optimal lighting and positioning, landmark detection accuracy exceeds 90%. However, factors like poor lighting, loose clothing, partial visibility, or unusual camera angles can reduce accuracy. We recommend practicing in well-lit areas with your full body visible to the camera.
Is there scientific evidence that yoga and good posture improve health?
Yes, numerous peer-reviewed studies support the benefits of yoga and posture correction. A 2024 systematic review in Frontiers in Psychiatry found yoga significantly reduces stress. Research from Mayo Clinic and NIH links good posture to reduced back pain and improved mood. Studies in the Journal of Physical Therapy Science show yoga improves flexibility and balance.
Does the AI store or upload my video?
No. All pose detection processing happens entirely within your web browser using WebGL acceleration. Your webcam feed is never uploaded, stored, or transmitted to any server. We cannot see your video, and no images or recordings are created unless you explicitly choose to save them locally.
Can the AI replace a yoga teacher?
No. While our AI provides helpful real-time feedback on pose alignment, it cannot replace the personalized guidance, hands-on adjustments, and safety oversight that a qualified yoga instructor provides. We recommend using hohm.studio as a complement to, not a replacement for, professional instruction, especially for beginners or those with health concerns.
What should I do if I have a medical condition?
Always consult with a healthcare professional before starting any new exercise program, including yoga. Our app provides general wellness guidance and is not a substitute for medical advice. If you have injuries, chronic conditions, or are pregnant, please work with qualified professionals who can provide personalized recommendations.

Ready to Experience It Yourself?

Try our AI-powered yoga sessions or posture tracking - completely free, no account required.

Start Yoga Session