# SpinePose Inference Library


Lightweight CLI and Python API for running SpinePose models today, with SimSpine inference support coming next.


[![SpinePose Homepage](https://img.shields.io/badge/Project-Home-334155.svg)](https://saifkhichi.com/projects/spinepose-inference/) [![Documentation](https://img.shields.io/badge/docs-passing-15803D.svg)](https://spinepose.readthedocs.io/) [![PyPI version](https://img.shields.io/pypi/v/spinepose.svg)](https://pypi.org/project/spinepose/) ![PyPI - License](https://img.shields.io/pypi/l/spinepose)

SpinePose is an inference library for spine-aware 2D human pose estimation in the wild. It provides a simple CLI and Python API for running inference on images and videos using pretrained models presented in our papers “Towards Unconstrained 2D Pose Estimation of the Human Spine” (CVPR Workshops 2025) and “SIMSPINE: A Biomechanics-Aware Simulation Framework for 3D Spine Motion Annotation and Benchmarking” (CVPR 2026). Our models predict the SpineTrack skeleton hierarchy comprising 37 keypoints, including 9 directly along the spine chain in addition to the standard body joints.

## Getting Started

Recommended Python Version: 3.9–3.12

For quick spinal keypoint estimation, we release optimized ONNX models via the spinepose package on PyPI:

```bash
pip install spinepose
```

On Linux/Windows with CUDA available, install the GPU version:

```bash
pip install spinepose[gpu]
```

## Using the CLI

```text
usage: spinepose [-h] (--version | --input_path INPUT_PATH) [--vis-path VIS_PATH] [--save-path SAVE_PATH] [--mode {xlarge,large,medium,small}] [--nosmooth] [--spine-only] [--model-version MODEL_VERSION]

SpinePose Inference

options:
  -h, --help            show this help message and exit
  --version, -V         Print the version and exit.
  --input_path INPUT_PATH, -i INPUT_PATH
                        Path to the input image or video
  --vis-path VIS_PATH, -o VIS_PATH
                        Path to save the output image or video
  --save-path SAVE_PATH, -s SAVE_PATH
                        Save predictions in OpenPose format (.json for image or folder for video).
  --mode {xlarge,large,medium,small}, -m {xlarge,large,medium,small}
                        Model size. Choose from: xlarge, large, medium, small (default: medium)
  --nosmooth            Disable keypoint smoothing for video inference (default: enabled)
  --spine-only          Only use 9 spine keypoints (default: use all 37 keypoints)
  --model-version MODEL_VERSION
                        Model version to use. One of: 'latest', 'v2', 'v1' (default: latest)
```

For example, to run inference on a video and save only spine keypoints in OpenPose format:

```bash
spinepose --input_path path/to/video.mp4 --save-path output_path.json --spine-only
```

This automatically downloads the model weights (if not already present) and outputs the annotated image or video. Use spinepose -h to view all available options, including GPU usage and confidence thresholds.
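The OpenPose JSON convention stores each person's keypoints as a flat `[x1, y1, c1, x2, y2, c2, ...]` array under `pose_keypoints_2d`. Assuming the file written by `--save-path` follows that standard layout (the exact schema may differ; `load_openpose_keypoints` below is a hypothetical helper, not part of the spinepose API), the saved predictions could be read back like this:

```python
import json

def load_openpose_keypoints(path):
    """Read an OpenPose-style JSON file and return per-person keypoints.

    Each person is returned as a list of (x, y, confidence) triples
    decoded from the flat pose_keypoints_2d array.
    """
    with open(path) as f:
        data = json.load(f)
    people = []
    for person in data.get("people", []):
        flat = person.get("pose_keypoints_2d", [])
        people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return people
```

With `--spine-only`, each person would yield 9 such triples; otherwise 37, one per keypoint in the SpineTrack skeleton.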

## Using the Python API

```python
import cv2
from spinepose import SpinePoseEstimator

# Initialize estimator (downloads ONNX model if not found locally)
estimator = SpinePoseEstimator(device='cuda')

# Perform inference on a single image
image = cv2.imread('path/to/image.jpg')
keypoints, scores = estimator(image)
visualized = estimator.visualize(image, keypoints, scores)
cv2.imwrite('output.jpg', visualized)
```

Or, for a simplified interface:

```python
from spinepose.inference import infer_image, infer_video

# Single image inference
results = infer_image('path/to/image.jpg', vis_path='output.jpg')

# Video inference with optional temporal smoothing
results = infer_video('path/to/video.mp4', vis_path='output_video.mp4', use_smoothing=True)
```
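The smoothing toggled by `use_smoothing` (and disabled by the CLI's `--nosmooth`) reduces frame-to-frame jitter in keypoint trajectories. The library's actual filter is not shown here; purely as an illustration of the idea, a minimal exponential-moving-average smoother over per-frame keypoints might look like this (`ema_smooth` and the `alpha` value are hypothetical, not part of the spinepose API):

```python
def ema_smooth(frames, alpha=0.5):
    """Exponentially smooth a sequence of per-frame keypoints.

    frames: list of frames, each a list of (x, y) tuples.
    alpha: weight of the current frame; smaller alpha = smoother, laggier output.
    """
    smoothed, prev = [], None
    for frame in frames:
        if prev is None:
            cur = [(x, y) for x, y in frame]  # first frame passes through
        else:
            # Blend each keypoint with its smoothed position from the previous frame
            cur = [
                (alpha * x + (1 - alpha) * px, alpha * y + (1 - alpha) * py)
                for (x, y), (px, py) in zip(frame, prev)
            ]
        smoothed.append(cur)
        prev = cur
    return smoothed
```

A single-pole filter like this trades a small amount of lag for stability; more elaborate smoothers (e.g. One Euro) adapt the cutoff to motion speed.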

> [!TIP]
> New in v2.0.1: You can select pretrained model families in both the CLI and the Python API.
>
> - CLI: use `--model-version v1|v2|latest` (for example, `--model-version v1`).
> - Python API: use `model_version='v1'|'v2'|'latest'` (for example, `SpinePoseEstimator(model_version='v1')`).
>
> `v1` loads models trained on SpineTrack, while `v2` and `latest` load the SIMSPINE-trained V2 models (`latest` is the default).

## Model Zoo

### SpinePose V2

| Method | Training Data | SpineTrack AP<sub>B</sub> | SpineTrack AR<sub>B</sub> | SIMSPINE AP<sub>S</sub> | SIMSPINE AR<sub>S</sub> | SIMSPINE AUC | Usage |
|---|---|---|---|---|---|---|---|
| spinepose_v2_small | SpineTrack + SIMSPINE | 0.788 | 0.815 | 0.920 | 0.929 | 0.790 | `--mode small --model-version v2` |
| spinepose_v2_medium | SpineTrack + SIMSPINE | 0.821 | 0.846 | 0.928 | 0.937 | 0.798 | `--mode medium --model-version v2` |
| spinepose_v2_large | SpineTrack + SIMSPINE | 0.840 | 0.862 | 0.917 | 0.927 | 0.803 | `--mode large --model-version v2` |

### SpinePose V1

| Method | Training Data | SpineTrack AP<sub>B</sub> | SpineTrack AR<sub>B</sub> | SIMSPINE AP<sub>S</sub> | SIMSPINE AR<sub>S</sub> | SIMSPINE AUC | Usage |
|---|---|---|---|---|---|---|---|
| spinepose_v1_small | SpineTrack | 0.792 | 0.821 | 0.896 | 0.908 | 0.611 | `--mode small --model-version v1` |
| spinepose_v1_medium | SpineTrack | 0.840 | 0.864 | 0.914 | 0.926 | 0.633 | `--mode medium --model-version v1` |
| spinepose_v1_large | SpineTrack | 0.854 | 0.877 | 0.910 | 0.922 | 0.633 | `--mode large --model-version v1` |
| spinepose_v1_xlarge | SpineTrack | 0.759 | 0.801 | 0.893 | 0.910 | - | `--mode xlarge --model-version v1` |