

🤔 Why x.infer?

So, a new computer vision model just dropped last night. It's called GPT-54o-mini-vision-pro-max-xxxl. It's a super cool model, open-source, open-weights, open-data, all the good stuff.

You're excited. You want to try it out.

But it's written in a new framework, TyPorch, which you know nothing about. You don't want to spend a weekend learning TyPorch just to find out the model isn't what you expected.

This is where x.infer comes in.

x.infer is a simple library that allows you to run inference with any computer vision model in just a few lines of code. All in Python.

Out of the box, x.infer supports the following frameworks:

  • Transformers
  • TIMM
  • Ultralytics
  • vLLM
  • Ollama

Combined, these frameworks give x.infer access to over 1,000 models.

Tasks supported:

  • Image Classification
  • Object Detection
  • Image-to-Text
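
Not sure which model you want? You can browse everything x.infer can create before picking one. The snippet below assumes the xinfer.list_models() helper mentioned in the project docs; the exact output format may vary by version.

import xinfer

# Print the model ids x.infer knows how to create, grouped by framework.
# list_models() is assumed from the project docs; check your installed version.
xinfer.list_models()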

Run any supported model with just a few lines of code:

import xinfer

model = xinfer.create_model("vikhyatk/moondream2")

image = "path/to/image.jpg"           # local path or URL to an image
prompt = "Describe this image."

model.infer(image, prompt)            # Run single inference
model.infer_batch([image], [prompt])  # Run batch inference
model.launch_gradio()                 # Launch Gradio interface

Have a custom model? Create a class that implements the BaseModel interface and register it with x.infer. See Add Your Own Model for more details.
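
Here is a rough sketch of what that could look like. Everything beyond the BaseModel name and the infer/infer_batch methods is an assumption (the import path, the load_model hook, and the register_model call in particular), so treat it as an outline and follow Add Your Own Model for the real interface.

from xinfer import BaseModel   # assumed import path for the BaseModel interface
import xinfer

class MyModel(BaseModel):
    """Toy model that echoes its inputs; stands in for real weights."""

    def load_model(self):
        # assumed hook: load your weights, processors, etc. here
        self.ready = True

    def infer(self, image, prompt=None):
        # return a prediction for a single image
        return f"prediction for {image} with prompt {prompt!r}"

    def infer_batch(self, images, prompts=None):
        # naive batching built on top of single-image inference
        prompts = prompts or [None] * len(images)
        return [self.infer(img, p) for img, p in zip(images, prompts)]

# assumed registration call; the actual API may be a decorator or take extra metadata
xinfer.register_model("my-org/my-model", MyModel)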

🌟 Key Features

  • Unified Interface: Interact with different computer vision frameworks through a single, consistent API.
  • Modular Design: Integrate and swap out models without altering the core framework.
  • Extensibility: Add support for new models and libraries with minimal code changes.
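
To make the unified interface concrete: switching frameworks is just a matter of passing a different model id to create_model, with the rest of your code unchanged. The detection id below is illustrative (list the available models to find real ones for your install), and the call signatures are assumed to match the example above.

import xinfer

# Same API regardless of the backing framework; the Ultralytics id is an example only.
captioner = xinfer.create_model("vikhyatk/moondream2")   # Transformers-backed, from the example above
detector = xinfer.create_model("ultralytics/yolov8n")    # assumed Ultralytics model id

captioner.infer("path/to/image.jpg", "Describe this image.")  # image-to-text
detector.infer("path/to/image.jpg")                           # object detection (no prompt needed)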