Ynnova is an open inference network. Contribute your idle GPU resources and earn per token served — or consume fast, cheap LLM inference via our OpenAI-compatible API.
Have an idle GPU? Connect it to the Ynnova network and earn for every token your hardware serves.
Access distributed GPU inference with a single API key. Drop-in replacement for the OpenAI SDK.
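A minimal sketch of calling the endpoint with only the Python standard library. The model name `llama-3-8b` and the `YNNOVA_API_KEY` environment variable are assumptions, not confirmed values; with the official OpenAI SDK you would instead point `base_url` at `https://api.ynnova.eu/v1`.

```python
import json
import os
import urllib.request

API_URL = "https://api.ynnova.eu/v1/chat/completions"

def build_request(prompt: str, model: str = "llama-3-8b") -> urllib.request.Request:
    """Assemble a standard OpenAI-style chat completion request."""
    body = json.dumps({
        "model": model,  # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('YNNOVA_API_KEY', '')}",
        },
    )

if __name__ == "__main__":
    req = build_request("Hello, Ynnova!")
    # Sends the request over the network; requires a valid API key.
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the wire format matches OpenAI's, any existing client library that lets you override the base URL should work unchanged.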
Point your existing client at api.ynnova.eu/v1/chat/completions as usual.

GPU owners deploy a worker agent that tunnels into the Ynnova network. No port forwarding required.
Incoming API calls are load-balanced to the best available node based on latency and capacity.
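The routing rule above can be sketched as: among nodes with spare capacity, pick the one with the lowest observed latency. The `Node` fields and selection logic here are illustrative assumptions, not Ynnova's actual scheduler.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Node:
    name: str
    latency_ms: float    # recent observed latency to this node
    in_flight: int       # requests currently being served
    max_concurrent: int  # advertised capacity

def pick_node(nodes: Sequence[Node]) -> Optional[Node]:
    # Only nodes with spare capacity are eligible.
    eligible = [n for n in nodes if n.in_flight < n.max_concurrent]
    # Among those, prefer the lowest latency; None if the network is saturated.
    return min(eligible, key=lambda n: n.latency_ms, default=None)

nodes = [
    Node("eu-1", 42.0, 3, 4),
    Node("eu-2", 18.0, 4, 4),  # at capacity, skipped despite lowest latency
    Node("us-1", 95.0, 0, 8),
]
print(pick_node(nodes).name)  # -> "eu-1"
```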
Token throughput is metered and settled. Payouts are calculated per million tokens served.
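The settlement arithmetic reduces to a one-liner. The rate of 0.20 per million tokens below is a made-up illustration, not Ynnova's actual payout rate.

```python
def payout(tokens_served: int, rate_per_million: float) -> float:
    """Earnings = (tokens served / 1e6) * rate per million tokens."""
    return tokens_served / 1_000_000 * rate_per_million

# 5M tokens at a hypothetical 0.20 per million -> 1.00
print(payout(5_000_000, 0.20))
```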