Introduction to LLM Link

LLM Link is a universal LLM gateway that normalizes different providers and protocols into a single service. This page explains why it exists, what problems it solves, and how it fits into your stack.

Introduction

Modern editors, agents, and plugins all speak slightly different "OpenAI-compatible" dialects and expect you to copy API keys and endpoints into each of them. Switching providers or models usually means touching multiple configs, rotating secrets in many places, and dealing with protocol quirks.

LLM Link sits in the middle as a single proxy between your tools and upstream LLM providers. You configure providers and routing once, then point Zed.dev, Codex CLI, and other clients to LLM Link. It unifies protocols, centralizes API key management, and makes it easy to experiment with different providers without rebuilding your setup.
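Because LLM Link exposes an OpenAI-compatible endpoint, any client built on the OpenAI SDK can target it by changing a single base URL. Below is a minimal sketch in Python, assuming a local LLM Link instance; the base URL, port, and key are placeholders, not documented defaults.

```python
# Minimal sketch: pointing an OpenAI SDK client at LLM Link instead of a
# provider. The base URL, port, and key are assumed placeholders; substitute
# the values from your own LLM Link deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # hypothetical local LLM Link endpoint
    api_key="your-llm-link-key",          # placeholder; real provider keys stay inside LLM Link
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # LLM Link routes this to whichever provider you configured
    messages=[{"role": "user", "content": "Hello through the proxy!"}],
)
print(response.choices[0].message.content)
```

Swapping the upstream provider then becomes a routing change inside LLM Link; the client code above stays the same.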

Universal LLM Proxy

Run a single service that speaks OpenAI-, Ollama-, and Anthropic-style APIs to your favorite tools.
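To illustrate what speaking several dialects means in practice, the sketch below sends the same question once as an OpenAI-style request and once as an Ollama-style request. The JSON shapes mirror the public OpenAI and Ollama APIs; whether LLM Link serves them at exactly these routes, host, and port is an assumption to verify against your deployment.

```python
# Sketch: one gateway answering two protocol dialects. The request bodies
# follow the public OpenAI (/v1/chat/completions) and Ollama (/api/chat)
# APIs; the host, port, and exact routes on LLM Link are assumptions.
import requests

BASE = "http://localhost:8080"  # hypothetical LLM Link address

# OpenAI-style chat completion, as an OpenAI-compatible client would send it
openai_style = requests.post(
    f"{BASE}/v1/chat/completions",
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "ping"}],
    },
)

# Ollama-style chat request, as a native Ollama client would send it
ollama_style = requests.post(
    f"{BASE}/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "ping"}],
        "stream": False,
    },
)

print(openai_style.status_code, ollama_style.status_code)
```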

Multi-Provider Routing

Connect to 10+ providers (OpenAI, Anthropic, Zhipu, Volcengine, Moonshot, Minimax, Tencent, Aliyun, Longcat, Ollama, and more).
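One typical use of multi-provider routing is keeping client code fixed and switching providers by model name alone. Provider-prefixed model IDs are a common gateway convention, not a documented LLM Link scheme, so treat the names below as hypothetical and check your routing rules for the real mapping.

```python
# Hypothetical sketch: the provider-prefixed model names below follow a common
# gateway convention and are NOT confirmed LLM Link syntax; the routing
# configuration decides how model names map to providers.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="your-llm-link-key")

for model in ("openai/gpt-4o-mini", "anthropic/claude-3-5-haiku", "ollama/llama3"):
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Reply with one short sentence."}],
    )
    print(f"{model}: {reply.choices[0].message.content}")
```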

Editor & Agent Integrations

First-class support for Zed.dev, Codex CLI, and other dev tools via presets and protocol adapters.

Hot-Reload Configuration

Update API keys and routing rules at runtime using REST APIs, without restarting the service.
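As a sketch of what a hot-reload call could look like, the snippet below rotates an upstream key over HTTP. The admin route, payload shape, and auth header are hypothetical; consult the configuration reference for the actual management API.

```python
# Hypothetical sketch: rotating a provider key at runtime through a management
# REST endpoint. The /admin/providers route, bearer-token header, and payload
# are assumptions, not documented LLM Link API.
import requests

resp = requests.put(
    "http://localhost:8080/admin/providers/openai",
    headers={"Authorization": "Bearer your-admin-token"},
    json={"api_key": "sk-new-rotated-key"},
)
resp.raise_for_status()
print("Provider key updated without restarting the service")
```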

Supported LLM Providers

LLM Link supports 8 major LLM providers with unified API access.

Total Providers: 8 (unified API access)
Native APIs: 4 (custom implementations)
OpenAI Compatible: 4 (standard protocol)

Next steps

Continue with installation and architecture details in the dedicated guides.

Quick Start

Step-by-step installation, key configuration, and running your first proxy.

Architecture

High-level diagram of clients, protocol adapters, and provider connectors.