LiteLLM

LiteLLM is the model gateway layer: it exposes a single OpenAI-compatible API and routes requests to multiple upstream model providers.

Responsibilities

  • Centralize provider routing.
  • Manage model aliases and policies.
  • Support usage tracking and budget controls where configured.
  • Keep provider credentials away from application source code.
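The responsibilities above are typically expressed in the gateway's proxy configuration. The sketch below is illustrative, not a copy of our deployment: the alias name and environment variable are placeholders, and credentials are referenced from the environment rather than hard-coded, keeping them out of application source.

```yaml
# Illustrative LiteLLM proxy config sketch (placeholder names).
model_list:
  - model_name: default-chat            # alias the application requests
    litellm_params:
      model: openai/gpt-4o              # actual provider/model behind the alias
      api_key: os.environ/OPENAI_API_KEY  # credential resolved from the environment
```

Applications then request `default-chat` and never learn which provider or key serves it, so the alias can be repointed centrally.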

Operational Notes

  • Restrict admin UI access.
  • Rotate master keys and provider keys when exposure is suspected.
  • Prefer internal service-to-service URLs for private runtime calls.
  • Keep public docs conceptual and avoid exposing gateway topology.
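To make the internal-URL and key-handling notes concrete, here is a minimal sketch of how a service might compose an OpenAI-compatible request against the gateway. The URL, alias, and key below are hypothetical placeholders; a real caller would load the virtual key from its secret store, not a literal.

```python
import json
import urllib.request

# Hypothetical internal service-to-service URL -- not a real endpoint.
GATEWAY_URL = "http://litellm.internal:4000"

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Compose an OpenAI-compatible chat completion request aimed at the gateway.

    The caller authenticates with a gateway-issued virtual key, so provider
    credentials never appear in application code.
    """
    body = json.dumps({
        "model": model,  # gateway alias, resolved to a provider server-side
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{GATEWAY_URL}/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("default-chat", "ping", "sk-placeholder-virtual-key")
```

Because the request targets the internal URL and carries only a revocable virtual key, rotating provider keys at the gateway requires no application changes.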

Related pages: Data Plane and Infisical.