Ollama
Run large language models locally. Simple CLI to download, run, and manage LLMs on your machine.
About Ollama
Ollama is a tool for running large language models locally. It bundles model weights, configuration, and data into a single package defined by a Modelfile. It supports Llama, Mistral, Gemma, Phi, and many other models.
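As a sketch of what such a Modelfile can look like (the base model name, parameter value, and system prompt here are illustrative, not a recommended configuration):

```
# Base the custom model on a model already pulled by Ollama
FROM llama3

# Sampling parameter for generation
PARAMETER temperature 0.7

# System prompt baked into the packaged model
SYSTEM You are a concise technical assistant.
```

A file like this can then be packaged and run with `ollama create mymodel -f Modelfile` followed by `ollama run mymodel`.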
Key Features
✓ One-command model download & run
✓ Llama, Mistral, Gemma, Phi support
✓ REST API for integration
✓ Modelfile for customization
✓ GPU acceleration (NVIDIA, Apple Silicon)
✓ Client libraries for Python and JavaScript
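By default the REST API listens on localhost port 11434. A minimal Python sketch of how a request to the generate endpoint might be assembled (the model name and prompt are placeholders, and actually sending the request assumes a running Ollama server):

```python
import json
import urllib.request

# Default local endpoint for single-prompt generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request one JSON response instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("llama3", "Why is the sky blue?")
# With a local server running, the JSON response carries the generated
# text in its "response" field:
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

Because the API is plain HTTP with JSON bodies, any language with an HTTP client can integrate the same way.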
Why choose Ollama?
Ollama is an open-source alternative to hosted services such as the OpenAI and Claude APIs. Licensed under MIT, it gives you full access to the source code and the freedom to modify, self-host, and contribute. You can deploy it on your own servers for complete data ownership and privacy.