
Quick Start

Prefer deploying to the cloud?

Sign Up for Zep Cloud  

Starting a Zep server locally is simple.

1. Clone the Zep repo

git clone https://github.com/getzep/zep.git
cd zep


2. Add your OpenAI API key to a .env file in the root of the repo:

ZEP_OPENAI_API_KEY=<your key here>


Zep uses OpenAI for chat history summarization, intent analysis, and, by default, embeddings. You can get an OpenAI API key here.
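Before starting the server, it can save a failed boot to sanity-check that the key is actually set. A minimal stdlib sketch, assuming the .env file sits in the repo root and uses simple `KEY=value` lines (the helper name is illustrative, not part of Zep):

```python
from pathlib import Path

def env_has_key(path: str = ".env", key: str = "ZEP_OPENAI_API_KEY") -> bool:
    """Return True if the .env file defines a non-empty value for `key`."""
    for line in Path(path).read_text().splitlines():
        name, _, value = line.partition("=")
        if name.strip() == key and value.strip():
            return True
    return False
```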


3. Start the Zep server:

docker compose pull
docker compose up

This starts the Zep server on port 8000, along with its NLP and database backends.
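Once the containers are up, you can poll the server before pointing an application at it. A minimal sketch using only the Python standard library; the `/healthz` path is an assumption about the health endpoint, so adjust it if your deployment differs:

```python
import urllib.error
import urllib.request

def zep_is_up(base_url: str = "http://localhost:8000") -> bool:
    """Return True if the Zep server answers its health endpoint."""
    try:
        # /healthz is assumed here; swap in your deployment's health path.
        with urllib.request.urlopen(f"{base_url}/healthz", timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```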

Secure Your Zep Deployment

If you are deploying Zep to a production environment, or anywhere its APIs are exposed to the public internet, please ensure that you secure your Zep server.

Review the Security Guidelines and configure authentication. Failing to do so will leave your server open to the public.


4. Get started with the Zep SDKs!
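If you'd rather see what the SDKs do under the hood, the sketch below builds (without sending) a request against the memory REST API, using only the standard library. The `/api/v1/sessions/{id}/memory` path and the message shape reflect the v0.x API and should be treated as assumptions:

```python
import json
import urllib.request

def build_add_memory_request(
    base_url: str, session_id: str, messages: list
) -> urllib.request.Request:
    """Build (but do not send) a POST that adds chat messages to a session.

    The endpoint path is an assumption based on Zep's v0.x REST API.
    """
    body = json.dumps({"messages": messages}).encode()
    return urllib.request.Request(
        f"{base_url}/api/v1/sessions/{session_id}/memory",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: two turns of a conversation, ready for urllib.request.urlopen(...)
req = build_add_memory_request(
    "http://localhost:8000",
    "session-123",
    [{"role": "human", "content": "Hi!"}, {"role": "ai", "content": "Hello!"}],
)
```

In practice you would use one of the official SDKs instead, which wrap these endpoints and handle authentication and retries for you.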

Docker on Macs: Embedding is slow!

For docker compose deployments, Zep defaults to using OpenAI's embedding service.

Zep relies on PyTorch for embedding inference. On macOS, Zep's NLP server runs in a Linux ARM64 container. PyTorch is not optimized for Linux ARM64 and does not have access to MacBook M-series acceleration hardware.

Want to use local embeddings? See Selecting Embedding Models.

5. Access the Zep Web UI at http://localhost:8000/admin (assuming you are running Zep locally)

Zep Web UI

Next Steps