
Deploying to Render

Secure Your Zep Deployment

If you are deploying Zep to a production environment, or anywhere the Zep APIs are exposed to the public internet, please ensure that you secure your Zep server.

Review the Security Guidelines and configure authentication. Failing to do so will leave your server open to the public.

1. Click on the button below to deploy to Render using the Zep blueprint

Deploy to Render

2. Configure your deploy

Enter a Blueprint Name (we suggest zep) and provide your OpenAI API key, which will be stored as a secret.

Click Apply.

OpenAI or Anthropic API Key Required

An OpenAI API key is required to run Zep with the default configuration. Please ensure that you enter it in the step above. To configure Zep to use Anthropic or another LLM service, please see Configuring LLM Services.
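As a hedged sketch, switching the LLM service usually comes down to setting environment variables on the zep service in the Render console. The variable names below are assumptions; confirm the exact names against the Configuring LLM Services guide.

```yaml
# Hypothetical environment variables for the zep service (names are
# assumptions -- verify against Configuring LLM Services):
ZEP_LLM_SERVICE: anthropic
ZEP_ANTHROPIC_API_KEY: <your Anthropic API key>
```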

3. Wait for your deploy to complete

This takes a few minutes.

4. Configure authentication

Follow the server authentication instructions here. Do not skip this step. Failing to do so will leave your server open to the public.
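Server authentication requires a strong shared secret. One dependency-free way to generate a suitable value is with Python's standard library; the environment variable name shown in the comment is an assumption, so confirm it against the linked auth instructions.

```python
import secrets

# Generate a 64-character hex string suitable for use as an auth secret.
# Set it as an environment variable on the zep service in the Render
# console (e.g. ZEP_AUTH_SECRET -- name is an assumption; confirm against
# Zep's server authentication guide).
auth_secret = secrets.token_hex(32)
print(auth_secret)
```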

5. Point your client SDK to your new Zep server

Retrieve your Zep server URL from the Render web console.

```python
from zep_python import ZepClient

zep = ZepClient("", api_key)  # Replace with your Zep server URL
```

```typescript
import { ZepClient } from "@getzep/zep-js";

const zep = await ZepClient.init("", apiKey); // Replace with your Zep server URL
```

Next steps: Using Zep's Python and TypeScript/JS SDKs

Web UI disabled for Render deploys

For security reasons, Zep deployments to Render default to disabling the web UI. The Zep web UI is not secured by JWT authentication and should only be enabled if you deploy Zep as a private service.
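If you do run Zep as a private service and want the web UI back, this is typically a single configuration toggle. The variable name below is an assumption; check Zep's server configuration documentation for the exact key.

```yaml
# Hypothetical environment variable on the zep service (name is an
# assumption; verify against Zep's configuration docs). Only enable the
# web UI when Zep is deployed as a private service.
ZEP_SERVER_WEB_ENABLED: "true"
```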

What this blueprint does

Three services are deployed:

  • zep - the Zep server
  • nlp - a back-end private service responsible for several NLP tasks
  • zep-postgres - a Postgres database

The blueprint defaults to the standard tier. You can change the service plans in the Render web console.

This blueprint is not optimized for production

By default, this blueprint deploys in the smallest possible configuration.

Depending on the embedding model you use, you may need to increase the memory and CPU allocated to the nlp service.
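Resizing the nlp service can be done in the Render console, or by editing the blueprint. The fragment below is a hedged sketch following Render's blueprint (render.yaml) spec; the service and plan names are illustrative, and the plans available depend on your Render account.

```yaml
# Hypothetical render.yaml fragment: a larger instance type for the nlp
# private service. Field names follow Render's blueprint spec; the plan
# value shown is illustrative.
services:
  - type: pserv
    name: nlp
    plan: pro   # more CPU/memory for heavier embedding models
```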

Please see the production deployment guide for more information.

Next Steps