An OpenAI-compatible API proxy with LLM trace visualization using Phoenix and OpenInference.
- OpenAI API compatible endpoints
- Streaming and non-streaming responses
- LLM trace visualization using Phoenix
- Lightweight local deployment
- OpenTelemetry-based instrumentation
- Python 3.9 - 3.12
- Poetry (for dependency management)
- Clone the repository:
git clone https://github.com/pengjeck/LocalLLMTrace
cd LocalLLMTrace
- Create a `.env` file based on `.env-example`:
cp .env-example .env
- Update the `.env` file with your API keys:
# OpenAI/DeepSeek API
OPENAI_API_KEY=your-api-key
OPENAI_API_URL=https://api.deepseek.com
- Install dependencies:
poetry install
- Start the development server:
poetry run uvicorn main:app --reload
- Start Phoenix tracing UI:
phoenix serve
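Phoenix serves its UI and trace collector on http://localhost:6006 by default. As a rough sketch of how the proxy's traces can reach it via OpenInference and OpenTelemetry (the project name and module layout here are assumptions, not the project's actual code):

```python
# tracing.py (hypothetical) - route OpenInference spans to a local Phoenix.
from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

# Point an OpenTelemetry tracer provider at Phoenix's default OTLP/HTTP endpoint.
tracer_provider = register(
    project_name="local-llm-trace",              # hypothetical project name
    endpoint="http://localhost:6006/v1/traces",  # Phoenix default collector endpoint
)

# Auto-instrument the openai client library so each upstream LLM call becomes a span.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```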
Request body schema for `POST /chat/completions`:
{
"model": "string",
"messages": [
{
"role": "string",
"content": "string"
}
],
"temperature": "number",
"stream": "boolean",
"max_tokens": "number",
"stream_options": {
"include_usage": "boolean"
}
}
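For orientation, a minimal sketch of how a proxy endpoint might forward this body upstream (hypothetical, not the project's actual main.py; streaming passthrough is omitted for brevity):

```python
# Hypothetical forwarding handler; the real main.py may differ.
import os

import httpx
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

@app.post("/chat/completions")
async def chat_completions(request: Request) -> JSONResponse:
    body = await request.json()
    async with httpx.AsyncClient(base_url=os.environ["OPENAI_API_URL"]) as client:
        upstream = await client.post(
            "/chat/completions",
            json=body,
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            timeout=60.0,
        )
    return JSONResponse(content=upstream.json(), status_code=upstream.status_code)
```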
Example request:
curl -X POST "http://localhost:8000/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-chat",
"messages": [
{
"role": "user",
"content": "Hello!"
}
],
"temperature": 0.7,
"stream": false
}'
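Because the endpoints are OpenAI-compatible, the official openai Python SDK can also be pointed at the proxy. A sketch (it assumes the proxy ignores the client-side api_key, since the upstream key comes from .env):

```python
# Hypothetical client usage against the local proxy.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000", api_key="unused")

# Non-streaming call, mirroring the curl example above.
reply = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
)
print(reply.choices[0].message.content)

# Streaming call; tokens arrive as OpenAI-style chunk deltas.
stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices:
        print(chunk.choices[0].delta.content or "", end="")
```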
- Install Supervisor:
pip install supervisor
- Start services:
supervisord -c supervisord.conf
- Check service status:
supervisorctl -c supervisord.conf status
- Common commands:
# Restart a service
supervisorctl -c supervisord.conf restart [service_name]
# Stop all services
supervisorctl -c supervisord.conf shutdown
# View logs
tail -f /tmp/phoenix_out.log
tail -f /tmp/main_out.log
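The repository's supervisord.conf drives both services. As a sketch of the shape it likely takes, consistent with the log paths above (program names and options are assumptions, so treat the checked-in file as authoritative):

```ini
; Hypothetical shape of supervisord.conf; see the file in the repo for the real one.
[supervisord]
logfile=/tmp/supervisord.log

[program:phoenix]
command=phoenix serve
stdout_logfile=/tmp/phoenix_out.log
autorestart=true

[program:main]
command=poetry run uvicorn main:app --host 0.0.0.0 --port 8000
stdout_logfile=/tmp/main_out.log
autorestart=true
```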
- Alternatively, to run the services directly during development, install dependencies:
poetry install
- Start the development server:
poetry run uvicorn main:app --reload
- Start Phoenix tracing UI:
poetry run phoenix serve
The following environment variables are required:
- `OPENAI_API_KEY`: Your DeepSeek API key
- `OPENAI_API_URL`: DeepSeek API endpoint
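A minimal sketch of loading and checking these at startup (assuming python-dotenv, a common choice; the project's actual loading code may differ):

```python
# Hypothetical settings loader; assumes python-dotenv reads .env at startup.
import os

from dotenv import load_dotenv

load_dotenv()  # copies key=value pairs from .env into the process environment

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]  # KeyError fails fast if unset
OPENAI_API_URL = os.environ["OPENAI_API_URL"]  # e.g. https://api.deepseek.com
```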
MIT License
Copyright (c) 2024 Your Name
Permission is hereby granted...