This is a LlamaIndex project bootstrapped with `create-llama` and instrumented using OpenInference.
This example integrates three components:

- a frontend that provides the chat interface to the user,
- a FastAPI backend that answers queries over your data using LlamaIndex,
- a Phoenix server that collects and visualizes the traces emitted by the application.
The application is instrumented using OpenInference with a single instrumentation call:
```python
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import SimpleSpanProcessor


def instrument():
    resource = Resource(attributes={})
    tracer_provider = trace_sdk.TracerProvider(resource=resource)
    # Export spans over OTLP/HTTP to the Phoenix collector.
    span_exporter = OTLPSpanExporter(endpoint="http://phoenix:6006/v1/traces")
    span_processor = SimpleSpanProcessor(span_exporter=span_exporter)
    tracer_provider.add_span_processor(span_processor=span_processor)
    trace_api.set_tracer_provider(tracer_provider=tracer_provider)
    # Patch LlamaIndex so its operations emit OpenInference spans.
    LlamaIndexInstrumentor().instrument()
```
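Note that `SimpleSpanProcessor` exports each span synchronously as soon as it ends, which keeps this example easy to follow; for a production deployment you would typically swap in the SDK's `BatchSpanProcessor` (from the same `opentelemetry.sdk.trace.export` module) to export spans in the background.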
By calling `instrument()` prior to starting the FastAPI server, traces for LlamaIndex will be sent to the Phoenix server, providing deep observability into the underlying behavior of the application. This includes all of the context retrieved for a given query, as well as other relevant steps, such as reranking and synthesis, that might occur before the final LLM response is returned to the user.
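As a concrete illustration, here is a minimal sketch of how `instrument()` might be wired into the backend's entry point. The `main.py` file name, the `observability` module, and the port are assumptions for illustration, not necessarily the layout generated by `create-llama`:

```python
# main.py - hypothetical entry point; the real backend layout may differ.
import uvicorn
from fastapi import FastAPI

from observability import instrument  # assumed module holding instrument() above

# Instrument before the app and any LlamaIndex objects are created,
# so the very first query is already traced.
instrument()

app = FastAPI()

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```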
To learn more about LlamaIndex, take a look at the LlamaIndex documentation. You can also check out the LlamaIndexTS GitHub repository - your feedback and contributions are welcome!
First, start up the backend as described in the backend README. You will need to provide your OpenAI API key, either via the `OPENAI_API_KEY` environment variable or a `.env` file.

Second, run the development server of the frontend as described in the frontend README.

Open http://localhost:3000 with your browser to use the chat interface to your RAG application. Traces can be viewed using the Phoenix UI.
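Besides browsing traces in the UI, you can pull them into a notebook with the `arize-phoenix` Python client. This is a hedged sketch that assumes the package is installed and that Phoenix's port 6006 is mapped to localhost:

```python
# A quick look at the collected spans; assumes the arize-phoenix package
# is installed and the Phoenix server is reachable on localhost:6006.
import phoenix as px

client = px.Client(endpoint="http://localhost:6006")
spans = client.get_spans_dataframe()  # one row per span, as a pandas DataFrame
print(spans.head())
```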
Alternatively, you can run the whole stack with Docker Compose:

1. Put your documents in `./backend/data` to build indexes over.
2. Follow `backend/README.md` to install LlamaIndex using poetry and generate an index (see the sketch after this list for roughly what that entails).
3. Set the `OPENAI_API_KEY` environment variable or create a `.env` file alongside `compose.yml`.
4. Run `docker compose up --build` to spin up services for the frontend, backend, and Phoenix.
5. Run `docker compose down` to spin down the services.
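For reference, index generation over `./backend/data` typically looks something like the following. This is a hedged sketch only: the backend's actual generate script, the import paths (which vary across llama-index versions), and the storage directory may all differ.

```python
# generate.py - illustrative only; mirrors the usual LlamaIndex indexing flow.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load every document placed in ./backend/data.
documents = SimpleDirectoryReader("./backend/data").load_data()

# Build a vector index and persist it so the backend can load it at startup.
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="./backend/storage")
```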