
General Introduction
Laminar is an open source AI engineering platform built from first principles. It helps users collect, understand, and use data to improve the quality of LLM (Large Language Model) applications. Laminar provides comprehensive observability, text analytics, evaluation, and prompt chain management capabilities to support users in building and optimizing complex AI products. Whether for data tracing, online evaluation, or dataset construction, Laminar offers powerful support for efficient AI development and deployment.
Its modern, open source technology stack (Rust, RabbitMQ, Postgres, ClickHouse, and more) ensures high performance and low overhead. Users can deploy quickly with Docker Compose, or use the fully featured hosted platform.

DEMO: https://www.lmnr.ai/


Function List
- Data tracing: Record each step of the LLM application's execution, collecting valuable data for better evaluation and fine-tuning.
- Online evaluation: Set up an LLM-as-a-judge or a Python script evaluator to score each received span.
- Dataset construction: Build datasets from trace data for evaluation, fine-tuning, and prompt engineering.
- Prompt chain management: Build and host complex prompt chains, including mixture-of-agents or self-reflective LLM pipelines.
- Open source and self-hosted: Completely open source and easy to self-host, up and running with just a few commands.
Usage Guide
Installation
- Clone the GitHub repository:
git clone https://github.com/lmnr-ai/lmnr
- Enter the project directory:
cd lmnr
- Start with Docker Compose:
docker compose up -d
Feature Guide
Data tracing
- Initialization: Import Laminar in your code and initialize it with the project API key:
from lmnr import Laminar, observe
Laminar.initialize(project_api_key="...")
- Annotating functions: Use the @observe decorator to mark functions that need to be traced:
@observe()
def my_function(): ...
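Conceptually, an @observe-style decorator wraps a function so that each call is recorded as a span (name, inputs, output, duration). The following is a minimal, self-contained sketch of that pattern, not Laminar's actual implementation; the span fields and the in-memory store are assumptions for illustration:

```python
import functools
import time

SPANS = []  # toy in-memory span store; Laminar would send spans to its backend instead

def observe_sketch(func):
    """Toy stand-in for an @observe-style decorator: records one span per call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        SPANS.append({
            "name": func.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@observe_sketch
def summarize(text):
    # placeholder for a real LLM call
    return text[:10]
```

Calling `summarize(...)` then leaves a span in `SPANS` describing the call, which is the raw material for evaluation and dataset construction later.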
Online Evaluation
- Setting up an evaluator: Configure an LLM-as-a-judge, or use a Python script evaluator to score and label each received span.
# Example (illustrative)
evaluator = LLMJudge()
evaluator.evaluate(span)
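A Python script evaluator of this kind is essentially a function that maps a span to a score and label. A minimal sketch follows; the span fields and the scoring rule are assumptions for illustration, not Laminar's API:

```python
def length_evaluator(span):
    """Toy evaluator: flag spans whose LLM output is empty or very short."""
    output = span.get("output", "")
    ok = len(output) >= 20
    return {"score": 1.0 if ok else 0.0,
            "label": "ok" if ok else "too_short"}

# Evaluate one recorded span (hypothetical structure)
span = {"name": "summarize", "output": "A short answer."}
result = length_evaluator(span)
```

In an online setup, such a function would run automatically against every span as it arrives, attaching its score and label to the trace.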
Dataset construction
- Creating datasets: Build datasets from trace data for subsequent evaluation and fine-tuning.
dataset = create_dataset_from_traces(traces)
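Conceptually, dataset construction flattens recorded spans into input/output records suitable for evaluation or fine-tuning. A toy sketch, with field names assumed rather than taken from Laminar's schema:

```python
def dataset_from_traces(traces):
    """Toy converter: turn trace spans into (input, expected_output) records."""
    records = []
    for trace in traces:
        for span in trace["spans"]:
            records.append({
                "input": span["input"],
                "expected_output": span["output"],
            })
    return records

# One hypothetical trace containing a single span
traces = [{"spans": [{"input": "Translate: chat", "output": "cat"}]}]
dataset = dataset_from_traces(traces)
```

The resulting records can then be replayed against a new prompt or model version and scored with the same evaluators.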
Prompt Chain Management
- Building a prompt chain: Build complex prompt chains, including mixture-of-agents or self-reflective LLM pipelines.
chain = PromptChain()
chain.add_prompt(prompt)
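The chain abstraction can be sketched as an ordered sequence of steps, where each step's output feeds the next. This is a toy illustration of the idea; the `PromptChain` class here is hypothetical and not Laminar's API:

```python
class PromptChain:
    """Toy prompt chain: each step is a function from text to text."""
    def __init__(self):
        self.steps = []

    def add_prompt(self, step):
        self.steps.append(step)
        return self  # allow chaining calls

    def run(self, text):
        # Pipe the text through every step in order
        for step in self.steps:
            text = step(text)
        return text

chain = PromptChain()
chain.add_prompt(lambda t: f"Summary of: {t}")  # stand-in for an LLM call
chain.add_prompt(lambda t: t.upper())           # stand-in for a second step
```

A self-reflective pipeline would add a step that critiques the previous output and feeds the critique back, following the same text-in, text-out contract.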
Self-Hosting
- Self-hosting steps: Make sure Docker and Docker Compose are installed in your environment; then self-hosting takes just a few commands:
git clone https://github.com/lmnr-ai/lmnr
cd lmnr
docker compose up -d