Building High-Performance LLM Applications on AMD GPUs with Lamini
LLM Security: Lamini's Air-Gapped Solution for Government and High-Security Deployments
Accelerating Lamini Memory Tuning on NVIDIA GPUs
Meta x Lamini: Tune Llama 3 to query enterprise data safely and accurately
Introducing Lamini Memory Tuning: 95% LLM Accuracy, 10x Fewer Hallucinations
How a Fortune 500 slashed hallucinations to create 94.7% accurate LLM agents for SQL
Introducing Lamini Inference with 52x more RPM than vLLM
Copy.ai Automates Content Categorization with Lamini
Evaluating Your LLM in Three Simple Steps
Multi-node LLM Training on AMD GPUs (Collaboration, 6 min)
Lamini & AMD: Paving the Road to GPU-Rich Enterprise LLMs (Collaboration)
The Battle Between Prompting and Finetuning (Collaboration)
Introducing Lamini, the LLM Platform for Rapidly Customizing Models (Collaboration, 8 min)
Guarantee Valid JSON Output with Lamini (Management)
Finetuning LLMs with our CEO Sharon Zhou & Andrew Ng (Management)
Free, Fast and Furious Finetuning (Management, 6 min)
Lamini LLM Finetuning on AMD ROCm™: A Technical Recipe (Productivity)
One Billion Times Faster Finetuning with Lamini PEFT (Productivity)
How to specialize general LLMs to private data (Productivity)