ipex-llm

Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., …

Stars: 5973
Forks: 1201
Language: Python
Last Updated: May 01, 2024

Similar Repos