# IPEX-LLM - A PyTorch Library for Running LLMs on Intel CPUs and GPUs

> Clean Markdown view of GeekNews topic #14138. Use the original source for factual precision when an external source URL is present.

## Metadata

- GeekNews HTML: [https://news.hada.io/topic?id=14138](https://news.hada.io/topic?id=14138)
- GeekNews Markdown: [https://news.hada.io/topic/14138.md](https://news.hada.io/topic/14138.md)
- Type: news
- Author: [xguru](https://news.hada.io/@xguru)
- Published: 2024-04-04T09:46:01+09:00
- Updated: 2024-04-04T09:46:01+09:00
- Original source: [github.com/intel-analytics](https://github.com/intel-analytics/ipex-llm)
- Points: 8
- Comments: 0

## Topic Body

- Built on Intel Extension for PyTorch (IPEX)
- Optimized and verified for 50+ models (LLaMA2, Mistral, Gemma, LLaVA, Whisper, etc.)
- Runs with low latency on local iGPUs and on discrete GPUs such as Arc, Flex, and Max
- Integrates well with llama.cpp, HuggingFace, LangChain, LlamaIndex, and more
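The HuggingFace integration mentioned above works as a drop-in replacement for the `transformers` loading API. Below is a minimal sketch based on the project's README; it assumes `pip install ipex-llm`, an Intel GPU exposed as PyTorch's `xpu` device, and that the `ipex_llm.transformers.AutoModelForCausalLM` class and its `load_in_4bit` flag match the version you have installed (names may shift between releases, so treat this as illustrative rather than definitive).

```python
# Sketch: loading an LLM with IPEX-LLM's transformers-compatible API.
# Requires the ipex-llm package and an Intel GPU ("xpu" device); the model
# ID below is an example and needs appropriate access/download.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement

model_id = "meta-llama/Llama-2-7b-chat-hf"  # example model; swap in any supported one

# load_in_4bit=True applies IPEX-LLM's low-bit (INT4) quantization at load time,
# which is what enables low-latency inference on iGPUs and Arc/Flex/Max GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = model.to("xpu")  # move the quantized model to the Intel GPU

prompt = "What is Intel Extension for PyTorch?"
inputs = tokenizer(prompt, return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(inputs.input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the class mirrors the standard `transformers` interface, existing HuggingFace pipelines can usually be ported by changing only the import and adding the quantization flag.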

## Comments

_No public comments on this page._
