# OpenLLaMA - LLaMA의 개방형 복제본

> Clean Markdown view of GeekNews topic #9112. Use the original source for factual precision when an external source URL is present.

## Metadata

- GeekNews HTML: [https://news.hada.io/topic?id=9112](https://news.hada.io/topic?id=9112)
- GeekNews Markdown: [https://news.hada.io/topic/9112.md](https://news.hada.io/topic/9112.md)
- Type: news
- Author: [xguru](https://news.hada.io/@xguru)
- Published: 2023-05-05T10:16:01+09:00
- Updated: 2023-05-05T10:16:01+09:00
- Original source: [github.com/openlm-research](https://github.com/openlm-research/open_llama)
- Points: 20
- Comments: 1

## Topic Body

- A replica of LLaMA built under the Apache license so it can be used for other purposes
- Trained on the RedPajama dataset released by Together
- Trained with EasyLM, a JAX-based pipeline
- Releases OpenLLaMA 7B checkpoints trained on 200B/300B tokens

## Comments

### Comment 15937

- Author: xguru
- Created: 2023-05-05T10:17:02+09:00
- Points: 2

An HN comment posted the commands for "running OpenLLaMA with llama.cpp on 8GB of RAM":
https://news.ycombinator.com/item?id=35798888
```bash
# Build llama.cpp and install its Python dependencies
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && cmake -B build && cmake --build build
python3 -m pip install -r requirements.txt

# Download the OpenLLaMA 7B preview weights (200B-token checkpoint)
cd models && git clone https://huggingface.co/openlm-research/open_llama_7b_preview_200bt/ && cd -

# Convert the transformers weights to GGML f16, then quantize down to q5_0
python3 convert-pth-to-ggml.py models/open_llama_7b_preview_200bt/open_llama_7b_preview_200bt_transformers_weights 1
./build/bin/quantize models/open_llama_7b_preview_200bt/open_llama_7b_preview_200bt_transformers_weights/ggml-model-f16.bin models/open_llama_7b_preview_200bt_q5_0.ggml q5_0

# Run inference with the quantized model (--mlock keeps the weights resident in RAM)
./build/bin/main -m models/open_llama_7b_preview_200bt_q5_0.ggml --ignore-eos -n 1280 -p "Building a website can be done in 10 simple steps:" --mlock
```
