# A Brief History of the LLaMA Models

> Clean Markdown view of GeekNews topic #9104. Use the original source for factual precision when an external source URL is present.

## Metadata

- GeekNews HTML: [https://news.hada.io/topic?id=9104](https://news.hada.io/topic?id=9104)
- GeekNews Markdown: [https://news.hada.io/topic/9104.md](https://news.hada.io/topic/9104.md)
- Type: news
- Author: [xguru](https://news.hada.io/@xguru)
- Published: 2023-05-04T10:31:01+09:00
- Updated: 2023-05-04T10:31:01+09:00
- Original source: [agi-sphere.com](https://agi-sphere.com/llama-models/)
- Points: 20
- Comments: 0

## Topic Body

- LLaMA, released in February 2023, has already spawned a wide range of fine-tuned models
- Overview and comparison of each model:
  - LLaMA: (7B, 13B, 33B, 65B), trained on CommonCrawl/C4/GitHub/Wikipedia/Gutenberg & Books3/ArXiv/StackExchange
  - Alpaca: 52k GPT-3 instructions
  - Vicuna: 70k ChatGPT conversations
  - Koala: 117k cleaned ChatGPT conversations
  - GPT4-x-Alpaca: 20k GPT-4 instructions
  - WizardLM: 70k instructions synthesized with ChatGPT/GPT-3
  - OpenAssistant: 600k human interactions (OpenAssistant Conversations)
- Software for running LLaMA models locally

## Comments

_No public comments on this page._
