# CS 886: Recent Advances on Foundation Models

> Clean Markdown view of GeekNews topic #14324. Use the original source for factual precision when an external source URL is present.

## Metadata

- GeekNews HTML: [https://news.hada.io/topic?id=14324](https://news.hada.io/topic?id=14324)
- GeekNews Markdown: [https://news.hada.io/topic/14324.md](https://news.hada.io/topic/14324.md)
- Type: news
- Author: [xguru](https://news.hada.io/@xguru)
- Published: 2024-04-15T10:14:02+09:00
- Updated: 2024-04-15T10:14:02+09:00
- Original source: [cs.uwaterloo.ca](https://cs.uwaterloo.ca/~wenhuche/teaching/cs886/)
- Points: 15
- Comments: 0

## Topic Body

- An AI course from the University of Waterloo, sometimes called "the MIT of Canada"  
- Each chapter links to explanation slides, YouTube lecture videos, and references  
- The topic and reference lists alone make it a useful resource  
### Course Outline  
#### Introduction to Foundation Models  
1. Introduction to Foundation Models  
2. Course Logistics  
3. RNN & CNN  
4. NLP & CV  
#### Transformer Architecture  
5. Self-Attention & Transformer  
6. Efficient Transformer  
7. Parameter-Efficient Tuning  
8. Language Model Pretraining  
#### Large Language Models  
9. Large Language Model  
10. Scaling Law  
11. Instruction Tuning & RLHF  
12. Efficient LLM Training  
13. Efficient LLM Inference  
14. Compress and Sparsify LLM  
15. LLM Prompting  
#### (Large) Multimodal Models  
16. Vision Transformer   
17. Diffusion Model  
18. Image Generation  
19. Multimodal Model Pre-training  
20. Large Multimodal Model  
#### Augmenting Foundation Models  
21. Tool Augmentation   
22. Retrieval Augmentation
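The outline above revolves around the Transformer's self-attention mechanism (lecture 5). As a rough illustration of what that lecture covers, here is a minimal NumPy sketch of single-head scaled dot-product self-attention; all names, shapes, and the random weights are chosen here for illustration and are not taken from the course materials:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    x:   (seq_len, d_model) input embeddings
    w_*: (d_model, d_k) projection matrices (learned in a real model)
    """
    q = x @ w_q                                      # queries
    k = x @ w_k                                      # keys
    v = x @ w_v                                      # values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # similarities, scaled by sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                               # each position mixes all values

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
x = rng.standard_normal((seq_len, d_model))
out = self_attention(x,
                     rng.standard_normal((d_model, d_k)),
                     rng.standard_normal((d_model, d_k)),
                     rng.standard_normal((d_model, d_k)))
print(out.shape)  # (4, 8)
```

Real implementations add multiple heads, an output projection, and masking, which the course's later lectures (Efficient Transformer, Efficient LLM Inference) build on.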

## Comments
_No public comments on this page._
