The Full Stack Blog
Call for posts!
We're just getting started with blogging, as we branch out from courses and live events. Contact us via email (team at fullstackdeeplearning dot com), via Twitter DM, or message charles_irl on Discord if you're interested in contributing!
Implement single node pipeline parallelism from scratch
xrsrke · 10/21/23, 8:28 AM
Breaking down parallelism in Megatron-LM
xariusrke and Charles Frye · 6/21/23, 8:54 PM
Notes on parallel training of large language models, Megatron-LM style.
Vanilla GPT-3 quality from an open source model on a single machine: GLM-130B
Charles Frye · 1/13/23, 2:43 AM
Notes from deploying GLM-130B, a large language model from Tsinghua KEG
3 posts total.