
pashanitw/llama3-and-friends-from-scratch


Implementation of llama3 and friends from scratch

Introduction

This repository is dedicated to implementing state-of-the-art large language models (LLMs) from scratch. Our focus is not only on building these models but also on demonstrating a range of advanced training techniques.

Techniques Planned for Inclusion:

  • SFT (Supervised Fine-Tuning)
  • Instruction Fine-Tuning
  • RLHF (Reinforcement Learning from Human Feedback)
  • DPO (Direct Preference Optimization)
  • LoRA (Low-Rank Adaptation of Large Language Models)
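To give a flavor of the last technique on the list, here is a minimal LoRA sketch: a linear layer whose pretrained weight stays frozen while a low-rank update `B @ A` is trained in its place. This is a hypothetical NumPy illustration, not the repository's actual implementation; class and parameter names are assumptions.

```python
import numpy as np

class LoRALinear:
    """Frozen linear layer plus a trainable low-rank update (LoRA sketch).

    Hypothetical illustration; the repo's real layers and names may differ.
    """
    def __init__(self, in_features, out_features, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen "pretrained" weight (random stand-in here)
        self.W = rng.standard_normal((out_features, in_features)) * 0.02
        # Trainable low-rank factors; B starts at zero so the
        # adapted layer initially matches the frozen layer exactly
        self.A = rng.standard_normal((r, in_features)) * 0.01
        self.B = np.zeros((out_features, r))
        self.scaling = alpha / r

    def __call__(self, x):
        # y = x W^T + scaling * (x A^T) B^T  -- only A and B would be trained
        return x @ self.W.T + self.scaling * (x @ self.A.T) @ self.B.T

layer = LoRALinear(in_features=16, out_features=8)
x = np.ones((2, 16))
y = layer(x)  # shape (2, 8); equals x @ W.T while B is still zero
```

Because `B` is initialized to zero, fine-tuning starts from the pretrained model's behavior and only the small `A`/`B` matrices (rank `r` instead of the full weight) need gradients and optimizer state.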

Project Status

🚧 Work in Progress 🚧

Inference Demonstration

Check out the Inference Notebook for llama3 model inference.
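At its core, LLM inference is an autoregressive sampling loop: feed the sequence so far, get next-token logits, sample, append, repeat. The sketch below shows that loop with a toy stand-in model; the notebook's actual llama3 API is different, and `logits_fn` here is an assumed interface for illustration only.

```python
import numpy as np

def generate(logits_fn, prompt_ids, max_new_tokens=8, temperature=1.0, seed=0):
    """Minimal autoregressive sampling loop (sketch, not the notebook's API).

    logits_fn(ids) -> next-token logits for the current token sequence.
    """
    rng = np.random.default_rng(seed)
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = logits_fn(ids)
        # Softmax over temperature-scaled logits
        z = logits / temperature
        probs = np.exp(z - z.max())
        probs /= probs.sum()
        ids.append(int(rng.choice(len(probs), p=probs)))
    return ids

# Toy "model": strongly prefers token (last_id + 1) mod VOCAB
VOCAB = 10
def toy_logits(ids):
    logits = np.full(VOCAB, -10.0)
    logits[(ids[-1] + 1) % VOCAB] = 10.0
    return logits

out = generate(toy_logits, [0], max_new_tokens=4)
```

Real inference adds a KV cache so each step reuses past attention states instead of re-running the full sequence, plus decoding tricks such as top-k/top-p filtering.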

This project is currently under active development. We are in the process of adding more models and enhancing our training methods. Please note that changes may occur frequently as we improve and expand our implementations.

Stay Tuned

Watch this space for updates as we continue to push the boundaries of what's possible with language models!
