# LLM (Large Language Models) FineTuning Projects and notes on common practical techniques


## Fine-tuning LLM (and YouTube Video Explanations)

| Notebook | 🟠 YouTube Video |
| --- | --- |
| Finetune Llama-3-8B with unsloth, 4-bit quantized, with ORPO | YouTube Link |
| Llama-3 finetuning on a custom dataset with unsloth | YouTube Link |
| CodeLLaMA-34B - Conversational Agent | YouTube Link |
| Inference with Yarn-Llama-2-13b-128k and KV Cache to answer a quiz on a very long textbook | YouTube Link |
| Mistral 7B FineTuning with PEFT and QLoRA | YouTube Link |
| Falcon finetuning on openassistant-guanaco | YouTube Link |
| Fine-tuning Phi-1.5 with PEFT and QLoRA | YouTube Link |
| Web scraping with Large Language Models (LLM) - AnthropicAI + LangChainAI | YouTube Link |

## Fine-tuning LLM

| Notebook | Colab |
| --- | --- |
| 📌 Gemma_2b_finetuning_ORPO_full_precision | Open In Colab |
| 📌 Jamba_Finetuning_Colab-Pro | Open In Colab |
| 📌 Finetune codellama-34B with QLoRA | Open In Colab |
| 📌 Mixtral Chatbot with Gradio | |
| 📌 togetherai API to run Mixtral | Open In Colab |
| 📌 Integrating TogetherAI with LangChain 🦙 | Open In Colab |
| 📌 Mistral-7B-Instruct_GPTQ - Finetune on finance-alpaca dataset 🦙 | Open In Colab |
| 📌 Mistral 7B FineTuning with DPO (Direct Preference Optimization) | Open In Colab |
| 📌 Finetune llama_2_GPTQ | |
| 📌 TinyLlama with Unsloth and RoPE Scaling on the dolly-15k dataset | Open In Colab |
| 📌 TinyLlama fine-tuning with Taylor Swift song lyrics | Open In Colab |
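Several of the fine-tuning notebooks above (e.g. the Mistral 7B PEFT/QLoRA ones) share the same basic recipe: load the base model 4-bit quantized and attach small LoRA adapters. Below is a minimal sketch of that setup, assuming the `transformers`, `peft` and `bitsandbytes` libraries; the model id and hyperparameters are placeholders, not the exact values used in any particular notebook.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder base model

# Load the base weights in 4-bit NF4 (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Attach trainable LoRA adapters on top of the frozen 4-bit weights.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                      # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # typically well under 1% of the total params
```

From here the wrapped model can be handed to any standard trainer (e.g. `transformers.Trainer` or `trl`'s SFT trainer) exactly like a full-precision model.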

## LLM Techniques and utils - Explained

### LLM Concepts

- 📌 DPO (Direct Preference Optimization) training and its datasets
- 📌 4-bit LLM Quantization with GPTQ
- 📌 Quantize with HF Transformers
- 📌 Understanding rank r in LoRA and the related matrix math
- 📌 Rotary Embeddings (RoPE), one of the fundamental building blocks of the Llama-2 implementation
- 📌 Chat Templates in HuggingFace (sketch below)
- 📌 How Mixtral 8x7B is a dense 47Bn-param model (worked arithmetic below)
- 📌 The concept of validation log perplexity in LLM training - a note on fundamentals
- 📌 Why we need to identify target_layers for LoRA/QLoRA (sketch below)
- 📌 Evaluate tokens per second
- 📌 Traversing nested attributes (or sub-modules) of a PyTorch module (sketch below)
- 📌 Implementation of the Sparse Mixture-of-Experts layer in PyTorch from the official Mistral repo
- 📌 Util method to extract a specific token's representation from the last hidden states of a transformer model (sketch below)
- 📌 Convert a PyTorch model's parameters and tensors to half-precision floating-point format
- 📌 Quantizing 🤗 Transformers models with the GPTQ method
- 📌 Quantize Mixtral-8x7B so it can run on a 24GB GPU
- 📌 What is GGML or GGUF in the world of Large Language Models?
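Chat templates are easiest to see with a concrete call. A minimal sketch, assuming the `tokenizer.apply_chat_template` API available in recent `transformers` releases; the model id is only an illustration:

```python
from transformers import AutoTokenizer

# Placeholder chat model; any tokenizer that ships a chat template works the same way.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What does LoRA's rank r control?"},
    {"role": "assistant", "content": "The width of the low-rank update matrices."},
    {"role": "user", "content": "And why does that matter for memory?"},
]

# Render the conversation into the exact prompt format the model was trained on,
# ending with the tokens that cue the assistant's next turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```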
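The "47Bn param" figure follows from the fact that only the feed-forward experts are replicated eight times, while attention, embeddings and norms are shared. A back-of-the-envelope check, assuming the published Mixtral-8x7B config values (32 layers, hidden size 4096, FFN size 14336, 8 experts, grouped-query attention with 8 KV heads of dim 128, 32k vocab):

```python
d_model, n_layers, d_ff = 4096, 32, 14336
n_experts, vocab, d_kv = 8, 32000, 8 * 128    # 8 KV heads of head-dim 128 (GQA)

attn   = 2 * d_model * d_model + 2 * d_model * d_kv   # q/o projections + k/v projections
expert = 3 * d_model * d_ff                           # SwiGLU FFN: gate, up, down
router = d_model * n_experts                          # tiny routing layer

per_layer  = attn + n_experts * expert + router
embeddings = 2 * vocab * d_model                      # input embeddings + untied lm_head

total = n_layers * per_layer + embeddings
print(f"{total / 1e9:.1f}B parameters")               # ~46.7B -- the "47Bn" figure
# Only 2 of the 8 experts fire per token, so roughly 13B parameters are active per token.
```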
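The target_layers and module-traversal notes both come down to navigating a model's dotted submodule paths. A self-contained sketch, with a toy `nn.Sequential` standing in for a real transformer; `get_submodule` is the built-in PyTorch equivalent of the manual `getattr` walk:

```python
from functools import reduce
import torch.nn as nn

# Toy nested module standing in for a transformer's block stack.
model = nn.Sequential(
    nn.Sequential(nn.Linear(8, 8), nn.ReLU()),
    nn.Linear(8, 2),
)

# 1) Enumerate candidate layers: LoRA's target_modules are just these dotted names.
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        print(name)                           # -> "0.0" and "1"

# 2) Resolve a dotted path back to the submodule it names.
def get_nested_attr(module: nn.Module, dotted_path: str) -> nn.Module:
    return reduce(getattr, dotted_path.split("."), module)

print(get_nested_attr(model, "0.0"))          # Linear(in_features=8, out_features=8, bias=True)
print(model.get_submodule("0.0"))             # built-in equivalent (PyTorch >= 1.9)
```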
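For the last-hidden-state utility, the core indexing trick can be shown with random tensors standing in for `model(..., output_hidden_states=True)` outputs; this is a sketch of the idea, not the repo's exact helper:

```python
import torch

# Fake outputs: batch of 2 sequences, length 5, hidden size 16, with right padding.
last_hidden_states = torch.randn(2, 5, 16)
attention_mask = torch.tensor([[1, 1, 1, 0, 0],
                               [1, 1, 1, 1, 1]])

def last_token_representation(hidden_states: torch.Tensor,
                              attention_mask: torch.Tensor) -> torch.Tensor:
    """Return each sequence's final non-padding token vector, shape (batch, hidden)."""
    last_idx = attention_mask.sum(dim=1) - 1          # index of the last real token per row
    batch_idx = torch.arange(hidden_states.size(0))
    return hidden_states[batch_idx, last_idx]

print(last_token_representation(last_hidden_states, attention_mask).shape)  # torch.Size([2, 16])
```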

## Other Smaller Language Models
