
👋 Hi, I'm Aryan

I'm a graduate student at NYU pursuing a Master's in Computer Engineering, passionate about building efficient and scalable AI systems. I focus on LLM optimization, multimodal models, and open-source contributions, most recently to the 🤗 transformers library.

LinkedIn · GitHub · Email


πŸ› οΈ Technical Stack

  • Languages: Python (PyTorch, DeepSpeed, NumPy, scikit-learn, PySpark, TensorFlow), CUDA C++, C/C++, SQL
  • Domains: LLMs, Vision-Language Models, Quantization, Distributed Training (DDP), Recommender Systems
  • Tools: Docker, Slurm, Hugging Face Transformers, LangChain, Ollama, GCP, AWS, Spark, Airflow

🧠 Specializations

  • Quantization Techniques

    • SmoothQuant, Dynamic Quantization, Quantization-Aware Training (QAT)
    • Frameworks: PyTorch FX, ONNX Runtime, Hugging Face Optimum
  • Pruning Strategies

    • Filter/channel pruning, magnitude pruning, NetAdapt-style structured pruning (see the sketch after this list)
    • Latency-aware model slimming via FLOPs/accuracy trade-offs
  • Distributed Training

    • PyTorch Distributed Data Parallel (DDP), DeepSpeed
    • Mixed precision (FP16), gradient accumulation, multi-node cluster scaling (see the DDP sketch below)
  • Multimodal Systems

    • CLIP-like ViT-BERT architectures
    • Vision-Language alignment, CLIPScore evaluation, knowledge distillation
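
A minimal sketch of the magnitude pruning mentioned above, using PyTorch's built-in `torch.nn.utils.prune`; the toy model and the 30% sparsity target are illustrative placeholders, not a real pipeline:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model; layer sizes are illustrative placeholders.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# L1 (magnitude) unstructured pruning: zero out the 30% of weights
# with the smallest absolute value in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# Make the pruning permanent: remove the re-parametrization hooks
# and bake the mask into the weight tensors.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

# Fraction of all parameters now exactly zero (close to 30%,
# since weights dominate the biases).
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"sparsity: {zeros / total:.1%}")
```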

πŸ› οΈ Comfortable implementing research papers from scratch and profiling performance with tools like Weights & Biases and PyTorch Profiler.


🚀 Recent Work

🤗 Hugging Face Transformers


📸 Multimodal VQA Optimization

  • Developed a ViT+BERT architecture for Visual Question Answering.
  • Trained with QAT + DDP over 4× L4 GPUs for a 1.8× speed-up and 60% model compression (QAT sketch below).
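
A minimal sketch of eager-mode quantization-aware training in PyTorch; the small head below is a placeholder, not the actual VQA pipeline:

```python
import torch
import torch.nn as nn

class Head(nn.Module):
    """Placeholder head standing in for the real VQA model."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # quantize input
        self.fc1 = nn.Linear(768, 256)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(256, 10)
        self.dequant = torch.quantization.DeQuantStub()  # back to float

    def forward(self, x):
        return self.dequant(self.fc2(self.relu(self.fc1(self.quant(x)))))

model = Head()
# Attach a QAT config: fake-quantize weights and activations while training.
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
model_prepared = torch.quantization.prepare_qat(model.train())

# ... fine-tune model_prepared as usual (optionally wrapped in DDP) ...

# Convert the fake-quantized model into a real INT8 model for inference.
model_int8 = torch.quantization.convert(model_prepared.eval())
```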

🧠 DistilBERT Compression Pipeline

  • Reduced model size by 64% with dynamic quantization (sketch below).
  • Automated fine-tuning and benchmarking via Hugging Face tools.
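
A minimal sketch of post-training dynamic quantization on a DistilBERT checkpoint via PyTorch's standard `torch.quantization.quantize_dynamic`; the checkpoint name is illustrative, and the actual pipeline may differ:

```python
import os
import torch
from transformers import DistilBertForSequenceClassification

# Illustrative checkpoint, not necessarily the one used in the project.
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased"
)

# Dynamic quantization: Linear weights are stored in INT8 and
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m: torch.nn.Module) -> float:
    """Rough on-disk size via the serialized state dict."""
    torch.save(m.state_dict(), "tmp.pt")
    return os.path.getsize("tmp.pt") / 1e6

print(f"fp32: {size_mb(model):.0f} MB -> int8: {size_mb(quantized):.0f} MB")
```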

📈 What I'm Looking For

I'm currently open to:

  • Remote internships or research collaborations in LLM efficiency, model compression, or AI infrastructure
  • Open-source projects focused on cutting-edge ML research

📫 Contact


Let’s build something impactful together.

Pinned

  1. DistilBert-Optimization (Python)

  2. gpt (Python)

  3. SimCLR

  4. MultiModal-LLM (Jupyter Notebook)

  5. transformers (Python), forked from huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

  6. GAN (Python)
