SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
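For readers new to low-bit quantization, here is a minimal, hypothetical sketch of symmetric per-tensor INT8 weight quantization in PyTorch (the `quantize_int8`/`dequantize` helpers are illustrative, not this library's API); real toolkits add per-channel scales, calibration, and dedicated kernels for the other formats listed above.

```python
import torch

def quantize_int8(w: torch.Tensor):
    # Symmetric per-tensor quantization: map the largest absolute
    # weight onto the INT8 range [-127, 127] via a single scale.
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    # Recover an approximate float tensor from the INT8 codes.
    return q.to(torch.float32) * scale

w = torch.randn(4, 4)
q, s = quantize_int8(w)
print((w - dequantize(q, s)).abs().max())  # worst-case rounding error
```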
A curated list of resources on Efficient Large Language Models
A coding-free framework built on PyTorch for reproducible deep learning studies. 🏆 25 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented so far. 🎁 Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.
A treasure chest for visual classification and recognition powered by PaddlePaddle
Multi-teacher cross-modal knowledge distillation for unimodal brain tumor segmentation
The implementation code of our paper "Learning Generalizable Models for Vehicle Routing Problems via Knowledge Distillation", accepted at NeurIPS 2022.
Gather research papers, corresponding code (where available), reading notes, and other related materials about hot 🔥🔥🔥 fields in deep-learning-based Computer Vision.
Collection of AWESOME vision-language models for vision tasks
[CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models"
An Extendible (General) Continual Learning Framework based on PyTorch - official codebase of Dark Experience for General Continual Learning
Awesome Knowledge Distillation
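As background for the distillation repositories above, here is a minimal sketch of the classic Hinton-style knowledge distillation objective (the `kd_loss` helper and its default hyperparameters are illustrative, not taken from any repository listed here): the student matches the teacher's temperature-softened distribution while also fitting the hard labels.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # KL divergence between temperature-softened teacher and student
    # distributions, blended with standard cross-entropy on hard labels.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-target gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```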
A curated list of awesome papers on NLP, Computer Vision, Model Compression, XAI, Reinforcement Learning, Security, etc.
Deep Multimodal Guidance for Medical Image Classification: https://arxiv.org/pdf/2203.05683.pdf
Official PyTorch Code for "Dynamic Temperature Knowledge Distillation"
Full Wiki enables seamless access to Wikipedia content in multiple languages. It translates English Wikipedia, the most comprehensive knowledge base, into other languages, so users do not need to know the translated search term. This project is a proof of concept of how LLMs can tear down language barriers.
Distill knowledge from in-context learning into efficient LoRA adapters, enabling expert LLM performance with smaller context windows.
[AAAI 2023] Official PyTorch Code for "Curriculum Temperature for Knowledge Distillation"
AI book for everyone
Code for CVPR'24 Paper: Segment Any Event Streams via Weighted Adaptation of Pivotal Tokens
[CVPR 2024 Highlight] Logit Standardization in Knowledge Distillation
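As a rough illustration of the idea (not the authors' official code; `std_kd_loss` is a hypothetical helper), logit standardization z-scores both teacher and student logits before the softened-softmax KD loss, so distillation matches the shape of the two distributions rather than their absolute scale or shift.

```python
import torch
import torch.nn.functional as F

def standardize(logits, eps=1e-6):
    # Z-score each sample's logits to remove scale and shift
    # differences between teacher and student.
    mean = logits.mean(dim=-1, keepdim=True)
    std = logits.std(dim=-1, keepdim=True)
    return (logits - mean) / (std + eps)

def std_kd_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence on temperature-softened, standardized logits.
    s = F.log_softmax(standardize(student_logits) / T, dim=-1)
    t = F.softmax(standardize(teacher_logits) / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * (T * T)
```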