Biggest VIT Vellore Previous Year Question Papers Bank | Question Papers | PYQ | CATs | FATs | VIT Chennai | VIT Bhopal | VIT AP | 650+ Papers | 150+ Courses
Updated Jun 10, 2024
An open-source evaluation toolkit for large vision-language models (LVLMs), supporting GPT-4V, Gemini, QwenVLPlus, 50+ Hugging Face models, and 20+ benchmarks.
A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
A paper list of some recent Transformer-based CV works.
A curated list of efficient deep learning.
Implementation of the paper "Audio Mamba: Bidirectional State Space Model for Audio Representation Learning" in PyTorch.
pix2tex: Using a ViT to convert images of equations into LaTeX code.
Code for "V1T: Large-scale mouse V1 response prediction using a Vision Transformer"
Mimix: A Text Generation Tool and Pretrained Chinese Models
A list of question papers (CATs & FATs) for the MCA program at VIT Chennai, 2023 batch.
An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites
The Brain Tumor MRI Dataset from Kaggle, used for research on automated brain tumor detection and classification. Investigated methods include pre-trained models (VGG16, ResNet50, and ViT). 🧠🔍
Generative models, nano version, for fun. No SOTA here; nano first.
Open source implementation of "Vision Transformers Need Registers"
My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution"
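As context for the patch-based models listed above, the core ViT preprocessing step — splitting an image into fixed-size patches and flattening them into a token sequence — can be sketched in plain NumPy. This is a minimal illustration of the standard ViT patching scheme, not code taken from any of the listed repositories:

```python
import numpy as np

def image_to_patches(img: np.ndarray, patch: int) -> np.ndarray:
    """Split an (H, W, C) image into flattened non-overlapping patches.

    Returns an array of shape (num_patches, patch * patch * C): the token
    sequence a Vision Transformer linearly embeds before adding position
    encodings.
    """
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    # Reshape into a grid of patches, then flatten each patch into a vector.
    grid = img.reshape(h // patch, patch, w // patch, patch, c)
    grid = grid.transpose(0, 2, 1, 3, 4)        # (gh, gw, patch, patch, c)
    return grid.reshape(-1, patch * patch * c)  # (gh * gw, patch**2 * c)

# A 224x224 RGB image with 16x16 patches yields 196 tokens of length 768,
# matching the ViT-Base configuration.
tokens = image_to_patches(np.zeros((224, 224, 3)), 16)
print(tokens.shape)  # (196, 768)
```

Variable-resolution variants such as NaViT relax the fixed-grid assumption by packing patch sequences of differing lengths into one batch, but the per-image patching step is the same.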