Implementation of popular deep learning networks with TensorRT network definition API
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
Rankings include: Depth Anything, DPT, FutureDepth, GBDMF, GenPercept, LeReS, LightedDepth, LFVRT, Marigold, Metric3D, MiDaS, NeWCRFs, PatchFusion, UniDepth, ZoeDepth
InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.
YOLOX is a high-performance anchor-free YOLO, exceeding YOLOv3 through YOLOv5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO support. Documentation: https://yolox.readthedocs.io/
《Pytorch实用教程》(A Practical PyTorch Tutorial, 2nd edition): covers everything from a zero-background introduction to CV, NLP, and LLM applications, through to advanced engineering and production deployment. With the help of this book, readers can easily master PyTorch and become excellent deep learning engineers.
BoxMOT: pluggable SOTA tracking modules for segmentation, object detection and pose estimation models
TensorRT Extension for Stable Diffusion Web UI (Enhanced)
A library for training, compressing, and deploying computer vision models (including ViT) on edge devices
Function: a unified C/C++ API for running Python functions on desktop, mobile, web, and in the cloud. Register at https://fxn.ai
Efficient CPU/GPU/Vulkan ML Runtimes for VapourSynth (with built-in support for waifu2x, DPIR, RealESRGANv2/v3, Real-CUGAN, RIFE, SCUNet and more!)
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Python Computer Vision & Video Analytics Framework With Batteries Included
Deep Learning API and Server in C++14, with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost, and t-SNE
This repo contains model compression (using TensorRT) and documentation for running various deep learning models on NVIDIA Jetson Orin and Nano (aarch64 architectures)
Nitro is a C++ inference server built on top of TensorRT-LLM, with an OpenAI-compatible API. Run blazing-fast inference on NVIDIA GPUs. Used in Jan