Deep Learning API and Server in C++14 with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and TSNE
Updated Jun 10, 2024 - C++
InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.
C++/C TensorRT inference examples for models created with PyTorch/JAX/TF
Based on TensorRT v8.0+, deploys YOLOv8 detection, pose, and segmentation models with C++ and Python APIs.
Converts YOLO models to ONNX and TensorRT, adding batched NMS.
Based on TensorRT 8.2.4, compares inference speed across different TensorRT APIs.
Based on TensorRT v8.2, builds the YOLOv5-v5.0 network from scratch to speed up YOLOv5-v5.0 inference
Using TensorRT for Inference Model Deployment.
Model conversion and inference code for different backends
ViTPose without MMCV dependencies
An automated toolkit for analyzing and modifying PyTorch model structures, including a model compression algorithm library with automatic structure analysis
Dockerized TensorRT inference engine with an ONNX model conversion tool and C++ implementations of ResNet50 and Ultraface pre- and post-processing
A CLI tool to convert Keras models to ONNX models and TensorRT engines
Yolov5 TensorRT Implementations
The real-time instance segmentation algorithm SparseInst running on TensorRT and ONNX
tensorrt-toy code
Export a TensorRT engine from ONNX and run inference with Python
This project is a notebook for learning TensorRT.
Conveniently converts the pretrained CRAFT text-detection PyTorch model directly into a TensorRT engine, without an intermediate ONNX step
Improves inference performance for CRAFT text detection using TensorRT. Implements modules to convert PyTorch -> ONNX -> TensorRT, with dynamic-shape (multi-size input) inference.