Quantization fails with RuntimeError: zero_point must be between quant_min and quant_max. #126266

Open
XiudingCai opened this issue May 15, 2024 · 0 comments
Labels
oncall: quantization Quantization support in PyTorch

Comments


XiudingCai commented May 15, 2024

🐛 Describe the bug

The strange thing is that when I train for only 100 epochs in FP32, the model quantizes normally; when I train for 200 or more epochs and then try to quantize, the model reports the following error:

RuntimeError: zero_point must be between quant_min and quant_max.

I double-checked #89619, but it did not help.
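
Given the enormous eval loss in the QAT log below (~2.6e16), I suspect the longer-trained FP32 weights or BatchNorm statistics contain extreme or non-finite values, which the moving-average observer would turn into an out-of-range zero_point. A quick sanity check might look like this (a sketch, not part of the script below; model_filepath is the FP32 checkpoint path defined in main()):

# Sketch: scan the FP32 checkpoint for non-finite or extreme values before QAT prep.
import torch

state_dict = torch.load(model_filepath, map_location="cpu")
for name, tensor in state_dict.items():
    if tensor.is_floating_point():
        if not torch.isfinite(tensor).all():
            print(f"{name}: contains inf/NaN")
        else:
            print(f"{name}: min={tensor.min():.3e}, max={tensor.max():.3e}")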

QConfig(activation=functools.partial(<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>, observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>, quant_min=0, quant_max=255, reduce_range=True){}, weight=functools.partial(<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>, observer=<class 'torch.ao.quantization.observer.MovingAveragePerChannelMinMaxObserver'>, quant_min=-128, quant_max=127, dtype=torch.qint8, qscheme=torch.per_channel_symmetric){})
/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/ao/quantization/observer.py:220: UserWarning: Please use quant_min and quant_max to specify the range for observers.                     reduce_range will be deprecated in a future release of PyTorch.
  warnings.warn(
Training QAT Model...
/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/ao/quantization/fake_quantize.py:353: UserWarning: _aminmax is deprecated as of PyTorch 1.11 and will be removed in a future release. Use aminmax instead. This warning will only appear once per process. (Triggered internally at ../aten/src/ATen/native/ReduceAllOps.cpp:72.)
  return torch.fused_moving_avg_obs_fake_quant(
/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/ao/quantization/fake_quantize.py:353: UserWarning: _aminmax is deprecated as of PyTorch 1.11 and will be removed in a future release. Use aminmax instead. This warning will only appear once per process. (Triggered internally at ../aten/src/ATen/native/TensorCompare.cpp:677.)
  return torch.fused_moving_avg_obs_fake_quant(
Epoch: 000 Eval Loss: 25907442409923040.000 Eval Acc: 0.393
Traceback (most recent call last):
  File "/media/cas/c0e01fa5-7817-4191-9174-219590e5d093/EXPLogNo.2/SnoringDetection/QAT/1.train_qat.py", line 599, in <module>
    main(QAT_only=True)
  File "/media/cas/c0e01fa5-7817-4191-9174-219590e5d093/EXPLogNo.2/SnoringDetection/QAT/1.train_qat.py", line 506, in main
    train_model(model=quantized_model,
  File "/media/cas/c0e01fa5-7817-4191-9174-219590e5d093/EXPLogNo.2/SnoringDetection/QAT/1.train_qat.py", line 301, in train_model
    outputs = model(inputs)
              ^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torchvision/models/quantization/mobilenetv2.py", line 54, in forward
    x = self._forward_impl(x)
        ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torchvision/models/mobilenetv2.py", line 166, in _forward_impl
    x = self.features(x)
        ^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
            ^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
            ^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    result = forward_call(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/ao/nn/intrinsic/qat/modules/conv_fused.py", line 585, in forward
    return F.relu(ConvBn2d._forward(self, input))
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/ao/nn/intrinsic/qat/modules/conv_fused.py", line 101, in _forward
    return self._forward_approximate(input)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/ao/nn/intrinsic/qat/modules/conv_fused.py", line 114, in _forward_approximate
    scaled_weight = self.weight_fake_quant(self.weight * scale_factor.reshape(weight_shape))
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ccc/miniconda3/envs/luckfox/lib/python3.11/site-packages/torch/ao/quantization/fake_quantize.py", line 353, in forward
    return torch.fused_moving_avg_obs_fake_quant(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: `zero_point` must be between `quant_min` and `quant_max`.
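
Separately, the deprecation warning above suggests passing quant_min/quant_max to the observers explicitly instead of relying on reduce_range. A hand-written qconfig along these lines might look like the sketch below (it mirrors the printed default, so it likely won't fix the root cause, but it removes the deprecated flag):

# Sketch: spell out the ranges explicitly rather than using reduce_range.
import torch
from torch.ao.quantization import QConfig
from torch.ao.quantization.fake_quantize import FusedMovingAvgObsFakeQuantize
from torch.ao.quantization.observer import (
    MovingAverageMinMaxObserver,
    MovingAveragePerChannelMinMaxObserver,
)

qconfig = QConfig(
    activation=FusedMovingAvgObsFakeQuantize.with_args(
        observer=MovingAverageMinMaxObserver,
        quant_min=0,
        quant_max=127,  # reduced range, equivalent to reduce_range=True for quint8
        dtype=torch.quint8,
    ),
    weight=FusedMovingAvgObsFakeQuantize.with_args(
        observer=MovingAveragePerChannelMinMaxObserver,
        quant_min=-128,
        quant_max=127,
        dtype=torch.qint8,
        qscheme=torch.per_channel_symmetric,
    ),
)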

Here is my code:

import os
import random
import time
import numpy as np
from PIL import Image
import torch
import torch.nn as nn
from torchvision import datasets, transforms

from torchvision.models import resnet18, mobilenet_v2
from torchvision.models.quantization import resnet18 as QuantizedResNet18
from torchvision.models.quantization import mobilenet_v2 as QuantizedMobileNetV2
from torch.utils.data import Dataset

import onnx
import onnxsim

crop_size = (400, 400)
input_size = (1, 3, 400, 400)
# model_name = 'resnet18'
model_name = 'mobilenet_v2'

max_epoch = 200
qat_epoch = 10
# learning_rate = 0.005
learning_rate = 0.01
train_batch_size = 32
eval_batch_size = 2

train_path = "./dataset/JPEG/train_list.txt"
test_path = "./dataset/JPEG/test_list.txt"

optimizer_conf = {
    "optimizer_name": "AdamW",
    "learning_rate": learning_rate,
    "weight_decay": 1e-6
}
scheduler_conf = {
    "scheduler_name": "WarmupCosineSchedulerLR",
    "max_epoch": max_epoch,
    "min_lr": 1e-5,
    "max_lr": learning_rate,
    "warmup_epoch": max_epoch // 10
}

if 'resnet18' == model_name:
    ModelFloat32 = resnet18
    ModelInt8 = QuantizedResNet18
else:
    ModelFloat32 = mobilenet_v2
    ModelInt8 = QuantizedMobileNetV2


class JPEGDataset(Dataset):
    def __init__(self, txt_file, root_path=None, transform=None):
        self.transform = transform
        self.samples = []
        root_path = os.path.join(os.getcwd(), 'dataset') if root_path is None else root_path
        with open(txt_file, 'r') as file:
            lines = file.readlines()
            for line in lines:
                image_path, label = line.strip().split('\t')
                image_path = os.path.join(root_path, image_path)
                label = int(label.strip('\n'))
                self.samples.append((image_path, label))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image_path, label = self.samples[idx]
        image = Image.open(image_path) 

        if self.transform:
            image = self.transform(image)

        return image, label


def get_optimizer(model, optimizer_name="AdamW", learning_rate=0.01, weight_decay=1e-6):
    if optimizer_name == 'Adam':
        optimizer = torch.optim.Adam(params=model.parameters(),
                                     betas=(0.9, 0.999),
                                     lr=learning_rate,
                                     weight_decay=weight_decay,
                                     eps=1e-8,
                                     amsgrad=False)
    elif optimizer_name == 'AdamW':
        optimizer = torch.optim.AdamW(params=model.parameters(),
                                      lr=learning_rate,
                                      weight_decay=weight_decay)
    elif optimizer_name == 'SGD':
        optimizer = torch.optim.SGD(params=model.parameters(),
                                    momentum=0.9,
                                    lr=learning_rate,
                                    weight_decay=weight_decay)
    else:
        raise ValueError(f"Unsupported optimizer: {optimizer_name}")
    return optimizer


def get_scheduler(optimizer, train_loader, scheduler_name="WarmupCosineSchedulerLR", max_epoch=200, **scheduler_args):
    from torch.optim.lr_scheduler import CosineAnnealingLR
    from macls.utils.scheduler import WarmupCosineSchedulerLR
    if scheduler_name == 'CosineAnnealingLR':
        max_step = int(max_epoch * 1.2) * len(train_loader)
        scheduler = CosineAnnealingLR(optimizer=optimizer,
                                      T_max=max_step,
                                      **scheduler_args)
    elif scheduler_name == 'WarmupCosineSchedulerLR':
        scheduler = WarmupCosineSchedulerLR(optimizer=optimizer,
                                            fix_epoch=max_epoch,
                                            step_per_epoch=len(train_loader),
                                            **scheduler_args)
    else:
        raise ValueError(f"Unsupported scheduler: {scheduler_name}")
    return scheduler


def set_random_seeds(random_seed=0):
    torch.manual_seed(random_seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    np.random.seed(random_seed)
    random.seed(random_seed)


def prepare_dataloader(num_workers=8,
                       train_batch_size=128,
                       eval_batch_size=256):
    train_transform = transforms.Compose([
        transforms.RandomCrop(crop_size),
        transforms.ToTensor(),
    ])

    test_transform = transforms.Compose([
        transforms.CenterCrop(crop_size),
        transforms.ToTensor(),
    ])

    train_set = JPEGDataset(txt_file=train_path, transform=train_transform)
    test_set = JPEGDataset(txt_file=test_path, transform=test_transform)

    train_sampler = torch.utils.data.RandomSampler(train_set)
    test_sampler = torch.utils.data.SequentialSampler(test_set)

    train_loader = torch.utils.data.DataLoader(dataset=train_set,
                                               batch_size=train_batch_size,
                                               sampler=train_sampler,
                                               num_workers=num_workers)

    test_loader = torch.utils.data.DataLoader(dataset=test_set,
                                              batch_size=eval_batch_size,
                                              sampler=test_sampler,
                                              num_workers=num_workers)

    return train_loader, test_loader


def evaluate_model(model, test_loader, device, criterion=None):
    model.eval()
    model.to(device)

    running_loss = 0
    running_corrects = 0

    for inputs, labels in test_loader:

        inputs = inputs.to(device)
        labels = labels.to(device)

        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)

        if criterion is not None:
            loss = criterion(outputs, labels).item()
        else:
            loss = 0

        # statistics
        running_loss += loss * inputs.size(0)
        running_corrects += torch.sum(preds == labels.data)

    eval_loss = running_loss / len(test_loader.dataset)
    eval_accuracy = running_corrects / len(test_loader.dataset)

    return eval_loss, eval_accuracy


def train_model(model,
                train_loader,
                test_loader,
                device,
                learning_rate=1e-1,
                num_epochs=200):
    # The training configurations were not carefully selected.

    criterion = nn.CrossEntropyLoss()

    model.to(device)

    optimizer = get_optimizer(model, **optimizer_conf)
    scheduler = get_scheduler(optimizer, train_loader, **scheduler_conf)

    # Evaluation
    model.eval()
    eval_loss, eval_accuracy = evaluate_model(model=model,
                                              test_loader=test_loader,
                                              device=device,
                                              criterion=criterion)
    print("Epoch: {:03d} Eval Loss: {:.3f} Eval Acc: {:.3f}".format(0, eval_loss, eval_accuracy))

    for epoch in range(num_epochs):
        epoch_start = time.time()

        # Training
        model.train()

        running_loss = 0
        running_corrects = 0

        for inputs, labels in train_loader:
            inputs = inputs.to(device)
            labels = labels.to(device)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # statistics
            running_loss += loss.item() * inputs.size(0)
            running_corrects += torch.sum(preds == labels.data)

            # Set learning rate scheduler
            scheduler.step()

        train_loss = running_loss / len(train_loader.dataset)
        train_accuracy = running_corrects / len(train_loader.dataset)

        # Evaluation
        model.eval()
        eval_loss, eval_accuracy = evaluate_model(model=model,
                                                  test_loader=test_loader,
                                                  device=device,
                                                  criterion=criterion)

        print(
            "Epoch: {:03d} Train Loss: {:.3f} Train Acc: {:.3f} Eval Loss: {:.3f} Eval Acc: {:.3f}, lr: {:.8f}, ETA: {}hrs {:.1f}mins"
            .format(
                epoch + 1, train_loss, train_accuracy, eval_loss, eval_accuracy,
                scheduler.get_last_lr()[0],
                (num_epochs - epoch - 1) * (time.time() - epoch_start) // 3600,
                ((num_epochs - epoch - 1) * (time.time() - epoch_start) % 3600) / 60,
            ),
        )
        if (epoch + 1) % 10 == 0:
            save_model(model=model.to('cpu'), model_dir='./saved_models',
                       model_filename=f'ep_{epoch + 1}_{eval_accuracy:.1f}.pt')
            # Move back to the training device rather than a hardcoded 'cuda:0'.
            model.to(device)

    return model


def calibrate_model(model, loader, device=torch.device("cpu:0")):
    model.to(device)
    model.eval()

    for inputs, labels in loader:
        inputs = inputs.to(device)
        labels = labels.to(device)
        _ = model(inputs)


def measure_inference_latency(model,
                              device,
                              input_size=input_size,
                              num_samples=100,
                              num_warmups=10):
    model.to(device)
    model.eval()

    x = torch.rand(size=input_size).to(device)

    with torch.no_grad():
        for _ in range(num_warmups):
            _ = model(x)
    # Only synchronize when timing on a CUDA device; this also keeps the
    # function usable on CPU-only machines.
    if device.type == 'cuda':
        torch.cuda.synchronize()

    with torch.no_grad():
        start_time = time.time()
        for _ in range(num_samples):
            _ = model(x)
            if device.type == 'cuda':
                torch.cuda.synchronize()
        end_time = time.time()
    elapsed_time = end_time - start_time
    elapsed_time_ave = elapsed_time / num_samples

    return elapsed_time_ave


def save_model(model, model_dir, model_filename):
    if not os.path.exists(model_dir):
        os.makedirs(model_dir)
    model_filepath = os.path.join(model_dir, model_filename)
    torch.save(model.state_dict(), model_filepath)


def load_model(model, model_filepath, device):
    model.load_state_dict(torch.load(model_filepath, map_location=device))

    return model


def save_torchscript_model(model, model_dir, model_filename):
    if not os.path.exists(model_dir):
        os.makedirs(model_dir)
    model_filepath = os.path.join(model_dir, model_filename)
    torch.jit.save(torch.jit.script(model), model_filepath)


def load_torchscript_model(model_filepath, device):
    model = torch.jit.load(model_filepath, map_location=device)

    return model


def create_model(num_classes=10):
    model = ModelFloat32(num_classes=num_classes, weights=None)
    return model


def main():
    random_seed = 3407
    num_classes = 2
    cuda_device = torch.device("cuda:0")
    cpu_device = torch.device("cpu:0")

    model_dir = "saved_models"
    os.makedirs(model_dir, exist_ok=True)
    model_filename = model_name + "_deepsleep.pt"
    quantized_model_filename = model_name + "_quantized_deepsleep.pt"
    model_filepath = os.path.join(model_dir, model_filename)
    quantized_model_filepath = os.path.join(model_dir,
                                            quantized_model_filename)

    set_random_seeds(random_seed=random_seed)

    # Create an untrained model.
    model = create_model(num_classes=num_classes)

    train_loader, test_loader = prepare_dataloader(num_workers=8,
                                                   train_batch_size=train_batch_size,
                                                   eval_batch_size=eval_batch_size)
    # Train model.
    print("Training Model...")
    model = train_model(model=model,
                        train_loader=train_loader,
                        test_loader=test_loader,
                        device=cuda_device,
                        learning_rate=learning_rate,
                        num_epochs=max_epoch)
    # Save model.
    save_model(model=model.to(cpu_device), model_dir=model_dir, model_filename=model_filename)

    # Prepare the model for quantization aware training. This inserts observers in
    # the model that will observe activation tensors during calibration.
    quantized_model = ModelInt8(num_classes=num_classes)
    quantized_model.load_state_dict(torch.load(model_filepath))
    quantized_model.fuse_model()
    quantization_config = torch.ao.quantization.get_default_qat_qconfig("x86")
    quantized_model.qconfig = quantization_config

    # Print quantization configurations
    print(quantized_model.qconfig)

    torch.ao.quantization.prepare_qat(quantized_model, inplace=True)

    # Use training data for calibration.
    print("Training QAT Model...")
    quantized_model.train()
    train_model(model=quantized_model,
                train_loader=train_loader,
                test_loader=test_loader,
                device=cuda_device,
                learning_rate=learning_rate,
                num_epochs=qat_epoch)
    quantized_model.to(cpu_device)
    print("Training QAT Model..., Done!")
    # Using high-level static quantization wrapper

    quantized_model = torch.ao.quantization.convert(quantized_model.eval(), inplace=True)

    quantized_model.eval()

    # quantized model export to onnx
    onnx_path = quantized_model_filepath.replace('.pt', '.onnx')
    img = torch.rand(*input_size).float()
    print(f"quantized model export to onnx..., onnx_path is {onnx_path}")
    torch.onnx.export(quantized_model, img, onnx_path, input_names=['input'], output_names=['output'], opset_version=17)

    # check onnx model
    onnx_model = onnx.load(onnx_path)
    onnx.checker.check_model(onnx_model)
    # simplify onnx model
    try:
        print('Starting to simplify ONNX...')
        onnx_model, check = onnxsim.simplify(onnx_model)
        assert check, 'assert check failed'
    except Exception as e:
        print('Simplifier failure:', e)
    onnx.save(onnx_model, onnx_path)

    print(f"save_torchscript_model into {os.path.join(model_dir, quantized_model_filename)}...")
    # Save quantized model.
    save_torchscript_model(model=quantized_model,
                           model_dir=model_dir,
                           model_filename=quantized_model_filename)

    # Load quantized model.
    quantized_jit_model = load_torchscript_model(
        model_filepath=quantized_model_filepath, device=cpu_device)
    print(f"loading quantized model from {quantized_model_filepath}...")

    _, fp32_eval_accuracy = evaluate_model(model=model,
                                           test_loader=test_loader,
                                           device=cpu_device,
                                           criterion=None)
    _, int8_eval_accuracy = evaluate_model(model=quantized_jit_model,
                                           test_loader=test_loader,
                                           device=cpu_device,
                                           criterion=None)

    print("FP32 evaluation accuracy: {:.3f}".format(fp32_eval_accuracy))
    print("INT8 evaluation accuracy: {:.3f}".format(int8_eval_accuracy))

    fp32_cpu_inference_latency = measure_inference_latency(model=model,
                                                           device=cpu_device,
                                                           input_size=input_size,
                                                           num_samples=100)
    int8_cpu_inference_latency = measure_inference_latency(
        model=quantized_model,
        device=cpu_device,
        input_size=input_size,
        num_samples=100)
    int8_jit_cpu_inference_latency = measure_inference_latency(
        model=quantized_jit_model,
        device=cpu_device,
        input_size=input_size,
        num_samples=100)
    fp32_gpu_inference_latency = measure_inference_latency(model=model,
                                                           device=cuda_device,
                                                           input_size=input_size,
                                                           num_samples=100)

    print("FP32 CPU Inference Latency: {:.2f} ms / sample".format(
        fp32_cpu_inference_latency * 1000))
    print("FP32 CUDA Inference Latency: {:.2f} ms / sample".format(
        fp32_gpu_inference_latency * 1000))
    print("INT8 CPU Inference Latency: {:.2f} ms / sample".format(
        int8_cpu_inference_latency * 1000))
    print("INT8 JIT CPU Inference Latency: {:.2f} ms / sample".format(
        int8_jit_cpu_inference_latency * 1000))


if __name__ == "__main__":
    main()
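
For debugging, it can also help to dump the qparams each fake-quant module computes after a few QAT batches; an out-of-range zero_point should show up here before the fused kernel raises. A sketch (assumes quantized_model has been through prepare_qat as in main() above):

# Sketch: print the scale/zero_point every fake-quant module currently computes.
from torch.ao.quantization import FakeQuantizeBase

def dump_qparams(model):
    for name, module in model.named_modules():
        if isinstance(module, FakeQuantizeBase):
            scale, zero_point = module.calculate_qparams()
            print(name, "scale:", scale.flatten()[:4].tolist(),
                  "zero_point:", zero_point.flatten()[:4].tolist())

dump_qparams(quantized_model)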

Versions

Here is the version info.

Package                      Version
---------------------------- -----------------
absl-py                      2.1.0
astunparse                   1.6.3
audioread                    3.0.1
av                           11.0.0
Babel                        2.14.0
bce-python-sdk               0.9.5
blinker                      1.7.0
cachetools                   5.3.3
certifi                      2024.2.2
cffi                         1.16.0
charset-normalizer           3.3.2
click                        8.1.7
coloredlogs                  15.0.1
contourpy                    1.2.0
cycler                       0.12.1
decorator                    5.1.1
einops                       0.7.0
fast-histogram               0.12
filelock                     3.13.1
Flask                        3.0.2
flask-babel                  4.0.0
flatbuffers                  24.3.7
fonttools                    4.49.0
fsspec                       2024.2.0
future                       1.0.0
gast                         0.5.4
google-auth                  2.28.2
google-auth-oauthlib         1.0.0
google-pasta                 0.2.0
grpcio                       1.62.1
h5py                         3.10.0
huggingface-hub              0.22.2
humanfriendly                10.0
idna                         3.6
imageio                      2.34.0
itsdangerous                 2.1.2
Jinja2                       3.1.3
joblib                       1.3.2
keras                        2.14.0
kiwisolver                   1.4.5
lazy_loader                  0.3
libclang                     16.0.6
librosa                      0.9.2
llvmlite                     0.42.0
loguru                       0.7.2
macls                        0.4.2
Markdown                     3.5.2
markdown-it-py               3.0.0
MarkupSafe                   2.1.5
matplotlib                   3.8.3
mct-quantizers               1.4.0
mdurl                        0.1.2
ml-dtypes                    0.2.0
model-compression-toolkit    1.11.0
mpmath                       1.3.0
networkx                     3.2.1
numba                        0.59.0
numpy                        1.26.4
nvidia-cublas-cu11           11.10.3.66
nvidia-cublas-cu12           12.1.3.1
nvidia-cuda-cupti-cu12       12.1.105
nvidia-cuda-nvrtc-cu11       11.7.99
nvidia-cuda-nvrtc-cu12       12.1.105
nvidia-cuda-runtime-cu11     11.7.99
nvidia-cuda-runtime-cu12     12.1.105
nvidia-cudnn-cu11            8.5.0.96
nvidia-cudnn-cu12            8.9.2.26
nvidia-cufft-cu12            11.0.2.54
nvidia-curand-cu12           10.3.2.106
nvidia-cusolver-cu12         11.4.5.107
nvidia-cusparse-cu12         12.1.0.106
nvidia-nccl-cu12             2.19.3
nvidia-nvjitlink-cu12        12.4.99
nvidia-nvtx-cu12             12.1.105
oauthlib                     3.2.2
onnx                         1.14.1
onnxoptimizer                0.3.8
onnxruntime                  1.16.0
onnxruntime-gpu              1.17.1
onnxscript                   0.1.0.dev20240315
onnxsim                      0.4.36
onnxslim                     0.1.22
opencv-python                4.9.0.80
opt-einsum                   3.3.0
packaging                    23.2
pandas                       2.2.1
pillow                       10.2.0
pip                          23.3.1
platformdirs                 4.2.0
pooch                        1.8.1
protobuf                     3.20.3
psutil                       5.9.8
PuLP                         2.8.0
pyasn1                       0.5.1
pyasn1-modules               0.3.0
pycparser                    2.21
pycryptodome                 3.20.0
pydub                        0.25.1
Pygments                     2.17.2
pyparsing                    3.1.2
python-dateutil              2.9.0.post0
pytz                         2024.1
PyYAML                       6.0.1
rarfile                      4.1
requests                     2.31.0
requests-oauthlib            1.3.1
resampy                      0.2.2
rich                         13.7.1
rknn-toolkit2                1.6.0+81f21f4d
rsa                          4.9
ruamel.yaml                  0.18.6
ruamel.yaml.clib             0.2.8
safetensors                  0.4.2
scikit-image                 0.22.0
scikit-learn                 1.4.1.post1
scipy                        1.12.0
setuptools                   68.2.2
six                          1.16.0
SoundCard                    0.4.2
soundfile                    0.12.1
sympy                        1.12
tensorboard                  2.14.1
tensorboard-data-server      0.7.2
tensorflow                   2.14.0
tensorflow-estimator         2.14.0
tensorflow-io-gcs-filesystem 0.36.0
tensorrt                     8.6.1.post1
tensorrt-bindings            8.6.1
tensorrt-libs                8.6.1
termcolor                    2.4.0
threadpoolctl                3.3.0
tifffile                     2024.2.12
timm                         0.9.16
torch                        2.2.1+cu121
torchaudio                   2.2.1+cu121
torchinfo                    1.8.0
torchsummary                 1.5.1
torchvision                  0.17.1+cu121
tqdm                         4.66.2
triton                       2.2.0
typeguard                    2.13.3
typing_extensions            4.10.0
tzdata                       2024.1
urllib3                      2.2.1
visualdl                     2.5.3
Werkzeug                     3.0.1
wheel                        0.41.2
wrapt                        1.14.1

cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel

@mikaylagawarecki mikaylagawarecki added the oncall: quantization Quantization support in PyTorch label May 15, 2024