Multiple CPU instances result in decreased inference speed. #7213

Closed
Voveka98 opened this issue May 14, 2024 · 1 comment
Labels
question Further information is requested

Comments

@Voveka98

Hi!
I created a simple ResNet classification model and converted it to ONNX format. I want to measure inference speed on GPU and CPU to choose the best option for my use case.

I use nvcr.io/nvidia/tritonserver:22.03-py.
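
For reference, the server is started along these lines (a rough sketch: the model repository path is a placeholder and the full -py3 image tag is assumed):

# assumed launch command; /path/to/model_repository is a placeholder
docker run --rm \
    -p 8000:8000 -p 8001:8001 -p 8002:8002 \
    -v /path/to/model_repository:/models \
    nvcr.io/nvidia/tritonserver:22.03-py3 \
    tritonserver --model-repository=/models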

I use perf_analyzer to measure this speed, and I ran into a problem: creating multiple model instances on CPU decreases inference speed.

When I create the model with the following parameters:

instance_group [
    {
        count: 4
        kind: KIND_CPU
    }
]

and run perf_analyzer, I get the following results:

Terminal output (4 CPU instances):
root@host:/workspace# perf_analyzer -m onnx_infer --concurrency-range 1:4:1 -b 4 -i grpc --async
*** Measurement Settings ***
  Batch size: 4
  Using "time_windows" mode for stabilization
  Measurement window: 5000 msec
  Latency limit: 0 msec
  Concurrency limit: 4 concurrent requests
  Using synchronous calls for inference
  Stabilizing using average latency

Request concurrency: 1
  Client: 
    Request count: 135
    Throughput: 108 infer/sec
    Avg latency: 37143 usec (standard deviation 28958 usec)
    p50 latency: 26363 usec
    p90 latency: 83537 usec
    p95 latency: 93620 usec
    p99 latency: 105187 usec
    Avg gRPC time: 37227 usec ((un)marshal request/response 411 usec + response wait 36816 usec)
  Server: 
    Inference count: 644
    Execution count: 161
    Successful request count: 161
    Avg request latency: 33856 usec (overhead 25401 usec + queue 29 usec + compute input 4 usec + compute infer 8417 usec + compute output 5 usec)

Request concurrency: 2
  Client: 
    Request count: 129
    Throughput: 103.2 infer/sec
    Avg latency: 77839 usec (standard deviation 36214 usec)
    p50 latency: 81644 usec
    p90 latency: 118801 usec
    p95 latency: 128695 usec
    p99 latency: 165175 usec
    Avg gRPC time: 77547 usec ((un)marshal request/response 550 usec + response wait 76997 usec)
  Server: 
    Inference count: 620
    Execution count: 155
    Successful request count: 155
    Avg request latency: 73775 usec (overhead 55339 usec + queue 31 usec + compute input 4 usec + compute infer 18396 usec + compute output 5 usec)

Request concurrency: 3
  Client: 
    Request count: 153
    Throughput: 122.4 infer/sec
    Avg latency: 98394 usec (standard deviation 35303 usec)
    p50 latency: 97195 usec
    p90 latency: 140413 usec
    p95 latency: 155608 usec
    p99 latency: 195517 usec
    Avg gRPC time: 98269 usec ((un)marshal request/response 505 usec + response wait 97764 usec)
  Server: 
    Inference count: 728
    Execution count: 182
    Successful request count: 182
    Avg request latency: 94679 usec (overhead 71017 usec + queue 27 usec + compute input 4 usec + compute infer 23627 usec + compute output 4 usec)

Request concurrency: 4
  Client: 
    Request count: 165
    Throughput: 132 infer/sec
    Avg latency: 119731 usec (standard deviation 32711 usec)
    p50 latency: 120603 usec
    p90 latency: 160393 usec
    p95 latency: 171578 usec
    p99 latency: 192587 usec
    Avg gRPC time: 120656 usec ((un)marshal request/response 504 usec + response wait 120152 usec)
  Server: 
    Inference count: 796
    Execution count: 199
    Successful request count: 199
    Avg request latency: 116883 usec (overhead 87676 usec + queue 23 usec + compute input 4 usec + compute infer 29175 usec + compute output 5 usec)

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 108 infer/sec, latency 37143 usec
Concurrency: 2, throughput: 103.2 infer/sec, latency 77839 usec
Concurrency: 3, throughput: 122.4 infer/sec, latency 98394 usec
Concurrency: 4, throughput: 132 infer/sec, latency 119731 usec

When I create the model with 1 CPU instance:

instance_group [
    {
        count: 1
        kind: KIND_CPU
    }
]

perf_analyzer returns the following:

Terminal output (1 CPU instance):
root@host:/workspace# perf_analyzer -m onnx_infer --concurrency-range 1:4:1 -b 4 -i grpc --sync 
*** Measurement Settings ***
  Batch size: 4
  Using "time_windows" mode for stabilization
  Measurement window: 5000 msec
  Latency limit: 0 msec
  Concurrency limit: 4 concurrent requests
  Using synchronous calls for inference
  Stabilizing using average latency

Request concurrency: 1
  Client: 
    Request count: 436
    Throughput: 348.8 infer/sec
    Avg latency: 11470 usec (standard deviation 204 usec)
    p50 latency: 11431 usec
    p90 latency: 11596 usec
    p95 latency: 11682 usec
    p99 latency: 12290 usec
    Avg gRPC time: 11449 usec ((un)marshal request/response 388 usec + response wait 11061 usec)
  Server: 
    Inference count: 2092
    Execution count: 523
    Successful request count: 523
    Avg request latency: 7910 usec (overhead 5941 usec + queue 30 usec + compute input 3 usec + compute infer 1932 usec + compute output 4 usec)

Request concurrency: 2
  Client: 
    Request count: 627
    Throughput: 501.6 infer/sec
    Avg latency: 15958 usec (standard deviation 235 usec)
    p50 latency: 15917 usec
    p90 latency: 16270 usec
    p95 latency: 16398 usec
    p99 latency: 16553 usec
    Avg gRPC time: 15930 usec ((un)marshal request/response 446 usec + response wait 15484 usec)
  Server: 
    Inference count: 3008
    Execution count: 752
    Successful request count: 752
    Avg request latency: 11771 usec (overhead 5993 usec + queue 3815 usec + compute input 3 usec + compute infer 1956 usec + compute output 4 usec)

Request concurrency: 3
  Client: 
    Request count: 618
    Throughput: 494.4 infer/sec
    Avg latency: 24275 usec (standard deviation 303 usec)
    p50 latency: 24259 usec
    p90 latency: 24671 usec
    p95 latency: 24763 usec
    p99 latency: 25079 usec
    Avg gRPC time: 24242 usec ((un)marshal request/response 446 usec + response wait 23796 usec)
  Server: 
    Inference count: 2964
    Execution count: 741
    Successful request count: 741
    Avg request latency: 19965 usec (overhead 6078 usec + queue 11896 usec + compute input 3 usec + compute infer 1984 usec + compute output 4 usec)

Request concurrency: 4
  Client: 
    Request count: 620
    Throughput: 496 infer/sec
    Avg latency: 32266 usec (standard deviation 378 usec)
    p50 latency: 32267 usec
    p90 latency: 32730 usec
    p95 latency: 32829 usec
    p99 latency: 33211 usec
    Avg gRPC time: 32181 usec ((un)marshal request/response 467 usec + response wait 31714 usec)
  Server: 
    Inference count: 2984
    Execution count: 746
    Successful request count: 746
    Avg request latency: 27899 usec (overhead 6049 usec + queue 19870 usec + compute input 4 usec + compute infer 1971 usec + compute output 5 usec)

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 348.8 infer/sec, latency 11470 usec
Concurrency: 2, throughput: 501.6 infer/sec, latency 15958 usec
Concurrency: 3, throughput: 494.4 infer/sec, latency 24275 usec
Concurrency: 4, throughput: 496 infer/sec, latency 32266 usec

Can you please explain why increasing the number of CPU instances increases inference time?
Thanks a lot in advance!

@dyastremsky (Contributor) commented May 15, 2024

You might not have sufficient CPU resources to do the work. You can use top or similar tools to see what is happening with your CPU, your RAM, and whatever else your model needs. Note that Perf Analyzer (PA) also runs on the CPU, so it can compete for resources. More discussion here: #5108
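
For example, something along these lines while perf_analyzer is running (the container name is a placeholder):

# how many cores the server actually has available
nproc

# per-thread CPU usage of the running tritonserver process
top -H -p $(pgrep -d, tritonserver)

# or, if Triton runs inside Docker, container-level CPU/RAM usage
docker stats <tritonserver-container-name>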

If you are trying to figure out the optimal number of GPU or CPU instances, that is best answered by Model Analyzer.
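
A minimal sketch of that (exact flags can vary by Model Analyzer version; paths are placeholders):

# sweep model configurations (including instance counts) and report the
# best-performing ones; assumes model-analyzer is installed
model-analyzer profile \
    --model-repository /path/to/model_repository \
    --profile-models onnx_infer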

@dyastremsky added the question (Further information is requested) label on May 15, 2024