
CPU memory usage greatly increases when moving model from cpu to gpu #542

Open
lpkoh opened this issue Aug 13, 2022 · 0 comments

lpkoh commented Aug 13, 2022

Hi,

I have trained a Scaled-YOLOv4 object detection model in darknet, which I converted to a PyTorch model via this repo. When I load the PyTorch model onto my CPU, I see only a small increase in CPU memory usage (less than 0.2 GB, roughly the size of my .weights darknet file). However, when I run `model.to('cuda:0')`, CPU memory usage increases by 2.5 GB, which is strange because (1) shouldn't it be GPU memory that increases, and (2) why is the increase so much larger than 0.2 GB?
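Roughly, the measurement looks like the following minimal sketch (the `model.pt` checkpoint path is a placeholder, and `psutil` is just one way to read the process RSS, not necessarily how it is measured in the container):

```python
import os

import psutil
import torch


def rss_gb() -> float:
    # Resident set size of the current process, in GB
    return psutil.Process(os.getpid()).memory_info().rss / 1e9


# Placeholder path: the Scaled-YOLOv4 checkpoint converted from darknet
model = torch.load('model.pt', map_location='cpu')
print(f"after CPU load:      {rss_gb():.2f} GB")  # small increase, ~0.2 GB

model.to('cuda:0')
print(f"after .to('cuda:0'): {rss_gb():.2f} GB")  # jumps by ~2.5 GB
```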

The command I use to obtain the memory usage inside my docker container is `os.system('cat /sys/fs/cgroup/memory/memory.usage_in_bytes')`, which I am assuming returns CPU memory usage.
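For what it's worth, the same counter can be read directly in Python instead of shelling out; note that `os.system()` only prints the value and returns the exit status, and that the cgroup v1 `memory.usage_in_bytes` counter includes page cache as well as process memory:

```python
def cgroup_memory_gb(path='/sys/fs/cgroup/memory/memory.usage_in_bytes'):
    # cgroup v1 counter: includes page cache, not just anonymous memory
    with open(path) as f:
        return int(f.read()) / 1e9


print(f"cgroup usage: {cgroup_memory_gb():.2f} GB")
```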

Has anyone else faced this issue?
