
build a GPU-accelerated docker container with jetson-inference, python3.10 and ros2 humble for jetson nano 4G #513

Closed
fatemeh-mohseni-AI opened this issue May 10, 2024 · 1 comment


@fatemeh-mohseni-AI

fatemeh-mohseni-AI commented May 10, 2024

Hello.
I have a Jetson Nano 4G and must use its GPU.
I need Python newer than 3.8 and ROS 2 Humble or Foxy.
I know these cannot be installed natively on JetPack 4.6.1, so I thought I might run them in a Docker container. I also need to use the jetson-inference package inside the container.
Is that scenario possible?

(Note that ROS 2 Foxy can be installed on Ubuntu 20.04, but Humble needs Ubuntu 22.04. It would be great if I could use Humble.)

Can I have a Docker container that uses the GPU and has Python 3.10, ROS 2 Humble or Foxy, and jetson-inference?
This is a vital question for me, and I would appreciate any help with it.
Thanks.
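For reference, a container along these lines is usually built on top of an L4T-based image matching the host's JetPack release (JetPack 4.6.1 corresponds to L4T r32.7.x). The base image tag below comes from the dusty-nv/jetson-containers project and is an assumption, not something confirmed in this issue; the jetson-inference build steps are the project's documented source build, abbreviated:

```dockerfile
# Hypothetical sketch: ROS 2 Foxy base image for L4T r32.7.1
# (tag from dusty-nv/jetson-containers -- an assumption, verify it exists
# for your JetPack version before building).
FROM dustynv/ros:foxy-ros-base-l4t-r32.7.1

# Build jetson-inference from source inside the container
# (abbreviated from the project's own build instructions).
RUN apt-get update && apt-get install -y git cmake build-essential \
 && git clone --recursive https://github.com/dusty-nv/jetson-inference /opt/jetson-inference \
 && mkdir /opt/jetson-inference/build \
 && cd /opt/jetson-inference/build \
 && cmake ../ \
 && make -j$(nproc) && make install && ldconfig
```

The container then needs to be started with the NVIDIA runtime so the GPU is visible inside it, e.g. `docker run --runtime nvidia ...`. Note that Humble targets Ubuntu 22.04 while JetPack 4.6.1 containers use an older userspace, so Foxy is the more realistic target unless ROS 2 is built from source.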

@fatemeh-mohseni-AI
Author

I explained the solution here, in case anyone ever needs it.
