
MODNet

Web version of the MODNet human matting model

Convert images on the web!

The web app is deployed on Heroku here - https://modnet.herokuapp.com/ (abandoned)

The web app is deployed on IBM Cloud Foundry here - https://modnet.mybluemix.net/ (abandoned)

The web app is deployed on Streamlit here - https://modnet.streamlit.app/



Docker API version of the MODNet human matting model

Convert images via the API!

The API is deployed on Divio (online) here - https://modnet.us.aldryn.io/

The API is deployed on Divio (test) here - https://modnet-stage.us.aldryn.io/

The API is deployed on Heroku here - https://modnet-demo.herokuapp.com/ (abandoned)

The API is deployed on Aliyun Serverless here - http://modnet.ovzv.cn/ (abandoned)

The API is deployed on AWS Lambda here -


What is this?

Update

Use the associated applications of the model

Explanation

MODNet-model Human Matting (see the picture)

This project packages the matting program implemented with the MODNet algorithm as a Docker image that provides an API service. If you are not familiar with MODNet, please read the original author's repository first. The focus here is on using Docker to build MODNet into an API that can be called over HTTP. You can also run app.py directly as a Flask app; Docker is simply used to avoid environment-configuration errors. MODNet can run on either GPU or CPU, and so can this project.
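
If you prefer not to use Docker, the Flask app can also be started directly. A minimal sketch, assuming the repository ships a requirements.txt and that app.py serves on port 8080 (the port referenced in the Build notes below):

    # Install the Python dependencies (file name assumed; adjust to the repository layout)
    pip install -r requirements.txt

    # Start the Flask service directly; it listens on port 8080 by default
    python app.py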

The compiled image is published on hub.docker.com

Build

Make sure you have Docker installed.

  1. Clone the MODNet repository:

    git clone https://github.com/LiteraturePro/MODNet.git
    cd MODNet
    
  2. Run the following command to build the image:

    docker build -t mod-matting .
    
    • I have also provided the build command for Heroku; just replace the last command (the CMD line) of the Dockerfile with the appropriate variant below.
    • For general use:
    CMD exec gunicorn --bind 0.0.0.0:8080 --workers 1 --threads 8 --timeout 0 app:app
    
    • For Heroku:
    CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 app:app
    
    • For Aliyun Serverless:
    • Alibaba Cloud Function Compute requires the service to run on port 9000, so two places need to be changed. The first is the Dockerfile CMD:
    CMD exec gunicorn --bind 0.0.0.0:9000 --workers 1 --threads 8 --timeout 0 app:app
    
    • The second is app.py: change port 8080 to 9000 there as well (a sketch of this change follows the list).
  3. Run the image (you can specify the running port yourself):

    docker run -p 8080:8080 mod-matting
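
If you switched the service to port 9000 for Aliyun Serverless, the change in app.py can be applied with a one-line edit before rebuilding. This is only a sketch; it assumes app.py hard-codes 8080 as a plain literal:

    # Replace the hard-coded 8080 port in app.py with 9000 (assumes a single plain occurrence)
    sed -i 's/8080/9000/g' app.py

    # Rebuild the image and run it with the matching port mapping
    docker build -t mod-matting .
    docker run -p 9000:9000 mod-matting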
    

Install

Make sure you have Docker installed.

I have already built the image, so you can install it directly. The installation command is as follows (you can specify the running port yourself):
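
The image name below is only a placeholder; substitute the actual repository name published on hub.docker.com, and map any free host port you like:

    # Pull the pre-built image from Docker Hub (image name is a placeholder; check hub.docker.com)
    docker pull <your-dockerhub-user>/mod-matting

    # Run it, mapping a host port of your choice to the container's port 8080
    docker run -d -p 8080:8080 <your-dockerhub-user>/mod-matting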

Now your service is running, but only on a local port. If you need to call it from the external network, you need to set up a reverse proxy that exposes the service on your domain name.

Use

The calls shown here go through a proxy I set up myself; if you want to call the service from outside, you will need to set up your own.

  • Provided that you have installed Docker and deployed the service correctly, both GET and POST requests are accepted. The details are as follows (see the example after this list):
    • Interface: http://your-domain/api or http://127.0.0.1:8080/api can be accessed.
    • Parameter: image — the picture to process.
    • Return value: the base64 data stream of the processed image.
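
A minimal example call from the command line. It assumes the picture is uploaded as a multipart form field named image and that the response body is the raw base64 text; adjust if the service wraps the result differently:

    # POST a local picture to the API (form field name "image", per the parameter above)
    # and decode the returned base64 stream into an image file
    curl -s -X POST -F "image=@input.jpg" http://127.0.0.1:8080/api | base64 -d > output.png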

Other

Thanks to the original author and to the authors of the revised version for their work. If you like this project, please give it a star.