This repository was archived by the owner on Sep 16, 2021. It is now read-only.

Batch Inference

Batch inference takes your trained models (or pre-trained models from the model zoo) and runs predictions on groups of images at a time, which is faster than predicting one image at a time.
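As a rough sketch of the idea, the helper below chunks a list of image paths into batches and runs a predictor over each batch. The run_model function is a stand-in for a real predictor (such as one built from this app's model weights), not part of this repository:

```python
from typing import Iterator


def chunked(items: list, batch_size: int) -> Iterator[list]:
    """Yield successive batches of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]


def run_model(batch: list) -> list:
    """Stand-in for a real predictor. In practice, one forward pass over a
    whole batch amortizes per-call overhead versus one image at a time."""
    return [{"image": name, "prediction": None} for name in batch]


def batch_predict(image_paths: list, batch_size: int = 8) -> list:
    """Run inference batch by batch and collect all results."""
    results = []
    for batch in chunked(image_paths, batch_size):
        results.extend(run_model(batch))
    return results
```

The batch size is a throughput/memory trade-off: larger batches mean fewer model calls but more GPU memory per call.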

Steps

Here are the steps for performing batch inference.

  1. SSH into the ml-inferencer machine
  2. Copy all the images you want to run inference on to the /mnt/disk1/ml-apps/detectron2_web_app/data/inference/ directory.
    1. Only *.JPG, *.PNG, *.jpg, and *.png files are picked up (globbing is case-sensitive).
  3. Copy the Corrosion model weights to /mnt/disk1/ml-apps/detectron2_web_app/model-corrosion.pth
  4. Copy the Edges-and-Welds model weights to /mnt/disk1/ml-apps/detectron2_web_app/model-edges-and-welds.pth
  5. Kick off the pipeline by running the following scripts in order:
    1. python predict.py corrosion
    2. python predict.py edges-and-welds
    3. python merge.py
    4. python visualize.py
  6. Copy the predictions (/mnt/disk1/ml-apps/detectron2_web_app/data/predictions/corrosion/*.pkl and /mnt/disk1/ml-apps/detectron2_web_app/data/predictions/edges-and-welds/*.pkl) to your long-term storage of choice
  7. Copy the visualizations (/mnt/disk1/ml-apps/detectron2_web_app/data/predictions/visualized) to your long-term storage of choice
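Note that the case-sensitive globbing in step 2 means a file named, say, photo.jpeg or photo.Jpg would be silently skipped. A minimal sketch of the matching rule on a case-sensitive filesystem (the helper is illustrative, not the app's actual code):

```python
from pathlib import Path

# Extensions the pipeline picks up. Matching is case-sensitive,
# so e.g. image.jpeg or image.Jpg would NOT be collected.
PATTERNS = ("*.JPG", "*.PNG", "*.jpg", "*.png")


def collect_images(inference_dir: str) -> list:
    """Return the image file names the pipeline would pick up, sorted."""
    root = Path(inference_dir)
    matches = []
    for pattern in PATTERNS:
        matches.extend(root.glob(pattern))
    return sorted(p.name for p in matches)
```

If some of your images have other extensions, rename or convert them before copying so they are not dropped from the run.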