Batch inference is where you take your trained models (or pre-trained models from the model zoo) and run predictions on groups of images at a time for faster processing.
Here are the steps for performing batch inference:
- SSH into the `ml-inferencer` machine.
- Copy all the images you want to run inference on to the `/mnt/disk1/ml-apps/detectron2_web_app/data/inference/` directory.
  - Only `*.JPG`, `*.PNG`, `*.jpg` and `*.png` files are picked up (case-sensitive globbing).
- Copy the Corrosion model weights to `/mnt/disk1/ml-apps/detectron2_web_app/model-corrosion.pth`.
- Copy the Edges-and-Welds model weights to `/mnt/disk1/ml-apps/detectron2_web_app/model-edges-and-welds.pth`.
- Kick off the pipeline by running the following scripts in order:

  ```
  python predict.py corrosion
  python predict.py edges-and-welds
  python merge.py
  python visualize.py
  ```

- Copy the predictions (`/mnt/disk1/ml-apps/detectron2_web_app/data/predictions/corrosion/*.pkl` and `/mnt/disk1/ml-apps/detectron2_web_app/data/predictions/edges-and-welds/*.pkl`) to your long-term storage of choice.
- Copy the visualizations (`/mnt/disk1/ml-apps/detectron2_web_app/data/predictions/visualized`) to your long-term storage of choice.
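The image pickup rule above (only `*.JPG`, `*.PNG`, `*.jpg` and `*.png`, matched case-sensitively) can be sketched as follows. This is an illustration of the matching behaviour, not the scripts' actual code; the `collect_images` helper and the use of `pathlib` are assumptions:

```python
from pathlib import Path

# The four extensions named in the steps above; anything else
# (e.g. *.Jpeg, *.txt) is ignored. Note that on a case-insensitive
# filesystem the OS itself may match more loosely than this implies.
PATTERNS = ("*.JPG", "*.PNG", "*.jpg", "*.png")

def collect_images(inference_dir):
    """Return paths in inference_dir matching the allowed patterns."""
    root = Path(inference_dir)
    found = []
    for pattern in PATTERNS:
        found.extend(sorted(root.glob(pattern)))
    return found
```

If an image seems to be skipped, check its extension against this exact list first — `photo.jpeg`, for example, would not be picked up.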
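The four pipeline scripts must run in order, and a later step is not useful if an earlier one failed. A minimal Python wrapper that enforces this could look like the sketch below; `run_pipeline` is a hypothetical helper, not part of the app, and the working directory is assumed from the paths above:

```python
import subprocess

# The commands from the steps above, in the required order.
PIPELINE = [
    ["python", "predict.py", "corrosion"],
    ["python", "predict.py", "edges-and-welds"],
    ["python", "merge.py"],
    ["python", "visualize.py"],
]

def run_pipeline(steps, cwd="/mnt/disk1/ml-apps/detectron2_web_app"):
    """Run each step in order; check=True raises on the first
    non-zero exit, so later steps never run against bad output."""
    for cmd in steps:
        subprocess.run(cmd, cwd=cwd, check=True)
```

Running the commands by hand in a shell works just as well — the point is only that the order matters and failures should stop the run.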
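Before archiving the `*.pkl` predictions, you may want to sanity-check that they load. The internal structure of the pickles is not documented here, so this sketch only unpickles each file and returns the objects as-is; `load_predictions` is a hypothetical helper:

```python
import pickle
from pathlib import Path

def load_predictions(predictions_dir):
    """Unpickle every *.pkl file in predictions_dir, keyed by filename.
    Only open pickles you produced yourself — pickle.load executes
    arbitrary code from untrusted files."""
    results = {}
    for path in sorted(Path(predictions_dir).glob("*.pkl")):
        with open(path, "rb") as f:
            results[path.name] = pickle.load(f)
    return results
```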