hawk_eye.inference.benchmark_inference
A script to benchmark inference for both detector and classification models. This is useful for seeing how a new model performs on a device. For example, this might be run on the Jetson to see if the model meets performance cutoffs.
hawk_eye.inference.benchmark_inference.benchmark(timestamp: str, model_type: str, batch_size: int, run_time: float) → None

Benchmarks a model. This function loads the specified model, creates a random input tensor from the model's internal height and width and the given batch size, then performs forward passes through the model for run_time seconds.

Parameters
timestamp – The model’s specific timestamp.
model_type – Which type of model this is.
batch_size – The batch size to benchmark the model on.
run_time – How long to run the benchmark in seconds.
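The timing loop described above can be sketched as follows. This is a hypothetical standalone helper, not the actual hawk_eye implementation: the real script loads the model from its timestamp and builds the random tensor from the model's stored input size, while here the model is any callable and the batch is supplied directly.

```python
import time


def benchmark_forward(model, batch, run_time: float) -> float:
    """Run repeated forward passes for roughly run_time seconds.

    Returns the measured throughput in passes per second. `model` is any
    callable that accepts `batch`; this mirrors the benchmark's core loop
    but omits model loading and tensor construction.
    """
    # Warm-up pass so one-time setup cost is excluded from the timing.
    model(batch)

    passes = 0
    start = time.perf_counter()
    while time.perf_counter() - start < run_time:
        model(batch)
        passes += 1
    elapsed = time.perf_counter() - start

    return passes / elapsed
```

For example, `benchmark_forward(lambda b: [x * 2 for x in b], list(range(50)), 0.5)` would report how many 50-element "forward passes" the dummy model sustains per second.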
Example
$ PYTHONPATH=. hawk_eye/inference/benchmark_inference.py \
--timestamp 2020-09-05T15.51.57 \
--model_type classifier \
--batch_size 50 \
--run_time 10
$ PYTHONPATH=. hawk_eye/inference/benchmark_inference.py \
--timestamp 2020-10-10T14.02.09 \
--model_type detector \
--batch_size 10 \
--run_time 15