# Stabilo Optimize

[](https://github.com/rfonod/stabilo-optimize/releases) [](https://github.com/rfonod/stabilo-optimize/blob/main/LICENSE) [](https://doi.org/10.5281/zenodo.13828430) [](https://arxiv.org/abs/2411.02136) [](https://github.com/rfonod/stabilo-optimize)

**Stabilo-Optimize** is a Python benchmarking tool for evaluating and tuning the methods and hyperparameters of the [stabilo](https://github.com/rfonod/stabilo) 🚀 library for video and track stabilization tasks. It evaluates performance through randomly generated perturbations, eliminating the need for ground-truth homographies. This significantly simplifies the optimization of stabilization techniques, making the tool well suited to high-precision tasks in fields such as urban monitoring, traffic analysis, and drone imagery processing.

## Key Features

- **Ground Truth-Free Benchmarking**: Randomly generates photometric and homographic perturbations (brightness variations, Gaussian blur, saturation adjustments, fog effects, rotations, translations, scales, and perspective shifts).
- **Hierarchical Benchmarking Strategy**: Encourages users to vary hyperparameters hierarchically for efficient parameter optimization.
- **Flexible JSON Configuration**: Customize extensive parameter settings using nested dictionaries (see [comprehensive_benchmark.json](experiments/sample_experiment/comprehensive_benchmark.json) or [simple_benchmark.json](experiments/sample_experiment/simple_benchmark.json) for examples).
- **Result Visualization**: Generates comprehensive performance plots and benchmarking process visualizations.

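The idea behind ground truth-free benchmarking is that a synthetically generated perturbation is known by construction, so an estimated homography can be compared against it directly. The sketch below (illustrative parameter values and a simplified matrix, not stabilo-optimize's actual code) applies a 3×3 homography combining rotation, translation, scale, and a perspective term to image corner points:

```python
import numpy as np

def perturb_points(points, angle=0.05, tx=4.0, ty=-2.0, scale=1.02, persp=1e-4):
    """Apply a known homography (rotation, translation, scale, perspective)
    to 2-D points. Parameter values are illustrative only."""
    c, s = np.cos(angle), np.sin(angle)
    H = np.array([
        [scale * c, -scale * s, tx],
        [scale * s,  scale * c, ty],
        [persp,      persp,     1.0],
    ])
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # back to Cartesian

# Map the corners of a hypothetical 640x480 frame through the perturbation
corners = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0], [0.0, 480.0]])
warped = perturb_points(corners)
```

Since the perturbation matrix is generated rather than annotated, the stabilizer's recovered homography can be scored against it without any external ground truth.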
## Installation

1. **Create and activate a Python virtual environment** (Python >= 3.9), e.g., using [Miniconda3](https://docs.anaconda.com/free/miniconda/):
   ```bash
   conda create -n stabilo-optimize python=3.9 -y
   conda activate stabilo-optimize
   ```

2. **Clone or fork the repository**:
   ```bash
   git clone https://github.com/rfonod/stabilo-optimize.git
   cd stabilo-optimize
   ```

3. **Install dependencies**:
   ```bash
   pip install -r requirements.txt
   ```
## Example Usage

A sample benchmark (`simple_benchmark.json`), together with the corresponding scenes and vehicle bounding box masks, is included in the `experiments/sample_experiment` directory. To reproduce its results, run:

```bash
python benchmark.py experiments/sample_experiment/simple_benchmark.json -sp -sv -o
```

- `-sp`: Save performance plots.
- `-sv`: Save the benchmark visualization video.
- `-o`: Overwrite previous results.

Use `python benchmark.py --help` to explore additional command-line options.

**Note:** This example is limited to three scenes for demonstration purposes. Define your own benchmarks with a more representative selection of scenes for meaningful evaluation.

## Custom Benchmarking

To set up your own benchmark, create a new experiment directory within `experiments` containing:

- `benchmark.json`: Configuration specifying the methods/hyperparameters to test and the number of random trials (`N`) per scene. For reliable results, set `N > 100`.
- `scenes`: Directory containing the input images (and optional exclusion masks in YOLO format). Ensure the selected scenes represent your stabilization tasks; for reliable results, include a diverse set covering different lighting conditions and camera viewpoints.

Example structure:

```
experiments
└─custom_experiment
  ├─benchmark.json
  └─scenes
    ├─image1.jpg
    ├─image1.txt
    ├─image2.jpg
    ├─image2.txt
    ├─...
```

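Each optional `imageN.txt` mask is expected to follow the standard YOLO bounding-box convention: one object per line as `class x_center y_center width height`, with coordinates normalized to [0, 1]. A minimal parser sketch (independent of stabilo-optimize itself) for converting such a line to pixel coordinates:

```python
def parse_yolo_line(line, img_w, img_h):
    """Convert one YOLO-format line ('class cx cy w h', normalized to [0, 1])
    into a class id and a pixel-space (x_min, y_min, x_max, y_max) box."""
    cls, cx, cy, w, h = line.split()
    cx, cy = float(cx) * img_w, float(cy) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# A box centered in a 640x480 image, a quarter of its width and half its height
cls_id, box = parse_yolo_line("2 0.5 0.5 0.25 0.5", img_w=640, img_h=480)
# box == (240.0, 120.0, 400.0, 360.0)
```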
**Note**: A comprehensive configuration file (`comprehensive_benchmark.json`) is included for illustration purposes. Due to its computational cost, avoid running such an extensive parameter search directly; instead, adopt a hierarchical search that fixes some hyperparameters while varying others.

Refer to the [stabilo](https://github.com/rfonod/stabilo) library and the associated [manuscript](https://arxiv.org/abs/2411.02136) for detailed descriptions of the available methods and hyperparameters.

## Benchmarking Metrics

Benchmarks use metrics such as Homography Estimation Accuracy (HEA) and Mean Intersection over Union (MIoU). MIoU evaluates the accuracy of object-level registration and requires bounding box masks for its calculation. Detailed metric definitions and analysis are provided in the manuscript.

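As a rough illustration of the IoU computation underlying MIoU (the generic formula for axis-aligned boxes, not the tool's exact implementation):

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes,
    each given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Perfectly registered boxes score 1.0; disjoint boxes score 0.0
score = iou((0, 0, 2, 2), (1, 1, 3, 3))  # partial overlap: 1/7
```

MIoU is then the mean of such per-object scores, which is why bounding box masks are needed to compute it.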
## Citing This Work

If you use this tool in research or commercial applications, please cite it appropriately.

**Preferred Citation:**

```bibtex
@misc{fonod2025advanced,
  title={Advanced computer vision for extracting georeferenced vehicle trajectories from drone imagery},
  author={Robert Fonod and Haechan Cho and Hwasoo Yeo and Nikolas Geroliminis},
  year={2025},
  eprint={2411.02136},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2411.02136},
  doi={10.48550/arXiv.2411.02136}
}
```

**Repository Citation:**

```bibtex
@software{fonod2025stabilo-optimize,
  author = {Fonod, Robert},
  license = {MIT},
  month = mar,
  title = {Stabilo Optimize: A Framework for Comprehensive Evaluation and Analysis for the Stabilo Library},
  url = {https://github.com/rfonod/stabilo-optimize},
  doi = {10.5281/zenodo.13828430},
  version = {1.0.0},
  year = {2025}
}
```

A [CITATION.cff](CITATION.cff) file is provided for consistent referencing.

## Contributing

Contributions are welcome! If you encounter any issues or have suggestions for improvements, please open a [GitHub Issue](https://github.com/rfonod/stabilo-optimize/issues) or submit a pull request. Your contributions are greatly appreciated!

## License

This project is distributed under the MIT License. Refer to the [LICENSE](LICENSE) file for detailed terms.