
Commit c4a3038

Authored by Borda, claude, and pre-commit-ci[bot]

feat(docs): add GEO infrastructure for AI search visibility (#2224)

* feat: add GEO infrastructure for AI search visibility
* fix: guard GEO blocks with {% if page %} to fix 404 build error
* feat: GEO platform optimization — FAQPage, IndexNow, answer paragraphs
* fix: move Disallow:/0.*/ to User-agent:* in robots.txt
* fix: block all numeric versioned paths in robots.txt

Co-authored-by: Claude Code <noreply@anthropic.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

1 parent 6069c47

20 files changed

Lines changed: 351 additions & 2 deletions

.github/workflows/publish-docs.yml

Lines changed: 52 additions & 1 deletion
@@ -56,5 +56,56 @@ jobs:
         env:
           MKDOCS_GIT_COMMITTERS_APIKEY: ${{ secrets.GITHUB_TOKEN }}
         run: |
-          latest_tag=$(git describe --tags `git rev-list --tags --max-count=1`)
+          latest_tag=$(git tag --sort=-v:refname | grep -E '^[0-9]+\.[0-9]+\.[0-9]+$' | head -1)
           mike deploy --push --update-aliases $latest_tag latest
+
+      # IndexNow key: 0d5d9799b1cc4a39825146388c6781eb
+      # This key must stay in sync across three files:
+      #   docs/0d5d9799b1cc4a39825146388c6781eb.txt (key file served at site root)
+      #   docs/theme/main.html (indexnow-key meta tag)
+      #   this workflow (inject step + notify step below)
+      # Bing/Yandex fetch https://supervision.roboflow.com/<key>.txt to verify ownership.
+      # Do NOT rename or delete the .txt file or change the key string without updating all three.
+      - name: 🌐 Inject GEO root files into gh-pages
+        if: >
+          (github.event_name == 'push' && github.ref == 'refs/heads/develop') ||
+          github.event_name == 'workflow_dispatch' ||
+          (github.event_name == 'release' && github.event.action == 'published')
+        run: |
+          cp docs/robots.txt /tmp/robots.txt
+          cp docs/llms.txt /tmp/llms.txt
+          cp docs/0d5d9799b1cc4a39825146388c6781eb.txt /tmp/indexnow.txt
+          git fetch origin gh-pages
+          git checkout gh-pages
+          cp /tmp/robots.txt robots.txt
+          cp /tmp/llms.txt llms.txt
+          cp /tmp/indexnow.txt 0d5d9799b1cc4a39825146388c6781eb.txt
+          git add robots.txt llms.txt 0d5d9799b1cc4a39825146388c6781eb.txt
+          git diff --cached --quiet || git commit -m "chore: update GEO root files (robots.txt, llms.txt, indexnow)"
+          git push origin gh-pages
+
+      - name: 📡 Notify IndexNow
+        if: >
+          (github.event_name == 'push' && github.ref == 'refs/heads/develop') ||
+          github.event_name == 'workflow_dispatch' ||
+          (github.event_name == 'release' && github.event.action == 'published')
+        run: |
+          curl -s -o /dev/null -w "%{http_code}" -X POST "https://api.indexnow.org/IndexNow" \
+            -H "Content-Type: application/json; charset=utf-8" \
+            -d '{
+              "host": "supervision.roboflow.com",
+              "key": "0d5d9799b1cc4a39825146388c6781eb",
+              "keyLocation": "https://supervision.roboflow.com/0d5d9799b1cc4a39825146388c6781eb.txt",
+              "urlList": [
+                "https://supervision.roboflow.com/",
+                "https://supervision.roboflow.com/latest/",
+                "https://supervision.roboflow.com/latest/how_to/detect_and_annotate/",
+                "https://supervision.roboflow.com/latest/how_to/track_objects/",
+                "https://supervision.roboflow.com/latest/how_to/detect_small_objects/",
+                "https://supervision.roboflow.com/latest/how_to/filter_detections/",
+                "https://supervision.roboflow.com/latest/how_to/save_detections/",
+                "https://supervision.roboflow.com/latest/how_to/count_in_zone/",
+                "https://supervision.roboflow.com/latest/how_to/benchmark_a_model/",
+                "https://supervision.roboflow.com/latest/how_to/process_datasets/"
+              ]
+            }' || true
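The notify step posts a JSON batch to the IndexNow endpoint. As a rough illustration of the payload shape only (this helper is hypothetical and not part of the commit), the same body can be assembled in Python:

```python
import json


def build_indexnow_payload(host: str, key: str, urls: list[str]) -> str:
    """Serialize an IndexNow batch-submission body as JSON.

    The key file must be served at https://<host>/<key>.txt so search
    engines can verify site ownership before accepting the URLs.
    """
    return json.dumps(
        {
            "host": host,
            "key": key,
            "keyLocation": f"https://{host}/{key}.txt",
            "urlList": urls,
        }
    )


body = build_indexnow_payload(
    "supervision.roboflow.com",
    "0d5d9799b1cc4a39825146388c6781eb",
    ["https://supervision.roboflow.com/", "https://supervision.roboflow.com/latest/"],
)
print(body)
```

The `|| true` on the workflow's curl call makes the ping best-effort: a failed submission never fails the docs deploy.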
docs/0d5d9799b1cc4a39825146388c6781eb.txt

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+0d5d9799b1cc4a39825146388c6781eb

docs/changelog.md

Lines changed: 4 additions & 0 deletions
@@ -1,3 +1,7 @@
+---
+description: "Full version history of the supervision Python library — release notes, breaking changes, new features, and deprecations for every version."
+---
+
 # Changelog
 
 ### 0.28.0 <small>Unreleased</small>

docs/datasets/core.md

Lines changed: 1 addition & 0 deletions
@@ -1,5 +1,6 @@
 ---
 comments: true
+description: API reference for supervision's DetectionDataset and ClassificationDataset — load, merge, split, and convert datasets in YOLO, COCO, and VOC formats.
 ---
 
 # Datasets

docs/detection/annotators.md

Lines changed: 1 addition & 0 deletions
@@ -1,5 +1,6 @@
 ---
 comments: true
+description: API reference for supervision's annotator classes — draw bounding boxes, masks, labels, tracks, and heatmaps on images with one method call.
 ---
 
 # Annotators

docs/detection/core.md

Lines changed: 1 addition & 0 deletions
@@ -1,5 +1,6 @@
 ---
 comments: true
+description: API reference for supervision's Detections class — the core data structure for bounding boxes, masks, confidence scores, and tracker IDs.
 ---
 
 # Detections

docs/how_to/benchmark_a_model.md

Lines changed: 3 additions & 0 deletions
@@ -1,5 +1,6 @@
 ---
 comments: true
+description: Benchmark object detection models with supervision — compute mAP, confusion matrix, and per-class metrics to compare model performance.
 ---
 
 ![Corgi Example](https://media.roboflow.com/supervision/image-examples/how-to/benchmark-models/corgi-sorted-2.png)
@@ -74,6 +75,8 @@ This will create a folder called `Corgi-v2-4` with the dataset in the current wo
 
 Let's load a model.
 
+Select and instantiate the detection or segmentation model you want to benchmark. Supervision works with Roboflow Inference for both local and cloud-deployed models, as well as Ultralytics YOLO checkpoints. Choose the tab below that matches your preferred framework, then pass images to the loaded model during the evaluation loop.
+
 === "Inference, Local"
 
     Roboflow supports a range of state-of-the-art [pre-trained models](https://inference.roboflow.com/quickstart/aliases/) for object detection, instance segmentation, and pose tracking. You don't even need an API key!
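The mAP and confusion-matrix metrics named in the description both rest on matching predictions to ground truth by intersection over union. A minimal pure-Python sketch of that matching criterion (illustrative only; supervision's metrics handle this internally):

```python
def box_iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    # Intersection rectangle (empty if the boxes do not overlap)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# A prediction counts as a true positive when IoU meets the
# threshold (commonly 0.5) and the class labels agree.
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))
```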

docs/how_to/count_in_zone.md

Lines changed: 9 additions & 0 deletions
@@ -1,3 +1,8 @@
+---
+comments: true
+description: Count objects entering a polygon zone in images and video using supervision's PolygonZone — measure throughput and density in any region.
+---
+
 With supervision, you can count the number of objects in a zone in an image or video. In this guide, we will show how to count the number of cars in a traffic video.
 
 [View the notebook that accompanies this tutorial](https://github.com/roboflow/notebooks/blob/main/notebooks/how-to-use-polygonzone-annotate-and-supervision.ipynb).
@@ -14,6 +19,8 @@ download_assets(VideoAssets.VEHICLES_2)
 
 First, we need to initialize a model. Let's use a YOLOv8 model with the default COCO checkpoint. We also need to load a video on which to run inference.
 
+Create a YOLO model instance and load the source video using supervision's `VideoInfo` helper. The model will process each frame during inference, while `VideoInfo` extracts resolution and frame-rate metadata needed by the polygon zone annotator. A shared color palette ensures consistent zone coloring throughout the output video.
+
 ```python
 import numpy as np
 import supervision as sv
@@ -65,6 +72,8 @@ polygons = [
 
 With the coordinates of the zones to draw ready, we can set up our zones:
 
+Instantiate a `PolygonZone` for each polygon array, pairing it with a `PolygonZoneAnnotator` for visual overlay and a `BoxAnnotator` for drawing detection boxes. Each zone will later trigger on incoming detections to determine which objects fall inside its boundaries, enabling per-zone counting in the inference callback.
+
 ```python
 zones = [
     sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh)
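The per-zone counting described in the added paragraphs reduces to a point-in-polygon test on each detection's anchor point. A self-contained ray-casting sketch of that idea (illustrative only; it is not how the library is implemented):

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of vertices)?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


# A square zone and three detection anchor points; two fall inside.
zone = [(0, 0), (100, 0), (100, 100), (0, 100)]
anchors = [(50, 50), (150, 50), (10, 90)]
count = sum(point_in_polygon(p, zone) for p in anchors)
print(count)
```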

docs/how_to/detect_and_annotate.md

Lines changed: 7 additions & 0 deletions
@@ -1,5 +1,6 @@
 ---
 comments: true
+description: Learn to load model predictions, create Detections objects, and annotate images with bounding boxes, labels, and masks using supervision.
 ---
 
 # Detect and Annotate
@@ -19,6 +20,8 @@ source image.
 First, you'll need to obtain predictions from your object detection or segmentation
 model.
 
+To run inference, initialize your chosen model and pass the source image to its predict or infer method. Supervision supports Roboflow Inference, Ultralytics YOLO, and Hugging Face Transformers -- select the tab matching your framework. The result is a framework-specific object you will convert to a `Detections` instance in the next step.
+
 === "Inference"
 
     ```python
@@ -68,6 +71,8 @@ model.
 
 Now that we have predictions from a model, we can load them into Supervision.
 
+Each supported framework has a dedicated class method on `sv.Detections` that converts raw model output into a unified Supervision object. Call `from_inference`, `from_ultralytics`, or `from_transformers` depending on the package you used for inference. This normalization step ensures all downstream annotators and filters work identically regardless of the source model.
+
 === "Inference"
 
     We can do so using the [`sv.Detections.from_inference`](https://supervision.roboflow.com/latest/detection/core/#supervision.detection.core.Detections.from_inference) method, which accepts model results from both detection and segmentation models.
@@ -138,6 +143,8 @@ You can load predictions from other computer vision frameworks and libraries usi
 
 Finally, we can annotate the image with the predictions. Since we are working with an object detection model, we will use the [`sv.BoxAnnotator`](https://supervision.roboflow.com/latest/detection/annotators/#supervision.annotators.core.BoxAnnotator) and [`sv.LabelAnnotator`](https://supervision.roboflow.com/latest/detection/annotators/#supervision.annotators.core.LabelAnnotator) classes.
 
+To draw bounding boxes and class labels on your image, create a `BoxAnnotator` and a `LabelAnnotator`, then call their `annotate` methods in sequence. Each annotator returns the modified image, so you can chain multiple annotators together. The result is a single NumPy array with all visual overlays rendered and ready for display or saving.
+
 === "Inference"
 
     ```{ .py hl_lines="10-16" }
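The normalization idea described in the added paragraphs (one unified detection container regardless of source framework) can be illustrated with a stripped-down stand-in. `SimpleDetections` below is hypothetical and far simpler than the real `sv.Detections` class:

```python
from dataclasses import dataclass


@dataclass
class SimpleDetections:
    """Toy stand-in for sv.Detections: parallel lists of boxes and scores."""

    xyxy: list[tuple[float, float, float, float]]
    confidence: list[float]
    class_id: list[int]

    @classmethod
    def from_raw(cls, raw: list[dict]) -> "SimpleDetections":
        """Normalize framework-specific result dicts into one container."""
        return cls(
            xyxy=[tuple(r["box"]) for r in raw],
            confidence=[r["score"] for r in raw],
            class_id=[r["label"] for r in raw],
        )

    def filter(self, min_confidence: float) -> "SimpleDetections":
        """Keep only detections at or above the confidence threshold."""
        keep = [i for i, c in enumerate(self.confidence) if c >= min_confidence]
        return SimpleDetections(
            xyxy=[self.xyxy[i] for i in keep],
            confidence=[self.confidence[i] for i in keep],
            class_id=[self.class_id[i] for i in keep],
        )


raw = [
    {"box": [0, 0, 10, 10], "score": 0.9, "label": 0},
    {"box": [5, 5, 20, 20], "score": 0.3, "label": 1},
]
detections = SimpleDetections.from_raw(raw).filter(min_confidence=0.5)
print(len(detections.xyxy))
```

Because every converter emits the same shape, downstream steps like filtering and annotating never need to know which model produced the predictions.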

docs/how_to/detect_small_objects.md

Lines changed: 3 additions & 0 deletions
@@ -1,5 +1,6 @@
 ---
 comments: true
+description: Detect small objects in images by applying SAHI inference slicing with supervision's InferenceSlicer — improve recall for tiny targets.
 ---
 
 # Detect Small Objects
@@ -19,6 +20,8 @@ with the [Inference](https://github.com/roboflow/inference),
 Small object detection in high-resolution images presents challenges due to the objects'
 size relative to the image resolution.
 
+Running a standard detection model on the full image establishes a baseline for comparison. Load your chosen model, pass the image through it, and convert the results into a `Detections` object. This baseline reveals how many small objects the model misses at native resolution, motivating the sliced inference approach shown later.
+
 === "Inference"
 
     ```python
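The sliced (SAHI-style) inference that this guide contrasts with the full-image baseline works by tiling the image with overlap and running the model on each tile. A minimal sketch of the tiling arithmetic under assumed parameters (illustrative only; supervision's `InferenceSlicer` manages slicing and detection merging itself):

```python
def tile_offsets(image_wh, tile_wh, overlap_ratio=0.2):
    """Top-left (x, y) offsets of overlapping tiles covering the image."""
    img_w, img_h = image_wh
    tile_w, tile_h = tile_wh
    # Step by less than a full tile so adjacent tiles overlap,
    # keeping objects on tile borders detectable.
    step_x = int(tile_w * (1 - overlap_ratio))
    step_y = int(tile_h * (1 - overlap_ratio))
    offsets = []
    for y in range(0, img_h, step_y):
        for x in range(0, img_w, step_x):
            # Clamp so every tile stays fully inside the image.
            offsets.append((min(x, img_w - tile_w), min(y, img_h - tile_h)))
    return offsets


# A 1024x1024 image covered by 512x512 tiles with ~20% overlap.
offsets = tile_offsets((1024, 1024), (512, 512))
print(len(offsets))
```

Each tile is then passed to the detector at full tile resolution, and per-tile detections are shifted back by their offsets and merged, which is what recovers the small objects missed at native resolution.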
