docs/tutorials/basic_usage.md (4 additions, 4 deletions)
@@ -30,22 +30,22 @@ print(planar_nx[0])  # (NetworkX) Graph with 64 nodes and 177 edges
When evaluating graph generative models, we want to compare a set of *generated* graphs to a set of *reference* graphs (typically the test set).
In `polygraph`, we provide various metrics to quantify how similar these two sets of graphs are.
We usually pass collections of NetworkX graphs to metrics.
-Below, we demonstrate how a set of these metrics, combined in the [`MMD2CollectionGaussianTV`][polygraph.metrics.MMD2CollectionGaussianTV] benchmark, may be computed:
+Below, we demonstrate how a set of these metrics, combined in the [`GaussianTVMMD2Benchmark`][polygraph.metrics.GaussianTVMMD2Benchmark] benchmark, may be computed:

```python
-from polygraph.metrics import MMD2CollectionGaussianTV
+from polygraph.metrics import GaussianTVMMD2Benchmark

reference = planar.to_nx()
generated = sbm.to_nx()

-benchmark = MMD2CollectionGaussianTV(reference)
+benchmark = GaussianTVMMD2Benchmark(reference)
print(benchmark.compute(generated))  # Dictionary of different metrics
```

We discuss available metrics [in the next tutorial](metrics_overview.md).

All metrics are evaluated in a similar fashion, as defined by the common [interface](../api_reference/metrics/interface.md):

-- We first initialize a metric object via `benchmark = MMD2CollectionGaussianTV(reference)`. This fits the metric to the `reference` set, caching data that is required in later computations.
+- We first initialize a metric object via `benchmark = GaussianTVMMD2Benchmark(reference)`. This fits the metric to the `reference` set, caching data that is required in later computations.
- We then compute the metric against the generated set via `benchmark.compute(generated)`.
- We may call `benchmark.compute` repeatedly with different generated sets, e.g. over the course of training.
print(metrics.compute(generated_graphs))  # Dictionary of metrics
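The fit-once / compute-many pattern described in the bullets above can be sketched with a self-contained toy. Note that `ToyEdgeCountMetric` and its statistic are hypothetical stand-ins, not `polygraph` APIs, and graphs are stood in for by plain edge lists; real metrics consume collections of NetworkX graphs:

```python
# Hypothetical stand-in (NOT a polygraph class) mimicking the interface:
# fit to a reference set at construction, then call .compute() repeatedly.
class ToyEdgeCountMetric:
    def __init__(self, reference):
        # "Fitting": cache reference statistics once at construction time.
        self._ref_mean = sum(len(edges) for edges in reference) / len(reference)

    def compute(self, generated):
        # Compare a generated set against the cached reference statistics.
        gen_mean = sum(len(edges) for edges in generated) / len(generated)
        return {"mean_edge_count_gap": abs(gen_mean - self._ref_mean)}

reference = [[(0, 1), (1, 2)], [(0, 1)]]             # mean edge count 1.5
metric = ToyEdgeCountMetric(reference)               # fit once
print(metric.compute([[(0, 1)]]))                    # {'mean_edge_count_gap': 0.5}
print(metric.compute([[(0, 1), (1, 2), (2, 0)]]))    # call again, e.g. per epoch
```

The cached reference statistics make repeated `compute` calls cheap, which is the point of fitting at construction time.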
@@ -31,17 +31,17 @@ We now proceed to give a high-level overview over the different types of metrics
[Maximum Mean Discrepancy (MMD)](../api_reference/metrics/mmd.md) is the predominant method for comparing graph distributions.
The two distributions are embedded in a reproducing kernel Hilbert space (RKHS) and their distance is then computed in this space.

-In `polygraph`, we bundle the most commonly used MMD metrics in two benchmark classes: [`MMD2CollectionGaussianTV`][polygraph.metrics.MMD2CollectionGaussianTV] and [`MMD2CollectionRBF`][polygraph.metrics.MMD2CollectionRBF]. These benchmarks may be evaluated in the following fashion:
+In `polygraph`, we bundle the most commonly used MMD metrics in two benchmark classes: [`GaussianTVMMD2Benchmark`][polygraph.metrics.GaussianTVMMD2Benchmark] and [`RBFMMD2Benchmark`][polygraph.metrics.RBFMMD2Benchmark]. These benchmarks may be evaluated in the following fashion:
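The kernel comparison behind these benchmarks is the squared MMD: the mean kernel similarity within each sample, minus twice the mean cross-sample similarity. As a minimal self-contained illustration, here is the biased estimate on toy scalar samples with an RBF kernel (illustrative only, not the `polygraph` implementation, which works on graph descriptors):

```python
import math

def rbf(x, y, sigma=1.0):
    # RBF kernel on scalars; graph metrics use kernels on graph descriptors.
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, k=rbf):
    # Biased estimate of MMD^2: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    exx = sum(k(a, b) for a in xs for b in xs) / len(xs) ** 2
    eyy = sum(k(a, b) for a in ys for b in ys) / len(ys) ** 2
    exy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return exx + eyy - 2 * exy

print(mmd2([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # identical samples: ~0
print(mmd2([0.0, 1.0, 2.0], [5.0, 6.0, 7.0]))  # well-separated samples: > 0
```

A value near zero indicates the two samples are indistinguishable under the chosen kernel; larger values indicate a larger distributional gap.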
```python
from polygraph.datasets import PlanarGraphDataset, SBMGraphDataset
-from polygraph.metrics import MMD2CollectionGaussianTV, MMD2IntervalCollectionGaussianTV
+from polygraph.metrics import GaussianTVMMD2Benchmark, GaussianTVMMD2BenchmarkInterval