By default:

```python
from torchmetrics.image.psnr import PeakSignalNoiseRatio

metric = PeakSignalNoiseRatio()
metric(img1, target1)
metric(img2, target2)
...
metric(img100, target100)
metric.compute()
# prints the average PSNR over the 100 image pairs
```

How can I get a list of the 100 per-pair PSNRs instead? Something like:

```python
from torchmetrics.image.psnr import PeakSignalNoiseRatio

metric = PeakSignalNoiseRatio(aggregation="concatenate")
metric(img1, target1)
metric(img2, target2)
...
metric(img100, target100)
metric.compute()
# prints a list of the 100 per-pair PSNRs
```

TIA!
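For reference, per-pair PSNR follows directly from its definition, 10 * log10(data_range² / MSE), so keeping one value per pair rather than a running average is straightforward. A minimal pure-Python sketch (the `psnr` helper and the flattened pixel lists are illustrative, not torchmetrics API):

```python
import math

def psnr(pred, target, data_range=1.0):
    # PSNR = 10 * log10(data_range^2 / MSE), computed for one image pair
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return 10 * math.log10(data_range ** 2 / mse)

# Keep one value per pair instead of a single running average
pairs = [([0.5, 0.5, 0.5, 0.5], [0.6, 0.6, 0.6, 0.6]),
         ([0.0, 0.0, 0.0, 0.0], [0.2, 0.2, 0.2, 0.2])]
per_pair = [psnr(p, t) for p, t in pairs]
average = sum(per_pair) / len(per_pair)
```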
Answered by Borda, Mar 18, 2026
Replies: 2 comments
Hey, did you figure out how to do this?
As of v1.9.0, you can get per-sample metric values (instead of the default batch-averaged aggregation) using two approaches.

Approach 1, the functional interface (simplest):

```python
import torch
from torchmetrics.functional.image import peak_signal_noise_ratio

psnr_values = []
for img_pred, img_target in zip(preds, targets):
    psnr_values.append(
        peak_signal_noise_ratio(img_pred.unsqueeze(0), img_target.unsqueeze(0), data_range=1.0)
    )
psnr_per_image = torch.stack(psnr_values)
```

Approach 2, a custom metric with list state:

```python
import torch
from torchmetrics import Metric
from torchmetrics.functional.image import peak_signal_noise_ratio
from torchmetrics.utilities import dim_zero_cat


class PSNRPerSample(Metric):
    def __init__(self, data_range: float = 1.0, **kwargs):
        super().__init__(**kwargs)
        self.data_range = data_range
        self.add_state("values", default=[], dist_reduce_fx="cat")

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        for i in range(preds.shape[0]):
            val = peak_signal_noise_ratio(
                preds[i].unsqueeze(0), target[i].unsqueeze(0),
                data_range=self.data_range,
            )
            self.values.append(val.unsqueeze(0))

    def compute(self) -> torch.Tensor:
        return dim_zero_cat(self.values)  # returns ALL per-sample values
```

Usage:

```python
metric = PSNRPerSample(data_range=1.0)
for batch_pred, batch_target in dataloader:
    metric.update(batch_pred, batch_target)
all_psnrs = metric.compute()  # tensor of shape (N_total,)
print(f"Mean: {all_psnrs.mean()}, Std: {all_psnrs.std()}, Min: {all_psnrs.min()}")
```

This pattern works for any metric: just swap `peak_signal_noise_ratio` for the functional version of the metric you need.

Docs: PSNR | Custom Metrics
Answer selected by Borda