Metrics System

The metrics module provides built-in classification metrics and a flexible system for registering custom metrics.

Metric Registration

optimal_cutoffs.metrics.register_metric(name: str | None = None, func: Callable[[ndarray | float, ndarray | float, ndarray | float, ndarray | float], ndarray | float] | None = None, is_piecewise: bool = True, maximize: bool = True, needs_proba: bool = False) → Callable[[ndarray | float, ndarray | float, ndarray | float, ndarray | float], ndarray | float] | Callable[[Callable[[ndarray | float, ndarray | float, ndarray | float, ndarray | float], ndarray | float]], Callable[[ndarray | float, ndarray | float, ndarray | float, ndarray | float], ndarray | float]]

Register a metric function.

Parameters:
  • name (str, optional) – Key under which to store the metric. If not provided, the function's __name__ is used.

  • func (callable, optional) – Metric callable accepting (tp, tn, fp, fn) as scalars or arrays. Handles both scalar and array inputs via NumPy broadcasting.

  • is_piecewise (bool, default=True) – Whether metric is piecewise-constant w.r.t. threshold changes.

  • maximize (bool, default=True) – Whether to maximize (True) or minimize (False) the metric.

  • needs_proba (bool, default=False) – Whether metric requires probability scores (e.g., log-loss, Brier score).

Returns:

The registered function or a decorator if func is None.

Return type:

callable or decorator
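As a sketch of what a registrable metric looks like: the only contract is the `(tp, tn, fp, fn)` signature with NumPy broadcasting. The example below defines Youden's J statistic as a plain function (the registration call itself is shown in a comment, assuming the package is installed as documented above):

```python
import numpy as np

# A custom metric with the required (tp, tn, fp, fn) signature.
# Youden's J statistic: sensitivity + specificity - 1.
def youden_j(tp, tn, fp, fn):
    tp, tn, fp, fn = (np.asarray(x, dtype=float) for x in (tp, tn, fp, fn))
    sensitivity = np.divide(tp, tp + fn, out=np.zeros_like(tp), where=(tp + fn) > 0)
    specificity = np.divide(tn, tn + fp, out=np.zeros_like(tn), where=(tn + fp) > 0)
    return sensitivity + specificity - 1.0

# Scalar counts work...
print(youden_j(8, 85, 5, 2))  # 0.8 + 85/90 - 1

# ...and arrays broadcast elementwise, one entry per candidate threshold.
tp = np.array([8, 6]); tn = np.array([85, 90])
fp = np.array([5, 0]); fn = np.array([2, 4])
print(youden_j(tp, tn, fp, fn))

# With the package installed, the decorator form would be:
#   from optimal_cutoffs.metrics import register_metric
#
#   @register_metric("youden_j")
#   def youden_j(tp, tn, fp, fn): ...
```

The `np.divide(..., where=...)` guard keeps empty denominators from producing NaN, which matters when a candidate threshold yields no predicted positives or negatives.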

optimal_cutoffs.metrics.register_metrics(metrics: dict[str, Callable[[ndarray | float, ndarray | float, ndarray | float, ndarray | float], ndarray | float]], is_piecewise: bool = True, maximize: bool = True, needs_proba: bool = False) → None

Register multiple metric functions at once.

Parameters:
  • metrics (dict) – Mapping of metric names to functions that handle both scalars and arrays.

  • is_piecewise (bool, default=True) – Whether metrics are piecewise-constant.

  • maximize (bool, default=True) – Whether metrics should be maximized.

  • needs_proba (bool, default=False) – Whether metrics require probability scores.
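A minimal sketch of the batch form: build the name-to-function mapping first, then hand it over in one call. The metric functions below are local stand-ins written for this example; the `register_metrics` call is shown in a comment under the assumption the package is installed:

```python
import numpy as np

def _safe_div(num, den):
    """Elementwise num/den that returns 0 where den == 0."""
    num = np.asarray(num, dtype=float)
    den = np.asarray(den, dtype=float)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

# Two metrics sharing the (tp, tn, fp, fn) signature.
def specificity(tp, tn, fp, fn):
    return _safe_div(tn, tn + fp)   # TN / (TN + FP)

def npv(tp, tn, fp, fn):
    return _safe_div(tn, tn + fn)   # TN / (TN + FN), negative predictive value

custom = {"specificity": specificity, "npv": npv}

print(custom["specificity"](8, 85, 5, 2))  # 85/90
print(custom["npv"](8, 85, 5, 2))          # 85/87

# With the package installed, one call registers both under the
# defaults (piecewise-constant, maximized, no probabilities needed):
#   from optimal_cutoffs.metrics import register_metrics
#   register_metrics(custom)
```

Note that the three flags apply to every metric in the dict; metrics needing different flags must go through separate `register_metrics` (or `register_metric`) calls.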

Metric Properties

optimal_cutoffs.metrics.is_piecewise_metric(metric_name: str) → bool

Check if a metric is piecewise-constant.

optimal_cutoffs.metrics.should_maximize_metric(metric_name: str) → bool

Check if a metric should be maximized.

optimal_cutoffs.metrics.needs_probability_scores(metric_name: str) → bool

Check if a metric needs probability scores.

optimal_cutoffs.metrics.has_vectorized_implementation(metric_name: str) → bool

Check if a metric has a vectorized implementation.

Note: Always returns True, since every registered metric handles both scalar and array inputs.

Built-in Metrics

optimal_cutoffs.metrics.f1_score(tp: ndarray | float, tn: ndarray | float, fp: ndarray | float, fn: ndarray | float) → ndarray | float

F1 score: 2*TP / (2*TP + FP + FN).

Automatically handles both scalar and array inputs via NumPy broadcasting.

optimal_cutoffs.metrics.accuracy_score(tp: ndarray | float, tn: ndarray | float, fp: ndarray | float, fn: ndarray | float) → ndarray | float

Accuracy: (TP + TN) / (TP + TN + FP + FN).

optimal_cutoffs.metrics.precision_score(tp: ndarray | float, tn: ndarray | float, fp: ndarray | float, fn: ndarray | float) → ndarray | float

Precision: TP / (TP + FP).

optimal_cutoffs.metrics.recall_score(tp: ndarray | float, tn: ndarray | float, fp: ndarray | float, fn: ndarray | float) → ndarray | float

Recall: TP / (TP + FN).
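The count formulas above are easy to verify by hand. The snippet below is a plain-NumPy rendering of them (not the library's own implementation) applied to a single confusion matrix, plus a check that the count form of F1 matches the harmonic-mean definition:

```python
import numpy as np

tp, tn, fp, fn = 8.0, 85.0, 5.0, 2.0   # counts from one thresholded prediction

precision = tp / (tp + fp)                    # 8/13
recall    = tp / (tp + fn)                    # 8/10
f1        = 2 * tp / (2 * tp + fp + fn)       # 16/23
accuracy  = (tp + tn) / (tp + tn + fp + fn)   # 93/100

print(f"precision={precision:.4f} recall={recall:.4f} "
      f"f1={f1:.4f} accuracy={accuracy:.4f}")

# The count form of F1 is exactly the harmonic mean of precision and recall.
assert abs(f1 - 2 * precision * recall / (precision + recall)) < 1e-12

# The same expressions broadcast over arrays of counts, one entry per
# candidate threshold -- the property the vectorized optimizers rely on.
tp_arr = np.array([8.0, 6.0]); fn_arr = np.array([2.0, 4.0])
print(tp_arr / (tp_arr + fn_arr))   # recall per threshold: [0.8, 0.6]
```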

Cost-Sensitive Metrics

optimal_cutoffs.metrics.make_linear_counts_metric(w_tp: float = 0.0, w_tn: float = 0.0, w_fp: float = 0.0, w_fn: float = 0.0, name: str | None = None) → Callable[[ndarray, ndarray, ndarray, ndarray], ndarray]

Create a vectorized linear utility metric from confusion-matrix counts.

Returns: metric(tp, tn, fp, fn) = w_tp*tp + w_tn*tn + w_fp*fp + w_fn*fn

Parameters:
  • w_tp (float, default=0.0) – Weight for true positives

  • w_tn (float, default=0.0) – Weight for true negatives

  • w_fp (float, default=0.0) – Weight for false positives

  • w_fn (float, default=0.0) – Weight for false negatives

  • name (str, optional) – If provided, automatically registers the metric

Returns:

Vectorized metric function

Return type:

callable

Examples

>>> # Cost-sensitive: FN costs 5x more than FP
>>> metric = make_linear_counts_metric(w_fp=-1.0, w_fn=-5.0, name="cost_5to1")
>>> # Now can use: optimize_threshold(y, y_pred, metric="cost_5to1")
optimal_cutoffs.metrics.make_cost_metric(fp_cost: float, fn_cost: float, tp_benefit: float = 0.0, tn_benefit: float = 0.0, name: str | None = None) → Callable[[ndarray, ndarray, ndarray, ndarray], ndarray]

Create a vectorized cost-sensitive metric.

Returns: tp_benefit*TP + tn_benefit*TN - fp_cost*FP - fn_cost*FN

Parameters:
  • fp_cost (float) – Cost of false positives (positive value)

  • fn_cost (float) – Cost of false negatives (positive value)

  • tp_benefit (float, default=0.0) – Benefit for true positives

  • tn_benefit (float, default=0.0) – Benefit for true negatives

  • name (str, optional) – If provided, automatically registers the metric

Returns:

Vectorized metric function

Return type:

callable

Examples

>>> # Classic cost-sensitive
>>> metric = make_cost_metric(fp_cost=1.0, fn_cost=5.0, name="cost_sensitive")
>>> # Now can use: optimize_threshold(y, y_pred, metric="cost_sensitive")
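Comparing the two formulas shows that `make_cost_metric` is a sign-flipped convenience over the linear-counts form: `fp_cost`/`fn_cost` become negative weights, `tp_benefit`/`tn_benefit` positive ones. The sketch below demonstrates that equivalence using local stand-ins for the two factories (written from the documented formulas, not the library code):

```python
import numpy as np

# Stand-in for make_linear_counts_metric:
#   metric(tp, tn, fp, fn) = w_tp*tp + w_tn*tn + w_fp*fp + w_fn*fn
def linear_counts_metric(w_tp=0.0, w_tn=0.0, w_fp=0.0, w_fn=0.0):
    def metric(tp, tn, fp, fn):
        return (w_tp * np.asarray(tp) + w_tn * np.asarray(tn)
                + w_fp * np.asarray(fp) + w_fn * np.asarray(fn))
    return metric

# Stand-in for make_cost_metric:
#   tp_benefit*TP + tn_benefit*TN - fp_cost*FP - fn_cost*FN
# i.e. the linear-counts form with costs negated.
def cost_metric(fp_cost, fn_cost, tp_benefit=0.0, tn_benefit=0.0):
    return linear_counts_metric(w_tp=tp_benefit, w_tn=tn_benefit,
                                w_fp=-fp_cost, w_fn=-fn_cost)

tp, tn, fp, fn = np.array([8.0]), np.array([85.0]), np.array([5.0]), np.array([2.0])
a = cost_metric(fp_cost=1.0, fn_cost=5.0)(tp, tn, fp, fn)
b = linear_counts_metric(w_fp=-1.0, w_fn=-5.0)(tp, tn, fp, fn)
print(a, b)   # both: -1*5 - 5*2 = -15
assert np.allclose(a, b)
```

Because both produce linear (hence piecewise-constant, maximized) metrics, the two doctest registrations above, "cost_5to1" and "cost_sensitive", define the same objective.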

Confusion Matrix Utilities

Multiclass Metrics

Global Registries