onlinerake.OnlineRakingSGD¶

class onlinerake.OnlineRakingSGD(targets: Targets, learning_rate: float = 5.0, min_weight: float = 0.001, max_weight: float = 100.0, n_sgd_steps: int = 3, verbose: bool = False, track_convergence: bool = True, convergence_window: int = 20, compute_weight_stats: bool | int = False, max_history: int | None = 1000)[source]¶

Bases: object

Online raking via stochastic gradient descent.

A streaming weight calibration algorithm that adjusts observation weights to match target population margins using stochastic gradient descent (SGD). The algorithm minimizes squared-error loss between weighted margins and target proportions.
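
For intuition, one update conceptually appends a weight for the new observation and then nudges all weights so the weighted margins move toward the targets. A minimal sketch of one such gradient step (illustrative only, not the library's implementation; `sgd_step` is a hypothetical name, `X` is the n×d matrix of binary features seen so far, `targets` a length-d array):

>>> import numpy as np
>>> def sgd_step(weights, X, targets, lr=5.0, w_min=1e-3, w_max=100.0):
...     total = weights.sum()
...     margins = weights @ X / total              # weighted proportion per feature
...     residual = margins - targets               # overshoot relative to targets
...     # gradient of sum((margins - targets)**2) with respect to each weight
...     grad = 2.0 / total * ((X - margins) * residual).sum(axis=1)
...     return np.clip(weights - lr * grad, w_min, w_max)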

Parameters:
  • targets – Target population proportions for each feature.

  • learning_rate – Step size for gradient descent updates. Larger values lead to more aggressive updates but may cause oscillation. Default: 5.0.

  • min_weight – Lower bound for weights to prevent collapse. Must be positive. Default: 0.001.

  • max_weight – Upper bound for weights to prevent explosion. Must exceed min_weight. Default: 100.0.

  • n_sgd_steps – Number of gradient steps per observation. More steps can reduce oscillations but increase computation. Default: 3.

  • verbose – If True, log progress information. Default: False.

  • track_convergence – If True, monitor convergence metrics. Default: True.

  • convergence_window – Number of observations for convergence detection. Default: 20.

  • compute_weight_stats – Controls how often weight statistics are computed. If True, compute on every observation; if False, never compute (best performance); if an int k, compute every k-th observation (see the example after this list). Default: False.

  • max_history – Maximum historical states to retain. None for unlimited (may cause memory issues). Default: 1000.
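
For example, a configuration tuned for a long stream might look like the following (parameter values are illustrative, and the import path assumes both names are exported from the package root):

>>> from onlinerake import OnlineRakingSGD, Targets
>>> targets = Targets(owns_car=0.4, is_subscriber=0.2)
>>> raker = OnlineRakingSGD(
...     targets,
...     learning_rate=2.0,        # gentler updates, less risk of oscillation
...     compute_weight_stats=50,  # weight statistics every 50th observation
...     max_history=500,          # bound memory on long streams
... )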

targets¶

The target proportions.

history¶

List of historical states after each update.

Examples

>>> from onlinerake import OnlineRakingSGD, Targets
>>> targets = Targets(owns_car=0.4, is_subscriber=0.2)
>>> raker = OnlineRakingSGD(targets, learning_rate=5.0)
>>> raker.partial_fit({'owns_car': 1, 'is_subscriber': 0})
>>> print(f"Loss: {raker.loss:.4f}")
>>> # Process a stream (any iterable of observations) until convergence
>>> for obs in stream:
...     raker.partial_fit(obs)
...     if raker.converged:
...         break

Raises:

ValueError – If any parameter is invalid (negative learning rate, invalid weight bounds, non-positive convergence window, invalid compute_weight_stats).

Note

The algorithm supports arbitrary binary features, not limited to demographics. Feature names must match those defined in targets.

__init__(targets: Targets, learning_rate: float = 5.0, min_weight: float = 0.001, max_weight: float = 100.0, n_sgd_steps: int = 3, verbose: bool = False, track_convergence: bool = True, convergence_window: int = 20, compute_weight_stats: bool | int = False, max_history: int | None = 1000) None[source]¶

Methods

__init__(targets[, learning_rate, ...])

check_convergence([tolerance])

Check if algorithm has converged based on loss stability.

detect_oscillation([threshold])

Detect if loss is oscillating rather than converging.

fit_one(obs)

Process single observation and update weights.

partial_fit(obs)

Process single observation and update weights.

partial_fit_batch(observations)

Process multiple observations in batch.

Attributes

converged

Return True if the algorithm has detected convergence.

convergence_step

Get step number where convergence was detected.

effective_sample_size

Return the effective sample size (ESS).

gradient_norm_history

Get history of gradient norms.

loss

Get current squared-error loss.

loss_moving_average

Return moving average of loss over convergence window.

margins

Get current weighted margins.

raw_margins

Get unweighted (raw) margins.

weight_distribution_stats

Return comprehensive weight distribution statistics.

weights

Get copy of current weight vector.

check_convergence(tolerance: float = 1e-06) bool[source]¶

Check if algorithm has converged based on loss stability.

Parameters:

tolerance – Convergence tolerance. Smaller values require more stable loss. Default: 1e-6.

Returns:

True if convergence detected, False otherwise.

Note

Convergence is detected when loss is near zero or when relative standard deviation of recent losses is below tolerance.
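
Conceptually, the documented criterion amounts to something like this sketch (not the library's actual code; `is_converged` is a hypothetical helper):

>>> import numpy as np
>>> def is_converged(recent_losses, tolerance=1e-06):
...     losses = np.asarray(recent_losses, dtype=float)
...     if losses.mean() < tolerance:              # loss essentially zero
...         return True
...     rel_std = losses.std() / losses.mean()     # relative spread of recent losses
...     return rel_std < tolerance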

property converged: bool¶

Return True if the algorithm has detected convergence.

property convergence_step: int | None¶

Get step number where convergence was detected.

Returns:

Observation number where convergence detected, or None if not yet converged.

detect_oscillation(threshold: float = 0.1) bool[source]¶

Detect if loss is oscillating rather than converging.

Parameters:

threshold – Relative threshold for detecting oscillation vs trend. Higher values are less sensitive to oscillation. Default: 0.1.

Returns:

True if oscillation detected in recent loss history, False otherwise.

Note

Oscillation suggests the learning rate may be too high.
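
One plausible reading of this check, as a sketch (the library's exact rule may differ; `looks_oscillatory` is a hypothetical helper): flag windows with large relative swings but little net progress.

>>> import numpy as np
>>> def looks_oscillatory(recent_losses, threshold=0.1):
...     losses = np.asarray(recent_losses, dtype=float)
...     diffs = np.diff(losses)
...     travel = np.abs(diffs).sum()                   # total up-and-down movement
...     if travel == 0 or losses.mean() == 0:
...         return False
...     swing = np.abs(diffs).mean() / losses.mean()   # typical relative step size
...     net = abs(losses[-1] - losses[0])              # net progress over the window
...     return swing > threshold and net < 0.5 * travel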

property effective_sample_size: float¶

Return the effective sample size (ESS).

ESS is defined as (sum w_i)^2 / (sum w_i^2). It reflects the number of equally weighted observations that would yield the same variance as the current weighted estimator.
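
A quick worked example of the formula:

>>> import numpy as np
>>> w = np.array([1.0, 1.0, 4.0])
>>> print(w.sum() ** 2 / (w ** 2).sum())   # 36 / 18: the skewed weight cuts ESS from 3 to 2
2.0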

fit_one(obs: dict[str, Any] | Any) None¶

Process single observation and update weights.

Parameters:

obs – Observation containing feature indicators. Can be:

  • dict: keys should match feature names in targets.

  • object: features are accessed as attributes.

Values should be binary (0/1 or False/True). Missing features default to 0.

Returns:

None. Updates internal state in place.

Examples

>>> targets = Targets(owns_car=0.4, is_subscriber=0.2)
>>> raker = OnlineRakingSGD(targets)
>>>
>>> # Dict input
>>> raker.fit_one({'owns_car': 1, 'is_subscriber': 0})
>>>
>>> # Object input (e.g., dataclass or namedtuple)
>>> from dataclasses import dataclass
>>> @dataclass
... class Obs:
...     owns_car: int
...     is_subscriber: int
>>> raker.fit_one(Obs(owns_car=1, is_subscriber=0))

Note

After calling, inspect weights, margins, and loss properties for current state.

property gradient_norm_history: list[float]¶

Get history of gradient norms.

Returns:

List of gradient norms from each SGD step. Useful for analyzing convergence behavior.
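
For instance, a quick diagnostic plot (assuming matplotlib is installed):

>>> import matplotlib.pyplot as plt
>>> plt.plot(raker.gradient_norm_history)
>>> plt.xlabel("SGD step")
>>> plt.ylabel("gradient norm")
>>> plt.show()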

property loss: float¶

Get current squared-error loss.

Computes sum of squared differences between current weighted margins and target proportions.

Returns:

Squared-error loss. Returns NaN if no observations processed. Lower values indicate better calibration to targets.

Examples

>>> # Perfect calibration would have loss near 0
>>> raker = OnlineRakingSGD(targets)
>>> # Process many observations...
>>> if raker.loss < 0.001:
...     print("Well calibrated")

property loss_moving_average: float¶

Return moving average of loss over convergence window.

property margins: dict[str, float]¶

Get current weighted margins.

Computes the weighted proportion of observations where each feature equals 1, using the current weight vector.

Returns:

Dictionary mapping feature names to weighted proportions. Returns NaN for all features if no observations processed.

Examples

>>> targets = Targets(a=0.5, b=0.3)
>>> raker = OnlineRakingSGD(targets)
>>> raker.partial_fit({'a': 1, 'b': 0})
>>> margins = raker.margins
>>> print(margins['a'] > margins['b'])  # a=1, b=0 in observation
True
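
Per feature, the computation is just a weighted mean of the 0/1 indicator (illustrative numbers):

>>> import numpy as np
>>> w = np.array([0.5, 2.0, 1.5])     # current weights for three observations
>>> x = np.array([1, 0, 1])           # indicator: does the feature equal 1?
>>> print((w * x).sum() / w.sum())    # weighted proportion where the feature is 1
0.5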

partial_fit(obs: dict[str, Any] | Any) None[source]¶

Process single observation and update weights.

Parameters:

obs – Observation containing feature indicators. Can be:

  • dict: keys should match feature names in targets.

  • object: features are accessed as attributes.

Values should be binary (0/1 or False/True). Missing features default to 0.

Returns:

None. Updates internal state in place.

Examples

>>> targets = Targets(owns_car=0.4, is_subscriber=0.2)
>>> raker = OnlineRakingSGD(targets)
>>>
>>> # Dict input
>>> raker.partial_fit({'owns_car': 1, 'is_subscriber': 0})
>>>
>>> # Object input (e.g., dataclass or namedtuple)
>>> from dataclasses import dataclass
>>> @dataclass
... class Obs:
...     owns_car: int
...     is_subscriber: int
>>> raker.partial_fit(Obs(owns_car=1, is_subscriber=0))

Note

After calling, inspect weights, margins, and loss properties for current state.

partial_fit_batch(observations: list[dict[str, Any] | Any]) None[source]¶

Process multiple observations in batch.

Parameters:

observations – List of observations, each in the same format accepted by partial_fit().

Returns:

None. Updates internal state for all observations.

Examples

>>> targets = Targets(feature_a=0.4, feature_b=0.6)
>>> raker = OnlineRakingSGD(targets)
>>> observations = [
...     {'feature_a': 1, 'feature_b': 0},
...     {'feature_a': 0, 'feature_b': 1},
...     {'feature_a': 1, 'feature_b': 1},
... ]
>>> raker.partial_fit_batch(observations)

Note

Currently processes observations sequentially. Future versions may implement true batch processing for better performance.
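
In other words, today's behaviour is equivalent to this loop:

>>> for obs in observations:
...     raker.partial_fit(obs)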

property raw_margins: dict[str, float]¶

Get unweighted (raw) margins.

Computes the simple proportion of observations where each feature equals 1, without using weights.

Returns:

Dictionary mapping feature names to unweighted proportions. Returns NaN for all features if no observations processed.

Note

Useful for comparing weighted vs unweighted margins to assess the impact of the raking process.
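
For example, to compare weighted and unweighted margins side by side:

>>> weighted = raker.margins
>>> for name, raw in raker.raw_margins.items():
...     print(f"{name}: raw={raw:.3f}, weighted={weighted[name]:.3f}")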

property weight_distribution_stats: dict[str, float]¶

Return comprehensive weight distribution statistics.

property weights: ndarray[tuple[Any, ...], dtype[float64]]¶

Get copy of current weight vector.

Returns:

Array of shape (n_obs,) containing current weights.

Examples

>>> targets = Targets(feature_a=0.5, feature_b=0.5)
>>> raker = OnlineRakingSGD(targets)
>>> raker.partial_fit({'feature_a': 1, 'feature_b': 0})
>>> weights = raker.weights
>>> print(weights.shape)
(1,)