API Reference¶
This page provides detailed documentation for all public classes and functions
in the onlinerake package.
Core Classes¶
- Targets: Target population proportions for binary features.
- OnlineRakingSGD: Online raking via stochastic gradient descent.
- OnlineRakingMWU: Online raking via multiplicative weights updates.
Targets¶
- class onlinerake.Targets(**kwargs: float)[source]¶
Bases: object
Target population proportions for binary features.
A flexible container for specifying target proportions for any set of binary features. Each feature should have a proportion between 0 and 1 representing the fraction of the population where that feature is True/1.
- Parameters:
**kwargs – Named feature proportions. Each key is a feature name and each value is the target proportion (between 0 and 1) for that feature being 1/True.
Examples
>>> # Product preferences
>>> targets = Targets(owns_car=0.4, is_subscriber=0.2, likes_coffee=0.7)
>>> print(targets.feature_names)
['is_subscriber', 'likes_coffee', 'owns_car']

>>> # Medical indicators
>>> targets = Targets(has_diabetes=0.08, exercises=0.35, smoker=0.15)

>>> # Access target values
>>> print(targets['owns_car'])
0.4

>>> # Check if feature exists
>>> print('owns_car' in targets)
True
- Raises:
ValueError – If any target proportion is not between 0 and 1.
Note
Feature names are stored in sorted order for consistent behavior across different Python versions and hash randomization settings.
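For instance, an out-of-range proportion is rejected at construction time. A minimal sketch; the printed message below is our own, not the library's error text:

>>> from onlinerake import Targets
>>> try:
...     Targets(owns_car=1.4)
... except ValueError:
...     print("rejected: proportion must lie in [0, 1]")
rejected: proportion must lie in [0, 1]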
- as_dict() dict[str, float][source]¶
Convert targets to a dictionary.
Examples
>>> targets = Targets(owns_car=0.4, is_subscriber=0.2)
>>> targets.as_dict()
{'owns_car': 0.4, 'is_subscriber': 0.2}
OnlineRakingSGD¶
- class onlinerake.OnlineRakingSGD(targets: Targets, learning_rate: float = 5.0, min_weight: float = 0.001, max_weight: float = 100.0, n_sgd_steps: int = 3, verbose: bool = False, track_convergence: bool = True, convergence_window: int = 20, compute_weight_stats: bool | int = False, max_history: int | None = 1000)[source]¶
Bases: object
Online raking via stochastic gradient descent.
A streaming weight calibration algorithm that adjusts observation weights to match target population margins using stochastic gradient descent (SGD). The algorithm minimizes squared-error loss between weighted margins and target proportions.
- Parameters:
targets – Target population proportions for each feature.
learning_rate – Step size for gradient descent updates. Larger values lead to more aggressive updates but may cause oscillation. Default: 5.0.
min_weight – Lower bound for weights to prevent collapse. Must be positive. Default: 0.001.
max_weight – Upper bound for weights to prevent explosion. Must exceed min_weight. Default: 100.0.
n_sgd_steps – Number of gradient steps per observation. More steps can reduce oscillations but increase computation. Default: 3.
verbose – If True, log progress information. Default: False.
track_convergence – If True, monitor convergence metrics. Default: True.
convergence_window – Number of observations for convergence detection. Default: 20.
compute_weight_stats – Control weight statistics computation. If True: compute every observation. If False: never compute (best performance). If int k: compute every k observations. Default: False.
max_history – Maximum historical states to retain. None for unlimited (may cause memory issues). Default: 1000.
- targets¶
The target proportions.
- history¶
List of historical states after each update.
Examples
>>> # General features
>>> targets = Targets(owns_car=0.4, is_subscriber=0.2)
>>> raker = OnlineRakingSGD(targets, learning_rate=5.0)
>>> raker.partial_fit({'owns_car': 1, 'is_subscriber': 0})
>>> print(f"Loss: {raker.loss:.4f}")

>>> # Process multiple observations
>>> for obs in stream:
...     raker.partial_fit(obs)
...     if raker.converged:
...         break
- Raises:
ValueError – If any parameter is invalid (negative learning rate, invalid weight bounds, non-positive convergence window, invalid compute_weight_stats).
Note
The algorithm supports arbitrary binary features, not limited to demographics. Feature names must match those defined in targets.
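A configuration sketch for long streams, using only the constructor parameters documented above; the specific values are illustrative, not recommendations:

>>> targets = Targets(owns_car=0.4, is_subscriber=0.2)
>>> raker = OnlineRakingSGD(
...     targets,
...     learning_rate=2.0,          # gentler steps than the default 5.0
...     compute_weight_stats=100,   # compute stats every 100 observations
...     max_history=500,            # bound memory on long streams
... )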
- check_convergence(tolerance: float = 1e-06) bool[source]¶
Check whether the algorithm has converged, based on loss stability.
- Parameters:
tolerance – Convergence tolerance. Smaller values require more stable loss. Default: 1e-6.
- Returns:
True if convergence detected, False otherwise.
Note
Convergence is detected when loss is near zero or when relative standard deviation of recent losses is below tolerance.
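A sketch of a streaming loop that stops once convergence is detected; `stream` is an iterable of observation dicts, as in the class-level example above:

>>> for obs in stream:
...     raker.partial_fit(obs)
...     if raker.check_convergence(tolerance=1e-6):
...         print(f"converged after {raker.convergence_step} observations")
...         break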
- property convergence_step: int | None¶
Get step number where convergence was detected.
- Returns:
Observation number where convergence detected, or None if not yet converged.
- detect_oscillation(threshold: float = 0.1) bool[source]¶
Detect if loss is oscillating rather than converging.
- Parameters:
threshold – Relative threshold for detecting oscillation vs trend. Higher values are less sensitive to oscillation. Default: 0.1.
- Returns:
True if oscillation detected in recent loss history, False otherwise.
Note
Oscillation suggests the learning rate may be too high.
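For example, oscillation can be treated as a signal to restart with a smaller step size. A sketch; re-instantiating the raker is one conservative way to change the learning rate, since mutating it in place is not documented, and `buffered_obs` is an assumed buffer of recent observations:

>>> if raker.detect_oscillation(threshold=0.1):
...     # Learning rate is likely too high; rebuild with a smaller step
...     # and replay the buffered observations.
...     raker = OnlineRakingSGD(targets, learning_rate=1.0)
...     raker.partial_fit_batch(buffered_obs)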
- property effective_sample_size: float¶
Return the effective sample size (ESS).
ESS is defined as (sum w_i)^2 / (sum w_i^2). It reflects the number of equally weighted observations that would yield the same variance as the current weighted estimator.
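A quick check of the formula with plain NumPy, independent of the raker itself: for weights (1, 1, 4), ESS = (1 + 1 + 4)^2 / (1 + 1 + 16) = 36 / 18 = 2, reflecting the dominance of the single large weight.

>>> import numpy as np
>>> w = np.array([1.0, 1.0, 4.0])
>>> print(w.sum() ** 2 / (w ** 2).sum())
2.0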
- fit_one(obs: dict[str, Any] | Any) None¶
Process a single observation and update weights.
- Parameters:
obs – Observation containing feature indicators. Can be:
- dict: keys should match feature names in targets
- object: features are accessed as attributes
Values should be binary (0/1 or False/True). Missing features default to 0.
- Returns:
None. Updates internal state in place.
Examples
>>> targets = Targets(owns_car=0.4, is_subscriber=0.2)
>>> raker = OnlineRakingSGD(targets)
>>>
>>> # Dict input
>>> raker.partial_fit({'owns_car': 1, 'is_subscriber': 0})
>>>
>>> # Object input (e.g., dataclass or namedtuple)
>>> from dataclasses import dataclass
>>> @dataclass
... class Obs:
...     owns_car: int
...     is_subscriber: int
>>> raker.partial_fit(Obs(owns_car=1, is_subscriber=0))
Note
After calling, inspect weights, margins, and loss properties for current state.
- property gradient_norm_history: list[float]¶
Get history of gradient norms.
- Returns:
List of gradient norms from each SGD step. Useful for analyzing convergence behavior.
- property loss: float¶
Get current squared-error loss.
Computes sum of squared differences between current weighted margins and target proportions.
- Returns:
Squared-error loss. Returns NaN if no observations processed. Lower values indicate better calibration to targets.
Examples
>>> # Perfect calibration would have loss near 0
>>> raker = OnlineRakingSGD(targets)
>>> # Process many observations...
>>> if raker.loss < 0.001:
...     print("Well calibrated")
- property margins: dict[str, float]¶
Get current weighted margins.
Computes the weighted proportion of observations where each feature equals 1, using the current weight vector.
- Returns:
Dictionary mapping feature names to weighted proportions. Returns NaN for all features if no observations processed.
Examples
>>> targets = Targets(a=0.5, b=0.3)
>>> raker = OnlineRakingSGD(targets)
>>> raker.partial_fit({'a': 1, 'b': 0})
>>> margins = raker.margins
>>> print(margins['a'] > margins['b'])  # a=1, b=0 in observation
True
- partial_fit(obs: dict[str, Any] | Any) None[source]¶
Process a single observation and update weights.
- Parameters:
obs – Observation containing feature indicators. Can be:
- dict: keys should match feature names in targets
- object: features are accessed as attributes
Values should be binary (0/1 or False/True). Missing features default to 0.
- Returns:
None. Updates internal state in place.
Examples
>>> targets = Targets(owns_car=0.4, is_subscriber=0.2)
>>> raker = OnlineRakingSGD(targets)
>>>
>>> # Dict input
>>> raker.partial_fit({'owns_car': 1, 'is_subscriber': 0})
>>>
>>> # Object input (e.g., dataclass or namedtuple)
>>> from dataclasses import dataclass
>>> @dataclass
... class Obs:
...     owns_car: int
...     is_subscriber: int
>>> raker.partial_fit(Obs(owns_car=1, is_subscriber=0))
Note
After calling, inspect weights, margins, and loss properties for current state.
- partial_fit_batch(observations: list[dict[str, Any] | Any]) None[source]¶
Process multiple observations in batch.
- Parameters:
observations – List of observations, each in same format as for partial_fit method.
- Returns:
None. Updates internal state for all observations.
Examples
>>> observations = [
...     {'feature_a': 1, 'feature_b': 0},
...     {'feature_a': 0, 'feature_b': 1},
...     {'feature_a': 1, 'feature_b': 1},
... ]
>>> raker.partial_fit_batch(observations)
Note
Currently processes observations sequentially. Future versions may implement true batch processing for better performance.
- property raw_margins: dict[str, float]¶
Get unweighted (raw) margins.
Computes the simple proportion of observations where each feature equals 1, without using weights.
- Returns:
Dictionary mapping feature names to unweighted proportions. Returns NaN for all features if no observations processed.
Note
Useful for comparing weighted vs unweighted margins to assess the impact of the raking process.
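A sketch of such a comparison after a few updates; the feature name and observations are illustrative:

>>> targets = Targets(owns_car=0.4)
>>> raker = OnlineRakingSGD(targets)
>>> for obs in [{'owns_car': 1}, {'owns_car': 1}, {'owns_car': 0}]:
...     raker.partial_fit(obs)
>>> raw = raker.raw_margins['owns_car']  # unweighted share of 1s (2/3 here)
>>> adj = raker.margins['owns_car']      # weighted share, pulled toward 0.4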
- property weight_distribution_stats: dict[str, float]¶
Return summary statistics of the current weight distribution as a dictionary mapping statistic names to values. Whether and how often these statistics are computed is controlled by the compute_weight_stats constructor parameter.
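The exact statistic keys are not enumerated here; a hedged way to confirm the return shape (note that compute_weight_stats must enable computation):

>>> targets = Targets(owns_car=0.4)
>>> raker = OnlineRakingSGD(targets, compute_weight_stats=True)
>>> raker.partial_fit({'owns_car': 1})
>>> stats = raker.weight_distribution_stats
>>> isinstance(stats, dict)
True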
- property weights: ndarray[tuple[Any, ...], dtype[float64]]¶
Get a copy of the current weight vector.
- Returns:
Array of shape (n_obs,) containing current weights.
Examples
>>> raker = OnlineRakingSGD(targets)
>>> raker.partial_fit({'feature_a': 1, 'feature_b': 0})
>>> weights = raker.weights
>>> print(weights.shape)
(1,)
OnlineRakingMWU¶
- class onlinerake.OnlineRakingMWU(targets, learning_rate: float = 1.0, min_weight: float = 0.001, max_weight: float = 100.0, n_steps: int = 3, verbose: bool = False, track_convergence: bool = True, convergence_window: int = 20, compute_weight_stats: bool | int = False)[source]¶
Bases: OnlineRakingSGD
Online raking via multiplicative weights updates.
- Parameters:
targets (Targets) – Target population proportions for each feature.
learning_rate (float, optional) – Step size used in the exponent of the multiplicative update. A typical default is learning_rate=1.0. The algorithm automatically clips extreme exponents based on the weights dtype to prevent numerical overflow/underflow, making it robust even with very large learning rates.
min_weight (float, optional) – Lower bound applied to the weights after each update. This prevents weights from collapsing to zero. Must be positive.
max_weight (float, optional) – Upper bound applied to the weights after each update. This prevents runaway weights. Must exceed min_weight.
n_steps (int, optional) – Number of multiplicative updates applied each time a new observation arrives.
compute_weight_stats (bool or int, optional) – Controls computation of weight distribution statistics for performance. If True, compute on every call. If False, never compute. If integer k, compute every k observations. Default is False.
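Since OnlineRakingMWU subclasses OnlineRakingSGD, it exposes the same streaming interface. A minimal sketch, mirroring the SGD example above:

>>> from onlinerake import Targets, OnlineRakingMWU
>>> targets = Targets(owns_car=0.4, is_subscriber=0.2)
>>> raker = OnlineRakingMWU(targets, learning_rate=1.0)
>>> raker.partial_fit({'owns_car': 1, 'is_subscriber': 0})
>>> print(f"Loss: {raker.loss:.4f}")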
- fit_one(obs: dict[str, Any] | Any) None¶
Consume a single observation and update weights multiplicatively.
- Parameters:
obs (dict or object) – An observation containing feature indicators. For dict input, keys should match feature names in targets. For object input, features are accessed as attributes. Values should be binary (0/1 or False/True).
- Returns:
The internal state is updated in place.
- Return type:
None
- partial_fit(obs: dict[str, Any] | Any) None[source]¶
Consume a single observation and update weights multiplicatively.
- Parameters:
obs (dict or object) – An observation containing feature indicators. For dict input, keys should match feature names in targets. For object input, features are accessed as attributes. Values should be binary (0/1 or False/True).
- Returns:
The internal state is updated in place.
- Return type:
None