Score Combiner Module
Multi-factor suitability scoring framework.
This module provides a composable scoring system for evaluating terrain suitability for outdoor activities. It separates scoring into three concerns:
- Transforms - Convert raw values to [0, 1] scores
- Components - Define individual scoring factors with roles
- Combiner - Combine components into a final score
Formula: final = (weighted sum of additive) * (product of multiplicative)
Scoring Transforms
All transforms convert raw values into scores in [0, 1].
Scoring transformation functions.
Transformation types:
1. trapezoidal - sweet spot with ramp-up/down (e.g., slope_mean, snow_depth)
2. dealbreaker - step function with optional soft falloff (e.g., cliff detection)
3. linear - simple normalization with optional power scaling
4. snow_consistency - combined inter/intra-season CV metric
5. terrain_consistency - combined roughness + slope_std metric
These are designed to be composable and user-configurable.
- src.scoring.transforms.trapezoidal(value, sweet_range, ramp_range)[source]
Trapezoidal transformation with a sweet spot.
Returns 1.0 for values in the sweet spot, ramps up/down at edges, and returns 0.0 outside the ramp range.
- Shape:

              _________
             /         \
            /           \
    _______/             \_______
           |   |       |   |
         ramp sweet  sweet ramp
        start start    end  end
- Parameters:
value (float | ndarray) – Raw value to transform
sweet_range (tuple) – (start, end) of the sweet spot, where the score is 1.0
ramp_range (tuple) – (start, end) of the outer ramp; the score is 0.0 outside
- Returns:
Score in [0, 1]
- Return type:
float | ndarray
Example
>>> trapezoidal(10.0, sweet_range=(5, 15), ramp_range=(3, 25))
1.0
>>> trapezoidal(4.0, sweet_range=(5, 15), ramp_range=(3, 25))
0.5
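A minimal sketch of how such a trapezoidal transform could be implemented (it reproduces the doctest above, but is not necessarily the module's actual source; it assumes ramp_range strictly contains sweet_range):

```python
import numpy as np

def trapezoidal(value, sweet_range, ramp_range):
    """Score 1.0 inside sweet_range, linear ramps out to ramp_range, 0.0 beyond."""
    v = np.asarray(value, dtype=float)
    ramp_lo, ramp_hi = ramp_range
    sweet_lo, sweet_hi = sweet_range
    # Rising edge: 0.0 at ramp start, 1.0 at sweet start
    rise = (v - ramp_lo) / (sweet_lo - ramp_lo)
    # Falling edge: 1.0 at sweet end, 0.0 at ramp end
    fall = (ramp_hi - v) / (ramp_hi - sweet_hi)
    # The minimum of both edges, clamped, traces the trapezoid
    score = np.clip(np.minimum(rise, fall), 0.0, 1.0)
    return float(score) if np.isscalar(value) else score
```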
- src.scoring.transforms.dealbreaker(value, threshold, falloff=0, below_is_good=True)[source]
Dealbreaker (step function) transformation.
Returns 1.0 (no penalty) for safe values, 0.0 (full penalty) for dangerous values. Optional soft falloff for gradual transition.
- Shape (hard cutoff, falloff=0):

    ________
            |
            |________
        threshold

- Shape (soft falloff):

    ________
            \
             \________
        threshold   threshold+falloff
- Parameters:
value (float | ndarray) – Raw value to transform
threshold (float) – Cutoff where the penalty begins
falloff (float) – Width of the soft transition past the threshold (0 = hard cutoff)
below_is_good (bool) – If True, values below the threshold are safe (score 1.0)
- Returns:
Score in [0, 1] where 1.0 = safe, 0.0 = dealbreaker
- Return type:
float | ndarray
Example
>>> dealbreaker(10.0, threshold=25)  # Below threshold
1.0
>>> dealbreaker(30.0, threshold=25, falloff=0)  # Above, hard cutoff
0.0
>>> dealbreaker(30.0, threshold=25, falloff=10)  # Midway in falloff
0.5
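A sketch matching the doctest above (illustrative only, not the module's actual source):

```python
import numpy as np

def dealbreaker(value, threshold, falloff=0, below_is_good=True):
    """1.0 for safe values, 0.0 past threshold (+falloff), linear in between."""
    v = np.asarray(value, dtype=float)
    if not below_is_good:
        # Mirror the axis so "below threshold" is always the safe side.
        v, threshold = -v, -threshold
    if falloff == 0:
        score = np.where(v <= threshold, 1.0, 0.0)  # hard step
    else:
        # Linear falloff from 1.0 at threshold to 0.0 at threshold + falloff
        score = np.clip(1.0 - (v - threshold) / falloff, 0.0, 1.0)
    return float(score) if np.isscalar(value) else score
```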
- src.scoring.transforms.linear(value, value_range, invert=False, power=1.0)[source]
Linear normalization transformation.
Maps value_range to [0, 1], clamping values outside the range. Optional power scaling for non-linear relationships.
- Parameters:
value (float | ndarray) – Raw value to transform
value_range (tuple) – (min, max) mapped to [0, 1]; values outside are clamped
invert (bool) – If True, return 1 - score (high raw values are bad)
power (float) – Exponent applied to the score for non-linear scaling
- Returns:
Score in [0, 1]
- Return type:
float | ndarray
Example
>>> linear(50.0, value_range=(0, 100))
0.5
>>> linear(0.5, value_range=(0, 1), invert=True)  # High CV = bad
0.5
>>> linear(0.5, value_range=(0, 1), power=0.5)  # sqrt scaling
0.707...
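A sketch consistent with the doctest above (the order of invert vs. power is an assumption, since the examples never combine them):

```python
import numpy as np

def linear(value, value_range, invert=False, power=1.0):
    """Normalize value_range to [0, 1] with clamping, optional inversion and power."""
    lo, hi = value_range
    score = np.clip((np.asarray(value, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
    if invert:
        score = 1.0 - score
    score = score ** power
    return float(score) if np.isscalar(value) else score
```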
- src.scoring.transforms.snow_consistency(interseason_cv, intraseason_cv, interseason_threshold=1.5, intraseason_threshold=1.0)[source]
Combined snow consistency metric from year-to-year and within-winter variability.
Uses RMS (root mean square) to combine normalized CVs. Returns 1.0 for consistent snow, with gradual falloff for high variability.
This is used as an ADDITIVE component (weighted contribution to score), not a multiplicative penalty. The score represents how reliable the snow is.
- Parameters:
interseason_cv (float | ndarray) – Year-to-year coefficient of variation
intraseason_cv (float | ndarray) – Within-winter coefficient of variation
interseason_threshold (float) – CV value that maps to full inconsistency (default: 1.5)
intraseason_threshold (float) – CV value that maps to full inconsistency (default: 1.0)
- Returns:
Consistency score in [0, 1] where 1.0 = reliable, 0.0 = unreliable
- Return type:
float | ndarray
Example
>>> snow_consistency(interseason_cv=0.0, intraseason_cv=0.0)
1.0
>>> snow_consistency(interseason_cv=1.5, intraseason_cv=1.0)
0.0
>>> snow_consistency(interseason_cv=0.75, intraseason_cv=0.5)  # Both at 50%
0.5
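One reading of the RMS combination that reproduces the doctest above (a sketch, not the module's actual source):

```python
import numpy as np

def snow_consistency(interseason_cv, intraseason_cv,
                     interseason_threshold=1.5, intraseason_threshold=1.0):
    """1.0 = reliable snow, 0.0 = both CVs at or beyond their thresholds."""
    # Normalize each CV so its threshold maps to 1.0 (full inconsistency).
    inter = np.clip(np.asarray(interseason_cv, dtype=float) / interseason_threshold, 0.0, 1.0)
    intra = np.clip(np.asarray(intraseason_cv, dtype=float) / intraseason_threshold, 0.0, 1.0)
    # RMS combination of the normalized CVs, flipped so high = reliable.
    rms = np.sqrt((inter ** 2 + intra ** 2) / 2.0)
    score = 1.0 - rms
    return float(score) if np.isscalar(interseason_cv) else score
```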
- src.scoring.transforms.terrain_consistency(roughness, slope_std, roughness_threshold=30.0, slope_std_threshold=10.0, soft_start=0.5)[source]
Combined terrain consistency metric from roughness and slope variability.
Uses RMS to combine normalized roughness and slope_std, but only penalizes EXTREME inconsistency. Most terrain gets a score of 1.0 (no penalty).
The penalty only kicks in when the combined inconsistency exceeds soft_start, providing a gradual falloff for very rough terrain.
- Parameters:
roughness (float | ndarray) – Elevation standard deviation in meters
slope_std (float | ndarray) – Slope standard deviation in degrees
roughness_threshold (float) – Roughness value that maps to full inconsistency
slope_std_threshold (float) – Slope std value that maps to full inconsistency
soft_start (float) – Normalized inconsistency level where penalty begins (default 0.5)
- Returns:
Consistency score in [0, 1] where 1.0 = acceptable, 0.0 = extremely rough
- Return type:
float | ndarray
Example
>>> terrain_consistency(roughness=0.0, slope_std=0.0)
1.0
>>> terrain_consistency(roughness=15.0, slope_std=5.0)  # Both at 50%, at threshold
1.0
>>> terrain_consistency(roughness=30.0, slope_std=10.0)  # Extreme
0.0
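A sketch of the soft-start penalty consistent with the doctest above (illustrative, not the module's actual source):

```python
import numpy as np

def terrain_consistency(roughness, slope_std,
                        roughness_threshold=30.0, slope_std_threshold=10.0,
                        soft_start=0.5):
    """1.0 for most terrain; penalty only once RMS inconsistency exceeds soft_start."""
    r = np.clip(np.asarray(roughness, dtype=float) / roughness_threshold, 0.0, 1.0)
    s = np.clip(np.asarray(slope_std, dtype=float) / slope_std_threshold, 0.0, 1.0)
    rms = np.sqrt((r ** 2 + s ** 2) / 2.0)
    # No penalty at or below soft_start; linear falloff to 0.0 at rms = 1.0.
    score = np.clip(1.0 - (rms - soft_start) / (1.0 - soft_start), 0.0, 1.0)
    return float(score) if np.isscalar(roughness) else score
```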
Score Components and Combiner
Score combination system for multi-factor suitability scoring.
Provides:
- ScoreComponent: Defines a single scoring factor with transform and role
- ScoreCombiner: Combines multiple components into a final score
Components have two roles:
- additive: Weighted sum (e.g., slope_score, depth_score)
- multiplicative: Penalties that reduce the score (e.g., cliff_penalty)
Formula: final_score = (sum of weighted additive) * (product of multiplicative)
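The formula can be sketched as a plain function over per-component scores (a hypothetical dict-based helper for illustration; the actual ScoreCombiner API differs):

```python
def combine_scores(component_scores, roles, weights):
    """component_scores: name -> score in [0, 1];
    roles: name -> 'additive' or 'multiplicative';
    weights: name -> weight (additive components only)."""
    # Weighted sum of additive components
    additive = sum(weights[n] * component_scores[n]
                   for n, r in roles.items() if r == "additive")
    # Product of multiplicative penalties
    penalty = 1.0
    for n, r in roles.items():
        if r == "multiplicative":
            penalty *= component_scores[n]
    return additive * penalty
```

A single multiplicative component near 0.0 (e.g., a cliff penalty) drives the final score toward 0.0 regardless of how good the additive factors are.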
- class src.scoring.combiner.ScoreComponent(name, transform, transform_params, role, weight=None)[source]
Bases: object
A single scoring component with transform and role.
- Parameters:
name (str) – Component name, used as the key for its input value
transform (str) – Name of the transform to apply
transform_params (dict) – Keyword arguments passed to the transform
role (Literal['additive', 'multiplicative']) – How the component combines into the final score
weight (float | None) – Weight for additive components
- role
“additive” (weighted sum) or “multiplicative” (penalty)
- Type:
Literal[‘additive’, ‘multiplicative’]
- class src.scoring.combiner.ScoreCombiner(name, components=<factory>)[source]
Bases: object
Combines multiple ScoreComponents into a final score.
Formula: final_score = (weighted sum of additive) * (product of multiplicative)
- Parameters:
name (str)
components (list[ScoreComponent])
- components: list[ScoreComponent]
List of ScoreComponent instances
- get_component_scores(inputs)[source]
Get individual transformed scores for each component.
Useful for debugging and visualization.
- __init__(name, components=<factory>)
- Parameters:
name (str)
components (list[ScoreComponent])
- Return type:
None
Pre-built Scoring Configurations
Sledding Scorer
Default sledding suitability scoring configuration.
This config defines how terrain and snow statistics are combined into a sledding suitability score. Users can modify this config or create their own based on local conditions.
- Score formula:
final = (weighted sum of additive) × (product of multiplicative)
Components:
- Additive (weights sum to 1.0):
slope_mean: Ideal slope angle (trapezoidal, 5-15° sweet spot)
snow_depth: Adequate snow coverage (trapezoidal, 150-500mm sweet spot)
snow_coverage: Reliability of snow days (linear)
snow_consistency: Snow reliability (RMS of inter/intra-season CVs)
aspect_bonus: North-facing snow retention bonus (linear)
runout_bonus: Safe stopping area available (linear)
- Multiplicative (penalties, only extreme values):
cliff_penalty: Dangerous steep sections (dealbreaker on p95)
terrain_consistency: Extreme roughness only (soft threshold at 50%)
- src.scoring.configs.sledding.create_default_sledding_scorer()[source]
Create the default sledding suitability scorer.
- Returns:
ScoreCombiner configured for sledding suitability analysis.
- Return type:
ScoreCombiner
Example
>>> scorer = create_default_sledding_scorer()
>>> score = scorer.compute({
...     "slope_mean": 10.0,        # degrees
...     "snow_depth": 300.0,       # mm
...     "snow_coverage": 0.7,      # ratio 0-1
...     "snow_consistency": 0.3,   # CV (lower is better)
...     "aspect_bonus": 0.03,      # northness × strength
...     "runout_bonus": 1.0,       # 1.0 if slope_min < 5°
...     "slope_p95": 20.0,         # degrees (for cliff detection)
...     "roughness": 10.0,         # meters
...     "slope_std": 3.0,          # degrees
... })
- src.scoring.configs.sledding.get_required_inputs()[source]
Get documentation of required inputs for the sledding scorer.
- src.scoring.configs.sledding.compute_derived_inputs(slope_stats, snow_stats)[source]
Compute derived inputs from raw statistics.
This helper computes the pre-processed inputs that the scorer expects.
- src.scoring.configs.sledding.compute_improved_sledding_score(slope_stats, snow_stats)[source]
Compute improved sledding score using trapezoid functions and synergy bonuses.
This is the new scoring system that uses:
- Trapezoid functions for sweet spots (snow, slope)
- Hard deal breakers (slope > 40°, roughness > 6m, insufficient coverage)
- Coverage with diminishing returns
- Synergy bonuses for exceptional combinations
- Multiplicative base score
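The hard deal breakers listed above can be sketched as a simple gate. The function name is illustrative, and the 15% coverage floor is an assumption borrowed from the XC skiing scorer (the sledding threshold for "insufficient coverage" is not stated here):

```python
def passes_hard_dealbreakers(slope_max_deg, roughness_m, coverage, min_coverage=0.15):
    """Return False if any hard deal breaker is hit, else True.
    min_coverage=0.15 is an assumed floor, not a documented sledding value."""
    if slope_max_deg > 40.0:   # too steep to sled safely
        return False
    if roughness_m > 6.0:      # terrain too rough
        return False
    if coverage < min_coverage:  # not enough snow days
        return False
    return True
```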
XC Skiing Scorer
Cross-country skiing suitability scoring configuration.
This config defines how snow statistics are combined into a cross-country skiing suitability score. XC skiing depends primarily on snow conditions (depth, coverage, consistency) with no penalty for slope.
- Score formula:
final = weighted sum of snow metrics (no multiplicative penalties)
Components:
- snow_depth: Adequate snow base (trapezoidal, 100-400mm sweet spot)
- snow_coverage: Reliability of snow days (linear)
- snow_consistency: Snow reliability (RMS of inter/intra-season CVs)
Unlike sledding, there are no penalties for slope steepness: XC skiing can adapt to various terrain grades. Only snow quality matters.
- src.scoring.configs.xc_skiing.create_xc_skiing_scorer()[source]
Create the XC skiing suitability scorer.
- Returns:
ScoreCombiner configured for XC skiing suitability analysis.
- Return type:
ScoreCombiner
Example
>>> scorer = create_xc_skiing_scorer()
>>> score = scorer.compute({
...     "snow_depth": 250.0,       # mm
...     "snow_coverage": 0.75,     # ratio 0-1
...     "snow_consistency": 0.75,  # consistency score 0-1 (higher is better)
... })
- src.scoring.configs.xc_skiing.get_required_inputs()[source]
Get documentation of required inputs for the XC skiing scorer.
- src.scoring.configs.xc_skiing.compute_derived_inputs(snow_stats)[source]
Compute derived inputs from snow statistics.
This helper prepares snow metrics for the XC skiing scorer.
- src.scoring.configs.xc_skiing.compute_improved_xc_skiing_score(snow_stats)[source]
Compute improved XC skiing score using deal breakers and linear coverage.
This is the new scoring system that uses:
- Trapezoid function for snow depth (100-400mm optimal)
- Linear coverage (proportional to snow days, no diminishing returns)
- Inverted consistency (lower CV = better)
- Hard deal breaker: coverage < 15% (< ~18 days per season)
- Weighted sum (depth 30%, coverage 60%, consistency 10%)
Parks handle terrain safety, so only snow conditions matter.
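The weights and deal breaker above can be sketched directly (the function name is illustrative; depth_score and consistency_score are assumed already transformed to [0, 1]):

```python
def xc_skiing_score(depth_score, coverage, consistency_score):
    """Weighted sum per the breakdown above: depth 30%, coverage 60%, consistency 10%."""
    if coverage < 0.15:  # hard deal breaker: fewer than ~18 snow days per season
        return 0.0
    return 0.30 * depth_score + 0.60 * coverage + 0.10 * consistency_score
```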
Usage Examples
Using the combiner framework:
from src.scoring.combiner import ScoreCombiner, ScoreComponent
scorer = ScoreCombiner(
    name="example",
    components=[
        ScoreComponent(
            name="slope",
            transform="trapezoidal",
            transform_params={"sweet_range": (5, 15), "ramp_range": (3, 25)},
            role="additive",
            weight=1.0,
        ),
        ScoreComponent(
            name="cliff",
            transform="dealbreaker",
            transform_params={"threshold": 25, "falloff": 10},
            role="multiplicative",
        ),
    ],
)
scores = scorer.combine(slope=slope_data, cliff=p95_data)
Using pre-built scorers:
from src.scoring.configs.sledding import DEFAULT_SLEDDING_SCORER
scores = DEFAULT_SLEDDING_SCORER.combine(
slope_mean=slope_stats.slope_mean,
slope_p95=slope_stats.slope_p95,
roughness=slope_stats.roughness,
slope_std=slope_stats.slope_std,
snow_depth=snow_stats["median_max_depth"],
snow_coverage=snow_stats["mean_snow_day_ratio"],
snow_consistency=consistency,
)
See Also
Scoring Module - Legacy scoring functions (trapezoid_score, compute_sledding_score)
Snow Integration: Sledding Location Analysis - Sledding score example