1. Overview
The Utility Scores export provides model-derived preference scores at the respondent and attribute level. It is designed for teams running conjoint or trade-off analysis who need direct access to utility values outside the platform.
Use this export when you want to:
- Compare which attribute levels drive choice most strongly.
- Build custom simulators in Excel, Python, or BI tools.
- Segment utility patterns across respondent groups.
For the estimator that produces these values, see Utility and simulated share methodology. For the question types this export applies to, see Choice-Based Conjoint and Running MaxDiff. If you need the underlying posterior draws rather than summary scores, see the Raw Draws Export Format.
2. File Structure & Layout
Each row represents one respondent's utility value for one attribute level.
A single respondent will appear on multiple rows, typically one for every tested level in every included attribute.
Example (first 5 rows):
| respondent_id | attribute | level | raw_utility | zero_centered_utility | share_scaled_utility | model_run_id |
|---|---|---|---|---|---|---|
| r_1001 | Brand | Alpha | 0.84 | 0.32 | 12.7 | run_2026_04 |
| r_1001 | Brand | Beta | -0.21 | -0.73 | 8.1 | run_2026_04 |
| r_1001 | Price | $9.99 | 0.45 | 0.11 | 11.2 | run_2026_04 |
| r_1001 | Price | $14.99 | -0.67 | -1.01 | 6.9 | run_2026_04 |
| r_1001 | Delivery Speed | Same day | 1.03 | 0.51 | 13.6 | run_2026_04 |
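Assuming the export is delivered as a CSV with the columns shown above, loading it in Python might look like the following sketch. The inline sample mirrors the example rows; in practice you would open the exported file instead (the function name `load_utility_rows` is illustrative, not part of the export).

```python
import csv
import io

# Inline sample mirroring the example rows above; in practice, open the
# exported CSV file instead of this in-memory buffer.
SAMPLE = """respondent_id,attribute,level,raw_utility,zero_centered_utility,share_scaled_utility,model_run_id
r_1001,Brand,Alpha,0.84,0.32,12.7,run_2026_04
r_1001,Brand,Beta,-0.21,-0.73,8.1,run_2026_04
r_1001,Price,$9.99,0.45,0.11,11.2,run_2026_04
r_1001,Price,$14.99,-0.67,-1.01,6.9,run_2026_04
r_1001,Delivery Speed,Same day,1.03,0.51,13.6,run_2026_04
"""

def load_utility_rows(fileobj):
    """Read export rows, casting the three score columns to float."""
    rows = []
    for row in csv.DictReader(fileobj):
        for col in ("raw_utility", "zero_centered_utility", "share_scaled_utility"):
            row[col] = float(row[col])
        rows.append(row)
    return rows

rows = load_utility_rows(io.StringIO(SAMPLE))
```

Each element of `rows` is one respondent-by-level record, matching the one-row-per-level layout described above.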
3. Key Columns
- respondent_id - Unique participant identifier.
- attribute - The conjoint attribute or feature family (for example, Brand or Price).
- level - The specific level within that attribute.
- raw_utility - Direct model estimate for that respondent-level combination.
- zero_centered_utility - Utility recentered so that values average around zero, making levels easier to compare on a common scale.
- share_scaled_utility - Utility scaled to support preference-share style simulation.
- model_run_id - Identifier for the modeling run used to generate the scores.
- weight - Optional respondent weight for weighted aggregation workflows.
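The column list above can be expressed as a typed record for downstream code. The types here are assumptions (scores as floats, weight optional because it may be blank); the class and function names are illustrative, not part of the export.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UtilityRow:
    """One respondent x attribute-level record, per the column list above."""
    respondent_id: str
    attribute: str
    level: str
    raw_utility: float
    zero_centered_utility: float
    share_scaled_utility: float
    model_run_id: str
    weight: Optional[float] = None  # blank when weighting is unavailable

def parse_row(raw: dict) -> UtilityRow:
    """Convert a raw CSV dict (all strings) into a typed UtilityRow."""
    w = raw.get("weight")
    return UtilityRow(
        respondent_id=raw["respondent_id"],
        attribute=raw["attribute"],
        level=raw["level"],
        raw_utility=float(raw["raw_utility"]),
        zero_centered_utility=float(raw["zero_centered_utility"]),
        share_scaled_utility=float(raw["share_scaled_utility"]),
        model_run_id=raw["model_run_id"],
        weight=float(w) if w not in (None, "") else None,
    )

row = parse_row({
    "respondent_id": "r_1001", "attribute": "Brand", "level": "Alpha",
    "raw_utility": "0.84", "zero_centered_utility": "0.32",
    "share_scaled_utility": "12.7", "model_run_id": "run_2026_04",
    "weight": "",
})
```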
4. Data Representation
Respondent-level estimates
Utilities are stored at respondent granularity, allowing full distribution analysis rather than only averages.
Multiple scoring variants
The export includes both raw and transformed score variants so you can select the scoring scale that matches your simulator or reporting method.
Attribute completeness
For each included respondent, rows are expected across all modeled levels in the exported design.
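Because utilities are stored per respondent, you can summarize the full distribution for each level rather than only its mean. A minimal sketch, using hypothetical respondent-level values (the data below is invented for illustration):

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical zero-centered utilities for one level across four respondents;
# in practice these come from the export rows.
records = [
    ("r_1001", "Brand", "Alpha", 0.32),
    ("r_1002", "Brand", "Alpha", 0.55),
    ("r_1003", "Brand", "Alpha", -0.10),
    ("r_1004", "Brand", "Alpha", 0.41),
]

by_level = defaultdict(list)
for respondent_id, attribute, level, utility in records:
    by_level[(attribute, level)].append(utility)

# Per-level distribution summary: the mean is the topline value, while the
# standard deviation shows preference heterogeneity across respondents.
summary = {
    key: {"mean": mean(vals), "stdev": stdev(vals), "n": len(vals)}
    for key, vals in by_level.items()
}
```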
5. Missing & Special Values
- Missing respondent-level estimates may appear as blank values when a record is excluded from model fitting.
- If weighting is unavailable for a run, weight may be blank or default to 1.
- Filter to a single model_run_id when comparing utilities to avoid mixing model versions.
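These rules can be applied in one cleaning pass: keep one model run, drop rows whose estimate is blank, and default a blank weight to 1. A sketch under those assumptions (the helper name `clean_rows` is illustrative):

```python
def clean_rows(rows, model_run_id):
    """Filter to one model run, drop blank estimates, default weight to 1.0."""
    cleaned = []
    for row in rows:
        if row.get("model_run_id") != model_run_id:
            continue  # avoid mixing model versions
        if row.get("raw_utility") in (None, ""):
            continue  # respondent excluded from model fitting
        row = dict(row)  # copy so the input list is untouched
        w = row.get("weight")
        row["weight"] = float(w) if w not in (None, "") else 1.0
        cleaned.append(row)
    return cleaned

rows = [
    {"respondent_id": "r_1", "raw_utility": "0.84",
     "model_run_id": "run_2026_04", "weight": ""},
    {"respondent_id": "r_2", "raw_utility": "",
     "model_run_id": "run_2026_04", "weight": "2.0"},
    {"respondent_id": "r_3", "raw_utility": "0.10",
     "model_run_id": "run_2026_03", "weight": "1.5"},
]
kept = clean_rows(rows, "run_2026_04")
```

Here only `r_1` survives: `r_2` has a blank estimate and `r_3` belongs to a different model run.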
6. Best Practices
- Aggregate by attribute and level to produce mean utilities for topline readouts.
- Use respondent-level rows for cluster analysis and segment profiling.
- Keep score scale consistent across analyses (for example, do not mix raw and share-scaled values in the same chart).
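The first practice, aggregating to mean utilities per attribute and level, can be sketched as a weighted average over respondent rows. This assumes rows are already filtered to one model_run_id and weights are defaulted; the function name is illustrative.

```python
from collections import defaultdict

def topline_means(rows):
    """Weighted mean zero-centered utility per (attribute, level)."""
    # (attribute, level) -> [sum of weight * utility, sum of weight]
    totals = defaultdict(lambda: [0.0, 0.0])
    for r in rows:
        acc = totals[(r["attribute"], r["level"])]
        acc[0] += r["weight"] * r["zero_centered_utility"]
        acc[1] += r["weight"]
    return {key: s / w for key, (s, w) in totals.items()}

# Hypothetical respondent rows, all on the zero-centered scale (do not mix
# raw and share-scaled values in one aggregation).
rows = [
    {"attribute": "Brand", "level": "Alpha", "zero_centered_utility": 0.32, "weight": 1.0},
    {"attribute": "Brand", "level": "Alpha", "zero_centered_utility": 0.48, "weight": 3.0},
    {"attribute": "Brand", "level": "Beta", "zero_centered_utility": -0.73, "weight": 1.0},
]
means = topline_means(rows)
```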
7. When to Use the Utility Scores Export
- When building custom conjoint simulators.
- When you need respondent-level preference heterogeneity.
- When validating model outputs outside the platform.