Additive Multi-Attribute Value Theory in Talent Acquisition
Every hiring process claims to be objective. Very few actually are. I have sat in enough recruitment debriefs to know that the final decision often comes down to a vague sense of “fit” — a feeling that resists scrutiny precisely because it was never formalised in the first place. The candidate who interviews well over coffee beats the candidate whose CV is stronger, and nobody can explain exactly why, because there was never a framework that forced the comparison to be explicit.
Additive Multi-Attribute Value Theory (MAVT) does not eliminate subjectivity — the choice of weights is inherently a value judgement — but it makes the subjectivity visible, auditable, and consistent. It forces you to say, in advance, what matters and by how much. That alone is a significant improvement over the status quo.
The Multi-Attribute Decision Problem
When evaluating a candidate $c$ against a job offer $o$, we assess multiple attributes:
$$A = \{a_1, a_2, ..., a_n\}$$

Where typical attributes include:
- $a_1$: Technical skills match
- $a_2$: Years of relevant experience
- $a_3$: Education level
- $a_4$: Certifications
- $a_5$: Interview performance
Each attribute $a_i$ has a raw value $x_i(c)$ for candidate $c$ that must be normalized and weighted.
The Additive Value Function
The core MAVT model computes an overall value score as a weighted sum:
$$V(c) = \sum_{i=1}^{n} w_i \cdot v_i(x_i(c))$$

Where:
- $V(c)$ is the total value score for candidate $c$
- $w_i$ is the weight assigned to attribute $i$, with $\sum w_i = 1$
- $v_i(x_i)$ is the value function that normalizes raw scores to $[0, 1]$
This additive form assumes preferential independence: the value contribution of one attribute doesn’t depend on the levels of other attributes.
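As a minimal sketch of the additive model (the attribute names, weights, and scores below are illustrative, not prescriptive), the computation is just a weighted sum over normalized attribute values:

```python
def additive_value(values, weights):
    """Additive MAVT score: V(c) = sum_i w_i * v_i, with the weights summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[a] * values[a] for a in weights)

# Illustrative normalized scores v_i(x_i(c)) in [0, 1] for one candidate
values = {"skills": 0.8, "experience": 0.6, "education": 1.0, "interview": 0.7}
weights = {"skills": 0.4, "experience": 0.3, "education": 0.1, "interview": 0.2}

score = additive_value(values, weights)  # 0.4*0.8 + 0.3*0.6 + 0.1*1.0 + 0.2*0.7 = 0.74
```

Because each $v_i$ lives in $[0, 1]$ and the weights sum to $1$, the total score also lands in $[0, 1]$, which makes candidates directly comparable.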
Skill Matching with String Similarity
For skill-based matching, we compare required skills $S_o = \{s_1, s_2, ..., s_m\}$ from the job offer against candidate skills $S_c$. Using the Damerau-Levenshtein distance $d_{DL}$, we compute similarity:
$$sim(s_o, s_c) = 1 - \frac{d_{DL}(s_o, s_c)}{\max(|s_o|, |s_c|)}$$

For each required skill, we find the best match:

$$match(s_o) = \max_{s_c \in S_c} sim(s_o, s_c)$$

The overall skill matching score becomes:

$$v_{skills}(c) = \frac{1}{|S_o|} \sum_{s \in S_o} match(s)$$

This captures fuzzy matching—a candidate listing “ReactJS” still scores well against a requirement for “React.js”.
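A self-contained sketch of this pipeline, using the restricted Damerau-Levenshtein (optimal string alignment) variant of the distance; lower-casing skill names before comparison is an assumption of this sketch, not something stated above:

```python
def damerau_levenshtein(a, b):
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def similarity(s_o, s_c):
    """sim = 1 - d_DL / max(|s_o|, |s_c|), on lower-cased skill names."""
    s_o, s_c = s_o.lower(), s_c.lower()
    if not s_o and not s_c:
        return 1.0
    return 1 - damerau_levenshtein(s_o, s_c) / max(len(s_o), len(s_c))

def skill_score(required, offered):
    """v_skills: mean best-match similarity over all required skills."""
    return sum(max(similarity(r, c) for c in offered) for r in required) / len(required)

# "ReactJS" vs "React.js" differs by one deletion out of 8 chars -> sim 0.875
score = skill_score(["React.js", "TypeScript"], ["ReactJS", "typescript", "CSS"])  # 0.9375
```

Taking the best match per required skill means extra, unrequested skills never penalize a candidate; only gaps in the required set lower the score.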
Experience Scoring with Duration Weighting
Experience quality depends on multiple factors. For a skill $s$ with required experience $r_s$ years, we evaluate the candidate’s relevant experience:
$$v_{exp}(s, c) = \frac{\iota \cdot t}{r_s}$$

Where:
- $t$ = duration in years at positions using skill $s$
- $\iota$ = experience type factor:
  - $\iota = 1.0$ for full-time positions
  - $\iota = 0.5$ for internships
- $r_s$ = required years for skill $s$
This formula naturally handles:
- Partial experience: 2 years against a 5-year requirement yields $0.4$
- Exceeding requirements: 7 years against a 5-year requirement yields $1.4$; in practice the score is usually capped at $1.0$ so that all value functions share the same scale, though it can be left uncapped to reward exceptional depth
- Internship discounting: Internships contribute less weight
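The formula and the capping choice can be sketched in a few lines; the `cap` flag is a design decision of this sketch, exposing both behaviours described above:

```python
def experience_score(duration_years, required_years, internship=False, cap=True):
    """v_exp = iota * t / r_s, with iota = 0.5 for internships and 1.0 for full-time."""
    iota = 0.5 if internship else 1.0
    raw = iota * duration_years / required_years
    return min(raw, 1.0) if cap else raw

# 2 full-time years against a 5-year requirement -> 0.4
partial = experience_score(2, 5)
# 3 internship years count as 1.5 effective years -> 0.3
intern = experience_score(3, 5, internship=True)
```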
Semantic Similarity with Embeddings
For attributes that resist simple string matching—like comparing job responsibilities against candidate experience narratives—we use semantic embeddings.
Given text representations $T_o$ (job requirements) and $T_c$ (candidate experience), we generate embedding vectors:
$$\vec{e}_o = embed(T_o), \quad \vec{e}_c = embed(T_c)$$

The semantic similarity is the cosine similarity between the two vectors:

$$v_{semantic}(c) = \frac{\vec{e}_o \cdot \vec{e}_c}{||\vec{e}_o|| \cdot ||\vec{e}_c||}$$

(Using $1 - \cos$ would yield a distance, under which identical texts score $0$; since higher scores must mean better matches in the weighted sum, we use the similarity directly.) This captures conceptual alignment that keyword matching would miss—a candidate describing “building scalable microservices” semantically matches requirements for “distributed systems architecture”.
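Assuming the embedding step itself is handled by some external model, the scoring reduces to a cosine over plain vectors. The 3-dimensional vectors below are toy stand-ins for real embeddings:

```python
import math

def cosine_similarity(e_o, e_c):
    """v_semantic: cosine of the angle between two embedding vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(e_o, e_c))
    norm = math.sqrt(sum(a * a for a in e_o)) * math.sqrt(sum(b * b for b in e_c))
    return dot / norm

# Toy stand-ins for embed(T_o) and embed(T_c); real embeddings have hundreds of dims
v = cosine_similarity([0.2, 0.7, 0.1], [0.25, 0.6, 0.15])
```

With typical sentence-embedding models the cosine of two related texts lands well above that of unrelated ones, which is exactly the signal the keyword matcher cannot see.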
Combining Scores: The Final Model
The complete candidate score combines all components:
$$V(c) = w_1 \cdot v_{skills}(c) + w_2 \cdot v_{exp}(c) + w_3 \cdot v_{semantic}(c) + w_4 \cdot v_{edu}(c)$$

Weight selection reflects organizational priorities:
- Technical roles: Higher $w_1$, $w_2$
- Senior positions: Higher $w_2$ (experience weight)
- Research roles: Higher $w_4$ (education weight)
Practical Implementation Considerations
Threshold-Based Filtering
Before scoring, apply minimum thresholds for critical attributes:
$$eligible(c) = \begin{cases} 1 & \text{if } v_i(c) \geq \theta_i \quad \forall i \in Critical \\ 0 & \text{otherwise} \end{cases}$$

Score Calibration
Raw scores require calibration against historical hiring data. If candidates with scores above $0.7$ historically succeeded, this becomes a benchmark for recommendations.
Handling Missing Data
For missing attributes, options include:
- Assign neutral value: $v_i = 0.5$
- Redistribute weights to known attributes
- Flag for manual review
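The eligibility filter and the weight-redistribution option can be sketched together; the attribute names and thresholds are illustrative:

```python
def eligible(scores, thresholds):
    """Hard filter: pass only if every critical attribute meets its threshold."""
    return all(scores.get(a, 0.0) >= t for a, t in thresholds.items())

def reweight(weights, known):
    """Redistribute weight from missing attributes proportionally over known ones."""
    total = sum(w for a, w in weights.items() if a in known)
    return {a: w / total for a, w in weights.items() if a in known}

weights = {"skills": 0.4, "experience": 0.3, "education": 0.3}
# Education missing: its 0.3 is shared proportionally by the remaining attributes
adjusted = reweight(weights, known={"skills", "experience"})  # skills ~0.571, experience ~0.429
```

Proportional redistribution keeps the relative importance of the known attributes unchanged, which is usually less distorting than substituting a neutral $0.5$.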
Advantages of the MAVT Approach
The real advantage is not accuracy — no model can perfectly predict job performance — but defensibility. When a rejected candidate asks why, you can point to a score breakdown rather than a gut feeling.
- Transparency: Every score component is explainable
- Consistency: Same criteria applied to all candidates
- Auditability: Weights and formulae can be reviewed for bias
- Flexibility: Weights adjust per role without changing infrastructure
Limitations and Mitigations
The preferential independence assumption may not hold—10 years of experience might matter more for senior roles than for entry-level ones. This can be addressed with interaction terms:
$$V(c) = \sum_i w_i v_i + \sum_{i < j} w_{ij} v_i v_j$$

Weight elicitation is challenging. Techniques include:
- Pairwise comparisons (AHP method)
- Swing weighting
- Historical data regression
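Swing weighting, for instance, reduces to a normalization: hiring managers award 100 points to the attribute whose worst-to-best swing matters most, rate the other swings relative to it, and the points are scaled to sum to one. A sketch with made-up point assignments:

```python
def swing_weights(points):
    """Normalize swing-weighting points (most important swing = 100) into weights."""
    total = sum(points.values())
    return {a: p / total for a, p in points.items()}

# Illustrative elicitation: skills swing is the reference, the others rated against it
weights = swing_weights({"skills": 100, "experience": 60, "education": 40})
# -> {"skills": 0.5, "experience": 0.3, "education": 0.2}
```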
Reflection
MAVT does not solve hiring. It solves the process of hiring — the part that should be mechanical so that human judgement can be reserved for the part that should not be. The mathematics ensure that every candidate is measured against the same yardstick; the humans still choose the yardstick, still define what “good” means for a given role, still make the final call when two candidates score within a margin of each other.
I have found that the most valuable moment in implementing such a system is not the scoring itself but the weight-elicitation meeting — the conversation where hiring managers are forced to articulate, out loud and in numbers, what they actually care about. That conversation alone, even without the model, would improve most hiring processes I have seen.
Achraf SOLTANI — January 21, 2025
