
Nimiq Validator Trustscore

An algorithm that calculates a score to help users assess how reliable a validator is.

The Validator Trustscore (VTS) algorithm is designed to help users assess the reliability of validators in the Nimiq Wallet. This score ranges from 0 to 1, where 0 indicates a validator is not trustworthy, and 1 indicates a highly trustworthy validator. This way, stakers can make better-informed decisions about which validators to trust. The algorithm is based on three key factors:

  • Dominance: Evaluates the dominance of the validator's stake relative to the total stake in the network, penalizing validators with higher stakes to prevent centralization.
  • Reliability: Assesses the consistency of a validator in producing blocks over the past 9 months.
  • Availability: Measures how often a validator is online and selected to produce blocks.
Heads up

The Validator Trustscore is still under heavy development and nothing is final. Feel free to share your Suggestions or Feedback.

The Validator Score in the Wallet

Preview of the Validator Trustscore in the Nimiq Wallet


The VTS algorithm is open source, with its design and implementation available to the public, just like our blockchain. The implementation is currently under development and will be available as an npm package. An API may also be made available for public use to access the score. More information about this API will be provided in the future. This document details the calculation methods for each factor.

The VTS algorithm

The algorithm uses three factors: Dominance, Reliability, and Availability. Each factor ranges from 0 to 1.

$$T = D \times R \times L$$

The dominance factor is based on the dominance of the validator's stake relative to the total stake in the network. Reliability and availability are based on behaviour over the last 9 months. For these parameters we only consider completed epochs, not the currently active one. Therefore, the score is not live and can have a delay of up to 12 hours (an epoch lasts 12 hours).

Before going any further, we define m, the number of epochs to consider, knowing that the duration of the window is 9 months.

Calculation of m

$$m = \frac{\text{window\_duration\_ms}}{\text{epoch\_duration}}$$

$$\text{window\_duration\_ms} = 9 \times 30 \times 24 \times 60 \times 60 \times 1000$$

$$\text{epoch\_duration} = \text{block\_duration} \times \text{blocks\_per\_epoch}$$
  • Block duration and blocks per epoch are constants from the policy
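As a sketch, the window size $m$ can be computed directly from these constants. The block duration and blocks-per-epoch values below are illustrative placeholders chosen so that an epoch lasts 12 hours; the real values come from the policy:

```typescript
// Illustrative policy constants (assumptions, not the actual policy values):
const BLOCK_DURATION_MS = 1_000;  // assumed 1-second block time
const BLOCKS_PER_EPOCH = 43_200;  // assumed, so one epoch spans 12 hours

const WINDOW_DURATION_MS = 9 * 30 * 24 * 60 * 60 * 1000; // 9-month window

// Number of completed epochs that fit in the window.
function epochsInWindow(blockDurationMs: number, blocksPerEpoch: number): number {
  const epochDurationMs = blockDurationMs * blocksPerEpoch;
  return Math.floor(WINDOW_DURATION_MS / epochDurationMs);
}

const m = epochsInWindow(BLOCK_DURATION_MS, BLOCKS_PER_EPOCH); // 540 with these constants
```

With a 12-hour epoch, the 9-month window covers 540 completed epochs.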

The curves and constants presented in this document are subject to change at any time in the future. We will keep the community informed of any changes.

Dominance

The dominance factor ensures that no single validator controls too much of the network's total stake. If a validator controls a large portion of the total stake, they will get a lower score. We penalize validators with a higher stake compared to those with a lower stake, as a lower stake promotes a fair distribution of control across the network.

Dominance Ratio

To find the dominance ratio (s) of a validator, we have two methods:

  1. First method: Calculate the ratio by dividing the validator's share by the total share of the network for an active epoch:
$$s = \frac{v}{Z}$$

Where $v$ is the validator's share and $Z$ is the total network share. This method applies when the epoch is active, so we can access each validator's balance using the getActiveValidators function from the RPC.

  2. Second method: A different approach is used for a closed epoch. This is less accurate due to some randomness and is considered a fallback option. Here we look at the slot distribution of each voting block, which reflects the amount staked by each validator. The dominance ratio is calculated by dividing the number of slots allocated to a validator by the total number of slots in that epoch:

$$s = \frac{s_l}{S_l}$$

Where $s_l$ is the number of slots allocated to the validator and $S_l$ is the total number of slots.

The second method in the code is called dominanceRatioViaSlots.
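As a minimal sketch, both methods reduce to a simple ratio once the inputs are known. The signatures below are illustrative: in the real implementation the balances come from the RPC (e.g. getActiveValidators) and the slot counts from the per-epoch slot distribution.

```typescript
// Method 1 (active epoch): validator balance over the total active stake.
function dominanceRatio(validatorStake: number, totalStake: number): number {
  return validatorStake / totalStake;
}

// Method 2 (fallback for closed epochs): slots assigned to the validator
// over the total number of slots in the epoch.
function dominanceRatioViaSlots(validatorSlots: number, totalSlots: number): number {
  return validatorSlots / totalSlots;
}
```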

Curve adjustment

Then, we apply a curve to the stake percentage to calculate the dominance score (S):

$$S = \max\left(0,\ 1 - \left(\frac{s}{t}\right)^{k}\right), \quad t = 0.15,\ k = 7.5$$

Where $t$ is the threshold and $k$ is the slope of the curve.

Graph of the dominance factor. The x-axis represents the dominance of the validator, and the y-axis represents the dominance factor.

Here you can see some examples depending on the stake percentage:

| Stake Percentage | Dominance Score |
| ---------------- | --------------- |
| 0%               | 1               |
| 5%               | 0.999           |
| 7.5%             | 0.994           |
| 10%              | 0.952           |
| 12.5%            | 0.745           |
| >=15%            | 0               |
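The curve can be sketched as follows (constant names are illustrative); it reproduces the table values above:

```typescript
// Dominance curve S = max(0, 1 - (s / t)^k) with the document's constants.
const THRESHOLD = 0.15; // t: dominance ratio at which the score reaches 0
const SLOPE = 7.5;      // k: steepness of the curve

function dominanceScore(s: number): number {
  return Math.max(0, 1 - (s / THRESHOLD) ** SLOPE);
}
```

For example, a validator holding 10% of the total stake scores about 0.952, while one at or above the 15% threshold scores 0.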

Due to technical limitations, we can currently only calculate the dominance of validators that are active in the current epoch. We cannot calculate the score of a validator at a given timestamp.

Reliability

The Reliability factor measures how consistently a validator produces blocks when it is expected to. Validators that regularly produce the blocks they are expected to will have a high reliability score; validators that often fail to produce their expected blocks will have a lower one. The score is a moving average of the reliability score for each epoch. First, we calculate the reliability $r_i$ for each epoch.

$r_i$ is the number of blocks that the validator produced and received a reward for ($C_i$) divided by the number of blocks that the validator was likely to produce ($H_i$).

Calculation of Ci

$C_i$ is the number of blocks produced (and thus rewarded) by the validator in epoch $i$:

$$C_i = \sum_{j=0}^{N-1} c_j \quad \text{for } i = 0, 1, 2, \ldots, m-1$$

$c_j$ is the number of blocks that the validator produced in batch $j$, where $j \in [0, N-1]$.

  • N is the number of batches in an epoch that can be retrieved from the policy.
  • The number of blocks that the validators produced can be fetched from the blockchain via the rewarded inherent of a batch.

Calculation of Hi

$H_i$ is the likelihood that the validator will produce a block in epoch $i$:

$$H_i = \frac{h_{i,v}}{\sum_{k=0}^{V-1} h_{i,k}} \quad \text{for } i = 0, 1, 2, \ldots, m-1$$

  • $V$ is the number of active validators in epoch $i$.
  • $h_{i,v}$ is the number of slots assigned to validator $v$ in epoch $i$.

$$r_i = \frac{C_i}{H_i} \quad \text{for } i = 0, 1, 2, \ldots, m-1$$

Where $r_0$ is the Reliability value of the most recent epoch and $r_{m-1}$ is the Reliability of the oldest epoch.

To combine all the Reliability scores into a single value, we do a moving average, where more recent epochs have higher weights than older ones.

$$\bar{R} = \frac{\sum_{i=0}^{m-1} \left(1 - a\,\frac{i}{m-1}\right) r_i}{\sum_{i=0}^{m-1} \left(1 - a\,\frac{i}{m-1}\right)}, \quad a = 0.5$$

Where $a$ is the parameter determining how much the observation of the oldest epoch is worth relative to the observation of the newest epoch.
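The weighted moving average can be sketched as below; the same weighting is later reused for the availability score. Index 0 is the most recent epoch, and with $a = 0.5$ the oldest epoch's weight decays to 0.5. The single-epoch guard is our own assumption, since the weight formula divides by $m - 1$:

```typescript
const A = 0.5; // weight of the oldest epoch relative to the newest

// Weighted moving average over per-epoch scores, newest first.
function movingAverage(perEpochScores: number[]): number {
  const m = perEpochScores.length;
  if (m === 1) return perEpochScores[0]; // assumed guard: avoids 0/0 below
  let weightedSum = 0;
  let weightSum = 0;
  for (let i = 0; i < m; i++) {
    const weight = 1 - (A * i) / (m - 1); // 1 for the newest epoch, 1 - A for the oldest
    weightedSum += weight * perEpochScores[i];
    weightSum += weight;
  }
  return weightedSum / weightSum;
}
```

For three epochs the weights are 1, 0.75, and 0.5, so a validator that was only reliable in the most recent epoch still averages 1 / 2.25 ≈ 0.44 rather than 1/3.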

Adjusting for High-Reliability Expectations

The previous formula provides a weighted moving average of a validator's reliability in block production, emphasising recent performance. Note that a score of 0.9 already indicates a significant downtime of about 10%*, so the raw average understates how much consistency we expect.

To better reflect the high standards required, we map the value onto a circular arc, with $c$ as the parameter defining the curvature of the arc.

$$R = -c + 1 - \sqrt{-\bar{R}^2 + 2c\bar{R} + (c-1)^2}$$

Center of the circle at $(c, -c+1)$, where $c = -0.16$.

Graph of the reliability score adjustment. The x-axis represents the reliability score, and the y-axis represents the adjusted reliability score.

What we achieve with this adjustment is to penalise more severely those validators that have a low reliability score and are unable to produce blocks when expected.

* Using 10% is only a rough approximation. A value of 0.9 could represent 10% downtime, but also 20% or 5%, depending on when the downtime occurred. We say 10% to help the reader understand the scale of the score.
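A sketch of the arc adjustment, taking the circle-center parameter as c = -0.16 so that the arc passes through (0, 0) and (1, 1) and bends below the diagonal:

```typescript
const C = -0.16; // circle-center parameter; center sits at (C, -C + 1)

// Maps the averaged reliability onto the lower circular arc through
// (0, 0) and (1, 1), penalising low averages more severely.
function adjustReliability(rBar: number): number {
  return -C + 1 - Math.sqrt(-(rBar ** 2) + 2 * C * rBar + (C - 1) ** 2);
}
```

A perfect average of 1 stays at 1 and an average of 0 stays at 0, but an average of 0.9 drops to roughly 0.66, which is the harsher penalty the adjustment is meant to produce.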

Availability

The availability factor measures how often a validator is online and selected to produce blocks. We want validators to be active because it ensures the network runs smoothly and securely. Validators that are frequently online and producing blocks receive a higher score, promoting consistent participation and reliability.

Why availability

If a validator is not active and producing blocks, it could still have a high dominance and reliability score. This would be misleading because it is not contributing to the operation of the network. The availability factor ensures that only active validators that are actually selected to produce blocks receive a higher score. We want to penalise validators that are not selected to produce blocks because they are inactive, jailed, offline, etc.

We use the term availability instead of uptime because uptime implies precise measurement, as in server contexts where you can measure online time. In our case, we can't measure how long a validator has been online. We can only see when validators are active and producing blocks. There's no way of telling when they're active but not producing blocks, or when they're offline.

To be clear, a validator can be active and offline at the same time. It might produce no blocks because it is offline, or it might be online but not producing blocks because it has not been selected in a certain period. This is why we use availability to show how often a validator is selected to produce blocks.

How to calculate availability

The score is a moving average of the availability score for each epoch. First, we record the availability $l_i$ of each epoch: whether or not the validator was selected to produce blocks in that epoch.

$$l_i = \begin{cases} 1 & \text{if the validator was selected in epoch } i \\ 0 & \text{otherwise} \end{cases} \quad \text{for } i = 0, 1, 2, \ldots, m-1$$

Where $l_0$ is the availability value of the most recent epoch and $l_{m-1}$ is the availability of the oldest epoch.

To combine all the availability scores into a single value, we do a moving average, where more recent epochs have higher weights than older ones.

$$\bar{L} = \frac{\sum_{i=0}^{m-1} \left(1 - a\,\frac{i}{m-1}\right) l_i}{\sum_{i=0}^{m-1} \left(1 - a\,\frac{i}{m-1}\right)}, \quad a = 0.5$$

Where $a$ is the parameter determining how much the observation of the oldest epoch is worth relative to the observation of the newest epoch.

Adaptation to support smaller validators

To better support smaller validators in our PoS network, we use a curve to represent the value of the previous step. This adjustment aims to reduce the penalty for validators who are not frequently selected for block production, while still incentivising active participation.

The adjusted availability score is calculated using the following formula:

$$L = -\bar{L}^2 + 2\bar{L}$$

Graph of the availability score adjustment. The x-axis represents the availability score, and the y-axis represents the adjusted availability score.
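A sketch of this adjustment:

```typescript
// Availability curve L = -L̄² + 2·L̄: a concave arc through (0, 0) and
// (1, 1) that lifts mid-range averages, softening the penalty for
// validators that are selected less often.
function adjustAvailability(lBar: number): number {
  return -(lBar ** 2) + 2 * lBar;
}
```

For example, a validator selected in half of the weighted window (average 0.5) gets an adjusted availability of 0.75 instead of 0.5.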

Suggestions & Feedback

Like everything in Nimiq, this algorithm is designed for people. We know it is not perfect and we may have missed some details. If you have any feedback or suggestions that you think will improve the algorithm, please feel free to contact us by opening an issue or in the Telegram group.