#ifndef HEADER_fd_src_choreo_tower_fd_tower_h
#define HEADER_fd_src_choreo_tower_fd_tower_h

#include "../../choreo/voter/fd_voter.h"

/* fd_tower presents an API for Solana's TowerBFT algorithm.

   What is TowerBFT? TowerBFT is an algorithm for converging a
   supermajority of stake in the validator cluster on the same fork.

        /-- 3-- 4 (A)
   1-- 2
        \-- 5     (B)

   Above is a diagram of a fork. The leader for slot 5 decided to build
   off slot 2, rather than slot 4. This can happen for various reasons,
   for example network propagation delay. We now have two possible forks
   labeled A and B. The consensus algorithm has to pick one of them.

   So, how does the consensus algorithm pick? As detailed in fd_ghost.h,
   we pick the fork with the most stake from votes, called the
   "heaviest". Validators vote for blocks during replay, and
   simultaneously use other validators’ votes to determine which block
   to vote for. This encourages convergence, because as one fork gathers
   more votes, more and more votes pile on, solidifying its position as
   the heaviest fork.

        /-- 3-- 4 (10%)
   1-- 2
        \-- 5     (9%)

   However, network propagation delay of votes can lead us to think one
   fork is heaviest, before observing new votes that indicate another
   fork is heavier. So our consensus algorithm also needs to support
   switching.

        /-- 3-- 4 (10%)
   1-- 2
        \-- 5     (15%)

   At the same time we don’t want excessive switching. The more often
   validators switch, the more difficult it will be to achieve that
   pile-on effect I just described.

   Note that to switch forks, you need to roll back a given slot and its
   descendants on that fork. In the example above, to switch to 1, 2, 5,
   we need to roll back 3 and 4. The consensus algorithm makes it more
   costly the further you want to roll back a fork, through a value
   called lockout, which doubles for every additional slot you want to
   roll back.

   Eventually you have traversed far enough down a fork that the lockout
   is so great it is infeasible to imagine it ever rolling back in
   practice. So you can make that fork permanent or “commit” it. Once
   all validators do this, the blockchain now has just a single fork.

   Armed with some intuition, let’s now begin defining some terminology.
   Here is a diagram of a validator's "vote tower":

   slot | confirmation count (conf)
   --------------------------------
   4    | 1
   3    | 2
   2    | 3
   1    | 4

   It is a stack structure in which each element is a vote. The vote
   slot column indicates which slots the validator has voted for,
   ordered from most to least recent.

   The confirmation count column indicates how many consecutive votes on
   the same fork have been pushed on top of that vote. You are
   confirming your own votes for a fork every time you vote on top of
   the same fork.

   Two related concepts to confirmation count are lockout and expiration
   slot. Lockout equals 2 to the power of confirmation count. Every time
   we “confirm” a vote by voting on top of it, we double the lockout.
   The expiration slot is the sum of the vote slot and the lockout, so
   it also increases when lockouts double. It is the slot at which the
   vote expires. When a vote expires, it is popped from the top of the
   tower. An important Tower rule is that a validator cannot vote for a
   different fork from a given vote slot until reaching the expiration
   slot for that vote slot. To summarize, the further a validator wants
   to roll back their fork (or vote slots), the longer the validator
   needs to wait without voting (in slot time).

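   To make the lockout arithmetic concrete, here is a minimal sketch
   (vote_lockout and vote_expiration are hypothetical helpers used for
   illustration only, not part of the fd_tower API):

     static inline ulong
     vote_lockout( ulong conf ) {
       return 1UL << conf;                 // lockout = 2^confirmation_count
     }

     static inline ulong
     vote_expiration( ulong slot, ulong conf ) {
       return slot + vote_lockout( conf ); // expiration = vote slot + lockout
     }
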
   Here is the same tower, fully expanded to include all the fields:

   slot | conf | lockout | expiration
   ----------------------------------
   4    | 1    | 2       | 6
   3    | 2    | 4       | 7
   2    | 3    | 8       | 10
   1    | 4    | 16      | 17

   Based on this tower, the validator is locked out from voting for any
   slot <= 6 that is on a different fork than slot 4. I’d like to
   emphasize that the expiration is with respect to the vote slot, and
   is _not_ related to the Proof-of-History slot or what the
   quote-unquote current slot is. So even if the current slot is now 7,
   the validator can’t go back and vote for slot 5, because slot 5 is on
   a different fork than 4. The earliest valid vote slot this validator
   could submit for a different fork from 4 would be slot 7 or later.

   Next let’s look at how the tower makes state transitions. Here we
   have the previous example tower, with a before-and-after view with
   respect to a vote for slot 9:

   (before)  slot | conf
             -----------
             4    | 1
             3    | 2
             2    | 3
             1    | 4

   (after)   slot | conf
             -----------
             9    | 1
             2    | 3
             1    | 4

   As you can see, we added a vote for slot 9 to the top of the tower.
   But we also removed the votes for slot 4 and slot 3. What happened?
   This is an example of vote expiry in action. When we voted for slot
   9, this exceeded the expirations of vote slots 4 and 3, which were 6
   and 7 respectively. This action of voting triggered the popping of
   the expired votes from the top of the tower.

   Next, we add a vote for slot 10:

   (before)  slot | conf
             -----------
             9    | 1
             2    | 3
             1    | 4

   (after)   slot | conf
             -----------
             10   | 1
             9    | 2
             2    | 3
             1    | 4

   The next vote for slot 10 doesn’t involve expirations, so we just add
   it to the top of the tower. Also, here is an important property of
   lockouts. Note that the lockout for vote slot 9 doubled (i.e. the
   confirmation count increased by 1) but the lockouts of vote slots 2
   and 1 remained unchanged.

   The reason for this is that confirmation counts only increase when
   they are consecutive in the vote tower. Because 4 and 3 were expired
   previously by the vote for 9, that consecutive property was broken.
   In this case, the vote for slot 10 is only consecutive with slot 9,
   but not with 2 and 1. Specifically, there is a gap in the
   before-tower at confirmation count 2.

   In the after-tower, all the votes are again consecutive (confirmation
   counts 1, 2, 3, 4 are all accounted for), so the next vote will
   result in all lockouts doubling as long as it doesn’t result in more
   expirations.

   One other thing I’d like to point out about this vote for slot 10.
   Even though 10 >= the expiration slot of vote slot 2, which is 10,
   voting for 10 did not expire the vote for 2. This is because
   expiration happens top-down and contiguously. Because vote slot 9 was
   not expired, we do not proceed with expiring 2.

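   The rules walked through above (pop expired votes top-down, push the
   new vote, then bump the confirmation counts that are still
   consecutive) can be summarized in a short sketch.  This is a
   simplified illustration over a plain array ordered oldest (bottom,
   index 0) to newest (top), not the actual fd_tower implementation
   (which uses the deque declared further below):

     typedef struct { ulong slot; ulong conf; } vote_t;

     static ulong                           // returns the new vote count
     tower_vote( vote_t * votes, ulong cnt, ulong vote_slot ) {

       // 1. Expire: pop votes off the top whose expiration slot the new
       //    vote slot exceeds.  Expiry is top-down and contiguous: stop
       //    at the first unexpired vote.
       while( cnt ) {
         vote_t * top = &votes[ cnt-1UL ];
         if( vote_slot <= top->slot + ( 1UL << top->conf ) ) break;
         cnt--;                             // pop expired vote off the top
       }

       // 2. Push the new vote with confirmation count 1.
       votes[ cnt ].slot = vote_slot;
       votes[ cnt ].conf = 1UL;
       cnt++;

       // 3. Confirm: a vote's confirmation count only increases while
       //    the counts above it remain consecutive.
       for( ulong i=0UL; i<cnt; i++ )
         if( cnt > i + votes[ i ].conf ) votes[ i ].conf++;

       return cnt;
     }
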
   In the Tower rules, once a vote reaches a conf count of 32, it is
   considered rooted and it is popped from the bottom of the tower. Here
   is an example where 1 got rooted and popped from the bottom:

   (before)  slot | conf
             -----------
             50   | 1
             ...  | ... (29 votes elided)
             1    | 31

   (after)   slot | conf
             -----------
             53   | 1
             ...  | ... (29 votes elided)
             2    | 31

   So the tower is really a double-ended queue rather than a stack.

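   Continuing the sketch above, rooting falls out of the same push path:
   if, after expiry, the tower already holds FD_TOWER_VOTE_MAX (31)
   votes, the oldest vote is about to reach a confirmation count of 32,
   so it becomes the new root and is popped from the bottom before the
   push (again, an illustration only):

     // Inside tower_vote(), after expiry (step 1) and before the push
     // (step 2):
     if( cnt==31UL ) {                            // tower is already full
       ulong new_root = votes[ 0 ].slot;          // oldest vote becomes the new root
       for( ulong i=1UL; i<cnt; i++ ) votes[ i-1UL ] = votes[ i ];
       cnt--;                                     // ... and is popped from the bottom
     }
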
   Rooting has implications beyond the Tower. It's what we use to prune
   our state. Every time tower makes a new root slot, we prune any old
   state that does not originate from that new root slot. Our blockstore
   will discard blocks below that root, our forks structure will discard
   stale banks, funk (which is our accounts database) will discard stale
   transactions (which in turn track modifications to accounts), and
   ghost (which is our fork select tree) will discard stale nodes
   tracking stake percentages. We call this operation publishing.

   Note that the vote slots in the rooting example above are not
   necessarily consecutive. The votes sandwiched between the newest and
   oldest votes were elided for brevity.

   Next, let’s go over three additional tower checks. These three checks
   further reinforce the consensus algorithm we built up with intuition
   above, namely getting a supermajority (i.e. 2/3) of stake to converge
   on a fork.

   The first is the threshold check. The threshold check makes sure at
   least 2/3 of stake has voted for the same fork as the vote at depth 8
   in our tower. Essentially, this guards our tower from getting too out
   of sync with the rest of the cluster. If we get too out of sync we
   can’t vote for a long time, because we would have to roll back a vote
   we had already confirmed many times, which carries a large lockout.
   This might otherwise happen as the result of a network partition
   where we can only communicate with a subset of stake.

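   A sketch of the shape of this check (fork_stake and total_stake are
   assumed inputs here; the real code derives them from the vote
   accounts and epoch stakes):

     #define THRESHOLD_CHECK_DEPTH (8UL)

     // fork_stake is the stake observed voting for the fork of the vote
     // at depth THRESHOLD_CHECK_DEPTH of our tower, simulated as if the
     // new vote were applied.
     static int
     threshold_check( ulong fork_stake, ulong total_stake ) {
       return fork_stake*3UL >= total_stake*2UL;   // require >= 2/3 of stake
     }
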
   Next is the lockout check. We went into detail on this earlier when
   going through lockout and the expiration slot; as before, the rule is
   that we can only vote for a slot on a different fork from a previous
   vote after that previous vote’s expiration slot.

   Given this fork and tower from earlier:

        /-- 3-- 4
   1-- 2
        \-- 5

   slot | conf
   -----------
   4    | 1
   3    | 2
   2    | 3
   1    | 4

   You’re locked out from voting for 5 because it’s on a different fork
   from 4 and the expiration slot of your previous vote for 4 is 6.

   However, if we introduce a new slot 9:

        /-- 3-- 4
   1-- 2
        \-- 5-- 9

   slot | conf
   -----------
   9    | 1
   2    | 3
   1    | 4

   Here the new slot 9 descends from 5 and, unlike 5, exceeds the
   expiration slots of vote slots 4 and 3 (6 and 7 respectively).

   After your lockout expires, the tower rules allow you to vote for
   descendants of the fork slot you wanted to switch to in the first
   place (here, 9 descending from 5). So we eventually switch to the
   fork we wanted by voting for 9 and expiring 3 and 4.

   Importantly, notice that the fork slots and vote slots are not
   exactly 1-to-1. While conceptually our tower is voting for the fork
   1, 2, 5, 9, the vote for 5 is only implied. Our tower votes
   themselves still can’t include 5 due to lockout.

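   A sketch of the lockout check itself, reusing vote_t from the earlier
   sketch (is_ancestor is a hypothetical stand-in for the ancestry query
   that the real code answers with ghost: "is slot a an ancestor of,
   i.e. on the same fork as, slot b?"):

     static int
     lockout_check( vote_t const * votes, ulong cnt, ulong cand_slot,
                    int (*is_ancestor)( ulong a, ulong b ) ) {
       for( ulong i=0UL; i<cnt; i++ ) {
         ulong expiration = votes[ i ].slot + ( 1UL << votes[ i ].conf );
         if( !is_ancestor( votes[ i ].slot, cand_slot ) && cand_slot<=expiration )
           return 0;                // locked out by an unexpired vote on another fork
       }
       return 1;                    // no unexpired vote conflicts with cand_slot's fork
     }
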
   Finally, the switch check. The switch check is used to safeguard
   optimistic confirmation. Optimistic confirmation is when a slot gets
   2/3 of stake-weighted votes. This is then treated as a signal that
   the slot will eventually get rooted. However, to actually guarantee
   this we need a rule that prevents validators from arbitrarily
   switching forks (even when their vote lockout has expired). This rule
   is the switch check.

   The switch check is applied in addition to the lockout check. Before
   switching forks, we need to make sure at least 38% of stake has voted
   for a different fork than our own. A different fork is defined by
   finding the greatest common ancestor (which I will subsequently call
   the GCA) of our last voted fork slot and the slot we want to switch
   to. Any forks descending from the GCA that are not our own fork are
   counted towards the switch check stake.

   Here we visualize the switch check:

             /-- 7
        /-- 3-- 4
   1-- 2 -- 6
        \-- 5-- 9

   First, we find the GCA of 4 and 9, which is 2. Then we look at all
   the descendants of the GCA that do not share a fork with us, and make
   sure their stake sums to more than 38%.

   I’d like to highlight that 7 here is not counted towards the switch
   proof, even though it is on a different fork from 4. This is because,
   relative to the GCA, it is on the same fork as we are: it also
   descends through 3.

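   And a sketch of the final comparison (switch_stake is assumed to have
   been accumulated by walking the descendants of the GCA that are not
   on our own fork, which the real code does with ghost):

     static int
     switch_check( ulong switch_stake, ulong total_stake ) {
       // Require more than 38% of total stake to have voted for forks
       // descending from the GCA that are not our own.  (Floating point
       // keeps the illustration simple.)
       return (double)switch_stake > 0.38*(double)total_stake;
     }
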
   So that covers the checks. Next, there are two additional important
   concepts: "reset slot" and "vote slot". The reset slot is the slot you
   reset PoH to when it's your turn to be leader. Because you are
   responsible for producing a block, you need to decide which fork to
   build your block on. For example, if there are two competing slots 3
   and 4, you would decide whether to build 3 <- 5 or 4 <- 5. In general
   the reset slot is on the same fork as the vote slot, but not always.
   There is an important reason for this. Recall this fork graph from
   earlier, now extended with slot 6:

        /-- 3-- 4 (10%)
   1-- 2
        \-- 5-- 6 (9%)

   In this diagram, 4 is the winner of fork choice. All future leaders
   now want to reset to slot 4. Naively, this makes sense because you
   maximize the chance of your block finalizing (and earning the rewards)
   if you greedily (in the algorithmic, and perhaps also literal sense)
   pick what's currently the heaviest.

   However, say most validators actually voted for fork 5, even though
   we currently observe fork 3 as the heavier one. For whatever reason,
   those votes for 5 just didn't land (the leader for 6 missed the
   votes, a network blip, etc.)

   All these validators that voted for 5 are now constrained by the
   switch check (38% of stake), and none of them can actually switch
   their vote to 4 (which only has 10%). But if leaders naively reset to
   the heaviest fork, they all keep building blocks on top of fork 4,
   which importantly implies that votes for 5 will never be able to
   propagate. The validators that can't switch keep refreshing their
   votes for 5, but those votes never "land" because no one is building
   blocks on top of fork 5 anymore (everyone is building on 4 because
   that's currently the heaviest).

   Therefore, it is important to reset to the same fork as your last vote
   slot, which is usually also the heaviest fork, but not always.

   Note that with both the vote slot and reset slot, the tower uses ghost
   to determine the last vote slot's ancestry. So what happens if the
   last vote slot isn't in the ghost? There are two separate cases in
   which this can happen, and tower needs to handle both:

   1. Our last vote slot > ghost root slot, but is not a descendant of
      the ghost root. This can happen if we get stuck on a minority fork
      with a long lockout. In the worst case, lockout duration is
      2^THRESHOLD_CHECK_DEPTH, i.e. 2^8 = 256 slots. In other words, we
      voted for and confirmed a minority fork 8 times in a row. We assume
      we won't vote past 8 times for the minority fork, because the
      threshold check would have stopped us (recall the threshold check
      requires 2/3 of stake to be on the same fork at depth 8 before we
      can keep voting for that fork).

      While waiting for those 256 slots of lockout to expire, it is
      possible that in the meantime a supermajority (i.e. >2/3) of the
      cluster actually roots another fork that is not ours. During
      regular execution, we would not publish ghost until we have an
      updated tower root. So as long as the validator stays running while
      it is locked out from the supermajority fork, it keeps track of its
      vote slot's ancestry.

      If the validator were to stop running while locked out though (e.g.
      the operator needed to restart the box), the validator attempts to
      repair the ancestry of its last vote slot.

      In the worst case, if we cannot repair that ancestry, then we do
      not vote until replay reaches the expiration slot of that last vote
      slot. We can assume the votes > depth 8 in the tower do not violate
      lockout, because again the threshold check would have guarded it.

      TODO CURRENTLY THIS IS UNHANDLED. WHAT THE VALIDATOR DOES IF IT
      HAS LOST THE GHOST ANCESTRY IS IT WILL ERROR OUT.

   2. Our last vote slot < ghost root slot.  In this case we simply
      cannot determine whether our last vote slot is on the same fork as
      our ghost root slot because we no longer have ancestry information
      before the ghost root slot. This can happen if the validator was
      not running for a long time and then started up again. It will have
      to use the snapshot slot for the beginning of the ghost ancestry,
      which could be well past the last vote slot in the tower.

      In this case, before the validator votes again, it makes sure that
      the last vote's confirmation count >= THRESHOLD_CHECK_DEPTH (stated
      differently, it makes sure the next time it votes it will expire at
      least the first THRESHOLD_CHECK_DEPTH votes in the tower), and then
      it assumes that the last vote slot is on the same fork as the ghost
      root slot.

      TODO VERIFY AGAVE BEHAVIOR IS THE SAME.

   Now let’s switch gears from theory back to practice. What does it mean
   to send a vote?

   As a validator, you aren’t sending individual tower votes. Rather, you
   are sending your entire updated tower to the cluster every time.
   Essentially, the validator is continuously syncing their local tower
   with the cluster. That tower state is then stored inside a vote
   account, like any other state on Solana.

   On the flip side, we must also stay in sync in the other direction,
   from cluster to local. If we have previously voted, we need to make
   sure our tower matches up with what the cluster has last seen. We know
   the most recent tower is in the last vote we sent, so we durably store
   every tower (by checkpointing it to disk) whenever we send a vote. In
   case that checkpointed tower is out-of-date, Funk (our accounts
   database) conveniently stores all the vote accounts, including our
   own, so on bootstrap we simply load in our own vote account state to
   initialize our local view of the tower.

   Finally, a note on the difference between the Vote Program and
   TowerBFT. The Vote Program runs during transaction (block) execution.
   It checks that certain invariants about the tower inside a vote
   transaction are upheld (recall that a validator sends their entire
   tower as part of a "vote"); otherwise, it fails the transaction. For
   example, it checks that every vote contains a tower in which the vote
   slots are strictly monotonically increasing. As a consequence, only
   valid towers are committed to the ledger. Another important detail of
   the Vote Program is that it only has a view of the current fork on
   which it is executing. Specifically, it can't observe what state is on
   other forks, like what a validator's tower looks like on fork A vs.
   fork B.

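   That monotonicity invariant, sketched (the real validation is
   performed by the Vote Program during execution; this is just an
   illustration):

     // Vote slots in a submitted tower, ordered oldest to newest, must
     // be strictly monotonically increasing.
     static int
     slots_strictly_increasing( ulong const * slots, ulong cnt ) {
       for( ulong i=1UL; i<cnt; i++ ) if( slots[ i ]<=slots[ i-1UL ] ) return 0;
       return 1;
     }
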
   The TowerBFT rules, on the other hand, run after transaction
   execution. Also unlike the Vote Program, the TowerBFT rules do not
   take the vote transactions as inputs; rather, the inputs are the
   towers that have already been written to the ledger by the Vote
   Program. As described above, the Vote Program validates every tower,
   and in this way the TowerBFT rules leverage the validation already
   done by the Vote Program to (mostly) assume each tower is valid.
   Every validator runs TowerBFT to update their own tower with votes
   based on the algorithm documented above. Importantly, TowerBFT has a
   view of all forks, and the validator makes a voting decision based on
   all forks.
*/

#include "../fd_choreo_base.h"
#include "fd_tower_accts.h"
#include "fd_tower_forks.h"
#include "../ghost/fd_ghost.h"
#include "../notar/fd_notar.h"
#include "fd_epoch_stakes.h"
#include "../../disco/pack/fd_microblock.h"

/* FD_TOWER_PARANOID:  Define this to non-zero at compile time
   to turn on additional runtime integrity checks. */

#ifndef FD_TOWER_PARANOID
#define FD_TOWER_PARANOID 1
#endif

#define FD_TOWER_VOTE_MAX (31UL)

/* fd_tower is a representation of a validator's "vote tower" (described
   in detail in the preamble at the top of this file).  The votes in the
   tower are stored in a deque (generated by fd_deque.c), ordered from
   lowest to highest vote slot (equivalently, highest to lowest
   confirmation count) from head to tail.  There can be at most
   FD_TOWER_VOTE_MAX votes in the tower. */

struct fd_tower_vote {
  ulong slot; /* vote slot */
  ulong conf; /* confirmation count */
};
typedef struct fd_tower_vote fd_tower_vote_t;

#define DEQUE_NAME fd_tower
#define DEQUE_T    fd_tower_vote_t
#define DEQUE_MAX  FD_TOWER_VOTE_MAX
#include "../../util/tmpl/fd_deque.c"

typedef fd_tower_vote_t fd_tower_t; /* typedef for semantic clarity */

/* FD_TOWER_{ALIGN,FOOTPRINT} provided for static declarations. */

#define FD_TOWER_ALIGN     (alignof(fd_tower_private_t))
#define FD_TOWER_FOOTPRINT (sizeof (fd_tower_private_t))
FD_STATIC_ASSERT( alignof(fd_tower_private_t)==8UL,   FD_TOWER_ALIGN     );
FD_STATIC_ASSERT( sizeof (fd_tower_private_t)==512UL, FD_TOWER_FOOTPRINT );

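/* Example usage (a sketch; the accessor names below are the ones
   conventionally generated by fd_deque.c, e.g. fd_tower_new,
   fd_tower_join, fd_tower_push_tail and fd_tower_cnt; see
   util/tmpl/fd_deque.c for the authoritative API):

     uchar __attribute__((aligned(FD_TOWER_ALIGN))) mem[ FD_TOWER_FOOTPRINT ];
     fd_tower_t * tower = fd_tower_join( fd_tower_new( mem ) );

     fd_tower_vote_t vote = { .slot = 4UL, .conf = 1UL };
     fd_tower_push_tail( tower, vote );    // newest vote at the tail

     ulong cnt = fd_tower_cnt( tower );    // number of votes in the tower
*/
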
     467           0 : #define FD_TOWER_FLAG_ANCESTOR_ROLLBACK 0 /* rollback to an ancestor of our prev vote */
     468           0 : #define FD_TOWER_FLAG_SIBLING_CONFIRMED 1 /* our prev vote was a duplicate and its sibling got confirmed */
     469           0 : #define FD_TOWER_FLAG_SAME_FORK         2 /* prev vote is on the same fork */
     470           0 : #define FD_TOWER_FLAG_SWITCH_PASS       3 /* successfully switched to a different fork */
     471           0 : #define FD_TOWER_FLAG_SWITCH_FAIL       4 /* failed to switch to a different fork */
     472           0 : #define FD_TOWER_FLAG_LOCKOUT_FAIL      5 /* failed lockout check */
     473           0 : #define FD_TOWER_FLAG_THRESHOLD_FAIL    6 /* failed threshold check */
     474           0 : #define FD_TOWER_FLAG_PROPAGATED_FAIL   7 /* failed propagated check */
     475             : 
struct fd_tower_out {
  uchar     flags;          /* one of FD_TOWER_FLAG_{ANCESTOR_ROLLBACK,...} */
  ulong     reset_slot;     /* slot to reset PoH to */
  fd_hash_t reset_block_id; /* block ID to reset PoH to */
  ulong     vote_slot;      /* slot to vote for (ULONG_MAX if no vote) */
  fd_hash_t vote_block_id;  /* block ID to vote for */
  ulong     root_slot;      /* new tower root slot (ULONG_MAX if no new root) */
  fd_hash_t root_block_id;  /* new tower root block ID */
};
typedef struct fd_tower_out fd_tower_out_t;

/* fd_tower_vote_and_reset selects both a block to vote for and a block
   to reset to.  Returns a struct with a reason code (one of
   FD_TOWER_FLAG_{...}) in addition to {reset,vote,root}_{slot,block_id}.

   We can't always vote, so vote_slot may be ULONG_MAX, which indicates
   no vote should be cast and the caller should ignore vote_block_id.
   New roots result from votes, so the same applies for root_slot (there
   is not always a new root).  However there is always a reset block, so
   reset_slot and reset_block_id will always be populated on return.
   The implementation contains detailed documentation of the tower
   rules. */

fd_tower_out_t
fd_tower_vote_and_reset( fd_tower_t        * tower,
                         fd_tower_accts_t  * accts,
                         fd_epoch_stakes_t * epoch_stakes,
                         fd_forks_t        * forks,
                         fd_ghost_t        * ghost,
                         fd_notar_t        * notar );

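/* Example call sequence (a sketch; tower, accts, epoch_stakes, forks,
   ghost and notar are assumed to be valid joins maintained elsewhere,
   and reset_poh / send_vote / publish_root are hypothetical caller-side
   helpers):

     fd_tower_out_t out = fd_tower_vote_and_reset( tower, accts, epoch_stakes,
                                                   forks, ghost, notar );

     reset_poh( out.reset_slot, &out.reset_block_id );    // always populated

     if( out.vote_slot!=ULONG_MAX )                       // we can't always vote ...
       send_vote( out.vote_slot, &out.vote_block_id );
     if( out.root_slot!=ULONG_MAX )                       // ... and not every vote roots
       publish_root( out.root_slot, &out.root_block_id );
*/
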
/* Misc */

/* fd_tower_reconcile reconciles our local tower with the on-chain tower
   inside our vote account.  Mirrors what Agave does. */

void
fd_tower_reconcile( fd_tower_t  * tower,
                    ulong         tower_root,
                    uchar const * vote_acc );

/* fd_tower_from_vote_acc deserializes the vote account into tower.
   Assumes tower is a valid local join and currently empty. */

void
fd_tower_from_vote_acc( fd_tower_t  * tower,
                        uchar const * vote_acc );

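/* Example bootstrap flow (a sketch of the cluster-to-local sync
   described in the preamble; vote_acc is assumed to point at our own
   vote account data loaded from funk, and tower is an empty local
   join):

     fd_tower_from_vote_acc( tower, vote_acc );   // init local tower from chain state

   When a locally checkpointed tower (with root tower_root) is being
   reused instead, it is kept consistent with the on-chain tower via:

     fd_tower_reconcile( tower, tower_root, vote_acc );
*/
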
/* fd_tower_with_lat_from_vote_acc deserializes the vote account into
   tower, including slot latency (when available) for tower votes.
   Assumes tower points to a static array of length FD_TOWER_VOTE_MAX.

   Returns the number of copied elements. */

ulong
fd_tower_with_lat_from_vote_acc( fd_voter_vote_t tower[ static FD_TOWER_VOTE_MAX ],
                                 uchar const *   vote_acc );

/* fd_tower_to_vote_txn writes tower into a fd_tower_sync_t vote
   instruction and serializes it into a Solana transaction.  Assumes
   tower is a valid local join. */

void
fd_tower_to_vote_txn( fd_tower_t    const * tower,
                      ulong                 root,
                      fd_lockout_offset_t * lockouts_scratch,
                      fd_hash_t     const * bank_hash,
                      fd_hash_t     const * recent_blockhash,
                      fd_pubkey_t   const * validator_identity,
                      fd_pubkey_t   const * vote_authority,
                      fd_pubkey_t   const * vote_account,
                      fd_txn_p_t *          vote_txn );

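/* Example usage (a sketch; the surrounding values are assumed to be
   produced elsewhere, and the scratch array is presumably sized to one
   lockout offset per tower vote):

     fd_lockout_offset_t lockouts[ FD_TOWER_VOTE_MAX ];
     fd_txn_p_t          vote_txn;
     fd_tower_to_vote_txn( tower, root, lockouts, &bank_hash,
                           &recent_blockhash, &validator_identity,
                           &vote_authority, &vote_account, &vote_txn );
*/
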
/* fd_tower_verify checks tower is in a valid state.  Valid iff:
   - cnt < FD_TOWER_VOTE_MAX
   - traversing from the oldest vote to the newest, vote slots are
     monotonically increasing and confirmation counts are monotonically
     decreasing */

int
fd_tower_verify( fd_tower_t const * tower );

/* fd_tower_print pretty-prints tower as a formatted table.

   Sample output:

        slot | confirmation count
   --------- | ------------------
   279803931 | 1
   279803930 | 2
   ...
   279803901 | 31
   279803900 | root
*/

void
fd_tower_print( fd_tower_t const *         tower,
                ulong                      root,
                fd_io_buffered_ostream_t * ostream_opt );

#endif /* HEADER_fd_src_choreo_tower_fd_tower_h */