LCOV - code coverage report
Current view: top level - choreo/tower - fd_tower.h (source / functions) Hit Total Coverage
Test: cov.lcov Lines: 2 11 18.2 %
Date: 2025-12-06 04:45:29 Functions: 0 0 -

          Line data    Source code
       1             : #ifndef HEADER_fd_src_choreo_tower_fd_tower_h
       2             : #define HEADER_fd_src_choreo_tower_fd_tower_h
       3             : 
       4             : /* fd_tower presents an API for Solana's TowerBFT algorithm.
       5             : 
       6             :    What is TowerBFT? TowerBFT is an algorithm for converging a
       7             :    supermajority of stake in the validator cluster on the same fork.
       8             : 
       9             :         /-- 3-- 4 (A)
      10             :    1-- 2
      11             :         \-- 5     (B)
      12             : 
      13             :    Above is a diagram of a fork. The leader for slot 5 decided to build
      14             :    off slot 2, rather than slot 4. This can happen for various reasons,
      15             :    for example network propagation delay. We now have two possible forks
      16             :    labeled A and B. The consensus algorithm has to pick one of them.
      17             : 
      18             :    So, how does the consensus algorithm pick? As detailed in fd_ghost.h,
      19             :    we pick the fork based on the most stake from votes, called the
      20             :    "heaviest". Validators vote for blocks during replay, and
       21             :    simultaneously use other validators’ votes to determine which block
      22             :    to vote for. This encourages convergence, because as one fork gathers
       23             :    more votes, more and more votes pile on, solidifying its position as
      24             :    the heaviest fork.
      25             : 
      26             :          /-- 3-- 4 (10%)
      27             :    1-- 2
      28             :          \-- 5     (9%)
      29             : 
      30             :    However, network propagation delay of votes can lead us to think one
      31             :    fork is heaviest, before observing new votes that indicate another
      32             :    fork is heavier. So our consensus algorithm also needs to support
      33             :    switching.
      34             : 
      35             :          /-- 3-- 4 (10%)
      36             :    1-- 2
      37             :          \-- 5     (15%)
      38             : 
      39             :    At the same time we don’t want excessive switching. The more often
      40             :    validators switch, the more difficult it will be to achieve that
      41             :    pile-on effect I just described.
      42             : 
       43             :    Note that to switch forks, you need to roll back a given slot and its
       44             :    descendants on that fork. In the example above, to switch to 1, 2, 5,
       45             :    we need to roll back 3 and 4. The consensus algorithm makes it more
       46             :    costly the further you want to roll back a fork. It does this with a
       47             :    value called lockout, which doubles for every additional slot you
       48             :    want to roll back.
      49             : 
       50             :    Eventually you have traversed far enough down a fork that the
      51             :    lockout is so great it is infeasible to imagine it ever rolling back
      52             :    in practice. So you can make that fork permanent or “commit” it. Once
      53             :    all validators do this, the blockchain now just has a single fork.
      54             : 
      55             :    Armed with some intuition, let’s now begin defining some terminology.
      56             :    Here is a diagram of a validator's "vote tower":
      57             : 
      58             :    slot | confirmation count (conf)
      59             :    --------------------------------
      60             :    4    | 1
      61             :    3    | 2
      62             :    2    | 3
      63             :    1    | 4
      64             : 
      65             :    It is a stack structure in which each element is a vote. The vote
      66             :    slot column indicates which slots the validator has voted for,
      67             :    ordered from most to least recent.
      68             : 
      69             :    The confirmation count column indicates how many consecutive votes on
      70             :    the same fork have been pushed on top of that vote. You are
      71             :    confirming your own votes for a fork every time you vote on top of
      72             :    the same fork.
      73             : 
      74             :    Two related concepts to confirmation count are lockout and expiration
      75             :    slot. Lockout equals 2 to the power of confirmation count. Every time
      76             :    we “confirm” a vote by voting on top of it, we double the lockout.
      77             :    The expiration slot is the sum of vote slot and lockout, so it also
       78             :    increases when lockouts double. It is the slot at which the vote
       79             :    expires. When a vote expires, it is popped from the top of the
      80             :    tower. An important Tower rule is that a validator cannot vote for a
       81             :    different fork from a given vote slot until reaching the expiration
      82             :    slot for that vote slot. To summarize, the further a validator wants
      83             :    to rollback their fork (or vote slots) the longer the validator needs
      84             :    to wait without voting (in slot time).
      85             : 
      86             :    Here is the same tower, fully-expanded to include all the fields:
      87             : 
      88             :    slot | conf | lockout | expiration
      89             :    ----------------------------------
      90             :    4    | 1    | 2       | 6
      91             :    3    | 2    | 4       | 7
      92             :    2    | 3    | 8       | 10
      93             :    1    | 4    | 16      | 17
      94             : 
      95             :    Based on this tower, the validator is locked out from voting for any
      96             :    slot <= 6 that is on a different fork than slot 4. I’d like to
      97             :    emphasize that the expiration is with respect to the vote slot, and
      98             :    is _not_ related to the Proof-of-History slot or what the
      99             :    quote-unquote current slot is. So even if the current slot is now 7,
      100             :    the validator can’t go back and vote for slot 5 if slot 5 is on a
     101             :    different fork than 4. The earliest valid vote slot this validator
     102             :    could submit for a different fork from 4 would be slot 7 or later.
     103             : 
     104             :    Next let’s look at how the tower makes state transitions. Here we
     105             :    have the previous example tower, with a before-and-after view with
     106             :    respect to a vote for slot 9:
     107             : 
     108             :    (before)  slot | conf
     109             :             -----------
     110             :              4    | 1
     111             :              3    | 2
     112             :              2    | 3
     113             :              1    | 4
     114             : 
     115             :    (after)  slot | conf
     116             :             -----------
     117             :             9    | 1
     118             :             2    | 3
     119             :             1    | 4
     120             : 
     121             :    As you can see, we added a vote for slot 9 to the top of the tower.
     122             :    But we also removed the votes for slot 4 and slot 3. What happened?
     123             :    This is an example of vote expiry in action. When we voted for slot
     124             :    9, this exceeded the expirations of vote slots 4 and 3, which were 6
     125             :    and 7 respectively. This action of voting triggered the popping of
     126             :    the expired votes from the top of the tower.
     127             : 
     128             :    Next, we add a vote for slot 10:
     129             : 
     130             :    (before)  slot | conf
     131             :             -----------
     132             :              9    | 1
     133             :              2    | 3
     134             :              1    | 4
     135             : 
     136             :    (after)  slot | conf
     137             :             -----------
     138             :              10   | 1
     139             :              9    | 2
     140             :              2    | 3
     141             :              1    | 4
     142             : 
     143             :    The next vote for slot 10 doesn’t involve expirations, so we just add
     144             :    it to the top of the tower. Also, here is an important property of
     145             :    lockouts. Note that the lockout for vote slot 9 doubled (ie. the
     146             :    confirmation count increased by 1) but the lockouts of vote slots 2
     147             :    and 1 remained unchanged.
     148             : 
      149             :    The reason for this is that confirmation counts only increase when
      150             :    the votes are consecutive in the tower. Because 4 and 3 were expired
     151             :    previously by the vote for 9, that consecutive property was broken.
     152             :    In this case, the vote for slot 10 is only consecutive with slot 9,
     153             :    but not 2 and 1. Specifically, there is a gap in the before-tower at
     154             :    confirmation count 2.
     155             : 
     156             :    In the after-tower, all the votes are again consecutive (confirmation
     157             :    counts 1, 2, 3, 4 are all accounted for), so the next vote will
     158             :    result in all lockouts doubling as long as it doesn’t result in more
     159             :    expirations.
     160             : 
     161             :    One other thing I’d like to point out about this vote for slot 10.
     162             :    Even though 10 >= the expiration slot of vote slot 2, which is
      163             :    10, voting for 10 did not expire the vote for 2. This is because
     164             :    expiration happens top-down and contiguously. Because vote slot 9 was
     165             :    not expired, we do not proceed with expiring 2.
     166             : 
     167             :    In the Tower rules, once a vote reaches a conf count of 32, it is
     168             :    considered rooted and it is popped from the bottom of the tower. Here
     169             :    is an example where 1 got rooted and popped from the bottom:
     170             : 
     171             :    (before)  slot | conf
     172             :             -----------
     173             :              50   | 1
     174             :              ...  | ... (29 votes elided)
     175             :              1    | 31
     176             : 
     177             :    (after)  slot | conf
     178             :             -----------
     179             :              53   | 1
     180             :              ...  | ... (29 votes elided)
     181             :              2    | 31
     182             : 
     183             :    So the tower is really a double-ended queue rather than a stack.
     184             : 
     185             :    Rooting has implications beyond the Tower. It's what we use to prune
      186             :    our state. Every time the tower makes a new root slot, we prune any old
     187             :    state that does not originate from that new root slot. Our blockstore
     188             :    will discard blocks below that root, our forks structure will discard
     189             :    stale banks, funk (which is our accounts database) will discard stale
     190             :    transactions (which in turn track modifications to accounts), and
     191             :    ghost (which is our fork select tree) will discard stale nodes
     192             :    tracking stake percentages. We call this operation publishing.
     193             : 
     194             :    Note that the vote slots are not necessarily consecutive. Here I
     195             :    elided the votes sandwiched between the newest and oldest votes for
     196             :    brevity.
     197             : 
     198             :    Next, let’s go over three additional tower checks. These three checks
     199             :    further reinforce the consensus algorithm we established with
     200             :    intuition, in this case getting a supermajority (ie. 2/3) of stake to
     201             :    converge on a fork.
     202             : 
     203             :    The first is the threshold check. The threshold check makes sure at
     204             :    least 2/3 of stake has voted for the same fork as the vote at depth 8
     205             :    in our tower. Essentially, this guards our tower from getting too out
     206             :    of sync with the rest of the cluster. If we get too out of sync we
      207             :    can’t vote for a long time, because we would have to roll back a vote
      208             :    we had already confirmed many times with a large lockout. This might
     209             :    otherwise happen as the result of a network partition where we can
     210             :    only communicate with a subset of stake.
     211             : 
     212             :    Next is the lockout check. We went in detail on this earlier when
     213             :    going through the lockout and expiration slot, and as before, the
     214             :    rule is we can only vote on a slot for a different fork from a
     215             :    previous vote, after that vote’s expiration slot.
     216             : 
     217             :    Given this fork and tower from earlier:
     218             : 
     219             :         /-- 3-- 4
     220             :    1-- 2
     221             :         \-- 5
     222             : 
     223             :    slot | conf
     224             :    -----------
     225             :    4    | 1
     226             :    3    | 2
     227             :    2    | 3
     228             :    1    | 4
     229             : 
     230             :   You’re locked out from voting for 5 because it’s on a different fork
     231             :   from 4 and the expiration slot of your previous vote for 4 is 6.
     232             : 
     233             :   However, if we introduce a new slot 9:
     234             : 
     235             :         /-- 3-- 4
     236             :   1-- 2
     237             :         \-- 5-- 9
     238             : 
     239             :   slot | conf
     240             :   -----------
     241             :   9    | 1
     242             :   2    | 3
     243             :   1    | 4
     244             : 
      245             :   Here the new slot 9 descends from 5 and, unlike 5, exceeds vote
      246             :   slot 4’s expiration slot of 6.
     247             : 
     248             :   After your lockout expires, the tower rules allow you to vote for
     249             :   descendants of the fork slot you wanted to switch to in the first
     250             :   place (here, 9 descending from 5). So we eventually switch to the fork
     251             :   we wanted, by voting for 9 and expiring 3 and 4.
     252             : 
     253             :   Importantly, notice that the fork slots and vote slots are not exactly
     254             :   1-to-1. While conceptually our tower is voting for the fork 1, 2, 5,
     255             :   9, the vote for 5 is only implied. Our tower votes themselves still
     256             :   can’t include 5 due to lockout.
     257             : 
     258             :   Finally, the switch check. The switch check is used to safeguard
     259             :   optimistic confirmation. Optimistic confirmation is when a slot gets
     260             :   2/3 of stake-weighted votes. This is then treated as a signal that the
     261             :   slot will eventually get rooted. However, to actually guarantee this
     262             :   we need a rule that prevents validators from arbitrarily switching
     263             :   forks (even when their vote lockout has expired). This rule is the
     264             :   switch check.
     265             : 
     266             :   The switch check is additional to the lockout check. Before switching
     267             :   forks, we need to make sure at least 38% of stake has voted for a
     268             :   different fork than our own. Different fork is defined by finding the
     269             :   greatest common ancestor of our last voted fork slot and the slot we
     270             :   want to switch to. Any forks descending from the greatest common
     271             :   ancestor (which I will subsequently call the GCA) that are not our
     272             :   own fork are counted towards the switch check stake.
     273             : 
     274             :   Here we visualize the switch check:
     275             : 
     276             :              /-- 7
     277             :         /-- 3-- 4
     278             :   1-- 2  -- 6
     279             :         \-- 5-- 9
     280             : 
     281             :   First, we find the GCA of 4 and 9 which is 2. Then we look at all the
     282             :   descendants of the GCA that do not share a fork with us, and make sure
     283             :   their stake sums to more than 38%.
     284             : 
     285             :   I’d like to highlight that 7 here is not counted towards the switch
     286             :   proof, even though it is on a different fork from 4. This is because
     287             :   it’s on the same fork relative to the GCA.
     288             : 
     289             :   So that covers the checks. Next, there are two additional important
     290             :   concepts: "reset slot" and "vote slot". The reset slot is the slot you
     291             :   reset PoH to when it's your turn to be leader. Because you are
     292             :   responsible for producing a block, you need to decide which fork to
     293             :   build your block on. For example, if there are two competing slots 3
     294             :   and 4, you would decide whether to build 3 <- 5 or 4 <- 5. In general
     295             :   the reset slot is the same fork as the vote slot, but not always.
     296             :   There is an important reason for this. Recall this fork graph from
     297             :   earlier:
     298             : 
     299             :         /-- 3-- 4 (10%)
     300             :    1-- 2
     301             :         \-- 5-- 6 (9%)
     302             : 
     303             :   In this diagram, 4 is the winner of fork choice. All future leaders
     304             :   now want to reset to slot 4. Naively, this makes sense because you
     305             :   maximize the chance of your block finalizing (and earning the rewards)
     306             :   if you greedily (in the algorithmic, and perhaps also literal sense)
     307             :   pick what's currently the heaviest.
     308             : 
      309             :   However, say most validators actually voted for fork 5, even though we
      310             :   currently observe 4 as the heavier fork. For whatever reason, those votes
     311             :   for 5 just didn't land (the leader for 6 missed the votes, network
     312             :   blip, etc.)
     313             : 
     314             :   All these validators that voted for 5 are now constrained by the
     315             :   switch check (38% of stake), and none of them can actually switch
      316             :   their vote to 4 (which only has 10%). But under the naive rule, leaders
      317             :   keep building blocks on top of fork 4, which importantly implies that votes
     318             :   for 5 will not be able to propagate. This is because the validators
     319             :   that can't switch continue to refresh their votes for 5, but those
     320             :   votes never "land" because no one is building blocks on top of fork
     321             :   5 anymore (everyone is building on 4 because that's currently the
     322             :   heaviest).
     323             : 
     324             :   Therefore, it is important to reset to the same fork as your last vote
     325             :   slot, which is usually also the heaviest fork, but not always.
     326             : 
     327             :   Note that with both the vote slot and reset slot, the tower uses ghost
     328             :   to determine the last vote slot's ancestry. So what happens if the
     329             :   last vote slot isn't in the ghost? There are two separate cases in
     330             :   which this can happen that tower needs to handle:
     331             : 
     332             :   1. Our last vote slot > ghost root slot, but is not a descendant of
     333             :      the ghost root. This can happen if we get stuck on a minority fork
     334             :      with a long lockout. In the worst case, lockout duration is
     335             :      2^{threshold_check_depth} ie. 2^8 = 256 slots. In other words, we
     336             :      voted for and confirmed a minority fork 8 times in a row. We assume
     337             :      we won't vote past 8 times for the minority fork, because the
     338             :      threshold check would have stopped us (recall the threshold check
     339             :      requires 2/3 of stake to be on the same fork at depth 8 before we
     340             :      can keep voting for that fork).
     341             : 
     342             :      While waiting for those 256 slots of lockout to expire, it is
     343             :      possible that in the meantime a supermajority (ie. >2/3) of the
     344             :      cluster actually roots another fork that is not ours. During
     345             :      regular execution, we would not publish ghost until we have an
     346             :      updated tower root. So as long as the validator stays running while
     347             :      it is locked out from the supermajority fork, it keeps track of its
     348             :      vote slot's ancestry.
     349             : 
     350             :      If the validator were to stop running while locked out though (eg.
     351             :      operator needed to restart the box), the validator attempts to
     352             :      repair the ancestry of its last vote slot.
     353             : 
     354             :      In the worst case, if we cannot repair that ancestry, then we do
     355             :      not vote until replay reaches the expiration slot of that last vote
     356             :      slot. We can assume the votes > depth 8 in the tower do not violate
     357             :      lockout, because again the threshold check would have guarded it.
     358             : 
     359             :      TODO CURRENTLY THIS IS UNHANDLED. WHAT THE VALIDATOR DOES IF IT
     360             :      HAS LOST THE GHOST ANCESTRY IS IT WILL ERROR OUT.
     361             : 
     362             :   2. Our last vote slot < ghost root slot.  In this case we simply
     363             :      cannot determine whether our last vote slot is on the same fork as
     364             :      our ghost root slot because we no longer have ancestry information
     365             :      before the ghost root slot. This can happen if the validator is not
     366             :      running for a long time, then started up again. It will have to use
     367             :      the snapshot slot for the beginning of the ghost ancestry, which
     368             :      could be well past the last vote slot in the tower.
     369             : 
     370             :      In this case, before the validator votes again, it makes sure that
     371             :      the last vote's confirmation count >= THRESHOLD_CHECK_DEPTH (stated
     372             :      differently, it makes sure the next time it votes it will expire at
     373             :      least the first THRESHOLD_CHECK_DEPTH votes in the tower), and then
     374             :      it assumes that the last vote slot is on the same fork as the ghost
     375             :      root slot.
     376             : 
     377             :      TODO VERIFY AGAVE BEHAVIOR IS THE SAME.
     378             : 
     379             :   Now let’s switch gears from theory back to practice. What does it mean
     380             :   to send a vote?
     381             : 
     382             :   As a validator, you aren’t sending individual tower votes. Rather, you
     383             :   are sending your entire updated tower to the cluster every time.
     384             :   Essentially, the validator is continuously syncing their local tower
     385             :   with the cluster. That tower state is then stored inside a vote
     386             :   account, like any other state on Solana.
     387             : 
     388             :   On the flip side, we also must stay in sync the other way from cluster
     389             :   to local. If we have previously voted, we need to make sure our tower
     390             :   matches up with what the cluster has last seen. We know the most
     391             :   recent tower is in the last vote we sent, so we durably store every
     392             :   tower (by checkpointing it to disk) whenever we send a vote. In case
      393             :   this checkpointed tower is out-of-date, Funk, our accounts database,
      394             :   conveniently stores all the vote accounts including our own, so on
      395             :   bootstrap we simply load in our own vote account state to initialize
      396             :   our local view of the tower.
     397             : 
     398             :   Finally, a note on the difference between the Vote Program and
     399             :   TowerBFT. The Vote Program runs during transaction (block) execution.
     400             :   It checks that certain invariants about the tower inside a vote
     401             :   transaction are upheld (recall a validator sends their entire tower as
     402             :   part of a "vote"): otherwise, it fails the transaction. For example,
     403             :   it checks that every vote contains a tower in which the vote slots are
     404             :   strictly monotonically increasing. As a consequence, only valid towers
     405             :   are committed to the ledger. Another important detail of the Vote
     406             :   Program is that it only has a view of the current fork on which it is
     407             :   executing. Specifically, it can't observe what state is on other
     408             :   forks, like what a validator's tower looks like on fork A vs. fork B.
     409             : 
     410             :   The TowerBFT rules, on the other hand, run after transaction
     411             :   execution. Also unlike the Vote Program, the TowerBFT rules do not
     412             :   take the vote transactions as inputs: rather the inputs are the towers
     413             :   that have already been written to the ledger by the Vote Program. As
     414             :   described above, the Vote Program validates every tower, and in this
     415             :   way, the TowerBFT rules leverage the validation already done by the
     416             :   Vote Program to (mostly) assume each tower is valid. Every validator
     417             :   runs TowerBFT to update their own tower with votes based on the
     418             :   algorithm documented above. Importantly, TowerBFT has a view of all
     419             :   forks, and the validator makes a voting decision based on all forks.
     420             : */
     421             : 
     422             : #include "../fd_choreo_base.h"
     423             : #include "fd_tower_accts.h"
     424             : #include "fd_tower_forks.h"
     425             : #include "../ghost/fd_ghost.h"
     426             : #include "../notar/fd_notar.h"
     427             : #include "fd_epoch_stakes.h"
     428             : #include "../../disco/pack/fd_microblock.h"
     429             : 
     430             : /* FD_TOWER_PARANOID:  Define this to non-zero at compile time
     431             :    to turn on additional runtime integrity checks. */
     432             : 
     433             : #ifndef FD_TOWER_PARANOID
     434             : #define FD_TOWER_PARANOID 1
     435             : #endif
     436             : 
     437         432 : #define FD_TOWER_VOTE_MAX (31UL)
     438             : 
     439             : /* fd_tower is a representation of a validator's "vote tower" (described
     440             :    in detail in the preamble at the top of this file).  The votes in the
     441             :    tower are stored in an fd_deque.c ordered from lowest to highest vote
     442             :    slot (highest to lowest confirmation count) relative to the head and
     443             :    tail.  There can be at most FD_TOWER_VOTE_MAX votes in the tower. */
     444             : 
     445             : struct fd_tower_vote {
     446             :   ulong slot; /* vote slot */
     447             :   ulong conf; /* confirmation count */
     448             : };
     449             : typedef struct fd_tower_vote fd_tower_vote_t;
     450             : 
     451             : #define DEQUE_NAME fd_tower
     452           0 : #define DEQUE_T    fd_tower_vote_t
     453         432 : #define DEQUE_MAX  FD_TOWER_VOTE_MAX
     454             : #include "../../util/tmpl/fd_deque.c"
     455             : 
     456             : typedef fd_tower_vote_t fd_tower_t; /* typedef for semantic clarity */
     457             : 
     458             : /* FD_TOWER_{ALIGN,FOOTPRINT} provided for static declarations. */
     459             : 
     460             : #define FD_TOWER_ALIGN     (alignof(fd_tower_private_t))
     461             : #define FD_TOWER_FOOTPRINT (sizeof (fd_tower_private_t))
     462             : FD_STATIC_ASSERT( alignof(fd_tower_private_t)==8UL,   FD_TOWER_ALIGN     );
     463             : FD_STATIC_ASSERT( sizeof (fd_tower_private_t)==512UL, FD_TOWER_FOOTPRINT );
     464             : 
     465           0 : #define FD_TOWER_FLAG_ANCESTOR_ROLLBACK 0 /* rollback to an ancestor of our prev vote */
     466           0 : #define FD_TOWER_FLAG_SIBLING_CONFIRMED 1 /* our prev vote was a duplicate and its sibling got confirmed */
     467           0 : #define FD_TOWER_FLAG_SAME_FORK         2 /* prev vote is on the same fork */
     468           0 : #define FD_TOWER_FLAG_SWITCH_PASS       3 /* successfully switched to a different fork */
     469           0 : #define FD_TOWER_FLAG_SWITCH_FAIL       4 /* failed to switch to a different fork */
     470           0 : #define FD_TOWER_FLAG_LOCKOUT_FAIL      5 /* failed lockout check */
     471           0 : #define FD_TOWER_FLAG_THRESHOLD_FAIL    6 /* failed threshold check */
     472           0 : #define FD_TOWER_FLAG_PROPAGATED_FAIL   7 /* failed propagated check */
     473             : 
     474             : struct fd_tower_out {
      475             :   uchar     flags;          /* one of FD_TOWER_FLAG_{ANCESTOR_ROLLBACK,...} */
     476             :   ulong     reset_slot;     /* slot to reset PoH to */
     477             :   fd_hash_t reset_block_id; /* block ID to reset PoH to */
     478             :   ulong     vote_slot;      /* slot to vote for (ULONG_MAX if no vote) */
     479             :   fd_hash_t vote_block_id;  /* block ID to vote for */
     480             :   ulong     root_slot;      /* new tower root slot (ULONG_MAX if no new root) */
     481             :   fd_hash_t root_block_id;  /* new tower root block ID */
     482             : };
     483             : typedef struct fd_tower_out fd_tower_out_t;
     484             : 
      485             : /* fd_tower_vote_and_reset selects both a block to vote for and a block
      486             :    to reset PoH to.  Returns a struct with a reason code
      487             :    (FD_TOWER_FLAG_{...}) in addition to {reset,vote,root}_{slot,block_id}.
     488             : 
      489             :    We can't always vote, so vote_slot may be ULONG_MAX, which indicates
      490             :    no vote should be cast and the caller should ignore vote_block_id.
      491             :    New roots result from votes, so the same applies to root_slot (there
      492             :    is not always a new root).  However, there is always a reset block,
      493             :    so reset_slot and reset_block_id are always populated on return.
      494             :    The implementation documents the tower rules in detail. */
     495             : 
     496             : fd_tower_out_t
     497             : fd_tower_vote_and_reset( fd_tower_t        * tower,
     498             :                          fd_tower_accts_t  * accts,
     499             :                          fd_epoch_stakes_t * epoch_stakes,
     500             :                          fd_forks_t        * forks,
     501             :                          fd_ghost_t        * ghost,
     502             :                          fd_notar_t        * notar );
     503             : 
     504             : /* Misc */
     505             : 
     506             : /* fd_tower_reconcile reconciles our local tower with the on-chain tower
     507             :    inside our vote account.  Mirrors what Agave does. */
     508             : 
     509             : void
     510             : fd_tower_reconcile( fd_tower_t  * tower,
     511             :                     ulong         tower_root,
     512             :                     uchar const * vote_acc );
     513             : 
     514             : /* fd_tower_from_vote_acc deserializes the vote account into tower.
     515             :    Assumes tower is a valid local join and currently empty. */
     516             : 
     517             : void
     518             : fd_tower_from_vote_acc( fd_tower_t  * tower,
     519             :                         uchar const * vote_acc );
     520             : 
     521             : /* fd_tower_to_vote_txn writes tower into a fd_tower_sync_t vote
     522             :    instruction and serializes it into a Solana transaction.  Assumes
     523             :    tower is a valid local join. */
     524             : 
     525             : void
     526             : fd_tower_to_vote_txn( fd_tower_t    const * tower,
     527             :                       ulong                 root,
     528             :                       fd_lockout_offset_t * lockouts_scratch,
     529             :                       fd_hash_t     const * bank_hash,
     530             :                       fd_hash_t     const * recent_blockhash,
     531             :                       fd_pubkey_t   const * validator_identity,
     532             :                       fd_pubkey_t   const * vote_authority,
     533             :                       fd_pubkey_t   const * vote_account,
     534             :                       fd_txn_p_t *          vote_txn );
     535             : 
      536             : /* fd_tower_verify checks that tower is in a valid state.  Valid iff:
      537             :    - cnt < FD_TOWER_VOTE_MAX
      538             :    - vote slots strictly increase and confirmation counts strictly
      539             :      decrease from the oldest vote to the newest */
     540             : 
     541             : int
     542             : fd_tower_verify( fd_tower_t const * tower );
     543             : 
     544             : /* fd_tower_print pretty-prints tower as a formatted table.
     545             : 
     546             :    Sample output:
     547             : 
     548             :         slot | confirmation count
     549             :    --------- | ------------------
     550             :    279803931 | 1
     551             :    279803930 | 2
     552             :    ...
     553             :    279803901 | 31
     554             :    279803900 | root
     555             : */
     556             : 
     557             : void
     558             : fd_tower_print( fd_tower_t const * tower,
     559             :                 ulong              root );
     560             : 
     561             : #endif /* HEADER_fd_src_choreo_tower_fd_tower_h */

Generated by: LCOV version 1.14