#define _GNU_SOURCE

/* Let's say there was a computer, the "leader" computer, that acted as
   a bank.  Users could send it messages saying they wanted to deposit
   money, or transfer it to someone else.

   That's how, for example, Bank of America works, but there are
   problems with it.  One simple problem is: the bank can set your
   balance to zero if they don't like you.

   You could try to fix this by having the bank periodically publish the
   list of all account balances and transactions.  If the customers add
   unforgeable signatures to their deposit slips and transfers, then
   the bank cannot zero a balance without it being obvious to everyone.

   There are still problems.  The bank can't lie about your balance now
   or take your money, but it can just not accept deposits on your
   behalf by ignoring you.

   You could fix this by getting a few independent banks together, let's
   say Bank of America, Bank of England, and Westpac, and having them
   rotate who operates the leader computer periodically.  If one bank
   ignores your deposits, you can just wait and send them to the next
   one.

   This is Solana.

   There are still problems of course, but they are largely technical.
   How do the banks agree who is leader?  How do you recover if a leader
   misbehaves?  How do customers verify the transactions aren't forged?
   How do banks receive and publish and verify each other's work
   quickly?  These are the main technical innovations that enable Solana
   to work well.

   What about Proof of History?

   One particular niche problem is about the leader schedule.  When the
   leader computer is moving from one bank to another, the new bank must
   wait for the old bank to say it's done and provide a final list of
   balances that it can start working off of.  But: what if the computer
   at the old bank crashes and never says it's done?

   Does the new leader just take over at some point?  What if the new
   leader is malicious, and says the past thousand leaders crashed, and
   there have been no transactions for days?  How do you check?

   This is what Proof of History solves.  Each bank in the network must
   constantly do a lot of busywork (compute hashes), even when it is not
   leader.

   If the prior thousand leaders crashed, and no transactions happened
   in an hour, the new leader would have to show they did about an hour
   of busywork for everyone else to believe them.

   A better name for this is proof of skipping.  If a leader is skipping
   slots (building off of a slot that is not the direct parent), it must
   prove that it waited a good amount of time to do so.

   It's not a perfect solution.  For one thing, some banks have really
   fast computers and can compute a lot of busywork in a short amount of
   time, allowing them to skip prior slot(s) anyway.  But: there is a
   social component that prevents validators from skipping the prior
   leader slot.  It is easy to detect when this happens and the network
   could respond by ignoring their votes or stake.

   You could come up with other schemes: for example, the network could
   just use wall clock time.  If a new leader publishes a block without
   waiting 400 milliseconds for the prior slot to complete, then there
   is no "proof of skipping" and the nodes ignore the slot.

   These schemes have a problem in that they are not deterministic
   across the network (different computers have different clocks), and
   so they will cause frequent forks which are very expensive to
   resolve.  Even though the proof of history scheme is not perfect,
   it is better than any alternative which is not deterministic.

   With all that background, we can now describe at a high level what
   this PoH tile actually does:

    (1) Whenever any other leader in the network finishes a slot, and
        the slot is determined to be the best one to build off of, this
        tile gets "reset" onto that block, the so called "reset slot".

    (2) The tile is constantly doing busy work, hash(hash(hash(...))) on
        top of the last reset slot, even when it is not leader.

    (3) When the tile becomes leader, it continues hashing from where it
        was.  Typically, the prior leader finishes their slot, so the
        reset slot will be the parent one, and this tile only publishes
        hashes for its own slot.  But if prior slots were skipped, then
        there might be a whole chain already waiting.

    That's pretty much it.  When we are leader, in addition to doing
    busywork, we publish ticks and microblocks to the shred tile.  A
    microblock is a non-empty group of transactions whose hashes are
    mixed-in to the chain, while a tick is a periodic stamp of the
    current hash, with no transactions (nothing mixed in).  We need
    to send both to the shred tile, as ticks are important for other
    validators to verify in parallel.
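
    As a rough sketch, the two chain operations look like the following
    (illustrative only, not the tile's actual hashing loop;
    fd_sha256_hash is the one-shot helper from ballet/sha256, and
    microblock_hash stands in for a microblock's hash):

      uchar hash[ 32 ];                        // current PoH hash

      // tick: stamp the chain, nothing mixed in
      fd_sha256_hash( hash, 32UL, hash );

      // mixin: fold a microblock's hash into the chain
      uchar buf[ 64 ];
      memcpy( buf,      hash,            32UL );
      memcpy( buf+32UL, microblock_hash, 32UL );
      fd_sha256_hash( buf, 64UL, hash );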

    As well, the tile should never become leader for a slot that it has
    published anything for, otherwise it may create a duplicate block.

    Some particularly common misunderstandings:

     - PoH is critical to security.

       This largely isn't true.  The target hash rate of the network is
       so slow (1 hash per 100 nanoseconds) that a malicious leader can
       easily catch up if they start from an old hash, and the only
       practical attack prevented is the proof of skipping.  Most of the
       long range attacks in the Solana whitepaper are not relevant.

     - PoH keeps passage of time.

       This is also not true.  The way the network keeps time so it can
       decide who is leader is that each leader uses their operating
       system clock to time 400 milliseconds and publishes their block
       when this timer expires.

       If a leader just hashed as fast as they could, they could publish
       a block in tens of milliseconds, and the rest of the network
       would happily accept it.  This is why the Solana "clock" as
       determined by PoH is not accurate and drifts over time.

     - PoH prevents transaction reordering by the leader.

       The leader can, in theory, wait until the very end of their
       leader slot to publish anything at all to the network.  They can,
       in particular, hold all received transactions for 400
       milliseconds and then reorder and publish some right at the end
       to advantage certain transactions.

    You might be wondering... if all the PoH chain is helping us do is
    prove that slots were skipped correctly, why do we need to "mix in"
    transactions to the hash value?  Or do anything at all for slots
    where we don't skip the prior slot?

    It's a good question, and the answer is that this behavior is not
    necessary.  An ideal implementation of PoH would have no concept of
    ticks or mixins, and would not be part of the TPU pipeline at all.
    Instead, there would be a simple field "skip_proof" on the last
    shred we send for a slot, the hash(hash(...)) value.  This field
    would only be filled in (and only verified by replayers) in cases
    where the slot actually skipped a parent.

    Then what is the "clock"?  In Solana, time is constructed as follows:

    HASHES

        The base unit of time is a hash.  Hereafter, any values whose
        units are in hashes are called a "hashcnt" to distinguish them
        from actual hashed values.

        Agave generally defines a constant duration for each tick
        (see below) and then varies the number of hashcnt per tick, but
        as we consider the hashcnt the base unit of time, Firedancer and
        this PoH implementation define everything in terms of hashcnt
        duration instead.

        In mainnet-beta, testnet, and devnet the hashcnt ticks over
        (increments) every 100 nanoseconds.  The hashcnt rate is
        specified as 500 nanoseconds according to the genesis, but there
        are several features which increase the number of hashes per
        tick while keeping tick duration constant, which make the time
        per hashcnt lower.  These features up to and including the
        `update_hashes_per_tick6` feature are activated on mainnet-beta,
        devnet, and testnet, and are described in the TICKS section
        below.

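        Concretely, with the mainnet-beta values after
        update_hashes_per_tick6 (see the TICKS section below),

          hashcnt duration = tick duration / hashes per tick
                           = 6.25 ms / 62,500
                           = 100 ns
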
        Other chains and development environments might have a different
        hashcnt rate in the genesis, or they might not have activated
        the features which increase the rate yet, which we also support.

        In practice, although each validator follows a hashcnt rate of
        100 nanoseconds, the overall observed hashcnt rate of the
        network is a little slower than once every 100 nanoseconds,
        mostly because there are gaps and clock synchronization issues
        during handoff between leaders.  This is referred to as clock
        drift.

    TICKS

        The leader needs to periodically checkpoint the hash value
        associated with a given hashcnt so that they can publish it to
        other nodes for verification.

        On mainnet-beta, testnet, and devnet this occurs once every
        62,500 hashcnts, or once every 6.25 milliseconds.  This value
        is determined at genesis time and by the features described
        below, and could be different in development environments or on
        other chains which we support.

        Due to protocol limitations, a mixin of transactions into the
        proof-of-history chain cannot occur on a tick boundary (though
        it can occur at any other hashcnt).

        Ticks exist mainly so that verification can happen in parallel.
        A verifier computer, rather than needing to do hash(hash(...))
        all in sequence to verify a proof-of-history chain, can do,

         Core 0: hash(hash(...))
         Core 1: hash(hash(...))
         Core 2: hash(hash(...))
         Core 3: hash(hash(...))
         ...

        between each pair of tick boundaries.

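        A sketch of what one verifier core might do for its assigned
        tick interval (illustrative only, ignoring mixins;
        start_tick_hash and end_tick_hash are adjacent checkpointed
        tick hashes, and hashcnt_per_tick comes from the genesis):

          uchar h[ 32 ];
          memcpy( h, start_tick_hash, 32UL );
          for( ulong i=0UL; i<hashcnt_per_tick; i++ )
            fd_sha256_hash( h, 32UL, h );
          int ok = !memcmp( h, end_tick_hash, 32UL );
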
        Solana sometimes calls the current tick the "tick height",
        although it makes more sense to think of it as a counter from
        zero: it's just the number of ticks since the genesis hash.

        There is a set of features which increase the number of hashcnts
        per tick.  These are all deployed on mainnet-beta, devnet, and
        testnet.

           name:             update_hashes_per_tick
           id:               3uFHb9oKdGfgZGJK9EHaAXN4USvnQtAFC13Fh5gGFS5B
           hashes per tick:  12,500
           hashcnt duration: 500 nanos

           name:             update_hashes_per_tick2
           id:               EWme9uFqfy1ikK1jhJs8fM5hxWnK336QJpbscNtizkTU
           hashes per tick:  17,500
           hashcnt duration: 357.142857143 nanos

           name:             update_hashes_per_tick3
           id:               8C8MCtsab5SsfammbzvYz65HHauuUYdbY2DZ4sznH6h5
           hashes per tick:  27,500
           hashcnt duration: 227.272727273 nanos

           name:             update_hashes_per_tick4
           id:               8We4E7DPwF2WfAN8tRTtWQNhi98B99Qpuj7JoZ3Aikgg
           hashes per tick:  47,500
           hashcnt duration: 131.578947368 nanos

           name:             update_hashes_per_tick5
           id:               BsKLKAn1WM4HVhPRDsjosmqSg2J8Tq5xP2s2daDS6Ni4
           hashes per tick:  57,500
           hashcnt duration: 108.695652174 nanos

           name:             update_hashes_per_tick6
           id:               FKu1qYwLQSiehz644H6Si65U5ZQ2cp9GxsyFUfYcuADv
           hashes per tick:  62,500
           hashcnt duration: 100 nanos

        In development environments, there is a way to configure the
        hashcnt per tick to be "none" during genesis, for a so-called
        "low power" tick producer.  The idea is not to spin cores during
        development.  This is equivalent to setting the hashcnt per tick
        to be 1, and increasing the hashcnt duration to the desired tick
        duration.

    SLOTS

        Each leader needs to be leader for a fixed amount of time, which
        is called a slot.  During a slot, a leader has an opportunity to
        receive transactions and produce a block for the network,
        although they may miss ("skip") the slot if they are offline or
        not behaving.

        In mainnet-beta, testnet, and devnet a slot is 64 ticks, or
        4,000,000 hashcnts, or approximately 400 milliseconds.
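
        (That is, 64 ticks/slot * 62,500 hashcnts/tick = 4,000,000
        hashcnts/slot, and 4,000,000 hashcnts * 100 ns/hashcnt = 400
        milliseconds.)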

        Due to the way the leader schedule is constructed, each leader
        is always given at least four (4) consecutive slots in the
        schedule.  This means when becoming leader you will be leader
        for at least 4 slots, or 1.6 seconds.

        It is rare, although it can happen, that a leader gets more
        than 4 consecutive slots (e.g. 8, or 12), if they are lucky
        with the leader schedule generation.

        The number of ticks in a slot is fixed at genesis time, and
        could be different for development or other chains, which we
        support.  There is nothing special about 4 leader slots in a
        row; this might be changed in the future, and the proof of
        history makes no assumption that this is the case.

    EPOCHS

        Infrequently, the network needs to do certain housekeeping,
        mainly things like collecting rent and deciding on the leader
        schedule.  The length of an epoch is fixed on mainnet-beta,
        devnet and testnet at 432,000 slots, or around 2 days.  This
        value is fixed at genesis time, and could be different for
        other chains including development, which we support.  Typically
        in development, epochs are every 8,192 slots, or around ~1 hour
        (54.61 minutes), although it depends on the number of ticks per
        slot and the target hashcnt rate of the genesis as well.

        In development, epochs need not be a fixed length either.  There
        is a "warmup" option, where epochs start short and grow, which
        is useful for quickly warming up stake during development.

        The epoch is important because it is the only time the leader
        schedule is updated.  The leader schedule is a list of which
        leader is leader for which slot, and is generated by a special
        algorithm that is deterministic and known to all nodes.

        The leader schedule is computed one epoch in advance, so that
        at slot T, we always know who will be leader up until the end
        of slot T+EPOCH_LENGTH.  Specifically, the leader schedule for
        epoch N is computed during the epoch boundary crossing from
        N-2 to N-1.  For mainnet-beta, the slots per epoch is fixed and
        will always be 432,000. */

#include "../bank/fd_bank_abi.h"

#include "../../disco/tiles.h"
#include "../../disco/bundle/fd_bundle_crank.h"
#include "../../disco/pack/fd_pack.h"
#include "../../ballet/sha256/fd_sha256.h"
#include "../../disco/metrics/fd_metrics.h"
#include "../../util/pod/fd_pod_format.h"
#include "../../disco/shred/fd_shredder.h"
#include "../../disco/keyguard/fd_keyload.h"
#include "../../disco/keyguard/fd_keyswitch.h"
#include "../../disco/metrics/generated/fd_metrics_poh.h"
#include "../../disco/plugin/fd_plugin.h"
#include "../../flamenco/leaders/fd_multi_epoch_leaders.h"

#include <string.h>

/* The maximum number of microblocks that pack is allowed to pack into a
   single slot.  This is not consensus critical, and pack could, if we
   let it, produce as many microblocks as it wants, and the slot would
   still be valid.

   We have this here instead so that PoH can estimate slot completion,
   and keep the hashcnt up to date as pack progresses through packing
   the slot.  If this upper bound was not enforced, PoH could tick to
   the last hash of the slot and have no hashes left to mixin incoming
   microblocks from pack, so this upper bound is a coordination
   mechanism so that PoH can progress hashcnts while the slot is active,
   and know that pack will not need those hashcnts later to do mixins. */
#define MAX_MICROBLOCKS_PER_SLOT (32768UL)

/* When we are hashing in the background in case a prior leader skips
   their slot, we need to store the result of each tick hash so we can
   publish them when we become leader.  The network requires at least
   one leader slot to publish in each epoch for the leader schedule to
   generate, so in the worst case we might need two full epochs of slots
   to store the hashes.  (Eg, if epoch T only had a published slot in
   position 0 and epoch T+1 only had a published slot right at the end).

   There is a tighter bound: the block data limit of mainnet-beta is
   currently FD_PACK_MAX_DATA_PER_BLOCK, or 27,332,342 bytes per slot.
   At 48 bytes per tick, it is not possible to publish a slot that skips
   569,424 or more prior slots. */
#define MAX_SKIPPED_TICKS (1UL+(FD_PACK_MAX_DATA_PER_BLOCK/48UL))

#define IN_KIND_BANK  (0)
#define IN_KIND_PACK  (1)
#define IN_KIND_STAKE (2)


typedef struct {
  fd_wksp_t * mem;
  ulong       chunk0;
  ulong       wmark;
} fd_poh_in_ctx_t;

typedef struct {
  ulong       idx;
  fd_wksp_t * mem;
  ulong       chunk0;
  ulong       wmark;
  ulong       chunk;
} fd_poh_out_ctx_t;

typedef struct {
  fd_stem_context_t * stem;

  /* Static configuration determined at genesis creation time.  See
     long comment above for more information. */
  ulong  tick_duration_ns;
  ulong  hashcnt_per_tick;
  ulong  ticks_per_slot;

  /* Derived from the above configuration, but we precompute it. */
  double slot_duration_ns;
  double hashcnt_duration_ns;
  ulong  hashcnt_per_slot;
  /* Constant, fixed at initialization.  The maximum number of
     microblocks that the pack tile can publish in each slot. */
  ulong max_microblocks_per_slot;

  /* Consensus-critical slot cost limits. */
  struct {
    ulong slot_max_cost;
    ulong slot_max_vote_cost;
    ulong slot_max_write_cost_per_acct;
  } limits;

  /* The current slot and hashcnt within that slot of the proof of
     history, including hashes we have been producing in the background
     while waiting for our next leader slot. */
  ulong slot;
  ulong hashcnt;
  ulong cus_used;

  /* When we send a microblock on to the shred tile, we need to tell
     it how many hashes there have been since the last microblock, so
     this tracks the hashcnt of the last published microblock.

     If we are skipping slots prior to our leader slot, the last_slot
     will be quite old, and potentially much larger than the number of
     hashcnts in one slot. */
  ulong last_slot;
  ulong last_hashcnt;

  /* If we have published a tick or a microblock for a particular slot
     to the shred tile, we should never become leader for that slot
     again, otherwise we could publish a duplicate block.

     This value tracks the max slot that we have published a tick or
     microblock for so we can prevent this. */
  ulong highwater_leader_slot;

  /* See how this field is used below.  If we have sequential leader
     slots, we don't reset the expected slot end time between the two,
     to prevent clock drift.  If we didn't do this, our 2nd slot would
     end 400ms + `time_for_replay_to_move_slot_and_reset_poh` after
     our 1st, rather than just strictly 400ms. */
  int  lagged_consecutive_leader_start;
  ulong expect_sequential_leader_slot;

  /* There's a race condition ... let's say two banks A and B, bank A
     processes some transactions, then releases the account locks, and
     sends the microblock to PoH to be stamped.  Pack now re-packs the
     same accounts with a new microblock, sends to bank B, bank B
     executes and sends the microblock to PoH, and this all happens fast
     enough that PoH picks the 2nd block to stamp before the 1st.  The
     accounts database changes now are misordered with respect to PoH so
     replay could fail.

     To prevent this race, we order all microblocks and only process
     them in PoH in the order they are produced by pack.  This is a
     little bit over-strict; we just need to ensure that microblocks
     with conflicting accounts execute in order, but this is easiest to
     implement for now. */
  ulong expect_microblock_idx;

  /* The PoH tile must never drop microblocks that get committed by the
     bank, so it needs to always be able to mixin a microblock hash.
     Mixing in requires incrementing the hashcnt, so we need to ensure
     at all times that there are enough hashcnts left in the slot to
     mixin whatever future microblocks pack might produce for it.

     This value tracks that.  At any time, max_microblocks_per_slot
     - microblocks_lower_bound is an upper bound on the maximum number
     of microblocks that might still be received in this slot. */
  ulong microblocks_lower_bound;

  uchar __attribute__((aligned(32UL))) reset_hash[ 32 ];
  uchar __attribute__((aligned(32UL))) hash[ 32 ];

  /* When we are not leader, we need to save the hashes that were
     produced in case the prior leader skips.  If they skip, we will
     replay these skipped hashes into our next leader bank so that
     the slot hashes sysvar can be updated correctly, and also publish
     them to peer nodes as part of our outgoing shreds. */
  uchar skipped_tick_hashes[ MAX_SKIPPED_TICKS ][ 32 ];

  /* The timestamp in nanoseconds of when the reset slot was received.
     This is the timestamp we are building on top of to determine when
     our next leader slot starts. */
  long reset_slot_start_ns;

  /* The timestamp in nanoseconds of when we got the bank for the
     current leader slot. */
  long leader_bank_start_ns;

  /* The slot of the current reset, i.e. one above the last good
     (unskipped) slot we are building on top of. */
  ulong reset_slot;

  /* The slot at which our next leader slot begins, or ULONG_MAX if
     we have no known next leader slot. */
  ulong next_leader_slot;

  /* If an in-progress frag should be skipped. */
  int skip_frag;

  ulong max_active_descendant;

  /* If we currently are the leader according to the clock AND we have
     received the leader bank for the slot from the replay stage,
     this value will be non-NULL.

     Note that we might be inside our leader slot, but not have a bank
     yet, in which case this will still be NULL.

     It will be NULL for a brief race period between consecutive leader
     slots, as we ping-pong back to replay stage waiting for a new bank.

     Agave refers to this as the "working bank". */
  void const * current_leader_bank;

  fd_sha256_t * sha256;

  fd_multi_epoch_leaders_t * mleaders;

  /* The last sequence number of an outgoing fragment to the shred tile,
     or ULONG_MAX if no such fragment.  See fd_keyswitch.h for details
     of how this is used. */
  ulong shred_seq;

  int halted_switching_key;

  fd_keyswitch_t * keyswitch;
  fd_pubkey_t identity_key;

  /* We need a few pieces of information to compute the right addresses
     for bundle crank information that we need to send to pack. */
  struct {
    int enabled;
    fd_pubkey_t vote_account;
    fd_bundle_crank_gen_t gen[1];
  } bundle;


  /* The Agave client needs to be notified when the leader changes, so
     that it can resume the replay stage if it was suspended waiting. */
  void * signal_leader_change;

  /* These are temporarily set in during_frag so they can be used in
     after_frag once the frag has been validated as not overrun. */
  uchar _txns[ USHORT_MAX ];
  fd_microblock_trailer_t _microblock_trailer[ 1 ];

  int in_kind[ 64 ];
  fd_poh_in_ctx_t in[ 64 ];

  fd_poh_out_ctx_t shred_out[ 1 ];
  fd_poh_out_ctx_t pack_out[ 1 ];
  fd_poh_out_ctx_t plugin_out[ 1 ];

  fd_histf_t begin_leader_delay[ 1 ];
  fd_histf_t first_microblock_delay[ 1 ];
  fd_histf_t slot_done_delay[ 1 ];
  fd_histf_t bundle_init_delay[ 1 ];

  ulong features_activation_avail;
  fd_shred_features_activation_t features_activation[1];

  ulong parent_slot;
  uchar parent_block_id[ 32 ];

  uchar __attribute__((aligned(FD_MULTI_EPOCH_LEADERS_ALIGN))) mleaders_mem[ FD_MULTI_EPOCH_LEADERS_FOOTPRINT ];
} fd_poh_ctx_t;

/* The PoH recorder is implemented in Firedancer but for now needs to
   work with Agave, so we have a locking scheme for them to
   co-operate.

   This is because the PoH tile lives in the Agave memory address
   space and their version of concurrency is locking the PoH recorder
   and reading arbitrary fields.

   So we allow them to lock the PoH tile, although with a very bad (for
   them) locking scheme.  By default, the tile has full and exclusive
   access to the data.  If part of Agave wishes to read/write they
   can either,

     1. Rewrite their concurrency to message passing based on mcache
        (preferred, but not feasible).
     2. Signal to the tile they wish to acquire the lock, by setting
        fd_poh_waiting_lock to 1.

   During after_credit, the tile will check if the waiting lock is set
   to 1, and if so, set the returned lock to 1, indicating to the waiter
   that they may now proceed.

   When the waiter is done reading and writing, they restore the
   returned lock value back to zero, and the PoH tile continues with its
   day. */
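
/* As a sketch (illustrative only; the real check lives in
   after_credit, outside this section), the tile-side half of this
   handshake might look like the following: when a waiter signals,
   grant the lock, then spin until the waiter releases it.

     if( FD_UNLIKELY( FD_VOLATILE_CONST( fd_poh_waiting_lock ) ) ) {
       FD_VOLATILE( fd_poh_waiting_lock )  = 0UL;  // open the queue for the next writer
       FD_VOLATILE( fd_poh_returned_lock ) = 1UL;  // tell the waiter to proceed
       FD_COMPILER_MFENCE();
       while( FD_VOLATILE_CONST( fd_poh_returned_lock ) ) FD_SPIN_PAUSE();
     } */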

static fd_poh_ctx_t * fd_poh_global_ctx;

static volatile ulong fd_poh_waiting_lock __attribute__((aligned(128UL)));
static volatile ulong fd_poh_returned_lock __attribute__((aligned(128UL)));

/* Agave also needs to write to some mcaches, so we trampoline
   that via the PoH tile as well. */

struct poh_link {
  fd_frag_meta_t * mcache;
  ulong            depth;
  ulong            tx_seq;

  void *           mem;
  void *           dcache;
  ulong            chunk0;
  ulong            wmark;
  ulong            chunk;

  ulong            cr_avail;
  ulong            rx_cnt;
  ulong *          rx_fseqs[ 32UL ];
};

typedef struct poh_link poh_link_t;

static poh_link_t gossip_dedup;
static poh_link_t stake_out;
static poh_link_t crds_shred;
static poh_link_t replay_resolv;
static poh_link_t executed_txn;

static poh_link_t replay_plugin;
static poh_link_t gossip_plugin;
static poh_link_t start_progress_plugin;
static poh_link_t vote_listener_plugin;
static poh_link_t validator_info_plugin;

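/* poh_link_wait_credit (below) spins until the link has at least one
   credit available, i.e. until every reliable consumer is within depth
   frags of our tx_seq, so that the next publish cannot overwrite a
   frag that some consumer has not yet processed. */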
static void
poh_link_wait_credit( poh_link_t * link ) {
  if( FD_LIKELY( link->cr_avail ) ) return;

  while( 1 ) {
    ulong cr_query = ULONG_MAX;
    for( ulong i=0UL; i<link->rx_cnt; i++ ) {
      ulong const * _rx_seq = link->rx_fseqs[ i ];
      ulong rx_seq = FD_VOLATILE_CONST( *_rx_seq );
      ulong rx_cr_query = (ulong)fd_long_max( (long)link->depth - fd_long_max( fd_seq_diff( link->tx_seq, rx_seq ), 0L ), 0L );
      cr_query = fd_ulong_min( rx_cr_query, cr_query );
    }
    if( FD_LIKELY( cr_query>0UL ) ) {
      link->cr_avail = cr_query;
      break;
    }
    FD_SPIN_PAUSE();
  }
}

static void
poh_link_publish( poh_link_t *  link,
                  ulong         sig,
                  uchar const * data,
                  ulong         data_sz ) {
  while( FD_UNLIKELY( !FD_VOLATILE_CONST( link->mcache ) ) ) FD_SPIN_PAUSE();
  if( FD_UNLIKELY( !link->mem ) ) return; /* link not enabled, don't publish */
  poh_link_wait_credit( link );

  uchar * dst = (uchar *)fd_chunk_to_laddr( link->mem, link->chunk );
  fd_memcpy( dst, data, data_sz );
  ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
  fd_mcache_publish( link->mcache, link->depth, link->tx_seq, sig, link->chunk, data_sz, 0UL, 0UL, tspub );
  link->chunk = fd_dcache_compact_next( link->chunk, data_sz, link->chunk0, link->wmark );
  link->cr_avail--;
  link->tx_seq++;
}

static void
poh_link_init( poh_link_t *     link,
               fd_topo_t *      topo,
               fd_topo_tile_t * tile,
               ulong            out_idx ) {
  fd_topo_link_t * topo_link = &topo->links[ tile->out_link_id[ out_idx ] ];
  fd_topo_wksp_t * wksp = &topo->workspaces[ topo->objs[ topo_link->dcache_obj_id ].wksp_id ];

  link->mem      = wksp->wksp;
  link->depth    = fd_mcache_depth( topo_link->mcache );
  link->tx_seq   = 0UL;
  link->dcache   = topo_link->dcache;
  link->chunk0   = fd_dcache_compact_chunk0( wksp->wksp, topo_link->dcache );
  link->wmark    = fd_dcache_compact_wmark ( wksp->wksp, topo_link->dcache, topo_link->mtu );
  link->chunk    = link->chunk0;
  link->cr_avail = 0UL;
  link->rx_cnt   = 0UL;
  for( ulong i=0UL; i<topo->tile_cnt; i++ ) {
    fd_topo_tile_t * _tile = &topo->tiles[ i ];
    for( ulong j=0UL; j<_tile->in_cnt; j++ ) {
      if( _tile->in_link_id[ j ]==topo_link->id && _tile->in_link_reliable[ j ] ) {
        FD_TEST( link->rx_cnt<32UL );
        link->rx_fseqs[ link->rx_cnt++ ] = _tile->in_link_fseq[ j ];
        break;
      }
    }
  }
  FD_COMPILER_MFENCE();
  link->mcache = topo_link->mcache;
  FD_COMPILER_MFENCE();
  FD_TEST( link->mcache );
}

/* To help show correctness, functions that might be called from
   Rust, either directly or indirectly, have this fake "attribute"
   CALLED_FROM_RUST, which is actually nothing.  Calls from Rust
   typically execute on threads that did not call fd_boot, so they do
   not have the typical FD_TL variables.  In particular, they cannot
   use normal metrics, and their log messages don't have full context.
   Additionally, Rust functions marked CALLED_FROM_RUST cannot call
   back into a C fd_ext function without causing a deadlock (although
   the other Rust fd_ext functions have a similar problem).

   To prevent the annotation from polluting the whole codebase, calls
   to functions outside this file are manually checked and marked as
   being safe at each call rather than annotated. */
#define CALLED_FROM_RUST

static CALLED_FROM_RUST fd_poh_ctx_t *
fd_ext_poh_write_lock( void ) {
  for(;;) {
    /* Acquire the waiter lock to make sure we are the first writer in the queue. */
    if( FD_LIKELY( !FD_ATOMIC_CAS( &fd_poh_waiting_lock, 0UL, 1UL) ) ) break;
    FD_SPIN_PAUSE();
  }
  FD_COMPILER_MFENCE();
  for(;;) {
    /* Now wait for the tile to tell us we can proceed. */
    if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
    FD_SPIN_PAUSE();
  }
  FD_COMPILER_MFENCE();
  return fd_poh_global_ctx;
}

static CALLED_FROM_RUST void
fd_ext_poh_write_unlock( void ) {
  FD_COMPILER_MFENCE();
  FD_VOLATILE( fd_poh_returned_lock ) = 0UL;
}

/* The PoH tile needs to interact with the Agave address space to
   do certain operations that Firedancer hasn't reimplemented yet,
   a.k.a. transaction execution.  We have Agave export some wrapper
   functions that we call into during regular tile execution.  These do
   not need any locking, since they are called serially from the single
   PoH tile. */

extern CALLED_FROM_RUST void fd_ext_bank_acquire( void const * bank );
extern CALLED_FROM_RUST void fd_ext_bank_release( void const * bank );
extern CALLED_FROM_RUST void fd_ext_poh_signal_leader_change( void * sender );
extern                  void fd_ext_poh_register_tick( void const * bank, uchar const * hash );

/* fd_ext_poh_initialize is called by Agave on startup to
   initialize the PoH tile with some static configuration, and the
   initial reset slot and hash which it retrieves from a snapshot.

   This function is called by some random Agave thread, but
   it blocks booting of the PoH tile.  The tile will spin until it
   determines that this initialization has happened.

   signal_leader_change is an opaque Rust object that is used to
   tell the replay stage that the leader has changed.  It is a
   Box::into_raw(Arc::increment_strong(crossbeam::Sender)), so it
   has infinite lifetime unless this C code releases the refcnt.

   It can be used with `fd_ext_poh_signal_leader_change` which
   will just issue a nonblocking send on the channel. */

CALLED_FROM_RUST void
fd_ext_poh_initialize( ulong         tick_duration_ns,    /* See clock comments above, will be 6.25 milliseconds for mainnet-beta. */
                       ulong         hashcnt_per_tick,    /* See clock comments above, will be 62,500 for mainnet-beta. */
                       ulong         ticks_per_slot,      /* See clock comments above, will almost always be 64. */
                       ulong         tick_height,         /* The counter (height) of the tick to start hashing on top of. */
                       uchar const * last_entry_hash,     /* Points to start of a 32 byte region of memory, the hash itself at the tick height. */
                       void *        signal_leader_change /* See comment above. */ ) {
  FD_COMPILER_MFENCE();
  for(;;) {
    /* Make sure the ctx is initialized before trying to take the lock. */
    if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_global_ctx ) ) ) break;
    FD_SPIN_PAUSE();
  }
  fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();

  ctx->slot                = tick_height/ticks_per_slot;
  ctx->hashcnt             = 0UL;
  ctx->cus_used            = 0UL;
  ctx->last_slot           = ctx->slot;
  ctx->last_hashcnt        = 0UL;
  ctx->reset_slot          = ctx->slot;
  ctx->reset_slot_start_ns = fd_log_wallclock(); /* safe to call from Rust */

  memcpy( ctx->reset_hash, last_entry_hash, 32UL );
  memcpy( ctx->hash, last_entry_hash, 32UL );

  ctx->signal_leader_change = signal_leader_change;

  /* Static configuration about the clock. */
  ctx->tick_duration_ns = tick_duration_ns;
  ctx->hashcnt_per_tick = hashcnt_per_tick;
  ctx->ticks_per_slot   = ticks_per_slot;

  /* Recompute derived information about the clock. */
  ctx->slot_duration_ns    = (double)ticks_per_slot*(double)tick_duration_ns;
  ctx->hashcnt_duration_ns = (double)tick_duration_ns/(double)hashcnt_per_tick;
  ctx->hashcnt_per_slot    = ticks_per_slot*hashcnt_per_tick;

  if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
    /* Low power producer, maximum of one microblock per tick in the slot */
    ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
  } else {
    /* See the long comment in after_credit for this limit */
    ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
  }

  fd_ext_poh_write_unlock();
}

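/* Illustrative only: the arguments Agave might pass here for a
   mainnet-beta style genesis with update_hashes_per_tick6 active
   (tick_height, last_entry_hash, and sender are placeholders coming
   from the snapshot and the Rust side):

     fd_ext_poh_initialize( 6250000UL,        // 6.25 ms tick duration, in nanos
                            62500UL,          // hashes per tick
                            64UL,             // ticks per slot
                            tick_height,      // from the snapshot
                            last_entry_hash,  // from the snapshot
                            sender );         // crossbeam Sender, see above
*/
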
/* fd_ext_poh_acquire_bank gets the current leader bank if there is one
   currently active.  PoH might think we are leader without having a
   leader bank if the replay stage has not yet noticed we are leader.

   The bank that is returned is owned by the caller, and must be
   converted to an Arc<Bank> by calling Arc::from_raw() on it.  PoH
   increments the reference count before returning the bank, so that it
   can also keep its internal copy.

   If there is no leader bank, NULL is returned.  In this case, the
   caller should not call `Arc::from_raw()`. */

     816             : CALLED_FROM_RUST void const *
     817           0 : fd_ext_poh_acquire_leader_bank( void ) {
     818           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
     819           0 :   void const * bank = NULL;
     820           0 :   if( FD_LIKELY( ctx->current_leader_bank ) ) {
     821             :     /* Clone refcount before we release the lock. */
     822           0 :     fd_ext_bank_acquire( ctx->current_leader_bank );
     823           0 :     bank = ctx->current_leader_bank;
     824           0 :   }
     825           0 :   fd_ext_poh_write_unlock();
     826           0 :   return bank;
     827           0 : }
     828             : 
     829             : /* fd_ext_poh_reset_slot returns the slot height one above the last good
     830             :    (unskipped) slot we are building on top of.  This is always a good
     831             :    known value, and will not be ULONG_MAX. */
     832             : 
     833             : CALLED_FROM_RUST ulong
     834           0 : fd_ext_poh_reset_slot( void ) {
     835           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
     836           0 :   ulong reset_slot = ctx->reset_slot;
     837           0 :   fd_ext_poh_write_unlock();
     838           0 :   return reset_slot;
     839           0 : }
     840             : 
     841             : CALLED_FROM_RUST void
     842           0 : fd_ext_poh_update_active_descendant( ulong max_active_descendant ) {
     843           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
     844           0 :   ctx->max_active_descendant = max_active_descendant;
     845           0 :   fd_ext_poh_write_unlock();
     846           0 : }
     847             : 
     848             : /* fd_ext_poh_reached_leader_slot returns 1 if we have reached a slot
     849             :    where we are leader.  This is used by the replay stage to determine
     850             :    if it should create a new leader bank descendant of the prior reset
     851             :    slot block.
     852             : 
     853             :    Sometimes, even when we reach our slot we do not return 1, as we are
     854             :    giving a grace period to the prior leader to finish publishing their
     855             :    block.
     856             : 
     857             :    out_leader_slot is the slot height of the leader slot we reached, and
     858             :    reset_slot is the slot height of the last good (unskipped) slot we
     859             :    are building on top of. */
     860             : 
     861             : CALLED_FROM_RUST int
     862             : fd_ext_poh_reached_leader_slot( ulong * out_leader_slot,
     863           0 :                                 ulong * out_reset_slot ) {
     864           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
     865             : 
     866           0 :   *out_leader_slot = ctx->next_leader_slot;
     867           0 :   *out_reset_slot  = ctx->reset_slot;
     868             : 
     869           0 :   if( FD_UNLIKELY( ctx->next_leader_slot==ULONG_MAX ||
     870           0 :                    ctx->slot<ctx->next_leader_slot ) ) {
     871             :     /* Didn't reach our leader slot yet. */
     872           0 :     fd_ext_poh_write_unlock();
     873           0 :     return 0;
     874           0 :   }
     875             : 
     876           0 :   if( FD_UNLIKELY( ctx->halted_switching_key ) ) {
     877             :     /* Reached our leader slot, but the leader pipeline is halted
     878             :        because we are switching identity key. */
     879           0 :     fd_ext_poh_write_unlock();
     880           0 :     return 0;
     881           0 :   }
     882             : 
     883           0 :   if( FD_LIKELY( ctx->reset_slot==ctx->next_leader_slot ) ) {
     884             :     /* We were reset onto our leader slot, because the prior leader
     885             :        completed theirs, so we should start immediately, no need for a
     886             :        grace period. */
     887           0 :     fd_ext_poh_write_unlock();
     888           0 :     return 1;
     889           0 :   }
     890             : 
     891           0 :   long now_ns = fd_log_wallclock();
     892           0 :   long expected_start_time_ns = ctx->reset_slot_start_ns + (long)((double)(ctx->next_leader_slot-ctx->reset_slot)*ctx->slot_duration_ns);
     893             : 
     894             :   /* If a prior leader is still in the process of publishing their slot,
     895             :      delay ours to let them finish ... unless they are so delayed that
     896             :      we risk getting skipped by the leader following us.  1.2 seconds
     897             :      is a reasonable default here, although any value between 0 and 1.6
     898             :      seconds could be considered reasonable.  The exact value is
     899             :      arbitrary, chosen by intuition. */
     900             : 
     901           0 :   if( FD_UNLIKELY( now_ns<expected_start_time_ns+(long)(3.0*ctx->slot_duration_ns) ) ) {
     902             :     /* If the max_active_descendant is >= next_leader_slot, we waited
     903             :        too long and a leader after us started publishing to try and skip
     904             :        us.  Just start our leader slot immediately, we might win ... */
     905             : 
     906           0 :     if( FD_LIKELY( ctx->max_active_descendant>=ctx->reset_slot && ctx->max_active_descendant<ctx->next_leader_slot ) ) {
     907             :       /* If one of the leaders between the reset slot and our leader
     908             :          slot is in the process of publishing (they have a descendant
     909             :          bank that is in progress of being replayed), then keep waiting.
     910             :          We probably wouldn't get a leader slot out before they
     911             :          finished.
     912             : 
     913             :          Unless... we are past the deadline to start our slot by more
     914             :          than 1.2 seconds, in which case we should probably start it to
     915             :          avoid getting skipped by the leader behind us. */
     916           0 :       fd_ext_poh_write_unlock();
     917           0 :       return 0;
     918           0 :     }
     919           0 :   }
     920             : 
     921           0 :   fd_ext_poh_write_unlock();
     922           0 :   return 1;
     923           0 : }
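/* Editorial worked example with illustrative numbers: with a 400ms
   slot_duration_ns, a reset onto slot 100 at wallclock t0, and
   next_leader_slot 103, the grace-period check above evaluates as

     expected_start_time_ns = t0 + (103-100)*400e6 = t0 + 1.2e9 ns
     deadline               = expected_start_time_ns + 3*400e6 ns

   i.e. we keep deferring to a still-publishing prior leader until
   1.2 seconds past our expected start, then take the slot anyway. */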
     924             : 
     925             : CALLED_FROM_RUST static inline void
     926             : publish_plugin_slot_start( fd_poh_ctx_t * ctx,
     927             :                            ulong          slot,
     928           0 :                            ulong          parent_slot ) {
     929           0 :   if( FD_UNLIKELY( !ctx->plugin_out->mem ) ) return;
     930             : 
     931           0 :   fd_plugin_msg_slot_start_t * slot_start = (fd_plugin_msg_slot_start_t *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
     932           0 :   *slot_start = (fd_plugin_msg_slot_start_t){ .slot = slot, .parent_slot = parent_slot };
     933           0 :   fd_stem_publish( ctx->stem, ctx->plugin_out->idx, FD_PLUGIN_MSG_SLOT_START, ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_start_t), 0UL, 0UL, 0UL );
     934           0 :   ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_start_t), ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
     935           0 : }
     936             : 
     937             : CALLED_FROM_RUST static inline void
     938             : publish_plugin_slot_end( fd_poh_ctx_t * ctx,
     939             :                          ulong          slot,
     940           0 :                          ulong          cus_used ) {
     941           0 :   if( FD_UNLIKELY( !ctx->plugin_out->mem ) ) return;
     942             : 
     943           0 :   fd_plugin_msg_slot_end_t * slot_end = (fd_plugin_msg_slot_end_t *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
     944           0 :   *slot_end = (fd_plugin_msg_slot_end_t){ .slot = slot, .cus_used = cus_used };
     945           0 :   fd_stem_publish( ctx->stem, ctx->plugin_out->idx, FD_PLUGIN_MSG_SLOT_END, ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_end_t), 0UL, 0UL, 0UL );
     946           0 :   ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_end_t), ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
     947           0 : }
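/* Editorial sketch with a hypothetical payload_t, not instrumented
   source: both plugin helpers above follow the same three-step publish
   idiom used throughout this tile, i.e. write the payload at the
   current chunk's local address, publish the frag, then advance the
   chunk pointer so the next message lands in fresh dcache space. */

typedef struct { ulong value; } payload_t;

static inline void
example_publish( fd_poh_ctx_t * ctx, ulong sig ) {
  payload_t * p = (payload_t *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
  p->value = 42UL; /* write the payload in place */
  fd_stem_publish( ctx->stem, ctx->plugin_out->idx, sig, ctx->plugin_out->chunk, sizeof(payload_t), 0UL, 0UL, 0UL );
  ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, sizeof(payload_t), ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
}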
     948             : 
     949             : extern int
     950             : fd_ext_bank_load_account( void const *  bank,
     951             :                           int           fixed_root,
     952             :                           uchar const * addr,
     953             :                           uchar *       owner,
     954             :                           uchar *       data,
     955             :                           ulong *       data_sz );
     956             : 
     957             : CALLED_FROM_RUST static void
     958             : publish_became_leader( fd_poh_ctx_t * ctx,
     959             :                        ulong          slot,
     960           0 :                        ulong          epoch ) {
     961           0 :   double tick_per_ns = fd_tempo_tick_per_ns( NULL );
     962           0 :   fd_histf_sample( ctx->begin_leader_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
     963             : 
     964           0 :   if( FD_UNLIKELY( ctx->lagged_consecutive_leader_start ) ) {
     965             :     /* If we are mirroring Agave behavior, the wall clock gets reset
     966             :        here so we don't count time spent waiting for a bank to freeze
     967             :        or replay stage to actually start the slot towards our 400ms.
     968             : 
     969             :        See extended comments in the config file on this option. */
     970           0 :     ctx->reset_slot_start_ns = fd_log_wallclock() - (long)((double)(slot-ctx->reset_slot)*ctx->slot_duration_ns);
     971           0 :   }
     972             : 
     973           0 :   fd_bundle_crank_tip_payment_config_t config[1]             = { 0 };
     974           0 :   fd_acct_addr_t                       tip_receiver_owner[1] = { 0 };
     975             : 
     976           0 :   if( FD_UNLIKELY( ctx->bundle.enabled ) ) {
     977           0 :     long bundle_time = -fd_tickcount();
     978           0 :     fd_acct_addr_t tip_payment_config[1];
     979           0 :     fd_acct_addr_t tip_receiver[1];
     980           0 :     fd_bundle_crank_get_addresses( ctx->bundle.gen, epoch, tip_payment_config, tip_receiver );
     981             : 
     982           0 :     fd_acct_addr_t _dummy[1];
     983           0 :     uchar          dummy[1];
     984             : 
     985           0 :     void const * bank = ctx->current_leader_bank;
     986             : 
     987             :     /* Calling rust from a C function that is CALLED_FROM_RUST risks
     988             :        deadlock.  In this case, I checked the load_account function and
     989             :        ensured it never calls any C functions that acquire the lock. */
     990           0 :     ulong sz1 = sizeof(config), sz2 = 1UL;
     991           0 :     int found1 = fd_ext_bank_load_account( bank, 0, tip_payment_config->b, _dummy->b,             (uchar *)config, &sz1 );
     992           0 :     int found2 = fd_ext_bank_load_account( bank, 0, tip_receiver->b,       tip_receiver_owner->b,          dummy,  &sz2 );
     993             :     /* The bundle crank code detects whether the accounts were found by
     994             :        whether they have non-zero values (a found-but-uninitialized
     995             :        account should be treated the same as a missing one), so we
     996             :        don't actually care about the value of found{1,2}. */
     997           0 :     (void)found1; (void)found2;
     998           0 :     bundle_time += fd_tickcount();
     999           0 :     fd_histf_sample( ctx->bundle_init_delay, (ulong)bundle_time );
    1000           0 :   }
    1001             : 
    1002           0 :   long slot_start_ns = ctx->reset_slot_start_ns + (long)((double)(slot-ctx->reset_slot)*ctx->slot_duration_ns);
    1003             : 
    1004             :   /* No need to check flow control: there are always credits available
    1005             :      when we become leader, and we will not "become" leader again until
    1006             :      we are done, so at most one frag is in flight at a time. */
    1007             : 
    1008           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->pack_out->mem, ctx->pack_out->chunk );
    1009             : 
    1010           0 :   fd_became_leader_t * leader = (fd_became_leader_t *)dst;
    1011           0 :   leader->slot_start_ns           = slot_start_ns;
    1012           0 :   leader->slot_end_ns             = (long)((double)slot_start_ns + ctx->slot_duration_ns);
    1013           0 :   leader->bank                    = ctx->current_leader_bank;
    1014           0 :   leader->max_microblocks_in_slot = ctx->max_microblocks_per_slot;
    1015           0 :   leader->ticks_per_slot          = ctx->ticks_per_slot;
    1016           0 :   leader->total_skipped_ticks     = ctx->ticks_per_slot*(slot-ctx->reset_slot);
    1017           0 :   leader->epoch                   = epoch;
    1018           0 :   leader->bundle->config[0]       = config[0];
    1019             : 
    1020           0 :   leader->limits.slot_max_cost                = ctx->limits.slot_max_cost;
    1021           0 :   leader->limits.slot_max_vote_cost           = ctx->limits.slot_max_vote_cost;
    1022           0 :   leader->limits.slot_max_write_cost_per_acct = ctx->limits.slot_max_write_cost_per_acct;
    1023             : 
    1024           0 :   memcpy( leader->bundle->last_blockhash,     ctx->reset_hash,    32UL );
    1025           0 :   memcpy( leader->bundle->tip_receiver_owner, tip_receiver_owner, 32UL );
    1026             : 
    1027           0 :   if( FD_UNLIKELY( leader->ticks_per_slot+leader->total_skipped_ticks>=MAX_SKIPPED_TICKS ) )
    1028           0 :     FD_LOG_ERR(( "Too many skipped ticks %lu for slot %lu, chain must halt", leader->ticks_per_slot+leader->total_skipped_ticks, slot ));
    1029             : 
    1030           0 :   ulong sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_BECAME_LEADER, 0UL );
    1031           0 :   fd_stem_publish( ctx->stem, ctx->pack_out->idx, sig, ctx->pack_out->chunk, sizeof(fd_became_leader_t), 0UL, 0UL, 0UL );
    1032           0 :   ctx->pack_out->chunk = fd_dcache_compact_next( ctx->pack_out->chunk, sizeof(fd_became_leader_t), ctx->pack_out->chunk0, ctx->pack_out->wmark );
    1033           0 : }
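/* Editorial worked example with illustrative numbers: with
   ticks_per_slot==64, reset_slot==100, and leader slot 103, the frag
   above carries total_skipped_ticks = 64*(103-100) = 192, and
   slot_start_ns lands exactly three slot durations after
   reset_slot_start_ns. */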
    1034             : 
    1035             : /* The PoH tile knows when it should become leader by waiting for its
    1036             :    leader slot (with the operating system clock).  This function is so
    1037             :    that when it becomes the leader, it can be told what the leader bank
    1038             :    is by the replay stage.  See the notes in the long comment above for
    1039             :    more on how this works. */
    1040             : 
    1041             : CALLED_FROM_RUST void
    1042             : fd_ext_poh_begin_leader( void const * bank,
    1043             :                          ulong        slot,
    1044             :                          ulong        epoch,
    1045             :                          ulong        hashcnt_per_tick,
    1046             :                          ulong        cus_block_limit,
    1047             :                          ulong        cus_vote_cost_limit,
    1048           0 :                          ulong        cus_account_cost_limit ) {
    1049           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
    1050             : 
    1051           0 :   FD_TEST( !ctx->current_leader_bank );
    1052             : 
    1053           0 :   if( FD_UNLIKELY( slot!=ctx->slot ) )             FD_LOG_ERR(( "Trying to begin leader slot %lu but we are now on slot %lu", slot, ctx->slot ));
    1054           0 :   if( FD_UNLIKELY( slot!=ctx->next_leader_slot ) ) FD_LOG_ERR(( "Trying to begin leader slot %lu but next leader slot is %lu", slot, ctx->next_leader_slot ));
    1055             : 
    1056           0 :   if( FD_UNLIKELY( ctx->hashcnt_per_tick!=hashcnt_per_tick ) ) {
    1057           0 :     FD_LOG_WARNING(( "hashes per tick changed from %lu to %lu", ctx->hashcnt_per_tick, hashcnt_per_tick ));
    1058             : 
    1059             :     /* Recompute derived information about the clock. */
    1060           0 :     ctx->hashcnt_duration_ns = (double)ctx->tick_duration_ns/(double)hashcnt_per_tick;
    1061           0 :     ctx->hashcnt_per_slot = ctx->ticks_per_slot*hashcnt_per_tick;
    1062           0 :     ctx->hashcnt_per_tick = hashcnt_per_tick;
    1063             : 
    1064           0 :     if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
    1065             :       /* Low power producer, maximum of one microblock per tick in the slot */
    1066           0 :       ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
    1067           0 :     } else {
    1068             :       /* See the long comment in after_credit for this limit */
    1069           0 :       ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
    1070           0 :     }
    1071             : 
    1072             :     /* Discard any ticks we might have done in the interim.  They will
    1073             :        have the wrong number of hashes per tick.  We can just catch back
    1074             :        up quickly if not too many slots were skipped and hopefully
    1075             :        publish on time.  Note that tick production and verification of
    1076             :        skipped slots is done for the eventual bank that publishes a
    1077             :        slot, for example:
    1078             : 
    1079             :         Reset Slot:            998
    1080             :         Epoch Transition Slot: 1000
    1081             :         Leader Slot:           1002
    1082             : 
    1083             :        In this case, if a feature changing the hashcnt_per_tick is
    1084             :        activated in slot 1000, and we are publishing empty ticks for
    1085             :        slots 998, 999, 1000, and 1001, they should all have the new
    1086             :        hashes_per_tick number of hashes, rather than the older one, or
    1087             :        some combination. */
    1088             : 
    1089           0 :     FD_TEST( ctx->last_slot==ctx->reset_slot );
    1090           0 :     FD_TEST( !ctx->last_hashcnt );
    1091           0 :     ctx->slot = ctx->reset_slot;
    1092           0 :     ctx->hashcnt = 0UL;
    1093           0 :   }
    1094             : 
    1095           0 :   ctx->current_leader_bank     = bank;
    1096           0 :   ctx->microblocks_lower_bound = 0UL;
    1097           0 :   ctx->cus_used                = 0UL;
    1098           0 :   ctx->expect_microblock_idx   = 0UL;
    1099             : 
    1100           0 :   ctx->limits.slot_max_cost                = cus_block_limit;
    1101           0 :   ctx->limits.slot_max_vote_cost           = cus_vote_cost_limit;
    1102           0 :   ctx->limits.slot_max_write_cost_per_acct = cus_account_cost_limit;
    1103             : 
    1104             :   /* clamp and warn if we are underutilizing CUs */
    1105           0 :   if( FD_UNLIKELY( ctx->limits.slot_max_cost > FD_PACK_MAX_COST_PER_BLOCK_UPPER_BOUND ) ) {
    1106           0 :     FD_LOG_WARNING(( "Underutilizing protocol slot CU limit. protocol_limit=%lu validator_limit=%lu", ctx->limits.slot_max_cost, FD_PACK_MAX_COST_PER_BLOCK_UPPER_BOUND ));
    1107           0 :     ctx->limits.slot_max_cost = FD_PACK_MAX_COST_PER_BLOCK_UPPER_BOUND;
    1108           0 :   }
    1109           0 :   if( FD_UNLIKELY( ctx->limits.slot_max_vote_cost > FD_PACK_MAX_VOTE_COST_PER_BLOCK_UPPER_BOUND ) ) {
    1110           0 :     FD_LOG_WARNING(( "Underutilizing protocol vote CU limit. protocol_limit=%lu validator_limit=%lu", ctx->limits.slot_max_vote_cost, FD_PACK_MAX_VOTE_COST_PER_BLOCK_UPPER_BOUND ));
    1111           0 :     ctx->limits.slot_max_vote_cost = FD_PACK_MAX_VOTE_COST_PER_BLOCK_UPPER_BOUND;
    1112           0 :   }
    1113           0 :   if( FD_UNLIKELY( ctx->limits.slot_max_write_cost_per_acct > FD_PACK_MAX_WRITE_COST_PER_ACCT_UPPER_BOUND ) ) {
    1114           0 :     FD_LOG_WARNING(( "Underutilizing protocol write CU limit. protocol_limit=%lu validator_limit=%lu", ctx->limits.slot_max_write_cost_per_acct, FD_PACK_MAX_WRITE_COST_PER_ACCT_UPPER_BOUND ));
    1115           0 :     ctx->limits.slot_max_write_cost_per_acct = FD_PACK_MAX_WRITE_COST_PER_ACCT_UPPER_BOUND;
    1116           0 :   }
    1117             : 
    1118             :   /* We are about to start publishing to the shred tile for this slot
    1119             :      so update the highwater mark so we never republish in this slot
    1120             :      again.  Also check that the leader slot is greater than the
    1121             :      highwater, which should have been ensured earlier. */
    1122             : 
    1123           0 :   FD_TEST( ctx->highwater_leader_slot==ULONG_MAX || slot>=ctx->highwater_leader_slot );
    1124           0 :   ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), slot );
    1125             : 
    1126           0 :   publish_became_leader( ctx, slot, epoch );
    1127           0 :   FD_LOG_INFO(( "fd_ext_poh_begin_leader(slot=%lu, highwater_leader_slot=%lu, last_slot=%lu, last_hashcnt=%lu)", slot, ctx->highwater_leader_slot, ctx->last_slot, ctx->last_hashcnt ));
    1128             : 
    1129           0 :   fd_ext_poh_write_unlock();
    1130           0 : }
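/* Editorial worked example with illustrative numbers: at
   hashcnt_per_tick==62500 and ticks_per_slot==64, the recompute above
   yields ticks_per_slot*(hashcnt_per_tick-1UL) == 64*62499 == 3999936
   candidate mixin positions, so unless MAX_MICROBLOCKS_PER_SLOT
   exceeds that, the fd_ulong_min clamp is what binds; only in low
   power mode (hashcnt_per_tick==1) does the tick count itself become
   the limit. */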
    1131             : 
    1132             : /* Determine the next slot in the leader schedule where we are leader.
    1133             :    Includes the current slot.  If we are not leader in what remains of
    1134             :    the current and next epoch, return ULONG_MAX. */
    1135             : 
    1136             : static inline CALLED_FROM_RUST ulong
    1137           0 : next_leader_slot( fd_poh_ctx_t * ctx ) {
    1138             :   /* If we have published anything in a particular slot, then we
    1139             :      should never become leader for that slot again. */
    1140           0 :   ulong min_leader_slot = fd_ulong_max( ctx->slot, fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ) );
    1141           0 :   return fd_multi_epoch_leaders_get_next_slot( ctx->mleaders, min_leader_slot, &ctx->identity_key );
    1142           0 : }
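/* Editorial note: min_leader_slot above is the larger of the current
   slot and the highwater mark, e.g. ctx->slot==205 with
   highwater_leader_slot==210 gives min_leader_slot==210, so the
   schedule lookup (assumed to return the first slot >= its argument
   where our identity is leader) can never hand back a slot we already
   published into. */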
    1143             : 
    1144             : extern int
    1145             : fd_ext_admin_rpc_set_identity( uchar const * identity_keypair,
    1146             :                                int           require_tower );
    1147             : 
    1148             : static inline int FD_FN_SENSITIVE
    1149             : maybe_change_identity( fd_poh_ctx_t * ctx,
    1150           0 :                        int            definitely_not_leader ) {
    1151           0 :   if( FD_UNLIKELY( ctx->halted_switching_key && fd_keyswitch_state_query( ctx->keyswitch )==FD_KEYSWITCH_STATE_UNHALT_PENDING ) ) {
    1152           0 :     ctx->halted_switching_key = 0;
    1153           0 :     fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_COMPLETED );
    1154           0 :     return 1;
    1155           0 :   }
    1156             : 
    1157             :   /* Cannot change identity while in the middle of a leader slot, else
    1158             :      poh state machine would become corrupt. */
    1159             : 
    1160           0 :   int is_leader = !definitely_not_leader && ctx->next_leader_slot!=ULONG_MAX && ctx->slot>=ctx->next_leader_slot;
    1161           0 :   if( FD_UNLIKELY( is_leader ) ) return 0;
    1162             : 
    1163           0 :   if( FD_UNLIKELY( fd_keyswitch_state_query( ctx->keyswitch )==FD_KEYSWITCH_STATE_SWITCH_PENDING ) ) {
    1164           0 :     int failed = fd_ext_admin_rpc_set_identity( ctx->keyswitch->bytes, fd_keyswitch_param_query( ctx->keyswitch )==1 );
    1165           0 :     explicit_bzero( ctx->keyswitch->bytes, 32UL );
    1166           0 :     FD_COMPILER_MFENCE();
    1167           0 :     if( FD_UNLIKELY( failed==-1 ) ) {
    1168           0 :       fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_FAILED );
    1169           0 :       return 0;
    1170           0 :     }
    1171             : 
    1172           0 :     memcpy( ctx->identity_key.uc, ctx->keyswitch->bytes+32UL, 32UL );
    1173             : 
    1174             :     /* When we switch key, we might have ticked part way through a slot
    1175             :        that we are now leader in.  This violates the contract of the
    1176             :        tile, that when we become leader, we have not ticked in that slot
    1177             :        at all.  To see why this would be bad, consider the case where we
    1178             :        have ticked almost to the end, and there isn't enough space left
    1179             :        to reserve the minimum amount of microblocks needed by pack.
    1180             : 
    1181             :        To resolve this, we just reset PoH back to the reset slot, and
    1182             :        let it try to catch back up quickly. This is OK since the network
    1183             :        rarely skips. */
    1184           0 :     ctx->slot    = ctx->reset_slot;
    1185           0 :     ctx->hashcnt = 0UL;
    1186           0 :     memcpy( ctx->hash, ctx->reset_hash, 32UL );
    1187             : 
    1188           0 :     ctx->halted_switching_key = 1;
    1189           0 :     ctx->keyswitch->result    = ctx->shred_seq;
    1190           0 :     fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_COMPLETED );
    1191           0 :   }
    1192             : 
    1193           0 :   return 0;
    1194           0 : }
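/* Editorial note: the keyswitch handshake above walks
   SWITCH_PENDING -> (COMPLETED | FAILED), and separately
   UNHALT_PENDING -> COMPLETED once the halted pipeline may resume.
   The explicit_bzero plus FD_COMPILER_MFENCE pair ensures the secret
   key bytes are wiped before any state a peer might poll on is
   published. */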
    1195             : 
    1196             : static CALLED_FROM_RUST void
    1197           0 : no_longer_leader( fd_poh_ctx_t * ctx ) {
    1198           0 :   if( FD_UNLIKELY( ctx->current_leader_bank ) ) fd_ext_bank_release( ctx->current_leader_bank );
    1199             :   /* If we stop being leader in a slot, we can never become leader in
    1200             :       that slot again, and all in-flight microblocks for that slot
    1201             :       should be dropped. */
    1202           0 :   ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), ctx->slot );
    1203           0 :   ctx->current_leader_bank = NULL;
    1204           0 :   int identity_changed = maybe_change_identity( ctx, 1 );
    1205           0 :   ctx->next_leader_slot = next_leader_slot( ctx );
    1206           0 :   if( FD_UNLIKELY( identity_changed ) ) {
    1207           0 :     FD_LOG_INFO(( "fd_poh_identity_changed(next_leader_slot=%lu)", ctx->next_leader_slot ));
    1208           0 :   }
    1209             : 
    1210           0 :   FD_COMPILER_MFENCE();
    1211           0 :   fd_ext_poh_signal_leader_change( ctx->signal_leader_change );
    1212           0 :   FD_LOG_INFO(( "no_longer_leader(next_leader_slot=%lu)", ctx->next_leader_slot ));
    1213           0 : }
    1214             : 
    1215             : /* fd_ext_poh_reset is called by the Agave client when a slot on
    1216             :    the active fork has finished a block and we need to reset our PoH to
    1217             :    be ticking on top of the block it produced. */
    1218             : 
    1219             : CALLED_FROM_RUST void
    1220             : fd_ext_poh_reset( ulong         completed_bank_slot, /* The slot that successfully produced a block */
    1221             :                   uchar const * reset_blockhash,     /* The hash of the last tick in the produced block */
    1222             :                   ulong         hashcnt_per_tick,    /* The hashcnt per tick of the bank that completed */
    1223             :                   uchar const * parent_block_id,     /* The block id of the parent block */
    1224           0 :                   ulong const * features_activation  /* The activation slot of shred-tile features */ ) {
    1225           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
    1226             : 
    1227           0 :   ulong slot_before_reset = ctx->slot;
    1228           0 :   int leader_before_reset = ctx->slot>=ctx->next_leader_slot;
    1229           0 :   if( FD_UNLIKELY( leader_before_reset && ctx->current_leader_bank ) ) {
    1230             :     /* If we were in the middle of a leader slot that we notified pack
    1231             :        to start packing for, we can never publish into that slot again;
    1232             :        mark all in-flight microblocks to be dropped. */
    1233           0 :     ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), 1UL+ctx->slot );
    1234           0 :   }
    1235             : 
    1236           0 :   ctx->leader_bank_start_ns = fd_log_wallclock(); /* safe to call from Rust */
    1237           0 :   if( FD_UNLIKELY( ctx->expect_sequential_leader_slot==(completed_bank_slot+1UL) ) ) {
    1238             :     /* If we are being reset onto a slot, it means some block was fully
    1239             :        processed, so we reset to build on top of it.  Typically we want
    1240             :        to update the reset_slot_start_ns to the current time, because
    1241             :        the network will give the next leader 400ms to publish,
    1242             :        regardless of how long the prior leader took.
    1243             : 
    1244             :        But: if we were leader in the prior slot, and the block was our
    1245             :        own we can do better.  We know that the next slot should start
    1246             :        exactly 400ms after the prior one started, so we can use that as
    1247             :        the reset slot start time instead. */
    1248           0 :     ctx->reset_slot_start_ns = ctx->reset_slot_start_ns + (long)((double)((completed_bank_slot+1UL)-ctx->reset_slot)*ctx->slot_duration_ns);
    1249           0 :   } else {
    1250           0 :     ctx->reset_slot_start_ns = ctx->leader_bank_start_ns;
    1251           0 :   }
    1252           0 :   ctx->expect_sequential_leader_slot = ULONG_MAX;
    1253             : 
    1254           0 :   memcpy( ctx->reset_hash, reset_blockhash, 32UL );
    1255           0 :   memcpy( ctx->hash, reset_blockhash, 32UL );
    1256           0 :   if( FD_LIKELY( parent_block_id!=NULL ) ) {
    1257           0 :     ctx->parent_slot = completed_bank_slot;
    1258           0 :     memcpy( ctx->parent_block_id, parent_block_id, 32UL );
    1259           0 :   } else {
    1260           0 :     FD_LOG_WARNING(( "fd_ext_poh_reset(block_id=null,reset_slot=%lu,parent_slot=%lu) - ignored", completed_bank_slot, ctx->parent_slot ));
    1261           0 :   }
    1262           0 :   ctx->slot         = completed_bank_slot+1UL;
    1263           0 :   ctx->hashcnt      = 0UL;
    1264           0 :   ctx->last_slot    = ctx->slot;
    1265           0 :   ctx->last_hashcnt = 0UL;
    1266           0 :   ctx->reset_slot   = ctx->slot;
    1267             : 
    1268           0 :   if( FD_UNLIKELY( ctx->hashcnt_per_tick!=hashcnt_per_tick ) ) {
    1269           0 :     FD_LOG_WARNING(( "hashes per tick changed from %lu to %lu", ctx->hashcnt_per_tick, hashcnt_per_tick ));
    1270             : 
    1271             :     /* Recompute derived information about the clock. */
    1272           0 :     ctx->hashcnt_duration_ns = (double)ctx->tick_duration_ns/(double)hashcnt_per_tick;
    1273           0 :     ctx->hashcnt_per_slot = ctx->ticks_per_slot*hashcnt_per_tick;
    1274           0 :     ctx->hashcnt_per_tick = hashcnt_per_tick;
    1275             : 
    1276           0 :     if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
    1277             :       /* Low power producer, maximum of one microblock per tick in the slot */
    1278           0 :       ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
    1279           0 :     } else {
    1280             :       /* See the long comment in after_credit for this limit */
    1281           0 :       ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
    1282           0 :     }
    1283           0 :   }
    1284             : 
    1285             :   /* When we reset, we need to allow PoH to tick freely again rather
    1286             :      than being constrained.  If we are leader after the reset, this
    1287             :      is OK because we won't tick until we get a bank, and the lower
    1288             :      bound will be reset with the value from the bank. */
    1289           0 :   ctx->microblocks_lower_bound = ctx->max_microblocks_per_slot;
    1290             : 
    1291           0 :   if( FD_UNLIKELY( leader_before_reset ) ) {
    1292             :     /* No longer have a leader bank if we are reset. Replay stage will
    1293             :        call back again to give us a new one if we should become leader
    1294             :        for the reset slot.
    1295             : 
    1296             :        The order is important here, ctx->hashcnt must be updated before
    1297             :        calling no_longer_leader. */
    1298           0 :     no_longer_leader( ctx );
    1299           0 :   }
    1300           0 :   ctx->next_leader_slot = next_leader_slot( ctx );
    1301           0 :   FD_LOG_INFO(( "fd_ext_poh_reset(slot=%lu,next_leader_slot=%lu)", ctx->reset_slot, ctx->next_leader_slot ));
    1302             : 
    1303           0 :   if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
    1304             :     /* We are leader after the reset... two cases: */
    1305           0 :     if( FD_LIKELY( ctx->slot==slot_before_reset ) ) {
    1306             :       /* 1. We are reset onto the same slot we are already leader on.
    1307             :             This is a common case when we have two leader slots in a
    1308             :             row, replay stage will reset us to our own slot.  No need to
    1309             :             do anything here, we already sent a SLOT_START. */
    1310           0 :       FD_TEST( leader_before_reset );
    1311           0 :     } else {
    1312             :       /* 2. We are reset onto a different slot. If we were leader
    1313             :             before, we should first end that slot, then begin the new
    1314             :             one if we are newly leader now. */
    1315           0 :       if( FD_LIKELY( leader_before_reset ) ) publish_plugin_slot_end( ctx, slot_before_reset, ctx->cus_used );
    1316           0 :       else                                   publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->reset_slot );
    1317           0 :     }
    1318           0 :   } else {
    1319           0 :     if( FD_UNLIKELY( leader_before_reset ) ) publish_plugin_slot_end( ctx, slot_before_reset, ctx->cus_used );
    1320           0 :   }
    1321             : 
    1322             :   /* There is a subset of FD_SHRED_FEATURES_ACTIVATION_... slots that
    1323             :       the shred tile needs to be aware of.  Since their computation
    1324             :       requires the bank, we are forced (so far) to receive them here
    1325             :       from the Rust side, before forwarding them to the shred tile as
    1326             :       POH_PKT_TYPE_FEAT_ACT_SLOT.  This is not elegant, and it should
    1327             :       be revised in the future (TODO), but it provides a "temporary"
    1328             :       working solution to handle features activation. */
    1329           0 :   fd_memcpy( ctx->features_activation->slots, features_activation, sizeof(fd_shred_features_activation_t) );
    1330           0 :   ctx->features_activation_avail = 1UL;
    1331             : 
    1332           0 :   fd_ext_poh_write_unlock();
    1333           0 : }
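/* Editorial worked example with illustrative numbers: suppose we were
   leader for slot 120 and expect_sequential_leader_slot==121.  When
   fd_ext_poh_reset arrives with completed_bank_slot==120, the branch
   above advances reset_slot_start_ns by exactly
   (121-reset_slot)*slot_duration_ns instead of re-reading the
   wallclock, so back-to-back leader slots stay on the ideal 400ms
   grid rather than accumulating our own publish latency. */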
    1334             : 
    1335             : /* Since it can't easily return an Option<Pubkey>, return 1 for Some and
    1336             :    0 for None. */
    1337             : CALLED_FROM_RUST int
    1338             : fd_ext_poh_get_leader_after_n_slots( ulong n,
    1339           0 :                                      uchar out_pubkey[ static 32 ] ) {
    1340           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
    1341           0 :   ulong slot = ctx->slot + n;
    1342           0 :   fd_pubkey_t const * leader = fd_multi_epoch_leaders_get_leader_for_slot( ctx->mleaders, slot );
    1343             : 
    1344           0 :   int copied = 0;
    1345           0 :   if( FD_LIKELY( leader ) ) {
    1346           0 :     memcpy( out_pubkey, leader, 32UL );
    1347           0 :     copied = 1;
    1348           0 :   }
    1349           0 :   fd_ext_poh_write_unlock();
    1350           0 :   return copied;
    1351           0 : }
    1352             : 
    1353             : FD_FN_CONST static inline ulong
    1354           0 : scratch_align( void ) {
    1355           0 :   return 128UL;
    1356           0 : }
    1357             : 
    1358             : FD_FN_PURE static inline ulong
    1359           0 : scratch_footprint( fd_topo_tile_t const * tile ) {
    1360           0 :   (void)tile;
    1361           0 :   ulong l = FD_LAYOUT_INIT;
    1362           0 :   l = FD_LAYOUT_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
    1363           0 :   l = FD_LAYOUT_APPEND( l, FD_SHA256_ALIGN, FD_SHA256_FOOTPRINT );
    1364           0 :   return FD_LAYOUT_FINI( l, scratch_align() );
    1365           0 : }
    1366             : 
    1367             : static void
    1368             : publish_tick( fd_poh_ctx_t *      ctx,
    1369             :               fd_stem_context_t * stem,
    1370             :               uchar               hash[ static 32 ],
    1371           0 :               int                 is_skipped ) {
    1372           0 :   ulong hashcnt = ctx->hashcnt_per_tick*(1UL+(ctx->last_hashcnt/ctx->hashcnt_per_tick));
    1373             : 
    1374           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
    1375             : 
    1376           0 :   FD_TEST( ctx->last_slot>=ctx->reset_slot );
    1377           0 :   fd_entry_batch_meta_t * meta = (fd_entry_batch_meta_t *)dst;
    1378           0 :   if( FD_UNLIKELY( is_skipped ) ) {
    1379             :     /* We are publishing ticks for a skipped slot, the reference tick
    1380             :        and block complete flags should always be zero. */
    1381           0 :     meta->reference_tick = 0UL;
    1382           0 :     meta->block_complete = 0;
    1383           0 :   } else {
    1384           0 :     meta->reference_tick = hashcnt/ctx->hashcnt_per_tick;
    1385           0 :     meta->block_complete = hashcnt==ctx->hashcnt_per_slot;
    1386           0 :   }
    1387             : 
    1388           0 :   ulong slot = fd_ulong_if( meta->block_complete, ctx->slot-1UL, ctx->slot );
    1389           0 :   meta->parent_offset = 1UL+slot-ctx->reset_slot;
    1390             : 
    1391             :   /* From poh_reset we received the block_id for ctx->parent_slot.
    1392             :      Now we're telling shred tile to build on parent: (slot-meta->parent_offset).
    1393             :      The block_id that we're passing is valid iff the two are the same,
    1394             :      i.e. ctx->parent_slot == (slot-meta->parent_offset). */
    1395           0 :   meta->parent_block_id_valid = ctx->parent_slot == (slot-meta->parent_offset);
    1396           0 :   if( FD_LIKELY( meta->parent_block_id_valid ) ) {
    1397           0 :     fd_memcpy( meta->parent_block_id, ctx->parent_block_id, 32UL );
    1398           0 :   }
    1399             : 
    1400           0 :   FD_TEST( hashcnt>ctx->last_hashcnt );
    1401           0 :   ulong hash_delta = hashcnt-ctx->last_hashcnt;
    1402             : 
    1403           0 :   dst += sizeof(fd_entry_batch_meta_t);
    1404           0 :   fd_entry_batch_header_t * tick = (fd_entry_batch_header_t *)dst;
    1405           0 :   tick->hashcnt_delta = hash_delta;
    1406           0 :   fd_memcpy( tick->hash, hash, 32UL );
    1407           0 :   tick->txn_cnt = 0UL;
    1408             : 
    1409           0 :   ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
    1410           0 :   ulong sz = sizeof(fd_entry_batch_meta_t)+sizeof(fd_entry_batch_header_t);
    1411           0 :   ulong sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_MICROBLOCK, 0UL );
    1412           0 :   fd_stem_publish( stem, ctx->shred_out->idx, sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
    1413           0 :   ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
    1414           0 :   ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
    1415             : 
    1416           0 :   if( FD_UNLIKELY( hashcnt==ctx->hashcnt_per_slot ) ) {
    1417           0 :     ctx->last_slot++;
    1418           0 :     ctx->last_hashcnt = 0UL;
    1419           0 :   } else {
    1420           0 :     ctx->last_hashcnt = hashcnt;
    1421           0 :   }
    1422           0 : }
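/* Editorial worked example: publish_tick rounds last_hashcnt up to the
   next tick boundary, hashcnt = T*(1UL+last_hashcnt/T) with
   T==hashcnt_per_tick.  E.g. T==62500 and last_hashcnt==125000 (an
   exact boundary) gives hashcnt==187500, the *next* tick, which is
   why the FD_TEST in the body above can assert hashcnt>last_hashcnt. */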
    1423             : 
    1424             : static inline void
    1425             : publish_features_activation(  fd_poh_ctx_t *      ctx,
    1426           0 :                               fd_stem_context_t * stem ) {
    1427           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
    1428           0 :   fd_shred_features_activation_t * act_data = (fd_shred_features_activation_t *)dst;
    1429           0 :   fd_memcpy( act_data, ctx->features_activation, sizeof(fd_shred_features_activation_t) );
    1430             : 
    1431           0 :   ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
    1432           0 :   ulong sz = sizeof(fd_shred_features_activation_t);
    1433           0 :   ulong sig = fd_disco_poh_sig( ctx->slot, POH_PKT_TYPE_FEAT_ACT_SLOT, 0UL );
    1434           0 :   fd_stem_publish( stem, ctx->shred_out->idx, sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
    1435           0 :   ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
    1436           0 :   ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
    1437           0 : }
    1438             : 
    1439             : static inline void
    1440             : after_credit( fd_poh_ctx_t *      ctx,
    1441             :               fd_stem_context_t * stem,
    1442             :               int *               opt_poll_in,
    1443           0 :               int *               charge_busy ) {
    1444           0 :   ctx->stem = stem;
    1445             : 
    1446           0 :   FD_COMPILER_MFENCE();
    1447           0 :   if( FD_UNLIKELY( fd_poh_waiting_lock ) )  {
    1448           0 :     FD_VOLATILE( fd_poh_returned_lock ) = 1UL;
    1449           0 :     FD_COMPILER_MFENCE();
    1450           0 :     for(;;) {
    1451           0 :       if( FD_UNLIKELY( !FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
    1452           0 :       FD_SPIN_PAUSE();
    1453           0 :     }
    1454           0 :     FD_COMPILER_MFENCE();
    1455           0 :     FD_VOLATILE( fd_poh_waiting_lock ) = 0UL;
    1456           0 :     *opt_poll_in = 0;
    1457           0 :     *charge_busy = 1;
    1458           0 :     return;
    1459           0 :   }
    1460           0 :   FD_COMPILER_MFENCE();
    1461             : 
    1462           0 :   if( FD_UNLIKELY( ctx->features_activation_avail ) ) {
    1463             :     /* If we have received an update on features_activation, then
    1464             :         forward them to the shred tile.  In principle, this should
    1465             :         happen at most once per slot. */
    1466           0 :     publish_features_activation( ctx, stem );
    1467           0 :     ctx->features_activation_avail = 0UL;
    1468           0 :   }
    1469             : 
    1470           0 :   int is_leader = ctx->next_leader_slot!=ULONG_MAX && ctx->slot>=ctx->next_leader_slot;
    1471           0 :   if( FD_UNLIKELY( is_leader && !ctx->current_leader_bank ) ) {
    1472             :     /* If we are the leader, but we didn't yet learn what the leader
    1473             :        bank object is from the replay stage, do not do any hashing.
    1474             : 
    1475             :        This is not ideal, but greatly simplifies the control flow. */
    1476           0 :     return;
    1477           0 :   }
    1478             : 
    1479             :   /* If we have skipped ticks pending because we skipped some slots to
    1480             :      become leader, register them now one at a time. */
    1481           0 :   if( FD_UNLIKELY( is_leader && ctx->last_slot<ctx->slot ) ) {
    1482           0 :     ulong publish_hashcnt = ctx->last_hashcnt+ctx->hashcnt_per_tick;
    1483           0 :     ulong tick_idx = (ctx->last_slot*ctx->ticks_per_slot+publish_hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
    1484             : 
    1485           0 :     fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->skipped_tick_hashes[ tick_idx ] );
    1486           0 :     publish_tick( ctx, stem, ctx->skipped_tick_hashes[ tick_idx ], 1 );
    1487             : 
    1488             :     /* If we are catching up now and publishing a bunch of skipped
    1489             :        ticks, we do not want to process any incoming microblocks until
    1490             :        all the skipped ticks have been published out; otherwise we would
    1491             :        intersperse skipped tick messages with microblocks. */
    1492           0 :     *opt_poll_in = 0;
    1493           0 :     *charge_busy = 1;
    1494           0 :     return;
    1495           0 :   }
    1496             : 
    1497           0 :   int low_power_mode = ctx->hashcnt_per_tick==1UL;
    1498             : 
    1499             :   /* If we are the leader, always leave enough capacity in the slot so
    1500             :      that we can mixin any potential microblocks still coming from the
    1501             :      pack tile for this slot. */
    1502           0 :   ulong max_remaining_microblocks = ctx->max_microblocks_per_slot - ctx->microblocks_lower_bound;
    1503             :   /* With hashcnt_per_tick hashes per tick, we actually get
    1504             :      hashcnt_per_tick-1 chances to mix in a microblock.  For each tick
    1505             :      span that we need to reserve, we also need to reserve the hashcnt
    1506             :      for the tick, hence the +
    1507             :      max_remaining_microblocks/(hashcnt_per_tick-1) rounded up.
    1508             : 
    1509             :      However, if hashcnt_per_tick is 1 because we're in low power mode,
    1510             :      this should probably just be max_remaining_microblocks. */
    1511           0 :   ulong max_remaining_ticks_or_microblocks = max_remaining_microblocks;
    1512           0 :   if( FD_LIKELY( !low_power_mode ) ) max_remaining_ticks_or_microblocks += (max_remaining_microblocks+ctx->hashcnt_per_tick-2UL)/(ctx->hashcnt_per_tick-1UL);
    1513             : 
    1514           0 :   ulong restricted_hashcnt = fd_ulong_if( ctx->hashcnt_per_slot>=max_remaining_ticks_or_microblocks, ctx->hashcnt_per_slot-max_remaining_ticks_or_microblocks, 0UL );
    1515             : 
    1516           0 :   ulong min_hashcnt = ctx->hashcnt;
    1517             : 
    1518           0 :   if( FD_LIKELY( !low_power_mode ) ) {
    1519             :     /* Recall that there are two kinds of events that will get published
    1520             :        to the shredder,
    1521             : 
    1522             :          (a) Ticks. These occur every 62,500 (hashcnt_per_tick) hashcnts,
    1523             :              and there will be 64 (ticks_per_slot) of them in each slot.
    1524             : 
    1525             :              Ticks must not have any transactions mixed into the hash.
    1526             :              This is not strictly needed in theory, but is required by the
    1527             :              current consensus protocol.  They get published here in
    1528             :              after_credit.
    1529             : 
    1530             :          (b) Microblocks.  These can occur at any other hashcnt, as long
    1531             :              as it is not a tick.  Microblocks cannot be empty, and must
    1532             :              have at least one transaction mixed in.  These get
    1533             :              published in after_frag.
    1534             : 
    1535             :        If hashcnt_per_tick is 1, then we are in low power mode and the
    1536             :        following does not apply, since we can mix in transactions at any
    1537             :        time.
    1538             : 
    1539             :        In the normal, non-low-power mode, though, we have to be careful
    1540             :        to make sure that we do not publish microblocks on tick
    1541             :        boundaries.  To do that, we need to obey two rules:
    1542             :          (i)  after_credit must not leave hashcnt one before a tick
    1543             :               boundary
    1544             :          (ii) if after_credit begins one before a tick boundary, it must
    1545             :               advance hashcnt and publish the tick
    1546             : 
    1547             :        There's some interplay between min_hashcnt and restricted_hashcnt
    1548             :        here, and we need to show that there's always a value of
    1549             :        target_hashcnt we can pick such that
    1550             :            min_hashcnt <= target_hashcnt <= restricted_hashcnt.
    1551             :        We'll prove this by induction for current_slot==0 and
    1552             :        is_leader==true, since all other slots should be the same.
    1553             : 
    1554             :        Let m_j and r_j be the min_hashcnt and restricted_hashcnt
    1555             :        (respectively) for the jth call to after_credit in a slot.  We
    1556             :        want to show that for all values of j, it's possible to pick a
    1557             :        value h_j, the value of target_hashcnt for the jth call to
    1558             :        after_credit (which is also the value of hashcnt after
    1559             :        after_credit has completed) such that m_j<=h_j<=r_j.
    1560             : 
    1561             :        Additionally, let T be hashcnt_per_tick and N be ticks_per_slot.
    1562             : 
    1563             :        Starting with the base case, j==0.  m_j=0, and
    1564             :          r_0 = N*T - max_microblocks_per_slot
    1565             :                    - ceil(max_microblocks_per_slot/(T-1)).
    1566             : 
    1567             :        This is monotonic decreasing in max_microblocks_per_slot, so it
    1568             :        achieves its minimum when max_microblocks_per_slot is its
    1569             :        maximum.
    1570             :            r_0 >= N*T - N*(T-1) - ceil( (N*(T-1))/(T-1))
    1571             :                 = N*T - N*(T-1)-N = 0.
    1572             :        Thus, m_0 <= r_0, as desired.
    1573             : 
    1574             : 
    1575             : 
    1576             :        Then, for the inductive step, assume there exists h_j such that
    1577             :        m_j<=h_j<=r_j, and we want to show that there exists h_{j+1},
    1578             :        which is the same as showing m_{j+1}<=r_{j+1}.
    1579             : 
    1580             :        Let a_j be 1 if we had a microblock immediately following the jth
    1581             :        call to after_credit, and 0 otherwise.  Then hashcnt at the start
    1582             :        of the (j+1)th call to after_frag is h_j+a_j.
    1583             :        Also, set b_{j+1}=1 if we are in the case covered by rule (ii)
    1584             :        above during the (j+1)th call to after_credit, i.e. if
    1585             :        (h_j+a_j)%T==T-1.  Thus, m_{j+1} = h_j + a_j + b_{j+1}.
    1586             : 
    1587             :        If we received an additional microblock, then
    1588             :        max_remaining_microblocks goes down by 1, and
    1589             :        max_remaining_ticks_or_microblocks goes down by either 1 or 2,
    1590             :        which means restricted_hashcnt goes up by either 1 or 2.  In
    1591             :        particular, it goes up by 2 if the new value of
    1592             :        max_remaining_microblocks (at the start of the (j+1)th call to
    1593             :        after_credit) is congruent to 0 mod T-1.  Let b'_{j+1} be 1 if
    1594             :        this condition is met and 0 otherwise.  If we receive a
    1595             :        done_packing message, restricted_hashcnt can go up by more, but
    1596             :        we can ignore that case, since it is less restrictive.
    1597             :        Thus, r_{j+1}=r_j+a_j+b'_{j+1}.
    1598             : 
    1599             :        If h_j < r_j (strictly less), then h_j+a_j < r_j+a_j.  And thus,
    1600             :        since b_{j+1}<=b'_{j+1}+1, just by virtue of them both being
    1601             :        binary,
    1602             :              h_j + a_j + b_{j+1} <  r_j + a_j + b'_{j+1} + 1,
    1603             :        which is the same (for integers) as
    1604             :              h_j + a_j + b_{j+1} <= r_j + a_j + b'_{j+1},
    1605             :                  m_{j+1}         <= r_{j+1}
    1606             : 
    1607             :        On the other hand, if h_j==r_j, this is easy unless b_{j+1}==1,
    1608             :        which can also only happen if a_j==1.  Then (h_j+a_j)%T==T-1,
    1609             :        which means there's an integer k such that
    1610             : 
    1611             :              h_j+a_j==(ticks_per_slot-k)*T-1
    1612             :              h_j    ==ticks_per_slot*T -  k*(T-1)-1  - k-1
    1613             :                     ==ticks_per_slot*T - (k*(T-1)+1) - ceil( (k*(T-1)+1)/(T-1) )
    1614             : 
    1615             :        Since h_j==r_j in this case, and
    1616             :        r_j==(ticks_per_slot*T) - max_remaining_microblocks_j - ceil(max_remaining_microblocks_j/(T-1)),
    1617             :        we can see that the value of max_remaining_microblocks at the
    1618             :        start of the jth call to after_credit is k*(T-1)+1.  Again, since
    1619             :        a_j==1, then the value of max_remaining_microblocks at the start
    1620             :        of the j+1th call to after_credit decreases by 1 to k*(T-1),
    1621             :        which means b'_{j+1}=1.
    1622             : 
    1623             :        Thus, h_j + a_j + b_{j+1} == r_j + a_j + b'_{j+1}, so, in
    1624             :        particular, h_{j+1}<=r_{j+1} as desired. */
    1625           0 :      min_hashcnt += (ulong)(min_hashcnt%ctx->hashcnt_per_tick == (ctx->hashcnt_per_tick-1UL)); /* add b_{j+1}, enforcing rule (ii) */
    1626           0 :   }
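/* Editorial worked numeric instance of the bound proved above, with
   illustrative numbers: T==62500, N==64, so hashcnt_per_slot==4000000.
   If max_remaining_microblocks==1024 then
   max_remaining_ticks_or_microblocks == 1024 + ceil(1024/62499) == 1025
   and restricted_hashcnt == 4000000-1025 == 3998975, comfortably above
   any min_hashcnt early in the slot. */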
    1627             :   /* Now figure out how many hashes are needed to "catch up" the hash
    1628             :      count to the current system clock, and clamp it to the allowed
    1629             :      range. */
    1630           0 :   long now = fd_log_wallclock();
    1631           0 :   ulong target_hashcnt;
    1632           0 :   if( FD_LIKELY( !is_leader ) ) {
    1633           0 :     target_hashcnt = (ulong)((double)(now - ctx->reset_slot_start_ns) / ctx->hashcnt_duration_ns) - (ctx->slot-ctx->reset_slot)*ctx->hashcnt_per_slot;
    1634           0 :   } else {
    1635             :     /* We might have gotten very behind on hashes, but if we are leader
    1636             :        we want to catch up gradually over the remainder of our leader
    1637             :        slot, not all at once right now.  This helps keep the tile from
    1638             :        being oversubscribed and taking a long time to process incoming
    1639             :        microblocks. */
    1640           0 :     long expected_slot_start_ns = ctx->reset_slot_start_ns + (long)((double)(ctx->slot-ctx->reset_slot)*ctx->slot_duration_ns);
    1641           0 :     double actual_slot_duration_ns = ctx->slot_duration_ns<(double)(ctx->leader_bank_start_ns - expected_slot_start_ns) ? 0.0 : ctx->slot_duration_ns - (double)(ctx->leader_bank_start_ns - expected_slot_start_ns);
    1642           0 :     double actual_hashcnt_duration_ns = actual_slot_duration_ns / (double)ctx->hashcnt_per_slot;
    1643           0 :     target_hashcnt = fd_ulong_if( actual_hashcnt_duration_ns==0.0, restricted_hashcnt, (ulong)((double)(now - ctx->leader_bank_start_ns) / actual_hashcnt_duration_ns) );
    1644           0 :   }
    1645             :   /* Clamp to [min_hashcnt, restricted_hashcnt] as above */
    1646           0 :   target_hashcnt = fd_ulong_max( fd_ulong_min( target_hashcnt, restricted_hashcnt ), min_hashcnt );
    1647             : 
    1648             :   /* The above proof showed that it was always possible to pick a value
    1649             :      of target_hashcnt, but we still have a lot of freedom in how to
    1650             :      pick it.  It simplifies the code a lot if we don't keep going after
    1651             :      a tick in this function.  In particular, we want to publish at most
    1652             :      1 tick in this call, since otherwise we could consume infinite
    1653             :      credits to publish here.  The credits are set so that we should
    1654             :      only ever publish one tick during this loop.  Also, all the extra
    1655             :      stuff (leader transitions, publishing ticks, etc.) we have to do
    1656             :      happens at tick boundaries, so this lets us consolidate all those
    1657             :      cases.
    1658             : 
    1659             :      Mathematically, since the current value of hashcnt is h_j+a_j, the
    1660             :      next tick (advancing a full tick if we're currently at a tick) is
    1661             :      t_{j+1} = T*(floor( (h_j+a_j)/T )+1).  We need to show that if we set
    1662             :      h'_{j+1} = min( h_{j+1}, t_{j+1} ), it is still valid.
    1663             : 
    1664             :      First, h'_{j+1} <= h_{j+1} <= r_{j+1}, so we're okay in that
    1665             :      direction.
    1666             : 
    1667             :      Next, observe that t_{j+1}>=h_j + a_j + 1, and recall that b_{j+1}
    1668             :      is 0 or 1. So then,
    1669             :                     t_{j+1} >= h_j+a_j+b_{j+1} = m_{j+1}.
    1670             : 
    1671             :      We know h_{j+1} >= m_{j+1} from before, so then h'_{j+1} >=
    1672             :      m_{j+1}, as desired. */
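                     : 
                     :   /* Worked example (illustrative numbers): with T = hashcnt_per_tick
                     :      = 12500 and h_j+a_j = 24999, the next tick boundary is t_{j+1} =
                     :      12500*(floor(24999/12500)+1) = 25000, so we stop after at most
                     :      one more hash to publish the tick.  If we are exactly at a tick
                     :      (h_j+a_j = 25000), t_{j+1} = 37500 and we may advance a full
                     :      tick. */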
    1673             : 
    1674           0 :   ulong next_tick_hashcnt = ctx->hashcnt_per_tick * (1UL+(ctx->hashcnt/ctx->hashcnt_per_tick));
    1675           0 :   target_hashcnt = fd_ulong_min( target_hashcnt, next_tick_hashcnt );
    1676             : 
    1677             :   /* We still need to enforce rule (i). We know that min_hashcnt%T !=
    1678             :      T-1 because of rule (ii).  That means that if target_hashcnt%T ==
    1679             :      T-1 at this point, target_hashcnt > min_hashcnt (notice the
    1680             :      strict), so target_hashcnt-1 >= min_hashcnt and is thus still a
    1681             :      valid choice for target_hashcnt. */
    1682           0 :   target_hashcnt -= (ulong)( (!low_power_mode) & ((target_hashcnt%ctx->hashcnt_per_tick)==(ctx->hashcnt_per_tick-1UL)) );
    1683             : 
    1684           0 :   FD_TEST( target_hashcnt >= ctx->hashcnt       );
    1685           0 :   FD_TEST( target_hashcnt >= min_hashcnt        );
    1686           0 :   FD_TEST( target_hashcnt <= restricted_hashcnt );
    1687             : 
    1688           0 :   if( FD_UNLIKELY( ctx->hashcnt==target_hashcnt ) ) return; /* Nothing to do, don't publish a tick twice */
    1689             : 
    1690           0 :   *charge_busy = 1;
    1691             : 
    1692           0 :   while( ctx->hashcnt<target_hashcnt ) {
    1693           0 :     fd_sha256_hash( ctx->hash, 32UL, ctx->hash );
    1694           0 :     ctx->hashcnt++;
    1695           0 :   }
    1696             : 
    1697           0 :   if( FD_UNLIKELY( ctx->hashcnt==ctx->hashcnt_per_slot ) ) {
    1698           0 :     ctx->slot++;
    1699           0 :     ctx->hashcnt = 0UL;
    1700           0 :   }
    1701             : 
    1702           0 :   if( FD_UNLIKELY( !is_leader && !(ctx->hashcnt%ctx->hashcnt_per_tick ) ) ) {
    1703             :     /* We finished a tick while not leader... save the current hash so
    1704             :        it can be played back into the bank when we become the leader. */
    1705           0 :     ulong tick_idx = (ctx->slot*ctx->ticks_per_slot+ctx->hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
    1706           0 :     fd_memcpy( ctx->skipped_tick_hashes[ tick_idx ], ctx->hash, 32UL );
    1707             : 
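                     :     /* If the ring index has wrapped all the way back around to the
                     :        first skipped tick, older hashes would be overwritten before
                     :        they could be played back into the bank, so we cannot safely
                     :        continue. */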
    1708           0 :     ulong initial_tick_idx = (ctx->last_slot*ctx->ticks_per_slot+ctx->last_hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
    1709           0 :     if( FD_UNLIKELY( tick_idx==initial_tick_idx ) ) FD_LOG_ERR(( "Too many skipped ticks from slot %lu to slot %lu, chain must halt", ctx->last_slot, ctx->slot ));
    1710           0 :   }
    1711             : 
    1712           0 :   if( FD_UNLIKELY( is_leader && !(ctx->hashcnt%ctx->hashcnt_per_tick) ) ) {
    1713             :     /* We ticked while leader... tell the leader bank. */
    1714           0 :     fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->hash );
    1715             : 
    1716             :     /* And send an empty microblock (a tick) to the shred tile. */
    1717           0 :     publish_tick( ctx, stem, ctx->hash, 0 );
    1718           0 :   }
    1719             : 
    1720           0 :   if( FD_UNLIKELY( !is_leader && ctx->slot>=ctx->next_leader_slot ) ) {
    1721             :     /* We ticked while not leader and are now leader... transition
    1722             :        the state machine. */
    1723           0 :     publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->reset_slot );
    1724           0 :     FD_LOG_INFO(( "fd_poh_ticked_into_leader(slot=%lu, reset_slot=%lu)", ctx->next_leader_slot, ctx->reset_slot ));
    1725           0 :   }
    1726             : 
    1727           0 :   if( FD_UNLIKELY( is_leader && ctx->slot>ctx->next_leader_slot ) ) {
    1728             :     /* We ticked while leader and are no longer leader... transition
    1729             :        the state machine. */
    1730           0 :     FD_TEST( !max_remaining_microblocks );
    1731           0 :     publish_plugin_slot_end( ctx, ctx->next_leader_slot, ctx->cus_used );
    1732           0 :     FD_LOG_INFO(( "fd_poh_ticked_outof_leader(slot=%lu)", ctx->next_leader_slot ));
    1733             : 
    1734           0 :     no_longer_leader( ctx );
    1735           0 :     ctx->expect_sequential_leader_slot = ctx->slot;
    1736             : 
    1737           0 :     double tick_per_ns = fd_tempo_tick_per_ns( NULL );
    1738           0 :     fd_histf_sample( ctx->slot_done_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
    1739           0 :     ctx->next_leader_slot = next_leader_slot( ctx );
    1740             : 
    1741           0 :     if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
    1742             :       /* We finished a leader slot, and are immediately leader for the
    1743             :          following slot... transition. */
    1744           0 :       publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->next_leader_slot-1UL );
    1745           0 :       FD_LOG_INFO(( "fd_poh_ticked_into_leader(slot=%lu, reset_slot=%lu)", ctx->next_leader_slot, ctx->next_leader_slot-1UL ));
    1746           0 :     }
    1747           0 :   }
    1748           0 : }
    1749             : 
    1750             : static inline void
    1751           0 : during_housekeeping( fd_poh_ctx_t * ctx ) {
    1752           0 :   if( FD_UNLIKELY( maybe_change_identity( ctx, 0 ) ) ) {
    1753           0 :     ctx->next_leader_slot = next_leader_slot( ctx );
    1754           0 :     FD_LOG_INFO(( "fd_poh_identity_changed(next_leader_slot=%lu)", ctx->next_leader_slot ));
    1755             : 
    1756             :     /* Signal replay to check if we are leader again, in case it's
    1757             :        stuck because everything has already been replayed. */
    1758           0 :     FD_COMPILER_MFENCE();
    1759           0 :     fd_ext_poh_signal_leader_change( ctx->signal_leader_change );
    1760           0 :   }
    1761           0 : }
    1762             : 
    1763             : static inline void
    1764           0 : metrics_write( fd_poh_ctx_t * ctx ) {
    1765           0 :   FD_MHIST_COPY( POH, BEGIN_LEADER_DELAY_SECONDS,      ctx->begin_leader_delay     );
    1766           0 :   FD_MHIST_COPY( POH, FIRST_MICROBLOCK_DELAY_SECONDS,  ctx->first_microblock_delay );
    1767           0 :   FD_MHIST_COPY( POH, SLOT_DONE_DELAY_SECONDS,         ctx->slot_done_delay        );
    1768           0 :   FD_MHIST_COPY( POH, BUNDLE_INITIALIZE_DELAY_SECONDS, ctx->bundle_init_delay      );
    1769           0 : }
    1770             : 
    1771             : static int
    1772             : before_frag( fd_poh_ctx_t * ctx,
    1773             :              ulong          in_idx,
    1774             :              ulong          seq,
    1775           0 :              ulong          sig ) {
    1776           0 :   (void)seq;
    1777             : 
    1778           0 :   if( FD_LIKELY( ctx->in_kind[ in_idx ]==IN_KIND_BANK ) ) {
    1779           0 :     ulong microblock_idx = fd_disco_bank_sig_microblock_idx( sig );
    1780           0 :     FD_TEST( microblock_idx>=ctx->expect_microblock_idx );
    1781             : 
    1782             :     /* Return the fragment to stem so we can process it later, if it's
    1783             :        not next in the sequence.  E.g. if microblock 2 arrives before
                     :        microblock 1, we return it and pick it up again after 1. */
    1784           0 :     if( FD_UNLIKELY( microblock_idx>ctx->expect_microblock_idx ) ) return -1;
    1785             : 
    1786           0 :     ctx->expect_microblock_idx++;
    1787           0 :   }
    1788             : 
    1789           0 :   return 0;
    1790           0 : }
    1791             : 
    1792             : static inline void
    1793             : during_frag( fd_poh_ctx_t * ctx,
    1794             :              ulong          in_idx,
    1795             :              ulong          seq FD_PARAM_UNUSED,
    1796             :              ulong          sig,
    1797             :              ulong          chunk,
    1798             :              ulong          sz,
    1799           0 :              ulong          ctl FD_PARAM_UNUSED ) {
    1800             : 
    1801           0 :   ctx->skip_frag = 0;
    1802             : 
    1803           0 :   if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_STAKE ) ) {
    1804           0 :     if( FD_UNLIKELY( chunk<ctx->in[ in_idx ].chunk0 || chunk>ctx->in[ in_idx ].wmark ) )
    1805           0 :       FD_LOG_ERR(( "chunk %lu %lu corrupt, not in range [%lu,%lu]", chunk, sz,
    1806           0 :             ctx->in[ in_idx ].chunk0, ctx->in[ in_idx ].wmark ));
    1807             : 
    1808           0 :     uchar const * dcache_entry = fd_chunk_to_laddr_const( ctx->in[ in_idx ].mem, chunk );
    1809           0 :     fd_multi_epoch_leaders_stake_msg_init( ctx->mleaders, fd_type_pun_const( dcache_entry ) );
    1810           0 :     return;
    1811           0 :   }
    1812             : 
    1813           0 :   ulong pkt_type;
    1814           0 :   ulong slot;
    1815           0 :   switch( ctx->in_kind[ in_idx ] ) {
    1816           0 :     case IN_KIND_BANK: {
    1817           0 :       pkt_type = POH_PKT_TYPE_MICROBLOCK;
    1818           0 :       slot = fd_disco_bank_sig_slot( sig );
    1819           0 :       break;
    1820           0 :     }
    1821           0 :     case IN_KIND_PACK: {
    1822           0 :       pkt_type = fd_disco_poh_sig_pkt_type( sig );
    1823           0 :       slot = fd_disco_poh_sig_slot( sig );
    1824           0 :       break;
    1825           0 :     }
    1826           0 :     default:
    1827           0 :       FD_LOG_ERR(( "unexpected in_kind %d", ctx->in_kind[ in_idx ] ));
    1828           0 :   }
    1829             : 
    1830           0 :   int is_frag_for_prior_leader_slot = 0;
    1831           0 :   if( FD_LIKELY( pkt_type==POH_PKT_TYPE_DONE_PACKING || pkt_type==POH_PKT_TYPE_MICROBLOCK ) ) {
    1832             :     /* The following sequence is possible...
    1833             : 
    1834             :         1. We become leader in slot 10
    1835             :         2. While leader, we switch to a fork that is on slot 8, where
    1836             :             we are leader
    1837             :         3. We get the in-flight microblocks for slot 10
    1838             : 
    1839             :       These in-flight microblocks need to be dropped, so we check
    1840             :       against the high water mark (highwater_leader_slot) rather than
    1841             :       the current hashcnt here when determining what to drop.
    1842             : 
    1843             :       We know that if the slot is lower than the high water mark, it's from
    1844             :       a stale leader slot, because we will not become leader for the same
    1845             :       slot twice, even if we are reset back in time (to prevent duplicate blocks). */
    1846           0 :     is_frag_for_prior_leader_slot = slot<ctx->highwater_leader_slot;
    1847           0 :   }
    1848             : 
    1849           0 :   if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_PACK ) ) {
    1850             :     /* Once pack reports it is done packing, we know the real number of
    1851             :        microblocks published, so set an exact bound for when we have
                     :        received them all. */
    1852           0 :     ctx->skip_frag = 1;
    1853           0 :     if( pkt_type==POH_PKT_TYPE_DONE_PACKING ) {
    1854           0 :       if( FD_UNLIKELY( is_frag_for_prior_leader_slot ) ) return;
    1855             : 
    1856           0 :       FD_TEST( ctx->microblocks_lower_bound<=ctx->max_microblocks_per_slot );
    1857           0 :       fd_done_packing_t const * done_packing = fd_chunk_to_laddr( ctx->in[ in_idx ].mem, chunk );
    1858           0 :       FD_LOG_INFO(( "done_packing(slot=%lu,seen_microblocks=%lu,microblocks_in_slot=%lu)",
    1859           0 :                     ctx->slot,
    1860           0 :                     ctx->microblocks_lower_bound,
    1861           0 :                     done_packing->microblocks_in_slot ));
    1862           0 :       ctx->microblocks_lower_bound += ctx->max_microblocks_per_slot - done_packing->microblocks_in_slot;
    1863           0 :     }
    1864           0 :     return;
    1865           0 :   } else {
    1866           0 :     if( FD_UNLIKELY( chunk<ctx->in[ in_idx ].chunk0 || chunk>ctx->in[ in_idx ].wmark || sz>USHORT_MAX ) )
    1867           0 :       FD_LOG_ERR(( "chunk %lu %lu corrupt, not in range [%lu,%lu]", chunk, sz, ctx->in[ in_idx ].chunk0, ctx->in[ in_idx ].wmark ));
    1868             : 
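                     :     /* Bank frags are laid out as
                     :          [ txn_cnt * fd_txn_p_t | fd_microblock_trailer_t ]
                     :        so stash both parts locally, since the producer may overwrite
                     :        the dcache entry while we process it. */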
    1869           0 :     uchar * src = (uchar *)fd_chunk_to_laddr( ctx->in[ in_idx ].mem, chunk );
    1870             : 
    1871           0 :     fd_memcpy( ctx->_txns, src, sz-sizeof(fd_microblock_trailer_t) );
    1872           0 :     fd_memcpy( ctx->_microblock_trailer, src+sz-sizeof(fd_microblock_trailer_t), sizeof(fd_microblock_trailer_t) );
    1873             : 
    1874           0 :     ctx->skip_frag = is_frag_for_prior_leader_slot;
    1875           0 :   }
    1876           0 : }
    1877             : 
    1878             : static void
    1879             : publish_microblock( fd_poh_ctx_t *      ctx,
    1880             :                     fd_stem_context_t * stem,
    1881             :                     ulong               slot,
    1882             :                     ulong               hashcnt_delta,
    1883           0 :                     ulong               txn_cnt ) {
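                     :   /* The outgoing entry batch frag is laid out as
                     :        [ fd_entry_batch_meta_t | fd_entry_batch_header_t | txn payloads ... ]
                     :      where only transactions that executed successfully are included. */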
    1884           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
    1885           0 :   FD_TEST( slot>=ctx->reset_slot );
    1886           0 :   fd_entry_batch_meta_t * meta = (fd_entry_batch_meta_t *)dst;
    1887           0 :   meta->parent_offset = 1UL+slot-ctx->reset_slot;
    1888           0 :   meta->reference_tick = (ctx->hashcnt/ctx->hashcnt_per_tick) % ctx->ticks_per_slot;
    1889           0 :   meta->block_complete = !ctx->hashcnt;
    1890             : 
    1891             :   /* Refer to publish_tick() for details on meta->parent_block_id_valid. */
    1892           0 :   meta->parent_block_id_valid = ctx->parent_slot == (slot-meta->parent_offset);
    1893           0 :   if( FD_LIKELY( meta->parent_block_id_valid ) ) {
    1894           0 :     fd_memcpy( meta->parent_block_id, ctx->parent_block_id, 32UL );
    1895           0 :   }
    1896             : 
    1897           0 :   dst += sizeof(fd_entry_batch_meta_t);
    1898           0 :   fd_entry_batch_header_t * header = (fd_entry_batch_header_t *)dst;
    1899           0 :   header->hashcnt_delta = hashcnt_delta;
    1900           0 :   fd_memcpy( header->hash, ctx->hash, 32UL );
    1901             : 
    1902           0 :   dst += sizeof(fd_entry_batch_header_t);
    1903           0 :   ulong payload_sz = 0UL;
    1904           0 :   ulong included_txn_cnt = 0UL;
    1905           0 :   for( ulong i=0UL; i<txn_cnt; i++ ) {
    1906           0 :     fd_txn_p_t * txn = (fd_txn_p_t *)(ctx->_txns + i*sizeof(fd_txn_p_t));
    1907           0 :     if( FD_UNLIKELY( !(txn->flags & FD_TXN_P_FLAGS_EXECUTE_SUCCESS) ) ) continue;
    1908             : 
    1909           0 :     fd_memcpy( dst, txn->payload, txn->payload_sz );
    1910           0 :     payload_sz += txn->payload_sz;
    1911           0 :     dst        += txn->payload_sz;
    1912           0 :     included_txn_cnt++;
    1913           0 :   }
    1914           0 :   header->txn_cnt = included_txn_cnt;
    1915             : 
    1916             :   /* We always have credits to publish here, because the stem burst
    1917             :      value (STEM_BURST below) reserves enough credits that at most one
    1918             :      publish_tick() and one publish_became_leader() still leave a
    1919             :      credit here to publish the microblock. */
    1920           0 :   ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
    1921           0 :   ulong sz = sizeof(fd_entry_batch_meta_t)+sizeof(fd_entry_batch_header_t)+payload_sz;
    1922           0 :   ulong new_sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_MICROBLOCK, 0UL );
    1923           0 :   fd_stem_publish( stem, ctx->shred_out->idx, new_sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
    1924           0 :   ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
    1925           0 :   ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
    1926           0 : }
    1927             : 
    1928             : static inline void
    1929             : after_frag( fd_poh_ctx_t *      ctx,
    1930             :             ulong               in_idx,
    1931             :             ulong               seq,
    1932             :             ulong               sig,
    1933             :             ulong               sz,
    1934             :             ulong               tsorig,
    1935             :             ulong               tspub,
    1936           0 :             fd_stem_context_t * stem ) {
    1937           0 :   (void)in_idx;
    1938           0 :   (void)seq;
    1939           0 :   (void)tsorig;
    1940           0 :   (void)tspub;
    1941             : 
    1942           0 :   if( FD_UNLIKELY( ctx->skip_frag ) ) return;
    1943             : 
    1944           0 :   if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_STAKE ) ) {
    1945           0 :     fd_multi_epoch_leaders_stake_msg_fini( ctx->mleaders );
    1946             :     /* It might seem like we do not need to do state transitions in and
    1947             :        out of being the leader here, since leader schedule updates are
    1948             :        always one epoch in advance (whether we are leader or not would
    1949             :        never change for the currently executing slot), but this is not
    1950             :        true for new ledgers when the validator first boots.  We will
    1951             :        likely be the leader in slot 1, and get notified of the leader
    1952             :        schedule for that slot while we are still in it.
    1953             : 
    1954             :        For safety we just handle both transitions, in and out, although
    1955             :        the only one possible should be into leader. */
    1956           0 :     ulong next_leader_slot_after_frag = next_leader_slot( ctx );
    1957             : 
    1958           0 :     int currently_leader  = ctx->slot>=ctx->next_leader_slot;
    1959           0 :     int leader_after_frag = ctx->slot>=next_leader_slot_after_frag;
    1960             : 
    1961           0 :     FD_LOG_INFO(( "stake_update(before_leader=%lu,after_leader=%lu)",
    1962           0 :                   ctx->next_leader_slot,
    1963           0 :                   next_leader_slot_after_frag ));
    1964             : 
    1965           0 :     ctx->next_leader_slot = next_leader_slot_after_frag;
    1966           0 :     if( FD_UNLIKELY( currently_leader && !leader_after_frag ) ) {
    1967             :       /* Shouldn't ever happen, otherwise we need to do a state
    1968             :          transition out of being leader. */
    1969           0 :       FD_LOG_ERR(( "stake update caused us to no longer be leader in an active slot" ));
    1970           0 :     }
    1971             : 
    1972             :     /* Nothing to do if we transition into being leader, since it
    1973             :        will just get picked up by the regular tick loop. */
    1974           0 :     if( FD_UNLIKELY( !currently_leader && leader_after_frag ) ) {
    1975           0 :       publish_plugin_slot_start( ctx, next_leader_slot_after_frag, ctx->reset_slot );
    1976           0 :     }
    1977             : 
    1978           0 :     return;
    1979           0 :   }
    1980             : 
    1981           0 :   if( FD_UNLIKELY( !ctx->microblocks_lower_bound ) ) {
    1982           0 :     double tick_per_ns = fd_tempo_tick_per_ns( NULL );
    1983           0 :     fd_histf_sample( ctx->first_microblock_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
    1984           0 :   }
    1985             : 
    1986           0 :   ulong target_slot = fd_disco_bank_sig_slot( sig );
    1987             : 
    1988           0 :   if( FD_UNLIKELY( target_slot!=ctx->next_leader_slot || target_slot!=ctx->slot ) ) {
    1989           0 :     FD_LOG_ERR(( "packed too early or late target_slot=%lu, current_slot=%lu. highwater_leader_slot=%lu",
    1990           0 :                  target_slot, ctx->slot, ctx->highwater_leader_slot ));
    1991           0 :   }
    1992             : 
    1993           0 :   FD_TEST( ctx->current_leader_bank );
    1994           0 :   FD_TEST( ctx->microblocks_lower_bound<ctx->max_microblocks_per_slot );
    1995           0 :   ctx->microblocks_lower_bound += 1UL;
    1996             : 
    1997           0 :   ulong txn_cnt = (sz-sizeof(fd_microblock_trailer_t))/sizeof(fd_txn_p_t);
    1998           0 :   fd_txn_p_t * txns = (fd_txn_p_t *)(ctx->_txns);
    1999           0 :   ulong executed_txn_cnt = 0UL;
    2000           0 :   ulong cus_used         = 0UL;
    2001           0 :   for( ulong i=0UL; i<txn_cnt; i++ ) {
    2002             :     /* It's important that we check whether a transaction was included
    2003             :        in the block via FD_TXN_P_FLAGS_EXECUTE_SUCCESS, since
    2004             :        actual_consumed_cus may have a nonzero value even for excluded
    2005             :        transactions (the value is used for monitoring purposes). */
    2006           0 :     if( FD_LIKELY( txns[ i ].flags & FD_TXN_P_FLAGS_EXECUTE_SUCCESS ) ) {
    2007           0 :       executed_txn_cnt++;
    2008           0 :       cus_used += txns[ i ].bank_cu.actual_consumed_cus;
    2009           0 :     }
    2010           0 :   }
    2011             : 
    2012             :   /* We don't publish transactions that fail to execute.  If all the
    2013             :      transactions failed to execute, the microblock would be empty,
    2014             :      causing the Agave client to think it's a tick and complain.  Instead,
    2015             :      we just skip the microblock and don't hash or update the hashcnt. */
    2016           0 :   if( FD_UNLIKELY( !executed_txn_cnt ) ) return;
    2017             : 
    2018           0 :   uchar data[ 64 ];
    2019           0 :   fd_memcpy( data, ctx->hash, 32UL );
    2020           0 :   fd_memcpy( data+32UL, ctx->_microblock_trailer->hash, 32UL );
    2021           0 :   fd_sha256_hash( data, 64UL, ctx->hash );
    2022             : 
    2023           0 :   ctx->hashcnt++;
    2024           0 :   FD_TEST( ctx->hashcnt>ctx->last_hashcnt );
    2025           0 :   ulong hashcnt_delta = ctx->hashcnt - ctx->last_hashcnt;
    2026             : 
    2027             :   /* The hashing loop above will never leave us exactly one away from
    2028             :      crossing a tick boundary, so this increment will never cause the
    2029             :      current tick (or the slot) to change, except in low power mode
    2030             :      for development, in which case we do need to register the tick
    2031             :      with the leader bank.  We don't need to publish the tick since
    2032             :      sending the microblock below is the publishing action. */
    2033           0 :   if( FD_UNLIKELY( !(ctx->hashcnt%ctx->hashcnt_per_slot ) ) ) {
    2034           0 :     ctx->slot++;
    2035           0 :     ctx->hashcnt = 0UL;
    2036           0 :   }
    2037             : 
    2038           0 :   ctx->last_slot    = ctx->slot;
    2039           0 :   ctx->last_hashcnt = ctx->hashcnt;
    2040             : 
    2041           0 :   ctx->cus_used += cus_used;
    2042             : 
    2043           0 :   if( FD_UNLIKELY( !(ctx->hashcnt%ctx->hashcnt_per_tick ) ) ) {
    2044           0 :     fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->hash );
    2045           0 :     if( FD_UNLIKELY( ctx->slot>ctx->next_leader_slot ) ) {
    2046             :       /* We ticked while leader and are no longer leader... transition
    2047             :          the state machine. */
    2048           0 :       publish_plugin_slot_end( ctx, ctx->next_leader_slot, ctx->cus_used );
    2049             : 
    2050           0 :       no_longer_leader( ctx );
    2051             : 
    2052           0 :       if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
    2053             :         /* We finished a leader slot, and are immediately leader for the
    2054             :            following slot... transition. */
    2055           0 :         publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->next_leader_slot-1UL );
    2056           0 :       }
    2057           0 :     }
    2058           0 :   }
    2059             : 
    2060           0 :   publish_microblock( ctx, stem, target_slot, hashcnt_delta, txn_cnt );
    2061           0 : }
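                     : 
                     : /* A minimal sketch (not part of this tile) of how a consumer of the
                     :    entry batches above might replay one entry, mirroring the hashing
                     :    in after_credit and after_frag: hashcnt_delta-1 plain hashes,
                     :    followed by either one more plain hash (a tick) or one final hash
                     :    that mixes in the microblock hash.  verify_entry and its argument
                     :    names are hypothetical, for illustration only. */
                     : 
                     : FD_FN_UNUSED static int
                     : verify_entry( uchar *       prev_hash,     /* in/out: hash of the prior entry */
                     :               ulong         hashcnt_delta, /* from fd_entry_batch_header_t */
                     :               uchar const * mixin,         /* 32 byte microblock hash, or NULL for a tick */
                     :               uchar const * expected ) {   /* hash published in this entry */
                     :   for( ulong i=0UL; i+1UL<hashcnt_delta; i++ ) fd_sha256_hash( prev_hash, 32UL, prev_hash );
                     :   if( !mixin ) fd_sha256_hash( prev_hash, 32UL, prev_hash ); /* tick: final plain hash */
                     :   else {
                     :     uchar data[ 64 ];
                     :     fd_memcpy( data,      prev_hash, 32UL );
                     :     fd_memcpy( data+32UL, mixin,     32UL );
                     :     fd_sha256_hash( data, 64UL, prev_hash ); /* mix in the microblock hash */
                     :   }
                     :   return !memcmp( prev_hash, expected, 32UL );
                     : }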
    2062             : 
    2063             : static void
    2064             : privileged_init( fd_topo_t *      topo,
    2065           0 :                  fd_topo_tile_t * tile ) {
    2066           0 :   void * scratch = fd_topo_obj_laddr( topo, tile->tile_obj_id );
    2067             : 
    2068           0 :   FD_SCRATCH_ALLOC_INIT( l, scratch );
    2069           0 :   fd_poh_ctx_t * ctx = FD_SCRATCH_ALLOC_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
    2070             : 
    2071           0 :   if( FD_UNLIKELY( !strcmp( tile->poh.identity_key_path, "" ) ) )
    2072           0 :     FD_LOG_ERR(( "identity_key_path not set" ));
    2073             : 
    2074           0 :   const uchar * identity_key = fd_keyload_load( tile->poh.identity_key_path, /* pubkey only: */ 1 );
    2075           0 :   fd_memcpy( ctx->identity_key.uc, identity_key, 32UL );
    2076             : 
    2077           0 :   if( FD_UNLIKELY( !tile->poh.bundle.vote_account_path[0] ) ) {
    2078           0 :     tile->poh.bundle.enabled = 0;
    2079           0 :   }
    2080           0 :   if( FD_UNLIKELY( tile->poh.bundle.enabled ) ) {
    2081           0 :     if( FD_UNLIKELY( !fd_base58_decode_32( tile->poh.bundle.vote_account_path, ctx->bundle.vote_account.uc ) ) ) {
    2082           0 :       const uchar * vote_key = fd_keyload_load( tile->poh.bundle.vote_account_path, /* pubkey only: */ 1 );
    2083           0 :       fd_memcpy( ctx->bundle.vote_account.uc, vote_key, 32UL );
    2084           0 :     }
    2085           0 :   }
    2086           0 : }
    2087             : 
    2088             : /* The Agave client needs to communicate to the shred tile what
    2089             :    the shred version is on boot, but the shred tile does not live in
    2090             :    the same address space, so we have the PoH tile pass the value
    2091             :    through via a shared memory ulong. */
    2092             : 
    2093             : static volatile ulong * fd_shred_version;
    2094             : 
    2095             : void
    2096           0 : fd_ext_shred_set_shred_version( ulong shred_version ) {
    2097           0 :   while( FD_UNLIKELY( !fd_shred_version ) ) FD_SPIN_PAUSE();
    2098           0 :   *fd_shred_version = shred_version;
    2099           0 : }
    2100             : 
    2101             : void
    2102             : fd_ext_poh_publish_gossip_vote( uchar * data,
    2103           0 :                                 ulong   data_len ) {
    2104           0 :   poh_link_publish( &gossip_dedup, 1UL, data, data_len );
    2105           0 : }
    2106             : 
    2107             : void
    2108             : fd_ext_poh_publish_leader_schedule( uchar * data,
    2109           0 :                                     ulong   data_len ) {
    2110           0 :   poh_link_publish( &stake_out, 2UL, data, data_len );
    2111           0 : }
    2112             : 
    2113             : void
    2114             : fd_ext_poh_publish_cluster_info( uchar * data,
    2115           0 :                                  ulong   data_len ) {
    2116           0 :   poh_link_publish( &crds_shred, 2UL, data, data_len );
    2117           0 : }
    2118             : 
    2119             : void
    2120           0 : fd_ext_poh_publish_executed_txn( uchar const * data  ) {
    2121           0 :   static int lock = 0;
    2122             : 
    2123             :   /* Need to lock since the link publisher is not concurrent, and replay
    2124             :      happens on a thread pool. */
    2125           0 :   for(;;) {
    2126           0 :     if( FD_LIKELY( FD_ATOMIC_CAS( &lock, 0, 1 )==0 ) ) break;
    2127           0 :     FD_SPIN_PAUSE();
    2128           0 :   }
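                     :   /* Lock acquired.  The compiler fences below keep the publish from
                     :      being reordered outside the critical section. */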
    2129             : 
    2130           0 :   FD_COMPILER_MFENCE();
    2131           0 :   poh_link_publish( &executed_txn, 0UL, data, 64UL );
    2132           0 :   FD_COMPILER_MFENCE();
    2133             : 
    2134           0 :   FD_VOLATILE(lock) = 0;
    2135           0 : }
    2136             : 
    2137             : void
    2138             : fd_ext_plugin_publish_replay_stage( ulong   sig,
    2139             :                                     uchar * data,
    2140           0 :                                     ulong   data_len ) {
    2141           0 :   poh_link_publish( &replay_plugin, sig, data, data_len );
    2142           0 : }
    2143             : 
    2144             : void
    2145             : fd_ext_plugin_publish_genesis_hash( ulong   sig,
    2146             :                                     uchar * data,
    2147           0 :                                     ulong   data_len ) {
    2148           0 :   poh_link_publish( &replay_plugin, sig, data, data_len );
    2149           0 : }
    2150             : 
    2151             : void
    2152             : fd_ext_plugin_publish_start_progress( ulong   sig,
    2153             :                                       uchar * data,
    2154           0 :                                       ulong   data_len ) {
    2155           0 :   poh_link_publish( &start_progress_plugin, sig, data, data_len );
    2156           0 : }
    2157             : 
    2158             : void
    2159             : fd_ext_plugin_publish_vote_listener( ulong   sig,
    2160             :                                      uchar * data,
    2161           0 :                                      ulong   data_len ) {
    2162           0 :   poh_link_publish( &vote_listener_plugin, sig, data, data_len );
    2163           0 : }
    2164             : 
    2165             : void
    2166             : fd_ext_plugin_publish_validator_info( ulong   sig,
    2167             :                                       uchar * data,
    2168           0 :                                       ulong   data_len ) {
    2169           0 :   poh_link_publish( &validator_info_plugin, sig, data, data_len );
    2170           0 : }
    2171             : 
    2172             : void
    2173             : fd_ext_plugin_publish_periodic( ulong   sig,
    2174             :                                 uchar * data,
    2175           0 :                                 ulong   data_len ) {
    2176           0 :   poh_link_publish( &gossip_plugin, sig, data, data_len );
    2177           0 : }
    2178             : 
    2179             : void
    2180             : fd_ext_resolv_publish_root_bank( uchar * data,
    2181           0 :                                  ulong   data_len ) {
    2182           0 :   poh_link_publish( &replay_resolv, 0UL, data, data_len );
    2183           0 : }
    2184             : 
    2185             : void
    2186             : fd_ext_resolv_publish_completed_blockhash( uchar * data,
    2187           0 :                                            ulong   data_len ) {
    2188           0 :   poh_link_publish( &replay_resolv, 1UL, data, data_len );
    2189           0 : }
    2190             : 
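                     : /* out1 returns the index and dcache addressing info of the unique
                     :    output link on this tile with the given name, logging an error if
                     :    the link is missing or duplicated. */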
    2191             : static inline fd_poh_out_ctx_t
    2192             : out1( fd_topo_t const *      topo,
    2193             :       fd_topo_tile_t const * tile,
    2194           0 :       char const *           name ) {
    2195           0 :   ulong idx = ULONG_MAX;
    2196             : 
    2197           0 :   for( ulong i=0UL; i<tile->out_cnt; i++ ) {
    2198           0 :     fd_topo_link_t const * link = &topo->links[ tile->out_link_id[ i ] ];
    2199           0 :     if( !strcmp( link->name, name ) ) {
    2200           0 :       if( FD_UNLIKELY( idx!=ULONG_MAX ) ) FD_LOG_ERR(( "tile %s:%lu had multiple output links named %s but expected one", tile->name, tile->kind_id, name ));
    2201           0 :       idx = i;
    2202           0 :     }
    2203           0 :   }
    2204             : 
    2205           0 :   if( FD_UNLIKELY( idx==ULONG_MAX ) ) FD_LOG_ERR(( "tile %s:%lu had no output link named %s", tile->name, tile->kind_id, name ));
    2206             : 
    2207           0 :   void * mem = topo->workspaces[ topo->objs[ topo->links[ tile->out_link_id[ idx ] ].dcache_obj_id ].wksp_id ].wksp;
    2208           0 :   ulong chunk0 = fd_dcache_compact_chunk0( mem, topo->links[ tile->out_link_id[ idx ] ].dcache );
    2209           0 :   ulong wmark  = fd_dcache_compact_wmark ( mem, topo->links[ tile->out_link_id[ idx ] ].dcache, topo->links[ tile->out_link_id[ idx ] ].mtu );
    2210             : 
    2211           0 :   return (fd_poh_out_ctx_t){ .idx = idx, .mem = mem, .chunk0 = chunk0, .wmark = wmark, .chunk = chunk0 };
    2212           0 : }
    2213             : 
    2214             : static void
    2215             : unprivileged_init( fd_topo_t *      topo,
    2216           0 :                    fd_topo_tile_t * tile ) {
    2217           0 :   void * scratch = fd_topo_obj_laddr( topo, tile->tile_obj_id );
    2218             : 
    2219           0 :   FD_SCRATCH_ALLOC_INIT( l, scratch );
    2220           0 :   fd_poh_ctx_t * ctx = FD_SCRATCH_ALLOC_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
    2221           0 :   void * sha256   = FD_SCRATCH_ALLOC_APPEND( l, FD_SHA256_ALIGN,                  FD_SHA256_FOOTPRINT                );
    2222             : 
    2223           0 : #define NONNULL( x ) (__extension__({                                        \
    2224           0 :       __typeof__((x)) __x = (x);                                             \
    2225           0 :       if( FD_UNLIKELY( !__x ) ) FD_LOG_ERR(( #x " was unexpectedly NULL" )); \
    2226           0 :       __x; }))
    2227             : 
    2228           0 :   ctx->mleaders = NONNULL( fd_multi_epoch_leaders_join( fd_multi_epoch_leaders_new( ctx->mleaders_mem ) ) );
    2229           0 :   ctx->sha256   = NONNULL( fd_sha256_join( fd_sha256_new( sha256 ) ) );
    2230           0 :   ctx->current_leader_bank = NULL;
    2231           0 :   ctx->signal_leader_change = NULL;
    2232             : 
    2233           0 :   ctx->shred_seq = ULONG_MAX;
    2234           0 :   ctx->halted_switching_key = 0;
    2235           0 :   ctx->keyswitch = fd_keyswitch_join( fd_topo_obj_laddr( topo, tile->keyswitch_obj_id ) );
    2236           0 :   FD_TEST( ctx->keyswitch );
    2237             : 
    2238           0 :   ctx->slot                  = 0UL;
    2239           0 :   ctx->hashcnt               = 0UL;
    2240           0 :   ctx->last_hashcnt          = 0UL;
    2241           0 :   ctx->highwater_leader_slot = ULONG_MAX;
    2242           0 :   ctx->next_leader_slot      = ULONG_MAX;
    2243           0 :   ctx->reset_slot            = ULONG_MAX;
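                     :   /* ULONG_MAX is a sentinel for "not yet initialized"; the Agave
                     :      client fills in the real values during the handshake below. */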
    2244             : 
    2245           0 :   ctx->lagged_consecutive_leader_start = tile->poh.lagged_consecutive_leader_start;
    2246           0 :   ctx->expect_sequential_leader_slot = ULONG_MAX;
    2247             : 
    2248           0 :   ctx->microblocks_lower_bound = 0UL;
    2249             : 
    2250           0 :   ctx->max_active_descendant = 0UL;
    2251             : 
    2252           0 :   if( FD_UNLIKELY( tile->poh.bundle.enabled ) ) {
    2253           0 :     ctx->bundle.enabled = 1;
    2254           0 :     NONNULL( fd_bundle_crank_gen_init( ctx->bundle.gen, (fd_acct_addr_t const *)tile->poh.bundle.tip_distribution_program_addr,
    2255           0 :              (fd_acct_addr_t const *)tile->poh.bundle.tip_payment_program_addr,
    2256           0 :              (fd_acct_addr_t const *)ctx->bundle.vote_account.uc,
    2257           0 :              (fd_acct_addr_t const *)ctx->bundle.vote_account.uc, "NAN", 0UL ) ); /* last three arguments are properly bogus */
    2258           0 :   } else {
    2259           0 :     ctx->bundle.enabled = 0;
    2260           0 :   }
    2261             : 
    2262           0 :   ulong poh_shred_obj_id = fd_pod_query_ulong( topo->props, "poh_shred", ULONG_MAX );
    2263           0 :   FD_TEST( poh_shred_obj_id!=ULONG_MAX );
    2264             : 
    2265           0 :   fd_shred_version = fd_fseq_join( fd_topo_obj_laddr( topo, poh_shred_obj_id ) );
    2266           0 :   FD_TEST( fd_shred_version );
    2267             : 
    2268           0 :   poh_link_init( &gossip_dedup,          topo, tile, out1( topo, tile, "gossip_dedup" ).idx );
    2269           0 :   poh_link_init( &stake_out,             topo, tile, out1( topo, tile, "stake_out"    ).idx );
    2270           0 :   poh_link_init( &crds_shred,            topo, tile, out1( topo, tile, "crds_shred"   ).idx );
    2271           0 :   poh_link_init( &replay_resolv,         topo, tile, out1( topo, tile, "replay_resol" ).idx );
    2272           0 :   poh_link_init( &executed_txn,          topo, tile, out1( topo, tile, "executed_txn" ).idx );
    2273             : 
    2274           0 :   if( FD_LIKELY( tile->poh.plugins_enabled ) ) {
    2275           0 :     poh_link_init( &replay_plugin,         topo, tile, out1( topo, tile, "replay_plugi" ).idx );
    2276           0 :     poh_link_init( &gossip_plugin,         topo, tile, out1( topo, tile, "gossip_plugi" ).idx );
    2277           0 :     poh_link_init( &start_progress_plugin, topo, tile, out1( topo, tile, "startp_plugi" ).idx );
    2278           0 :     poh_link_init( &vote_listener_plugin,  topo, tile, out1( topo, tile, "votel_plugin" ).idx );
    2279           0 :     poh_link_init( &validator_info_plugin, topo, tile, out1( topo, tile, "valcfg_plugi" ).idx );
    2280           0 :   } else {
    2281             :     /* Mark these mcaches as "available" so the system boots, but the
    2282             :        memory is not set, so nothing will actually get published via
    2283             :        the links. */
    2284           0 :     FD_COMPILER_MFENCE();
    2285           0 :     replay_plugin.mcache = (fd_frag_meta_t*)1;
    2286           0 :     gossip_plugin.mcache = (fd_frag_meta_t*)1;
    2287           0 :     start_progress_plugin.mcache = (fd_frag_meta_t*)1;
    2288           0 :     vote_listener_plugin.mcache = (fd_frag_meta_t*)1;
    2289           0 :     validator_info_plugin.mcache = (fd_frag_meta_t*)1;
    2290           0 :     FD_COMPILER_MFENCE();
    2291           0 :   }
    2292             : 
    2293           0 :   FD_LOG_INFO(( "PoH waiting to be initialized by Agave client... %lu %lu", fd_poh_waiting_lock, fd_poh_returned_lock ));
    2294           0 :   FD_VOLATILE( fd_poh_global_ctx ) = ctx;
    2295           0 :   FD_COMPILER_MFENCE();
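                     :   /* Handshake with the Agave client: wait for it to request the PoH
                     :      context (waiting_lock), grant it exclusive access (returned_lock),
                     :      then wait for it to finish initializing the context and release
                     :      the lock.  (The counterpart runs on the Agave side.) */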
    2296           0 :   for(;;) {
    2297           0 :     if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_waiting_lock ) ) ) break;
    2298           0 :     FD_SPIN_PAUSE();
    2299           0 :   }
    2300           0 :   FD_VOLATILE( fd_poh_waiting_lock ) = 0UL;
    2301           0 :   FD_VOLATILE( fd_poh_returned_lock ) = 1UL;
    2302           0 :   FD_COMPILER_MFENCE();
    2303           0 :   for(;;) {
    2304           0 :     if( FD_UNLIKELY( !FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
    2305           0 :     FD_SPIN_PAUSE();
    2306           0 :   }
    2307           0 :   FD_COMPILER_MFENCE();
    2308             : 
    2309           0 :   if( FD_UNLIKELY( ctx->reset_slot==ULONG_MAX ) ) FD_LOG_ERR(( "PoH was not initialized by Agave client" ));
    2310             : 
    2311           0 :   fd_histf_join( fd_histf_new( ctx->begin_leader_delay, FD_MHIST_SECONDS_MIN( POH, BEGIN_LEADER_DELAY_SECONDS ),
    2312           0 :                                                         FD_MHIST_SECONDS_MAX( POH, BEGIN_LEADER_DELAY_SECONDS ) ) );
    2313           0 :   fd_histf_join( fd_histf_new( ctx->first_microblock_delay, FD_MHIST_SECONDS_MIN( POH, FIRST_MICROBLOCK_DELAY_SECONDS  ),
    2314           0 :                                                             FD_MHIST_SECONDS_MAX( POH, FIRST_MICROBLOCK_DELAY_SECONDS  ) ) );
    2315           0 :   fd_histf_join( fd_histf_new( ctx->slot_done_delay, FD_MHIST_SECONDS_MIN( POH, SLOT_DONE_DELAY_SECONDS  ),
    2316           0 :                                                      FD_MHIST_SECONDS_MAX( POH, SLOT_DONE_DELAY_SECONDS  ) ) );
    2317             : 
    2318           0 :   fd_histf_join( fd_histf_new( ctx->bundle_init_delay, FD_MHIST_SECONDS_MIN( POH, BUNDLE_INITIALIZE_DELAY_SECONDS  ),
    2319           0 :                                                        FD_MHIST_SECONDS_MAX( POH, BUNDLE_INITIALIZE_DELAY_SECONDS  ) ) );
    2320             : 
    2321           0 :   for( ulong i=0UL; i<tile->in_cnt; i++ ) {
    2322           0 :     fd_topo_link_t * link = &topo->links[ tile->in_link_id[ i ] ];
    2323           0 :     fd_topo_wksp_t * link_wksp = &topo->workspaces[ topo->objs[ link->dcache_obj_id ].wksp_id ];
    2324             : 
    2325           0 :     ctx->in[ i ].mem    = link_wksp->wksp;
    2326           0 :     ctx->in[ i ].chunk0 = fd_dcache_compact_chunk0( ctx->in[ i ].mem, link->dcache );
    2327           0 :     ctx->in[ i ].wmark  = fd_dcache_compact_wmark ( ctx->in[ i ].mem, link->dcache, link->mtu );
    2328             : 
    2329           0 :     if(        !strcmp( link->name, "stake_out" ) ) {
    2330           0 :       ctx->in_kind[ i ] = IN_KIND_STAKE;
    2331           0 :     } else if( !strcmp( link->name, "pack_bank" ) ) {
    2332           0 :       ctx->in_kind[ i ] = IN_KIND_PACK;
    2333           0 :     } else if( !strcmp( link->name, "bank_poh"  ) ) {
    2334           0 :       ctx->in_kind[ i ] = IN_KIND_BANK;
    2335           0 :     } else {
    2336           0 :       FD_LOG_ERR(( "unexpected input link name %s", link->name ));
    2337           0 :     }
    2338           0 :   }
    2339             : 
    2340           0 :   *ctx->shred_out = out1( topo, tile, "poh_shred" );
    2341           0 :   *ctx->pack_out  = out1( topo, tile, "poh_pack" );
    2342           0 :   ctx->plugin_out->mem = NULL;
    2343           0 :   if( FD_LIKELY( tile->poh.plugins_enabled ) ) {
    2344           0 :     *ctx->plugin_out = out1( topo, tile, "poh_plugin" );
    2345           0 :   }
    2346             : 
    2347           0 :   ctx->features_activation_avail = 0UL;
    2348           0 :   for( ulong i=0UL; i<FD_SHRED_FEATURES_ACTIVATION_SLOT_CNT; i++ )
    2349           0 :     ctx->features_activation->slots[i] = FD_SHRED_FEATURES_ACTIVATION_SLOT_DISABLED;
    2350             : 
    2351           0 :   ulong scratch_top = FD_SCRATCH_ALLOC_FINI( l, 1UL );
    2352           0 :   if( FD_UNLIKELY( scratch_top > (ulong)scratch + scratch_footprint( tile ) ) )
    2353           0 :     FD_LOG_ERR(( "scratch overflow %lu %lu %lu", scratch_top - (ulong)scratch - scratch_footprint( tile ), scratch_top, (ulong)scratch + scratch_footprint( tile ) ));
    2354           0 : }
    2355             : 
    2356             : /* One tick, one microblock, one plugin slot end, one plugin slot start,
    2357             :    one leader update, and one features activation. */
    2358           0 : #define STEM_BURST (6UL)
    2359             : 
    2360             : /* See explanation in fd_pack */
    2361           0 : #define STEM_LAZY  (128L*3000L)
    2362             : 
    2363           0 : #define STEM_CALLBACK_CONTEXT_TYPE  fd_poh_ctx_t
    2364           0 : #define STEM_CALLBACK_CONTEXT_ALIGN alignof(fd_poh_ctx_t)
    2365             : 
    2366           0 : #define STEM_CALLBACK_DURING_HOUSEKEEPING during_housekeeping
    2367           0 : #define STEM_CALLBACK_METRICS_WRITE       metrics_write
    2368           0 : #define STEM_CALLBACK_AFTER_CREDIT        after_credit
    2369           0 : #define STEM_CALLBACK_BEFORE_FRAG         before_frag
    2370           0 : #define STEM_CALLBACK_DURING_FRAG         during_frag
    2371           0 : #define STEM_CALLBACK_AFTER_FRAG          after_frag
    2372             : 
    2373             : #include "../../disco/stem/fd_stem.c"
    2374             : 
    2375             : fd_topo_run_tile_t fd_tile_poh = {
    2376             :   .name                     = "poh",
    2377             :   .populate_allowed_seccomp = NULL,
    2378             :   .populate_allowed_fds     = NULL,
    2379             :   .scratch_align            = scratch_align,
    2380             :   .scratch_footprint        = scratch_footprint,
    2381             :   .privileged_init          = privileged_init,
    2382             :   .unprivileged_init        = unprivileged_init,
    2383             :   .run                      = stem_run,
    2384             : };

Generated by: LCOV version 1.14