#define _GNU_SOURCE

/* Let's say there was a computer, the "leader" computer, that acted as
   a bank.  Users could send it messages saying they wanted to deposit
   money, or transfer it to someone else.

   That's how, for example, Bank of America works, but there are
   problems with it.  One simple problem is: the bank can set your
   balance to zero if they don't like you.

   You could try to fix this by having the bank periodically publish the
   list of all account balances and transactions.  If the customers add
   unforgeable signatures to their deposit slips and transfers, then
   the bank cannot zero a balance without it being obvious to everyone.

   There are still problems.  The bank can't lie about your balance now
   or take your money, but it can just not accept deposits on your
   behalf by ignoring you.

   You could fix this by getting a few independent banks together, let's
   say Bank of America, Bank of England, and Westpac, and having them
   rotate who operates the leader computer periodically.  If one bank
   ignores your deposits, you can just wait and send them to the next
   one.

   This is Solana.

   There are still problems, of course, but they are largely technical.
   How do the banks agree who is leader?  How do you recover if a leader
   misbehaves?  How do customers verify the transactions aren't forged?
   How do banks receive, publish, and verify each other's work quickly?
   These are the main technical innovations that enable Solana to work
   well.

   What about Proof of History?

   One particular niche problem is about the leader schedule.  When the
   leader computer is moving from one bank to another, the new bank must
   wait for the old bank to say it's done and provide a final list of
   balances that it can start working off of.  But: what if the computer
   at the old bank crashes and never says it's done?

   Does the new leader just take over at some point?  What if the new
   leader is malicious, and says the past thousand leaders crashed, and
   there have been no transactions for days?  How do you check?

   This is what Proof of History solves.  Each bank in the network must
   constantly do a lot of busywork (compute hashes), even when it is not
   leader.

   If the prior thousand leaders crashed, and no transactions happened
   in an hour, the new leader would have to show they did about an hour
   of busywork for everyone else to believe them.

   A better name for this is proof of skipping.  If a leader is skipping
   slots (building off of a slot that is not the direct parent), it must
   prove that it waited a good amount of time to do so.

   It's not a perfect solution.  For one thing, some banks have really
   fast computers and can compute a lot of busywork in a short amount of
   time, allowing them to skip prior slot(s) anyway.  But: there is a
   social component that prevents validators from skipping the prior
   leader slot.  It is easy to detect when this happens and the network
   could respond by ignoring their votes or stake.

   You could come up with other schemes: for example, the network could
   just use wall clock time.  If a new leader publishes a block without
   waiting 400 milliseconds for the prior slot to complete, then there
   is no "proof of skipping" and the nodes ignore the slot.

   These schemes have a problem in that they are not deterministic
   across the network (different computers have different clocks), and
   so they will cause frequent forks which are very expensive to
   resolve.  Even though the proof of history scheme is not perfect,
   it is better than any alternative which is not deterministic.

   With all that background, we can now describe at a high level what
   this PoH tile actually does:

    (1) Whenever any other leader in the network finishes a slot, and
        the slot is determined to be the best one to build off of, this
        tile gets "reset" onto that block, the so called "reset slot".

    (2) The tile is constantly doing busywork, hash(hash(hash(...))) on
        top of the last reset slot, even when it is not leader.

    (3) When the tile becomes leader, it continues hashing from where it
        was.  Typically, the prior leader finishes their slot, so the
        reset slot will be the parent one, and this tile only publishes
        hashes for its own slot.  But if prior slots were skipped, then
        there might be a whole chain already waiting.

    That's pretty much it.  When we are leader, in addition to doing
    busywork, we publish ticks and microblocks to the shred tile.  A
    microblock is a non-empty group of transactions whose hashes are
    mixed in to the chain, while a tick is a periodic stamp of the
    current hash, with no transactions (nothing mixed in).  We need
    to send both to the shred tile, as ticks are important for other
    validators to verify in parallel.

    As well, the tile should never become leader for a slot that it has
    published anything for; otherwise it may create a duplicate block.

    Some particularly common misunderstandings:

     - PoH is critical to security.

       This largely isn't true.  The target hash rate of the network is
       so slow (1 hash per 500 nanoseconds) that a malicious leader can
       easily catch up if they start from an old hash, and the only
       practical attack prevented is the proof of skipping.  Most of the
       long range attacks in the Solana whitepaper are not relevant.

     - PoH keeps passage of time.

       This is also not true.  The way the network keeps time so it can
       decide who is leader is that each leader uses their operating
       system clock to time 400 milliseconds and publishes their block
       when this timer expires.

       If a leader just hashed as fast as they could, they could publish
       a block in tens of milliseconds, and the rest of the network
       would happily accept it.  This is why the Solana "clock" as
       determined by PoH is not accurate and drifts over time.

     - PoH prevents transaction reordering by the leader.

       The leader can, in theory, wait until the very end of their
       leader slot to publish anything at all to the network.  They can,
       in particular, hold all received transactions for 400
       milliseconds and then reorder and publish some right at the end
       to advantage certain transactions.

    You might be wondering... if all the PoH chain is doing is helping
    us prove that slots were skipped correctly, why do we need to "mix
    in" transactions to the hash value?  Or do anything at all for slots
    where we don't skip the prior slot?

    It's a good question, and the answer is that this behavior is not
    necessary.  An ideal implementation of PoH would have no concept of
    ticks or mixins, and would not be part of the TPU pipeline at all.
    Instead, there would be a simple field "skip_proof" on the last
    shred we send for a slot, the hash(hash(...)) value.  This field
    would only be filled in (and only verified by replayers) in cases
    where the slot actually skipped a parent.

    Then what is the "clock"?  In Solana, time is constructed as
    follows:

    HASHES

        The base unit of time is a hash.  Hereafter, any values whose
        units are in hashes are called a "hashcnt" to distinguish them
        from actual hashed values.

        Agave generally defines a constant duration for each tick
        (see below) and then varies the number of hashcnt per tick, but
        as we consider the hashcnt the base unit of time, Firedancer and
        this PoH implementation define everything in terms of hashcnt
        duration instead.

        In mainnet-beta, testnet, and devnet the hashcnt ticks over
        (increments) every 100 nanoseconds.  The hashcnt duration is
        specified as 500 nanoseconds in the genesis, but there are
        several features which increase the number of hashes per tick
        while keeping tick duration constant, which make the time per
        hashcnt lower.  These features up to and including the
        `update_hashes_per_tick6` feature are activated on mainnet-beta,
        devnet, and testnet, and are described in the TICKS section
        below.

        Other chains and development environments might have a different
        hashcnt rate in the genesis, or they might not have activated
        the features which increase the rate yet, which we also support.

        In practice, although each validator follows a hashcnt duration
        of 100 nanoseconds, the overall observed hashcnt rate of the
        network is a little slower than once every 100 nanoseconds,
        mostly because there are gaps and clock synchronization issues
        during handoff between leaders.  This is referred to as clock
        drift.

    TICKS

        The leader needs to periodically checkpoint the hash value
        associated with a given hashcnt so that they can publish it to
        other nodes for verification.

        On mainnet-beta, testnet, and devnet this occurs once every
        62,500 hashcnts, or approximately once every 6.25 milliseconds.
        This value is determined at genesis time and by the features
        described below, and could be different in development
        environments or on other chains, which we support.

        Due to protocol limitations, mixing transactions into the
        proof-of-history chain cannot occur on a tick boundary (though
        it can occur at any other hashcnt).

        Ticks exist mainly so that verification can happen in parallel.
        A verifier computer, rather than needing to do hash(hash(...))
        all in sequence to verify a proof-of-history chain, can do,

         Core 0: hash(hash(...))
         Core 1: hash(hash(...))
         Core 2: hash(hash(...))
         Core 3: hash(hash(...))
         ...

        between each pair of tick boundaries.
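
        As a rough sketch (illustrative C with a hypothetical helper
        name, ignoring mix-ins), the work each core does for its
        segment between two tick boundaries is just:

          static int
          verify_tick_segment( uchar         hash[ 32 ],  // prior tick hash, clobbered
                               uchar const * expected,    // checkpointed hash at the next tick
                               ulong         hashcnt_per_tick ) {
            for( ulong i=0UL; i<hashcnt_per_tick; i++ ) fd_sha256_hash( hash, 32UL, hash );
            return 0==memcmp( hash, expected, 32UL );
          }

        with each core handed a different (start hash, expected end
        hash) pair.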

        Solana sometimes calls the current tick the "tick height",
        although it makes more sense to think of it as a counter from
        zero: it's just the number of ticks since the genesis hash.

        There is a set of features which increase the number of hashcnts
        per tick.  These are all deployed on mainnet-beta, devnet, and
        testnet.

           name:             update_hashes_per_tick
           id:               3uFHb9oKdGfgZGJK9EHaAXN4USvnQtAFC13Fh5gGFS5B
           hashes per tick:  12,500
           hashcnt duration: 500 nanos

           name:             update_hashes_per_tick2
           id:               EWme9uFqfy1ikK1jhJs8fM5hxWnK336QJpbscNtizkTU
           hashes per tick:  17,500
           hashcnt duration: 357.142857143 nanos

           name:             update_hashes_per_tick3
           id:               8C8MCtsab5SsfammbzvYz65HHauuUYdbY2DZ4sznH6h5
           hashes per tick:  27,500
           hashcnt duration: 227.272727273 nanos

           name:             update_hashes_per_tick4
           id:               8We4E7DPwF2WfAN8tRTtWQNhi98B99Qpuj7JoZ3Aikgg
           hashes per tick:  47,500
           hashcnt duration: 131.578947368 nanos

           name:             update_hashes_per_tick5
           id:               BsKLKAn1WM4HVhPRDsjosmqSg2J8Tq5xP2s2daDS6Ni4
           hashes per tick:  57,500
           hashcnt duration: 108.695652174 nanos

           name:             update_hashes_per_tick6
           id:               FKu1qYwLQSiehz644H6Si65U5ZQ2cp9GxsyFUfYcuADv
           hashes per tick:  62,500
           hashcnt duration: 100 nanos

        In development environments, there is a way to configure the
        hashcnt per tick to be "none" during genesis, for a so-called
        "low power" tick producer.  The idea is not to spin cores during
        development.  This is equivalent to setting the hashcnt per tick
        to be 1, and increasing the hashcnt duration to the desired tick
        duration.

    SLOTS

        Each leader needs to be leader for a fixed amount of time, which
        is called a slot.  During a slot, a leader has an opportunity to
        receive transactions and produce a block for the network,
        although they may miss ("skip") the slot if they are offline or
        not behaving.

        In mainnet-beta, testnet, and devnet a slot is 64 ticks, or
        4,000,000 hashcnts, or approximately 400 milliseconds.

        Due to the way the leader schedule is constructed, each leader
        is always given at least four (4) consecutive slots in the
        schedule.  This means when becoming leader you will be leader
        for at least 4 slots, or 1.6 seconds.

        It is rare, although it can happen, that a leader gets more than
        4 consecutive slots (eg, 8 or 12), if they are lucky with the
        leader schedule generation.

        The number of ticks in a slot is fixed at genesis time, and
        could be different for development or other chains, which we
        support.  There is nothing special about 4 leader slots in a
        row; this might be changed in the future, and the proof of
        history makes no assumptions that this is the case.
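
        To make the clock arithmetic concrete, here is a sketch of the
        mainnet-beta derivation (the same computation appears in
        fd_ext_poh_initialize below):

          ulong  hashcnt_per_tick    = 62500UL;     // update_hashes_per_tick6
          ulong  ticks_per_slot      = 64UL;
          ulong  tick_duration_ns    = 6250000UL;   // 6.25 ms
          double hashcnt_duration_ns = (double)tick_duration_ns/(double)hashcnt_per_tick; // 100 ns
          ulong  hashcnt_per_slot    = ticks_per_slot*hashcnt_per_tick;                   // 4,000,000
          double slot_duration_ns    = (double)ticks_per_slot*(double)tick_duration_ns;   // 400 ms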

    EPOCHS

        Infrequently, the network needs to do certain housekeeping,
        mainly things like collecting rent and deciding on the leader
        schedule.  The length of an epoch is fixed on mainnet-beta,
        devnet and testnet at 432,000 slots, or around two (2.0) days.
        This value is fixed at genesis time, and could be different for
        other chains including development, which we support.  Typically
        in development, epochs are every 8,192 slots, or around ~1 hour
        (54.61 minutes), although it depends on the number of ticks per
        slot and the target hashcnt rate of the genesis as well.

        In development, epochs need not be a fixed length either.  There
        is a "warmup" option, where epochs start short and grow, which
        is useful for quickly warming up stake during development.

        The epoch is important because it is the only time the leader
        schedule is updated.  The leader schedule is a list of which
        leader is leader for which slot, and is generated by a special
        algorithm that is deterministic and known to all nodes.

        The leader schedule is computed one epoch in advance, so that
        at slot T, we always know who will be leader up until the end
        of slot T+EPOCH_LENGTH.  Specifically, the leader schedule for
        epoch N is computed during the epoch boundary crossing from
        N-2 to N-1.  For mainnet-beta, the slots per epoch is fixed and
        will always be 432,000. */
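
/* To make the epoch arithmetic concrete, a sketch (illustrative only,
   assuming the fixed mainnet-beta constants above and no warmup):

     ulong slots_per_epoch = 432000UL;
     ulong epoch           = slot/slots_per_epoch;
     ulong epoch_start     = epoch*slots_per_epoch;

   The leader schedule for `epoch` was fixed when the network crossed
   from epoch-2 into epoch-1, so at any slot in `epoch` the leaders are
   known through slot epoch_start + 2UL*slots_per_epoch - 1UL. */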

#include "../../disco/tiles.h"
#include "../../disco/bundle/fd_bundle_crank.h"
#include "../../disco/pack/fd_pack.h"
#include "../../ballet/sha256/fd_sha256.h"
#include "../../disco/metrics/fd_metrics.h"
#include "../../util/pod/fd_pod.h"
#include "../../disco/shred/fd_shredder.h"
#include "../../disco/keyguard/fd_keyload.h"
#include "../../disco/keyguard/fd_keyswitch.h"
#include "../../disco/metrics/generated/fd_metrics_poh.h"
#include "../../disco/plugin/fd_plugin.h"
#include "../../flamenco/leaders/fd_multi_epoch_leaders.h"

#include <string.h>

/* The maximum number of microblocks that pack is allowed to pack into a
   single slot.  This is not consensus critical, and pack could, if we
   let it, produce as many microblocks as it wants, and the slot would
   still be valid.

   We have this here instead so that PoH can estimate slot completion,
   and keep the hashcnt up to date as pack progresses through packing
   the slot.  If this upper bound were not enforced, PoH could tick to
   the last hash of the slot and have no hashes left to mix in incoming
   microblocks from pack.  This upper bound is thus a coordination
   mechanism: PoH can progress hashcnts while the slot is active,
   knowing that pack will not need those hashcnts later to do mixins. */
#define MAX_MICROBLOCKS_PER_SLOT (32768UL)

/* When we are hashing in the background in case a prior leader skips
   their slot, we need to store the result of each tick hash so we can
   publish them when we become leader.  The network requires at least
   one leader slot to publish in each epoch for the leader schedule to
   generate, so in the worst case we might need two full epochs of slots
   to store the hashes.  (Eg, if epoch T only had a published slot in
   position 0 and epoch T+1 only had a published slot right at the end).

   There is a tighter bound: the block data limit of mainnet-beta is
   currently FD_PACK_MAX_DATA_PER_BLOCK, or 27,332,342 bytes per slot.
   At 48 bytes per tick, it is not possible to publish a slot that skips
   569,424 or more prior ticks (roughly 8,897 slots). */
#define MAX_SKIPPED_TICKS (1UL+(FD_PACK_MAX_DATA_PER_BLOCK/48UL))

#define IN_KIND_BANK  (0)
#define IN_KIND_PACK  (1)
#define IN_KIND_STAKE (2)

typedef struct {
  fd_wksp_t * mem;
  ulong       chunk0;
  ulong       wmark;
} fd_poh_in_ctx_t;

typedef struct {
  ulong       idx;
  fd_wksp_t * mem;
  ulong       chunk0;
  ulong       wmark;
  ulong       chunk;
} fd_poh_out_ctx_t;

typedef struct {
  fd_stem_context_t * stem;

  /* Static configuration determined at genesis creation time.  See
     long comment above for more information. */
  ulong  tick_duration_ns;
  ulong  hashcnt_per_tick;
  ulong  ticks_per_slot;

  /* Derived from the above configuration, but we precompute it. */
  double slot_duration_ns;
  double hashcnt_duration_ns;
  ulong  hashcnt_per_slot;
  /* Constant, fixed at initialization.  The maximum number of
     microblocks that the pack tile can publish in each slot. */
  ulong max_microblocks_per_slot;

  /* Consensus-critical slot cost limits. */
  struct {
    ulong slot_max_cost;
    ulong slot_max_vote_cost;
    ulong slot_max_write_cost_per_acct;
  } limits;

  /* The current slot and hashcnt within that slot of the proof of
     history, including hashes we have been producing in the background
     while waiting for our next leader slot. */
  ulong slot;
  ulong hashcnt;
  ulong cus_used;

  /* When we send a microblock on to the shred tile, we need to tell
     it how many hashes there have been since the last microblock, so
     this tracks the hashcnt of the last published microblock.

     If we are skipping slots prior to our leader slot, the last_slot
     will be quite old, and potentially much larger than the number of
     hashcnts in one slot. */
  ulong last_slot;
  ulong last_hashcnt;

  /* If we have published a tick or a microblock for a particular slot
     to the shred tile, we should never become leader for that slot
     again, otherwise we could publish a duplicate block.

     This value tracks the max slot that we have published a tick or
     microblock for so we can prevent this. */
  ulong highwater_leader_slot;

  /* See how this field is used below.  If we have sequential leader
     slots, we don't reset the expected slot end time between the two,
     to prevent clock drift.  If we didn't do this, our 2nd slot would
     end 400ms + `time_for_replay_to_move_slot_and_reset_poh` after
     our 1st, rather than just strictly 400ms. */
  int  lagged_consecutive_leader_start;
  ulong expect_sequential_leader_slot;

  /* There's a race condition ... let's say two banks A and B; bank A
     processes some transactions, then releases the account locks, and
     sends the microblock to PoH to be stamped.  Pack now re-packs the
     same accounts with a new microblock, sends it to bank B, bank B
     executes and sends the microblock to PoH, and this all happens fast
     enough that PoH picks the 2nd block to stamp before the 1st.  The
     accounts database changes are now misordered with respect to PoH so
     replay could fail.

     To prevent this race, we order all microblocks and only process
     them in PoH in the order they are produced by pack.  This is a
     little bit over-strict; we just need to ensure that microblocks
     with conflicting accounts execute in order, but this is easiest to
     implement for now. */
  uint expect_pack_idx;

  /* Whether we have received the slot done message from pack yet.  We
     are not allowed to fully finish hashing the block until this
     happens so that we know which slot the slot_done message is
     arriving for. */
  int slot_done;

  /* The PoH tile must never drop microblocks that get committed by the
     bank, so it needs to always be able to mix in a microblock hash.
     Mixing in requires incrementing the hashcnt, so we need to ensure
     at all times that there are enough hashcnts left in the slot to
     mix in whatever future microblocks pack might produce for it.

     This value tracks that.  At any time, max_microblocks_per_slot
     - microblocks_lower_bound is an upper bound on the maximum number
     of microblocks that might still be received in this slot. */
  ulong microblocks_lower_bound;

  uchar __attribute__((aligned(32UL))) reset_hash[ 32 ];
  uchar __attribute__((aligned(32UL))) hash[ 32 ];

  /* When we are not leader, we need to save the hashes that were
     produced in case the prior leader skips.  If they skip, we will
     replay these skipped hashes into our next leader bank so that
     the slot hashes sysvar can be updated correctly, and also publish
     them to peer nodes as part of our outgoing shreds. */
  uchar skipped_tick_hashes[ MAX_SKIPPED_TICKS ][ 32 ];

  /* The timestamp in nanoseconds of when the reset slot was received.
     This is the timestamp we are building on top of to determine when
     our next leader slot starts. */
  long reset_slot_start_ns;

  /* The timestamp in nanoseconds of when we got the bank for the
     current leader slot. */
  long leader_bank_start_ns;

  /* The slot of the current reset, one above the last good (unskipped)
     slot we are building on top of. */
  ulong reset_slot;

  /* The slot at which our next leader slot begins, or ULONG_MAX if we
     have no known next leader slot. */
  ulong next_leader_slot;

  /* Whether an in-progress frag should be skipped. */
  int skip_frag;

  ulong max_active_descendant;

  /* If we currently are the leader according to the clock AND we have
     received the leader bank for the slot from the replay stage,
     this value will be non-NULL.

     Note that we might be inside our leader slot, but not have a bank
     yet, in which case this will still be NULL.

     It will be NULL for a brief race period between consecutive leader
     slots, as we ping-pong back to replay stage waiting for a new bank.

     Agave refers to this as the "working bank". */
  void const * current_leader_bank;

  fd_sha256_t * sha256;

  fd_multi_epoch_leaders_t * mleaders;

  /* The last sequence number of an outgoing fragment to the shred tile,
     or ULONG_MAX if no such fragment.  See fd_keyswitch.h for details
     of how this is used. */
  ulong shred_seq;

  int halted_switching_key;

  fd_keyswitch_t * keyswitch;
  fd_pubkey_t identity_key;

  /* We need a few pieces of information to compute the right addresses
     for bundle crank information that we need to send to pack. */
  struct {
    int enabled;
    fd_pubkey_t vote_account;
    fd_bundle_crank_gen_t gen[1];
  } bundle;

  /* The Agave client needs to be notified when the leader changes, so
     that it can resume the replay stage if it was suspended waiting. */
  void * signal_leader_change;

  /* These are temporarily set in during_frag so they can be used in
     after_frag once the frag has been validated as not overrun. */
  uchar _txns[ USHORT_MAX ];
  fd_microblock_trailer_t _microblock_trailer[ 1 ];

  int in_kind[ 64 ];
  fd_poh_in_ctx_t in[ 64 ];

  fd_poh_out_ctx_t shred_out[ 1 ];
  fd_poh_out_ctx_t pack_out[ 1 ];
  fd_poh_out_ctx_t plugin_out[ 1 ];

  fd_histf_t begin_leader_delay[ 1 ];
  fd_histf_t first_microblock_delay[ 1 ];
  fd_histf_t slot_done_delay[ 1 ];
  fd_histf_t bundle_init_delay[ 1 ];

  ulong features_activation_avail;
  fd_shred_features_activation_t features_activation[1];

  ulong parent_slot;
  uchar parent_block_id[ 32 ];

  uchar __attribute__((aligned(FD_MULTI_EPOCH_LEADERS_ALIGN))) mleaders_mem[ FD_MULTI_EPOCH_LEADERS_FOOTPRINT ];
} fd_poh_ctx_t;

/* The PoH recorder is implemented in Firedancer but for now needs to
   work with Agave, so we have a locking scheme for them to
   co-operate.

   This is because the PoH tile lives in the Agave memory address
   space and their version of concurrency is locking the PoH recorder
   and reading arbitrary fields.

   So we allow them to lock the PoH tile, although with a very bad (for
   them) locking scheme.  By default, the tile has full and exclusive
   access to the data.  If part of Agave wishes to read/write they
   can either,

     1. Rewrite their concurrency to message passing based on mcache
        (preferred, but not feasible).
     2. Signal to the tile they wish to acquire the lock, by setting
        fd_poh_waiting_lock to 1.

   During after_credit, the tile will check if the waiting lock is set
   to 1, and if so, set the returned lock to 1, indicating to the waiter
   that they may now proceed.

   When the waiter is done reading and writing, they restore the
   returned lock value back to zero, and the PoH tile continues with its
   day. */

static fd_poh_ctx_t * fd_poh_global_ctx;

static volatile ulong fd_poh_waiting_lock __attribute__((aligned(128UL)));
static volatile ulong fd_poh_returned_lock __attribute__((aligned(128UL)));
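
/* A minimal sketch (illustrative, not the tile's actual housekeeping
   code) of the service side of the handshake described above: if a
   waiter has raised fd_poh_waiting_lock, grant it exclusive access by
   raising fd_poh_returned_lock, then spin until the waiter lowers it
   again. */

static inline void
poh_example_serve_lock( void ) { /* hypothetical name */
  if( FD_UNLIKELY( FD_VOLATILE_CONST( fd_poh_waiting_lock ) ) ) {
    FD_VOLATILE( fd_poh_waiting_lock  ) = 0UL; /* consume the request */
    FD_VOLATILE( fd_poh_returned_lock ) = 1UL; /* grant the lock */
    FD_COMPILER_MFENCE();
    while( FD_VOLATILE_CONST( fd_poh_returned_lock ) ) FD_SPIN_PAUSE(); /* wait for release */
    FD_COMPILER_MFENCE();
  }
}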

/* Agave also needs to write to some mcaches, so we trampoline
   that via the PoH tile as well. */

struct poh_link {
  fd_frag_meta_t * mcache;
  ulong            depth;
  ulong            tx_seq;

  void *           mem;
  void *           dcache;
  ulong            chunk0;
  ulong            wmark;
  ulong            chunk;

  ulong            cr_avail;
  ulong            rx_cnt;
  ulong *          rx_fseqs[ 32UL ];
};

typedef struct poh_link poh_link_t;

static poh_link_t gossip_dedup;
static poh_link_t stake_out;
static poh_link_t crds_shred;
static poh_link_t replay_resolv;
static poh_link_t executed_txn;

static poh_link_t replay_plugin;
static poh_link_t gossip_plugin;
static poh_link_t start_progress_plugin;
static poh_link_t vote_listener_plugin;
static poh_link_t validator_info_plugin;

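/* poh_link_wait_credit spins until the link has at least one credit
   available: for each reliable consumer, the publisher may run at most
   `depth` fragments ahead of that consumer's fseq, so the available
   credit is depth minus the publisher's current lead, minimized over
   all consumers. */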
static void
poh_link_wait_credit( poh_link_t * link ) {
  if( FD_LIKELY( link->cr_avail ) ) return;

  while( 1 ) {
    ulong cr_query = ULONG_MAX;
    for( ulong i=0UL; i<link->rx_cnt; i++ ) {
      ulong const * _rx_seq = link->rx_fseqs[ i ];
      ulong rx_seq = FD_VOLATILE_CONST( *_rx_seq );
      ulong rx_cr_query = (ulong)fd_long_max( (long)link->depth - fd_long_max( fd_seq_diff( link->tx_seq, rx_seq ), 0L ), 0L );
      cr_query = fd_ulong_min( rx_cr_query, cr_query );
    }
    if( FD_LIKELY( cr_query>0UL ) ) {
      link->cr_avail = cr_query;
      break;
    }
    FD_SPIN_PAUSE();
  }
}

static void
poh_link_publish( poh_link_t *  link,
                  ulong         sig,
                  uchar const * data,
                  ulong         data_sz ) {
  while( FD_UNLIKELY( !FD_VOLATILE_CONST( link->mcache ) ) ) FD_SPIN_PAUSE();
  if( FD_UNLIKELY( !link->mem ) ) return; /* link not enabled, don't publish */
  poh_link_wait_credit( link );

  uchar * dst = (uchar *)fd_chunk_to_laddr( link->mem, link->chunk );
  fd_memcpy( dst, data, data_sz );
  ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
  fd_mcache_publish( link->mcache, link->depth, link->tx_seq, sig, link->chunk, data_sz, 0UL, 0UL, tspub );
  link->chunk = fd_dcache_compact_next( link->chunk, data_sz, link->chunk0, link->wmark );
  link->cr_avail--;
  link->tx_seq++;
}

static void
poh_link_init( poh_link_t *     link,
               fd_topo_t *      topo,
               fd_topo_tile_t * tile,
               ulong            out_idx ) {
  fd_topo_link_t * topo_link = &topo->links[ tile->out_link_id[ out_idx ] ];
  fd_topo_wksp_t * wksp = &topo->workspaces[ topo->objs[ topo_link->dcache_obj_id ].wksp_id ];

  link->mem      = wksp->wksp;
  link->depth    = fd_mcache_depth( topo_link->mcache );
  link->tx_seq   = 0UL;
  link->dcache   = topo_link->dcache;
  link->chunk0   = fd_dcache_compact_chunk0( wksp->wksp, topo_link->dcache );
  link->wmark    = fd_dcache_compact_wmark ( wksp->wksp, topo_link->dcache, topo_link->mtu );
  link->chunk    = link->chunk0;
  link->cr_avail = 0UL;
  link->rx_cnt   = 0UL;
  for( ulong i=0UL; i<topo->tile_cnt; i++ ) {
    fd_topo_tile_t * _tile = &topo->tiles[ i ];
    for( ulong j=0UL; j<_tile->in_cnt; j++ ) {
      if( _tile->in_link_id[ j ]==topo_link->id && _tile->in_link_reliable[ j ] ) {
        FD_TEST( link->rx_cnt<32UL );
        link->rx_fseqs[ link->rx_cnt++ ] = _tile->in_link_fseq[ j ];
        break;
      }
    }
  }
  FD_COMPILER_MFENCE();
  link->mcache = topo_link->mcache;
  FD_COMPILER_MFENCE();
  FD_TEST( link->mcache );
}
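
/* Illustrative only: during boot, each trampolined link above would be
   wired up roughly like this (the out link index 2UL is hypothetical
   and depends on the tile's topology):

     poh_link_init( &stake_out, topo, tile, 2UL );

   after which Agave-side code can post fragments with
   poh_link_publish( &stake_out, sig, data, data_sz ). */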

/* To help show correctness, functions that might be called from
   Rust, either directly or indirectly, have this fake "attribute"
   CALLED_FROM_RUST, which is actually nothing.  Calls from Rust
   typically execute on threads that did not call fd_boot, so they do
   not have the typical FD_TL variables.  In particular, they cannot use
   normal metrics, and their log messages don't have full context.
   Additionally, Rust functions marked CALLED_FROM_RUST cannot call back
   into a C fd_ext function without causing a deadlock (although the
   other Rust fd_ext functions have a similar problem).

   To prevent the annotation from polluting the whole codebase, calls to
   functions outside this file are manually checked and marked as being
   safe at each call rather than annotated. */
#define CALLED_FROM_RUST

static CALLED_FROM_RUST fd_poh_ctx_t *
fd_ext_poh_write_lock( void ) {
  for(;;) {
    /* Acquire the waiter lock to make sure we are the first writer in the queue. */
    if( FD_LIKELY( !FD_ATOMIC_CAS( &fd_poh_waiting_lock, 0UL, 1UL) ) ) break;
    FD_SPIN_PAUSE();
  }
  FD_COMPILER_MFENCE();
  for(;;) {
    /* Now wait for the tile to tell us we can proceed. */
    if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
    FD_SPIN_PAUSE();
  }
  FD_COMPILER_MFENCE();
  return fd_poh_global_ctx;
}

static CALLED_FROM_RUST void
fd_ext_poh_write_unlock( void ) {
  FD_COMPILER_MFENCE();
  FD_VOLATILE( fd_poh_returned_lock ) = 0UL;
}
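
/* Illustrative only: every Agave-side accessor below follows the same
   pattern around this pair of calls,

     fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
     ... read or write ctx fields ...
     fd_ext_poh_write_unlock();

   keeping the critical section as short as possible, since the tile is
   paused while the returned lock is held. */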

/* The PoH tile needs to interact with the Agave address space to
   do certain operations that Firedancer hasn't reimplemented yet,
   namely transaction execution.  We have Agave export some wrapper
   functions that we call into during regular tile execution.  These do
   not need any locking, since they are called serially from the single
   PoH tile. */

extern CALLED_FROM_RUST void fd_ext_bank_acquire( void const * bank );
extern CALLED_FROM_RUST void fd_ext_bank_release( void const * bank );
extern CALLED_FROM_RUST void fd_ext_poh_signal_leader_change( void * sender );
extern                  void fd_ext_poh_register_tick( void const * bank, uchar const * hash );

/* fd_ext_poh_initialize is called by Agave on startup to
   initialize the PoH tile with some static configuration, and the
   initial reset slot and hash which it retrieves from a snapshot.

   This function is called by some random Agave thread, but
   it blocks booting of the PoH tile.  The tile will spin until it
   determines that this initialization has happened.

   signal_leader_change is an opaque Rust object that is used to
   tell the replay stage that the leader has changed.  It is a
   Box::into_raw(Arc::increment_strong(crossbeam::Sender)), so it
   has infinite lifetime unless this C code releases the refcnt.

   It can be used with `fd_ext_poh_signal_leader_change` which
   will just issue a nonblocking send on the channel. */

CALLED_FROM_RUST void
fd_ext_poh_initialize( ulong         tick_duration_ns,    /* See clock comments above, will be 6,250,000 (6.25 millis) for mainnet-beta. */
                       ulong         hashcnt_per_tick,    /* See clock comments above, will be 62,500 for mainnet-beta. */
                       ulong         ticks_per_slot,      /* See clock comments above, will almost always be 64. */
                       ulong         tick_height,         /* The counter (height) of the tick to start hashing on top of. */
                       uchar const * last_entry_hash,     /* Points to start of a 32 byte region of memory, the hash itself at the tick height. */
                       void *        signal_leader_change /* See comment above. */ ) {
  FD_COMPILER_MFENCE();
  for(;;) {
    /* Make sure the ctx is initialized before trying to take the lock. */
    if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_global_ctx ) ) ) break;
    FD_SPIN_PAUSE();
  }
  fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();

  ctx->slot                = tick_height/ticks_per_slot;
  ctx->hashcnt             = 0UL;
  ctx->cus_used            = 0UL;
  ctx->last_slot           = ctx->slot;
  ctx->last_hashcnt        = 0UL;
  ctx->reset_slot          = ctx->slot;
  ctx->reset_slot_start_ns = fd_log_wallclock(); /* safe to call from Rust */

  memcpy( ctx->reset_hash, last_entry_hash, 32UL );
  memcpy( ctx->hash, last_entry_hash, 32UL );

  ctx->signal_leader_change = signal_leader_change;

  /* Static configuration about the clock. */
  ctx->tick_duration_ns = tick_duration_ns;
  ctx->hashcnt_per_tick = hashcnt_per_tick;
  ctx->ticks_per_slot   = ticks_per_slot;

  /* Recompute derived information about the clock. */
  ctx->slot_duration_ns    = (double)ticks_per_slot*(double)tick_duration_ns;
  ctx->hashcnt_duration_ns = (double)tick_duration_ns/(double)hashcnt_per_tick;
  ctx->hashcnt_per_slot    = ticks_per_slot*hashcnt_per_tick;

  if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
    /* Low power producer, maximum of one microblock per tick in the slot */
    ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
  } else {
    /* See the long comment in after_credit for this limit */
    ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
  }

  fd_ext_poh_write_unlock();
}
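
/* Illustrative only: with the mainnet-beta clock described above, the
   boot-time call from the Agave side would look roughly like this
   (tick_height, last_hash, and sender are hypothetical values taken
   from the snapshot and the Rust channel):

     fd_ext_poh_initialize( 6250000UL,    // 6.25 ms per tick
                            62500UL,      // hashes per tick
                            64UL,         // ticks per slot
                            tick_height,
                            last_hash,
                            sender );                                  */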

/* fd_ext_poh_acquire_leader_bank gets the current leader bank if there
   is one currently active.  PoH might think we are leader without
   having a leader bank if the replay stage has not yet noticed we are
   leader.

   The bank that is returned is owned by the caller, and must be
   converted to an Arc<Bank> by calling Arc::from_raw() on it.  PoH
   increments the reference count before returning the bank, so that it
   can also keep its internal copy.

   If there is no leader bank, NULL is returned.  In this case, the
   caller should not call `Arc::from_raw()`. */

CALLED_FROM_RUST void const *
fd_ext_poh_acquire_leader_bank( void ) {
  fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
  void const * bank = NULL;
  if( FD_LIKELY( ctx->current_leader_bank ) ) {
    /* Clone refcount before we release the lock. */
    fd_ext_bank_acquire( ctx->current_leader_bank );
    bank = ctx->current_leader_bank;
  }
  fd_ext_poh_write_unlock();
  return bank;
}
     831             : 
     832             : /* fd_ext_poh_reset_slot returns the slot height one above the last good
     833             :    (unskipped) slot we are building on top of.  This is always a good
     834             :    known value, and will not be ULONG_MAX. */
     835             : 
     836             : CALLED_FROM_RUST ulong
     837           0 : fd_ext_poh_reset_slot( void ) {
     838           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
     839           0 :   ulong reset_slot = ctx->reset_slot;
     840           0 :   fd_ext_poh_write_unlock();
     841           0 :   return reset_slot;
     842           0 : }
     843             : 
     844             : CALLED_FROM_RUST void
     845           0 : fd_ext_poh_update_active_descendant( ulong max_active_descendant ) {
     846           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
     847           0 :   ctx->max_active_descendant = max_active_descendant;
     848           0 :   fd_ext_poh_write_unlock();
     849           0 : }
     850             : 
     851             : /* fd_ext_poh_reached_leader_slot returns 1 if we have reached a slot
     852             :    where we are leader.  This is used by the replay stage to determine
     853             :    if it should create a new leader bank descendant of the prior reset
     854             :    slot block.
     855             : 
     856             :    Sometimes, even when we reach our slot we do not return 1, as we are
     857             :    giving a grace period to the prior leader to finish publishing their
     858             :    block.
     859             : 
     860             :    out_leader_slot is the slot height of the leader slot we reached, and
     861             :    reset_slot is the slot height of the last good (unskipped) slot we
     862             :    are building on top of. */
     863             : 
     864             : CALLED_FROM_RUST int
     865             : fd_ext_poh_reached_leader_slot( ulong * out_leader_slot,
     866           0 :                                 ulong * out_reset_slot ) {
     867           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
     868             : 
     869           0 :   *out_leader_slot = ctx->next_leader_slot;
     870           0 :   *out_reset_slot  = ctx->reset_slot;
     871             : 
     872           0 :   if( FD_UNLIKELY( ctx->next_leader_slot==ULONG_MAX ||
     873           0 :                    ctx->slot<ctx->next_leader_slot ) ) {
     874             :     /* Didn't reach our leader slot yet. */
     875           0 :     fd_ext_poh_write_unlock();
     876           0 :     return 0;
     877           0 :   }
     878             : 
     879           0 :   if( FD_UNLIKELY( ctx->halted_switching_key ) ) {
     880             :     /* Reached our leader slot, but the leader pipeline is halted
     881             :        because we are switching identity key. */
     882           0 :     fd_ext_poh_write_unlock();
     883           0 :     return 0;
     884           0 :   }
     885             : 
     886           0 :   if( FD_LIKELY( ctx->reset_slot==ctx->next_leader_slot ) ) {
     887             :     /* We were reset onto our leader slot, because the prior leader
     888             :        completed theirs, so we should start immediately, no need for a
     889             :        grace period. */
     890           0 :     fd_ext_poh_write_unlock();
     891           0 :     return 1;
     892           0 :   }
     893             : 
     894           0 :   long now_ns = fd_log_wallclock();
     895           0 :   long expected_start_time_ns = ctx->reset_slot_start_ns + (long)((double)(ctx->next_leader_slot-ctx->reset_slot)*ctx->slot_duration_ns);
     896             : 
     897             :   /* If a prior leader is still in the process of publishing their slot,
     898             :      delay ours to let them finish ... unless they are so delayed that
     899             :      we risk getting skipped by the leader following us.  The 1.2
     900             :      second grace period (three slot durations) is arbitrary and chosen
     901             :      by intuition; any value between 0 and 1.6 seconds could be
     902             :      considered reasonable. */
     903             : 
     904           0 :   if( FD_UNLIKELY( now_ns<expected_start_time_ns+(long)(3.0*ctx->slot_duration_ns) ) ) {
     905             :     /* If the max_active_descendant is >= next_leader_slot, we waited
     906             :        too long and a leader after us started publishing to try and skip
     907             :        us.  Just start our leader slot immediately, we might win ... */
     908             : 
     909           0 :     if( FD_LIKELY( ctx->max_active_descendant>=ctx->reset_slot && ctx->max_active_descendant<ctx->next_leader_slot ) ) {
     910             :       /* If one of the leaders between the reset slot and our leader
     911             :          slot is in the process of publishing (they have a descendant
     912             :          bank that is in progress of being replayed), then keep waiting.
     913             :          We probably wouldn't get a leader slot out before they
     914             :          finished.
     915             : 
     916             :          Unless... we are past the deadline to start our slot by more
     917             :          than 1.2 seconds, in which case we should probably start it to
     918             :          avoid getting skipped by the leader behind us. */
     919           0 :       fd_ext_poh_write_unlock();
     920           0 :       return 0;
     921           0 :     }
     922           0 :   }
     923             : 
     924           0 :   fd_ext_poh_write_unlock();
     925           0 :   return 1;
     926           0 : }
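                     : 
                     : /* A minimal sketch of the grace period arithmetic above.  The 400ms
                     :    slot duration is illustrative (the real value comes from
                     :    ctx->slot_duration_ns); with it, three slot durations is the 1.2
                     :    seconds mentioned in the comment: */
                     : 
                     : static inline int
                     : poh_grace_period_expired_sketch( long  reset_slot_start_ns,
                     :                                  ulong reset_slot,
                     :                                  ulong next_leader_slot,
                     :                                  long  now_ns ) {
                     :   double slot_duration_ns = 400e6; /* illustrative 400ms slots */
                     :   long expected_start_time_ns = reset_slot_start_ns + (long)((double)(next_leader_slot-reset_slot)*slot_duration_ns);
                     :   return now_ns>=expected_start_time_ns+(long)(3.0*slot_duration_ns);
                     : }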
     927             : 
     928             : CALLED_FROM_RUST static inline void
     929             : publish_plugin_slot_start( fd_poh_ctx_t * ctx,
     930             :                            ulong          slot,
     931           0 :                            ulong          parent_slot ) {
     932           0 :   if( FD_UNLIKELY( !ctx->plugin_out->mem ) ) return;
     933             : 
     934           0 :   fd_plugin_msg_slot_start_t * slot_start = (fd_plugin_msg_slot_start_t *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
     935           0 :   *slot_start = (fd_plugin_msg_slot_start_t){ .slot = slot, .parent_slot = parent_slot };
     936           0 :   fd_stem_publish( ctx->stem, ctx->plugin_out->idx, FD_PLUGIN_MSG_SLOT_START, ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_start_t), 0UL, 0UL, 0UL );
     937           0 :   ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_start_t), ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
     938           0 : }
     939             : 
     940             : CALLED_FROM_RUST static inline void
     941             : publish_plugin_slot_end( fd_poh_ctx_t * ctx,
     942             :                          ulong          slot,
     943           0 :                          ulong          cus_used ) {
     944           0 :   if( FD_UNLIKELY( !ctx->plugin_out->mem ) ) return;
     945             : 
     946           0 :   fd_plugin_msg_slot_end_t * slot_end = (fd_plugin_msg_slot_end_t *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
     947           0 :   *slot_end = (fd_plugin_msg_slot_end_t){ .slot = slot, .cus_used = cus_used };
     948           0 :   fd_stem_publish( ctx->stem, ctx->plugin_out->idx, FD_PLUGIN_MSG_SLOT_END, ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_end_t), 0UL, 0UL, 0UL );
     949           0 :   ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_end_t), ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
     950           0 : }
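                     : 
                     : /* Both plugin publishers above follow the tile's standard dcache
                     :    publish sequence: materialize the payload at the current chunk,
                     :    publish the frag via stem, then advance the chunk past the payload
                     :    so the region is reused ring-style.  A hypothetical sketch of the
                     :    same three steps for a bare ulong payload (sig is caller chosen): */
                     : 
                     : static inline void
                     : poh_publish_pattern_sketch( fd_poh_ctx_t * ctx,
                     :                             ulong          payload,
                     :                             ulong          sig ) {
                     :   ulong * dst = (ulong *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
                     :   *dst = payload;
                     :   fd_stem_publish( ctx->stem, ctx->plugin_out->idx, sig, ctx->plugin_out->chunk, sizeof(ulong), 0UL, 0UL, 0UL );
                     :   ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, sizeof(ulong), ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
                     : }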
     951             : 
     952             : extern int
     953             : fd_ext_bank_load_account( void const *  bank,
     954             :                           int           fixed_root,
     955             :                           uchar const * addr,
     956             :                           uchar *       owner,
     957             :                           uchar *       data,
     958             :                           ulong *       data_sz );
     959             : 
     960             : CALLED_FROM_RUST static void
     961             : publish_became_leader( fd_poh_ctx_t * ctx,
     962             :                        ulong          slot,
     963           0 :                        ulong          epoch ) {
     964           0 :   double tick_per_ns = fd_tempo_tick_per_ns( NULL );
     965           0 :   fd_histf_sample( ctx->begin_leader_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
     966             : 
     967           0 :   if( FD_UNLIKELY( ctx->lagged_consecutive_leader_start ) ) {
     968             :     /* If we are mirroring Agave behavior, the wall clock gets reset
     969             :        here so we don't count time spent waiting for a bank to freeze
     970             :        or replay stage to actually start the slot towards our 400ms.
     971             : 
     972             :        See extended comments in the config file on this option. */
     973           0 :     ctx->reset_slot_start_ns = fd_log_wallclock() - (long)((double)(slot-ctx->reset_slot)*ctx->slot_duration_ns);
     974           0 :   }
     975             : 
     976           0 :   fd_bundle_crank_tip_payment_config_t config[1]             = { 0 };
     977           0 :   fd_acct_addr_t                       tip_receiver_owner[1] = { 0 };
     978             : 
     979           0 :   if( FD_UNLIKELY( ctx->bundle.enabled ) ) {
     980           0 :     long bundle_time = -fd_tickcount();
     981           0 :     fd_acct_addr_t tip_payment_config[1];
     982           0 :     fd_acct_addr_t tip_receiver[1];
     983           0 :     fd_bundle_crank_get_addresses( ctx->bundle.gen, epoch, tip_payment_config, tip_receiver );
     984             : 
     985           0 :     fd_acct_addr_t _dummy[1];
     986           0 :     uchar          dummy[1];
     987             : 
     988           0 :     void const * bank = ctx->current_leader_bank;
     989             : 
     990             :     /* Calling rust from a C function that is CALLED_FROM_RUST risks
     991             :        deadlock.  In this case, I checked the load_account function and
     992             :        ensured it never calls any C functions that acquire the lock. */
     993           0 :     ulong sz1 = sizeof(config), sz2 = 1UL;
     994           0 :     int found1 = fd_ext_bank_load_account( bank, 0, tip_payment_config->b, _dummy->b,             (uchar *)config, &sz1 );
     995           0 :     int found2 = fd_ext_bank_load_account( bank, 0, tip_receiver->b,       tip_receiver_owner->b,          dummy,  &sz2 );
     996             :     /* The bundle crank code detects whether the accounts were found by
     997             :        whether they have non-zero values (since found and uninitialized
     998             :        should be treated the same), so we actually don't really care
     999             :        about the value of found{1,2}. */
    1000           0 :     (void)found1; (void)found2;
    1001           0 :     bundle_time += fd_tickcount();
    1002           0 :     fd_histf_sample( ctx->bundle_init_delay, (ulong)bundle_time );
    1003           0 :   }
    1004             : 
    1005           0 :   long slot_start_ns = ctx->reset_slot_start_ns + (long)((double)(slot-ctx->reset_slot)*ctx->slot_duration_ns);
    1006             : 
    1007             :   /* No need to check flow control: there are always credits available
    1008             :      when we become leader, and we will not "become" leader again until
    1009             :      we are done, so at most one frag is in flight at a time. */
    1010             : 
    1011           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->pack_out->mem, ctx->pack_out->chunk );
    1012             : 
    1013           0 :   fd_became_leader_t * leader = (fd_became_leader_t *)dst;
    1014           0 :   leader->slot_start_ns           = slot_start_ns;
    1015           0 :   leader->slot_end_ns             = (long)((double)slot_start_ns + ctx->slot_duration_ns);
    1016           0 :   leader->bank                    = ctx->current_leader_bank;
    1017           0 :   leader->max_microblocks_in_slot = ctx->max_microblocks_per_slot;
    1018           0 :   leader->ticks_per_slot          = ctx->ticks_per_slot;
    1019           0 :   leader->total_skipped_ticks     = ctx->ticks_per_slot*(slot-ctx->reset_slot);
    1020           0 :   leader->epoch                   = epoch;
    1021           0 :   leader->bundle->config[0]       = config[0];
    1022             : 
    1023           0 :   leader->limits.slot_max_cost                = ctx->limits.slot_max_cost;
    1024           0 :   leader->limits.slot_max_vote_cost           = ctx->limits.slot_max_vote_cost;
    1025           0 :   leader->limits.slot_max_write_cost_per_acct = ctx->limits.slot_max_write_cost_per_acct;
    1026             : 
    1027           0 :   memcpy( leader->bundle->last_blockhash,     ctx->reset_hash,    32UL );
    1028           0 :   memcpy( leader->bundle->tip_receiver_owner, tip_receiver_owner, 32UL );
    1029             : 
    1030           0 :   if( FD_UNLIKELY( leader->ticks_per_slot+leader->total_skipped_ticks>=MAX_SKIPPED_TICKS ) )
    1031           0 :     FD_LOG_ERR(( "Too many skipped ticks %lu for slot %lu, chain must halt", leader->ticks_per_slot+leader->total_skipped_ticks, slot ));
    1032             : 
    1033           0 :   ulong sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_BECAME_LEADER, 0UL );
    1034           0 :   fd_stem_publish( ctx->stem, ctx->pack_out->idx, sig, ctx->pack_out->chunk, sizeof(fd_became_leader_t), 0UL, 0UL, 0UL );
    1035           0 :   ctx->pack_out->chunk = fd_dcache_compact_next( ctx->pack_out->chunk, sizeof(fd_became_leader_t), ctx->pack_out->chunk0, ctx->pack_out->wmark );
    1036           0 : }
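                     : 
                     : /* A worked sketch of the slot timing fields computed above, under
                     :    illustrative parameters (64 ticks per slot, 400ms slots): resetting
                     :    onto slot 100 and becoming leader for slot 103 yields 192 skipped
                     :    ticks and a start time three slot durations after the reset: */
                     : 
                     : static inline void
                     : poh_became_leader_timing_sketch( void ) {
                     :   ulong  ticks_per_slot      = 64UL;   /* illustrative */
                     :   double slot_duration_ns    = 400e6;  /* illustrative 400ms */
                     :   ulong  reset_slot          = 100UL;
                     :   ulong  slot                = 103UL;
                     :   long   reset_slot_start_ns = 0L;
                     :   ulong total_skipped_ticks = ticks_per_slot*(slot-reset_slot);  /* 192 */
                     :   long  slot_start_ns = reset_slot_start_ns + (long)((double)(slot-reset_slot)*slot_duration_ns); /* +1.2s */
                     :   long  slot_end_ns   = (long)((double)slot_start_ns + slot_duration_ns);
                     :   (void)total_skipped_ticks; (void)slot_end_ns;
                     : }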
    1037             : 
    1038             : /* The PoH tile knows when it should become leader by waiting for its
    1039             :    leader slot (with the operating system clock).  This function exists
    1040             :    so that, when it becomes the leader, the replay stage can tell it
    1041             :    what the leader bank is.  See the notes in the long comment above
    1042             :    for more on how this works. */
    1043             : 
    1044             : CALLED_FROM_RUST void
    1045             : fd_ext_poh_begin_leader( void const * bank,
    1046             :                          ulong        slot,
    1047             :                          ulong        epoch,
    1048             :                          ulong        hashcnt_per_tick,
    1049             :                          ulong        cus_block_limit,
    1050             :                          ulong        cus_vote_cost_limit,
    1051           0 :                          ulong        cus_account_cost_limit ) {
    1052           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
    1053             : 
    1054           0 :   FD_TEST( !ctx->current_leader_bank );
    1055             : 
    1056           0 :   if( FD_UNLIKELY( slot!=ctx->slot ) )             FD_LOG_ERR(( "Trying to begin leader slot %lu but we are now on slot %lu", slot, ctx->slot ));
    1057           0 :   if( FD_UNLIKELY( slot!=ctx->next_leader_slot ) ) FD_LOG_ERR(( "Trying to begin leader slot %lu but next leader slot is %lu", slot, ctx->next_leader_slot ));
    1058             : 
    1059           0 :   if( FD_UNLIKELY( ctx->hashcnt_per_tick!=hashcnt_per_tick ) ) {
    1060           0 :     FD_LOG_WARNING(( "hashes per tick changed from %lu to %lu", ctx->hashcnt_per_tick, hashcnt_per_tick ));
    1061             : 
    1062             :     /* Recompute derived information about the clock. */
    1063           0 :     ctx->hashcnt_duration_ns = (double)ctx->tick_duration_ns/(double)hashcnt_per_tick;
    1064           0 :     ctx->hashcnt_per_slot = ctx->ticks_per_slot*hashcnt_per_tick;
    1065           0 :     ctx->hashcnt_per_tick = hashcnt_per_tick;
    1066             : 
    1067           0 :     if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
    1068             :       /* Low power producer, maximum of one microblock per tick in the slot */
    1069           0 :       ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
    1070           0 :     } else {
    1071             :       /* See the long comment in after_credit for this limit */
    1072           0 :       ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
    1073           0 :     }
    1074             : 
    1075             :     /* Discard any ticks we might have done in the interim.  They will
    1076             :        have the wrong number of hashes per tick.  We can just catch back
    1077             :        up quickly if not too many slots were skipped and hopefully
    1078             :        publish on time.  Note that tick production and verification of
    1079             :        skipped slots is done for the eventual bank that publishes a
    1080             :        slot, for example:
    1081             : 
    1082             :         Reset Slot:            998
    1083             :         Epoch Transition Slot: 1000
    1084             :         Leader Slot:           1002
    1085             : 
    1086             :        In this case, if a feature changing the hashcnt_per_tick is
    1087             :        activated in slot 1000, and we are publishing empty ticks for
    1088             :        slots 998, 999, 1000, and 1001, they should all have the new
    1089             :        hashes_per_tick number of hashes, rather than the older one, or
    1090             :        some combination. */
    1091             : 
    1092           0 :     FD_TEST( ctx->last_slot==ctx->reset_slot );
    1093           0 :     FD_TEST( !ctx->last_hashcnt );
    1094           0 :     ctx->slot = ctx->reset_slot;
    1095           0 :     ctx->hashcnt = 0UL;
    1096           0 :   }
    1097             : 
    1098           0 :   ctx->current_leader_bank     = bank;
    1099           0 :   ctx->slot_done               = 0;
    1100           0 :   ctx->microblocks_lower_bound = 0UL;
    1101           0 :   ctx->cus_used                = 0UL;
    1102             : 
    1103           0 :   ctx->limits.slot_max_cost                = cus_block_limit;
    1104           0 :   ctx->limits.slot_max_vote_cost           = cus_vote_cost_limit;
    1105           0 :   ctx->limits.slot_max_write_cost_per_acct = cus_account_cost_limit;
    1106             : 
    1107             :   /* clamp and warn if we are underutilizing CUs */
    1108           0 :   if( FD_UNLIKELY( ctx->limits.slot_max_cost > FD_PACK_MAX_COST_PER_BLOCK_UPPER_BOUND ) ) {
    1109           0 :     FD_LOG_WARNING(( "Underutilizing protocol slot CU limit. protocol_limit=%lu validator_limit=%lu", ctx->limits.slot_max_cost, FD_PACK_MAX_COST_PER_BLOCK_UPPER_BOUND ));
    1110           0 :     ctx->limits.slot_max_cost = FD_PACK_MAX_COST_PER_BLOCK_UPPER_BOUND;
    1111           0 :   }
    1112           0 :   if( FD_UNLIKELY( ctx->limits.slot_max_vote_cost > FD_PACK_MAX_VOTE_COST_PER_BLOCK_UPPER_BOUND ) ) {
    1113           0 :     FD_LOG_WARNING(( "Underutilizing protocol vote CU limit. protocol_limit=%lu validator_limit=%lu", ctx->limits.slot_max_vote_cost, FD_PACK_MAX_VOTE_COST_PER_BLOCK_UPPER_BOUND ));
    1114           0 :     ctx->limits.slot_max_vote_cost = FD_PACK_MAX_VOTE_COST_PER_BLOCK_UPPER_BOUND;
    1115           0 :   }
    1116           0 :   if( FD_UNLIKELY( ctx->limits.slot_max_write_cost_per_acct > FD_PACK_MAX_WRITE_COST_PER_ACCT_UPPER_BOUND ) ) {
    1117           0 :     FD_LOG_WARNING(( "Underutilizing protocol write CU limit. protocol_limit=%lu validator_limit=%lu", ctx->limits.slot_max_write_cost_per_acct, FD_PACK_MAX_WRITE_COST_PER_ACCT_UPPER_BOUND ));
    1118           0 :     ctx->limits.slot_max_write_cost_per_acct = FD_PACK_MAX_WRITE_COST_PER_ACCT_UPPER_BOUND;
    1119           0 :   }
    1120             : 
    1121             :   /* We are about to start publishing to the shred tile for this slot
    1122             :      so update the highwater mark so we never republish in this slot
    1123             :      again.  Also check that the leader slot is greater than the
    1124             :      highwater, which should have been ensured earlier. */
    1125             : 
    1126           0 :   FD_TEST( ctx->highwater_leader_slot==ULONG_MAX || slot>=ctx->highwater_leader_slot );
    1127           0 :   ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), slot );
    1128             : 
    1129           0 :   publish_became_leader( ctx, slot, epoch );
    1130           0 :   FD_LOG_INFO(( "fd_ext_poh_begin_leader(slot=%lu, highwater_leader_slot=%lu, last_slot=%lu, last_hashcnt=%lu)", slot, ctx->highwater_leader_slot, ctx->last_slot, ctx->last_hashcnt ));
    1131             : 
    1132           0 :   fd_ext_poh_write_unlock();
    1133           0 : }
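                     : 
                     : /* A minimal sketch of the microblock cap recomputed above: with
                     :    hashcnt_per_tick==1 (low power) each tick can carry at most one
                     :    microblock, otherwise every tick span offers hashcnt_per_tick-1
                     :    mixin opportunities, capped at MAX_MICROBLOCKS_PER_SLOT.  The
                     :    helper name is hypothetical: */
                     : 
                     : static inline ulong
                     : poh_max_microblocks_sketch( ulong ticks_per_slot,
                     :                             ulong hashcnt_per_tick ) {
                     :   if( hashcnt_per_tick==1UL ) return ticks_per_slot;
                     :   return fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ticks_per_slot*(hashcnt_per_tick-1UL) );
                     : }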
    1134             : 
    1135             : /* Determine the next slot in the leader schedule where we are the
    1136             :    leader.  Includes the current slot.  If we are not leader in what
    1137             :    remains of the current and next epoch, return ULONG_MAX. */
    1138             : 
    1139             : static inline CALLED_FROM_RUST ulong
    1140           0 : next_leader_slot( fd_poh_ctx_t * ctx ) {
    1141             :   /* If we have published anything in a particular slot, then we
    1142             :      should never become leader for that slot again. */
    1143           0 :   ulong min_leader_slot = fd_ulong_max( ctx->slot, fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ) );
    1144           0 :   return fd_multi_epoch_leaders_get_next_slot( ctx->mleaders, min_leader_slot, &ctx->identity_key );
    1145           0 : }
    1146             : 
    1147             : extern int
    1148             : fd_ext_admin_rpc_set_identity( uchar const * identity_keypair,
    1149             :                                int           require_tower );
    1150             : 
    1151             : static inline int FD_FN_SENSITIVE
    1152             : maybe_change_identity( fd_poh_ctx_t * ctx,
    1153           0 :                        int            definitely_not_leader ) {
    1154           0 :   if( FD_UNLIKELY( ctx->halted_switching_key && fd_keyswitch_state_query( ctx->keyswitch )==FD_KEYSWITCH_STATE_UNHALT_PENDING ) ) {
    1155           0 :     ctx->halted_switching_key = 0;
    1156           0 :     fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_COMPLETED );
    1157           0 :     return 1;
    1158           0 :   }
    1159             : 
    1160             :   /* Cannot change identity while in the middle of a leader slot, else
    1161             :      poh state machine would become corrupt. */
    1162             : 
    1163           0 :   int is_leader = !definitely_not_leader && ctx->next_leader_slot!=ULONG_MAX && ctx->slot>=ctx->next_leader_slot;
    1164           0 :   if( FD_UNLIKELY( is_leader ) ) return 0;
    1165             : 
    1166           0 :   if( FD_UNLIKELY( fd_keyswitch_state_query( ctx->keyswitch )==FD_KEYSWITCH_STATE_SWITCH_PENDING ) ) {
    1167           0 :     int failed = fd_ext_admin_rpc_set_identity( ctx->keyswitch->bytes, fd_keyswitch_param_query( ctx->keyswitch )==1 );
    1168           0 :     explicit_bzero( ctx->keyswitch->bytes, 32UL );
    1169           0 :     FD_COMPILER_MFENCE();
    1170           0 :     if( FD_UNLIKELY( failed==-1 ) ) {
    1171           0 :       fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_FAILED );
    1172           0 :       return 0;
    1173           0 :     }
    1174             : 
    1175           0 :     memcpy( ctx->identity_key.uc, ctx->keyswitch->bytes+32UL, 32UL );
    1176             : 
    1177             :     /* When we switch key, we might have ticked part way through a slot
    1178             :        that we are now leader in.  This violates the contract of the
    1179             :        tile, that when we become leader, we have not ticked in that slot
    1180             :        at all.  To see why this would be bad, consider the case where we
    1181             :        have ticked almost to the end, and there isn't enough space left
    1182             :        to reserve the minimum number of microblocks needed by pack.
    1183             : 
    1184             :        To resolve this, we just reset PoH back to the reset slot, and
    1185             :        let it try to catch back up quickly. This is OK since the network
    1186             :        rarely skips. */
    1187           0 :     ctx->slot    = ctx->reset_slot;
    1188           0 :     ctx->hashcnt = 0UL;
    1189           0 :     memcpy( ctx->hash, ctx->reset_hash, 32UL );
    1190             : 
    1191           0 :     ctx->halted_switching_key = 1;
    1192           0 :     ctx->keyswitch->result    = ctx->shred_seq;
    1193           0 :     fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_COMPLETED );
    1194           0 :   }
    1195             : 
    1196           0 :   return 0;
    1197           0 : }
    1198             : 
    1199             : static CALLED_FROM_RUST void
    1200           0 : no_longer_leader( fd_poh_ctx_t * ctx ) {
    1201           0 :   if( FD_UNLIKELY( ctx->current_leader_bank ) ) fd_ext_bank_release( ctx->current_leader_bank );
    1202             :   /* If we stop being leader in a slot, we can never become leader in
    1203             :       that slot again, and all in-flight microblocks for that slot
    1204             :       should be dropped. */
    1205           0 :   ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), ctx->slot );
    1206           0 :   ctx->current_leader_bank = NULL;
    1207           0 :   int identity_changed = maybe_change_identity( ctx, 1 );
    1208           0 :   ctx->next_leader_slot = next_leader_slot( ctx );
    1209           0 :   if( FD_UNLIKELY( identity_changed ) ) {
    1210           0 :     FD_LOG_INFO(( "fd_poh_identity_changed(next_leader_slot=%lu)", ctx->next_leader_slot ));
    1211           0 :   }
    1212             : 
    1213           0 :   FD_COMPILER_MFENCE();
    1214           0 :   fd_ext_poh_signal_leader_change( ctx->signal_leader_change );
    1215           0 :   FD_LOG_INFO(( "no_longer_leader(next_leader_slot=%lu)", ctx->next_leader_slot ));
    1216           0 : }
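                     : 
                     : /* The highwater update idiom above (also used in
                     :    fd_ext_poh_begin_leader and fd_ext_poh_reset) treats ULONG_MAX as
                     :    "unset".  An equivalent, more explicit sketch: */
                     : 
                     : static inline ulong
                     : poh_highwater_update_sketch( ulong highwater_leader_slot,
                     :                              ulong slot ) {
                     :   if( highwater_leader_slot==ULONG_MAX ) return slot; /* unset: take the slot */
                     :   return fd_ulong_max( highwater_leader_slot, slot ); /* else monotonic max */
                     : }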
    1217             : 
    1218             : /* fd_ext_poh_reset is called by the Agave client when a slot on
    1219             :    the active fork has finished a block and we need to reset our PoH to
    1220             :    be ticking on top of the block it produced. */
    1221             : 
    1222             : CALLED_FROM_RUST void
    1223             : fd_ext_poh_reset( ulong         completed_bank_slot, /* The slot that successfully produced a block */
    1224             :                   uchar const * reset_blockhash,     /* The hash of the last tick in the produced block */
    1225             :                   ulong         hashcnt_per_tick,    /* The hashcnt per tick of the bank that completed */
    1226             :                   uchar const * parent_block_id,     /* The block id of the parent block */
    1227           0 :                   ulong const * features_activation  /* The activation slot of shred-tile features */ ) {
    1228           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
    1229             : 
    1230           0 :   ulong slot_before_reset = ctx->slot;
    1231           0 :   int leader_before_reset = ctx->slot>=ctx->next_leader_slot;
    1232           0 :   if( FD_UNLIKELY( leader_before_reset && ctx->current_leader_bank ) ) {
    1233             :     /* If we were in the middle of a leader slot that we notified pack
    1234             :        to start packing for, we can never publish into that slot again,
    1235             :        so mark all in-flight microblocks to be dropped. */
    1236           0 :     ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), 1UL+ctx->slot );
    1237           0 :   }
    1238             : 
    1239           0 :   ctx->leader_bank_start_ns = fd_log_wallclock(); /* safe to call from Rust */
    1240           0 :   if( FD_UNLIKELY( ctx->expect_sequential_leader_slot==(completed_bank_slot+1UL) ) ) {
    1241             :     /* If we are being reset onto a slot, it means some block was fully
    1242             :        processed, so we reset to build on top of it.  Typically we want
    1243             :        to update the reset_slot_start_ns to the current time, because
    1244             :        the network will give the next leader 400ms to publish,
    1245             :        regardless of how long the prior leader took.
    1246             : 
    1247             :        But: if we were leader in the prior slot, and the block was our
    1248             :        own, we can do better.  We know that the next slot should start
    1249             :        exactly 400ms after the prior one started, so we can use that as
    1250             :        the reset slot start time instead. */
    1251           0 :     ctx->reset_slot_start_ns = ctx->reset_slot_start_ns + (long)((double)((completed_bank_slot+1UL)-ctx->reset_slot)*ctx->slot_duration_ns);
    1252           0 :   } else {
    1253           0 :     ctx->reset_slot_start_ns = ctx->leader_bank_start_ns;
    1254           0 :   }
    1255           0 :   ctx->expect_sequential_leader_slot = ULONG_MAX;
    1256             : 
    1257           0 :   memcpy( ctx->reset_hash, reset_blockhash, 32UL );
    1258           0 :   memcpy( ctx->hash, reset_blockhash, 32UL );
    1259           0 :   if( FD_LIKELY( parent_block_id!=NULL ) ) {
    1260           0 :     ctx->parent_slot = completed_bank_slot;
    1261           0 :     memcpy( ctx->parent_block_id, parent_block_id, 32UL );
    1262           0 :   } else {
    1263           0 :     FD_LOG_WARNING(( "fd_ext_poh_reset(block_id=null,reset_slot=%lu,parent_slot=%lu) - ignored", completed_bank_slot, ctx->parent_slot ));
    1264           0 :   }
    1265           0 :   ctx->slot         = completed_bank_slot+1UL;
    1266           0 :   ctx->hashcnt      = 0UL;
    1267           0 :   ctx->last_slot    = ctx->slot;
    1268           0 :   ctx->last_hashcnt = 0UL;
    1269           0 :   ctx->reset_slot   = ctx->slot;
    1270             : 
    1271           0 :   if( FD_UNLIKELY( ctx->hashcnt_per_tick!=hashcnt_per_tick ) ) {
    1272           0 :     FD_LOG_WARNING(( "hashes per tick changed from %lu to %lu", ctx->hashcnt_per_tick, hashcnt_per_tick ));
    1273             : 
    1274             :     /* Recompute derived information about the clock. */
    1275           0 :     ctx->hashcnt_duration_ns = (double)ctx->tick_duration_ns/(double)hashcnt_per_tick;
    1276           0 :     ctx->hashcnt_per_slot = ctx->ticks_per_slot*hashcnt_per_tick;
    1277           0 :     ctx->hashcnt_per_tick = hashcnt_per_tick;
    1278             : 
    1279           0 :     if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
    1280             :       /* Low power producer, maximum of one microblock per tick in the slot */
    1281           0 :       ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
    1282           0 :     } else {
    1283             :       /* See the long comment in after_credit for this limit */
    1284           0 :       ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
    1285           0 :     }
    1286           0 :   }
    1287             : 
    1288             :   /* When we reset, we need to allow PoH to tick freely again rather
    1289             :      than being constrained.  If we are leader after the reset, this
    1290             :      is OK because we won't tick until we get a bank, and the lower
    1291             :      bound will be reset with the value from the bank. */
    1292           0 :   ctx->microblocks_lower_bound = ctx->max_microblocks_per_slot;
    1293             : 
    1294           0 :   if( FD_UNLIKELY( leader_before_reset ) ) {
    1295             :     /* No longer have a leader bank if we are reset. Replay stage will
    1296             :        call back again to give us a new one if we should become leader
    1297             :        for the reset slot.
    1298             : 
    1299             :        The order is important here, ctx->hashcnt must be updated before
    1300             :        calling no_longer_leader. */
    1301           0 :     no_longer_leader( ctx );
    1302           0 :   }
    1303           0 :   ctx->next_leader_slot = next_leader_slot( ctx );
    1304           0 :   FD_LOG_INFO(( "fd_ext_poh_reset(slot=%lu,next_leader_slot=%lu)", ctx->reset_slot, ctx->next_leader_slot ));
    1305             : 
    1306           0 :   if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
    1307             :     /* We are leader after the reset... two cases: */
    1308           0 :     if( FD_LIKELY( ctx->slot==slot_before_reset ) ) {
    1309             :       /* 1. We are reset onto the same slot we are already leader on.
    1310             :             This is a common case when we have two leader slots in a
    1311             :             row, replay stage will reset us to our own slot.  No need to
    1312             :             do anything here, we already sent a SLOT_START. */
    1313           0 :       FD_TEST( leader_before_reset );
    1314           0 :     } else {
    1315             :       /* 2. We are reset onto a different slot. If we were leader
    1316             :             before, we should first end that slot, then begin the new
    1317             :             one if we are newly leader now. */
    1318           0 :       if( FD_LIKELY( leader_before_reset ) ) publish_plugin_slot_end( ctx, slot_before_reset, ctx->cus_used );
    1319           0 :       else                                   publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->reset_slot );
    1320           0 :     }
    1321           0 :   } else {
    1322           0 :     if( FD_UNLIKELY( leader_before_reset ) ) publish_plugin_slot_end( ctx, slot_before_reset, ctx->cus_used );
    1323           0 :   }
    1324             : 
    1325             :   /* There is a subset of FD_SHRED_FEATURES_ACTIVATION_... slots that
    1326             :       the shred tile needs to be aware of.  Since their computation
    1327             :       requires the bank, we are forced (so far) to receive them here
    1328             :       from the Rust side, before forwarding them to the shred tile as
    1329             :       POH_PKT_TYPE_FEAT_ACT_SLOT.  This is not elegant, and it should
    1330             :       be revised in the future (TODO), but it provides a "temporary"
    1331             :       working solution to handle features activation. */
    1332           0 :   fd_memcpy( ctx->features_activation->slots, features_activation, sizeof(fd_shred_features_activation_t) );
    1333           0 :   ctx->features_activation_avail = 1UL;
    1334             : 
    1335           0 :   fd_ext_poh_write_unlock();
    1336           0 : }
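                     : 
                     : /* A minimal sketch of the sequential leader reset timestamp above:
                     :    when the completed block was our own, the next slot's start is
                     :    pinned one slot duration per slot after the prior reset start
                     :    rather than to the wall clock.  The 400ms figure is illustrative: */
                     : 
                     : static inline long
                     : poh_sequential_reset_start_sketch( long  reset_slot_start_ns,
                     :                                    ulong reset_slot,
                     :                                    ulong completed_bank_slot ) {
                     :   double slot_duration_ns = 400e6; /* illustrative */
                     :   return reset_slot_start_ns + (long)((double)((completed_bank_slot+1UL)-reset_slot)*slot_duration_ns);
                     : }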
    1337             : 
    1338             : /* Since C can't easily return an Option<Pubkey> across the FFI
    1339             :    boundary, return 1 for Some and 0 for None. */
    1340             : CALLED_FROM_RUST int
    1341             : fd_ext_poh_get_leader_after_n_slots( ulong n,
    1342           0 :                                      uchar out_pubkey[ static 32 ] ) {
    1343           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
    1344           0 :   ulong slot = ctx->slot + n;
    1345           0 :   fd_pubkey_t const * leader = fd_multi_epoch_leaders_get_leader_for_slot( ctx->mleaders, slot );
    1346             : 
    1347           0 :   int copied = 0;
    1348           0 :   if( FD_LIKELY( leader ) ) {
    1349           0 :     memcpy( out_pubkey, leader, 32UL );
    1350           0 :     copied = 1;
    1351           0 :   }
    1352           0 :   fd_ext_poh_write_unlock();
    1353           0 :   return copied;
    1354           0 : }
    1355             : 
    1356             : FD_FN_CONST static inline ulong
    1357           0 : scratch_align( void ) {
    1358           0 :   return 128UL;
    1359           0 : }
    1360             : 
    1361             : FD_FN_PURE static inline ulong
    1362           0 : scratch_footprint( fd_topo_tile_t const * tile ) {
    1363           0 :   (void)tile;
    1364           0 :   ulong l = FD_LAYOUT_INIT;
    1365           0 :   l = FD_LAYOUT_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
    1366           0 :   l = FD_LAYOUT_APPEND( l, FD_SHA256_ALIGN, FD_SHA256_FOOTPRINT );
    1367           0 :   return FD_LAYOUT_FINI( l, scratch_align() );
    1368           0 : }
    1369             : 
    1370             : static void
    1371             : publish_tick( fd_poh_ctx_t *      ctx,
    1372             :               fd_stem_context_t * stem,
    1373             :               uchar               hash[ static 32 ],
    1374           0 :               int                 is_skipped ) {
    1375           0 :   ulong hashcnt = ctx->hashcnt_per_tick*(1UL+(ctx->last_hashcnt/ctx->hashcnt_per_tick));
    1376             : 
    1377           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
    1378             : 
    1379           0 :   FD_TEST( ctx->last_slot>=ctx->reset_slot );
    1380           0 :   fd_entry_batch_meta_t * meta = (fd_entry_batch_meta_t *)dst;
    1381           0 :   if( FD_UNLIKELY( is_skipped ) ) {
    1382             :     /* We are publishing ticks for a skipped slot, the reference tick
    1383             :        and block complete flags should always be zero. */
    1384           0 :     meta->reference_tick = 0UL;
    1385           0 :     meta->block_complete = 0;
    1386           0 :   } else {
    1387           0 :     meta->reference_tick = hashcnt/ctx->hashcnt_per_tick;
    1388           0 :     meta->block_complete = hashcnt==ctx->hashcnt_per_slot;
    1389           0 :   }
    1390             : 
    1391           0 :   ulong slot = fd_ulong_if( meta->block_complete, ctx->slot-1UL, ctx->slot );
    1392           0 :   meta->parent_offset = 1UL+slot-ctx->reset_slot;
    1393             : 
    1394             :   /* From poh_reset we received the block_id for ctx->parent_slot.
    1395             :      Now we're telling shred tile to build on parent: (slot-meta->parent_offset).
    1396             :      The block_id that we're passing is valid iff the two are the same,
    1397             :      i.e. ctx->parent_slot == (slot-meta->parent_offset). */
    1398           0 :   meta->parent_block_id_valid = ctx->parent_slot == (slot-meta->parent_offset);
    1399           0 :   if( FD_LIKELY( meta->parent_block_id_valid ) ) {
    1400           0 :     fd_memcpy( meta->parent_block_id, ctx->parent_block_id, 32UL );
    1401           0 :   }
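                     : 
                     :   /* Worked example (illustrative numbers): resetting onto parent slot
                     :      10 gives reset_slot 11; the first tick of slot 11 then has
                     :      parent_offset 1 and 11-1==10==ctx->parent_slot, so the block id
                     :      is forwarded.  If the reset arrived without a block id,
                     :      ctx->parent_slot still names an older ancestor, the equality
                     :      fails, and the flag stays clear. */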
    1402             : 
    1403           0 :   FD_TEST( hashcnt>ctx->last_hashcnt );
    1404           0 :   ulong hash_delta = hashcnt-ctx->last_hashcnt;
    1405             : 
    1406           0 :   dst += sizeof(fd_entry_batch_meta_t);
    1407           0 :   fd_entry_batch_header_t * tick = (fd_entry_batch_header_t *)dst;
    1408           0 :   tick->hashcnt_delta = hash_delta;
    1409           0 :   fd_memcpy( tick->hash, hash, 32UL );
    1410           0 :   tick->txn_cnt = 0UL;
    1411             : 
    1412           0 :   ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
    1413           0 :   ulong sz = sizeof(fd_entry_batch_meta_t)+sizeof(fd_entry_batch_header_t);
    1414           0 :   ulong sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_MICROBLOCK, 0UL );
    1415           0 :   fd_stem_publish( stem, ctx->shred_out->idx, sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
    1416           0 :   ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
    1417           0 :   ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
    1418             : 
    1419           0 :   if( FD_UNLIKELY( hashcnt==ctx->hashcnt_per_slot ) ) {
    1420           0 :     ctx->last_slot++;
    1421           0 :     ctx->last_hashcnt = 0UL;
    1422           0 :   } else {
    1423           0 :     ctx->last_hashcnt = hashcnt;
    1424           0 :   }
    1425           0 : }
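                     : 
                     : /* A minimal sketch of the tick rounding at the top of publish_tick:
                     :    the published tick's hashcnt is last_hashcnt rounded up to the
                     :    next tick boundary.  E.g. with hashcnt_per_tick==62500 and
                     :    last_hashcnt==62500 this returns 125000: */
                     : 
                     : static inline ulong
                     : poh_next_tick_hashcnt_sketch( ulong hashcnt_per_tick,
                     :                               ulong last_hashcnt ) {
                     :   return hashcnt_per_tick*(1UL+(last_hashcnt/hashcnt_per_tick));
                     : }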
    1426             : 
    1427             : static inline void
    1428             : publish_features_activation(  fd_poh_ctx_t *      ctx,
    1429           0 :                               fd_stem_context_t * stem ) {
    1430           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
    1431           0 :   fd_shred_features_activation_t * act_data = (fd_shred_features_activation_t *)dst;
    1432           0 :   fd_memcpy( act_data, ctx->features_activation, sizeof(fd_shred_features_activation_t) );
    1433             : 
    1434           0 :   ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
    1435           0 :   ulong sz = sizeof(fd_shred_features_activation_t);
    1436           0 :   ulong sig = fd_disco_poh_sig( ctx->slot, POH_PKT_TYPE_FEAT_ACT_SLOT, 0UL );
    1437           0 :   fd_stem_publish( stem, ctx->shred_out->idx, sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
    1438           0 :   ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
    1439           0 :   ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
    1440           0 : }
    1441             : 
    1442             : static inline void
    1443             : after_credit( fd_poh_ctx_t *      ctx,
    1444             :               fd_stem_context_t * stem,
    1445             :               int *               opt_poll_in,
    1446           0 :               int *               charge_busy ) {
    1447           0 :   ctx->stem = stem;
    1448             : 
    1449           0 :   FD_COMPILER_MFENCE();
    1450           0 :   if( FD_UNLIKELY( fd_poh_waiting_lock ) )  {
    1451           0 :     FD_VOLATILE( fd_poh_returned_lock ) = 1UL;
    1452           0 :     FD_COMPILER_MFENCE();
    1453           0 :     for(;;) {
    1454           0 :       if( FD_UNLIKELY( !FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
    1455           0 :       FD_SPIN_PAUSE();
    1456           0 :     }
    1457           0 :     FD_COMPILER_MFENCE();
    1458           0 :     FD_VOLATILE( fd_poh_waiting_lock ) = 0UL;
    1459           0 :     *opt_poll_in = 0;
    1460           0 :     *charge_busy = 1;
    1461           0 :     return;
    1462           0 :   }
    1463           0 :   FD_COMPILER_MFENCE();
    1464             : 
    1465           0 :   if( FD_UNLIKELY( ctx->features_activation_avail ) ) {
    1466             :     /* If we have received an update on features_activation, then
    1467             :        forward it to the shred tile.  In principle, this should
    1468             :        happen at most once per slot. */
    1469           0 :     publish_features_activation( ctx, stem );
    1470           0 :     ctx->features_activation_avail = 0UL;
    1471           0 :   }
    1472             : 
    1473           0 :   int is_leader = ctx->next_leader_slot!=ULONG_MAX && ctx->slot>=ctx->next_leader_slot;
    1474           0 :   if( FD_UNLIKELY( is_leader && !ctx->current_leader_bank ) ) {
    1475             :     /* If we are the leader, but we didn't yet learn what the leader
    1476             :        bank object is from the replay stage, do not do any hashing.
    1477             : 
    1478             :        This is not ideal, but greatly simplifies the control flow. */
    1479           0 :     return;
    1480           0 :   }
    1481             : 
    1482             :   /* If we have skipped ticks pending because we skipped some slots to
    1483             :      become leader, register them now one at a time. */
    1484           0 :   if( FD_UNLIKELY( is_leader && ctx->last_slot<ctx->slot ) ) {
    1485           0 :     ulong publish_hashcnt = ctx->last_hashcnt+ctx->hashcnt_per_tick;
    1486           0 :     ulong tick_idx = (ctx->last_slot*ctx->ticks_per_slot+publish_hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
    1487             : 
    1488           0 :     fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->skipped_tick_hashes[ tick_idx ] );
    1489           0 :     publish_tick( ctx, stem, ctx->skipped_tick_hashes[ tick_idx ], 1 );
    1490             : 
    1491             :     /* If we are catching up now and publishing a bunch of skipped
    1492             :        ticks, we do not want to process any incoming microblocks until
    1493             :        all the skipped ticks have been published out; otherwise we would
    1494             :        intersperse skipped tick messages with microblocks. */
    1495           0 :     *opt_poll_in = 0;
    1496           0 :     *charge_busy = 1;
    1497           0 :     return;
    1498           0 :   }
    1499             : 
    1500           0 :   int low_power_mode = ctx->hashcnt_per_tick==1UL;
    1501             : 
    1502             :   /* If we are the leader, always leave enough capacity in the slot so
    1503             :      that we can mixin any potential microblocks still coming from the
    1504             :      pack tile for this slot. */
    1505           0 :   ulong max_remaining_microblocks = ctx->max_microblocks_per_slot - ctx->microblocks_lower_bound;
    1506             : 
    1507             :   /* We don't want to tick over (finish) the slot until pack tells us
    1508             :      it's done.  If we're waiting on pack, clamp to at least 1. */
    1509           0 :   if( FD_LIKELY( !ctx->slot_done && is_leader ) ) max_remaining_microblocks = fd_ulong_max( 1UL, max_remaining_microblocks );
    1510             : 
    1511             :   /* With hashcnt_per_tick hashes per tick, we actually get
    1512             :      hashcnt_per_tick-1 chances to mixin a microblock.  For each tick
    1513             :      span that we need to reserve, we also need to reserve the hashcnt
    1514             :      for the tick, hence the +
    1515             :      max_remaining_microblocks/(hashcnt_per_tick-1) rounded up.
    1516             : 
    1517             :      However, if hashcnt_per_tick is 1 because we're in low power mode,
    1518             :      this should probably just be max_remaining_microblocks. */
    1519           0 :   ulong max_remaining_ticks_or_microblocks = max_remaining_microblocks;
    1520           0 :   if( FD_LIKELY( !low_power_mode ) ) max_remaining_ticks_or_microblocks += (max_remaining_microblocks+ctx->hashcnt_per_tick-2UL)/(ctx->hashcnt_per_tick-1UL);
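                     : 
                     :   /* Worked example (illustrative numbers): with hashcnt_per_tick==62500
                     :      and max_remaining_microblocks==5, we reserve 5 hashcnts for the
                     :      mixins plus (5+62498)/62499==1 hashcnt for the tick spanning them,
                     :      so max_remaining_ticks_or_microblocks==6. */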
    1521             : 
    1522           0 :   ulong restricted_hashcnt = fd_ulong_if( ctx->hashcnt_per_slot>=max_remaining_ticks_or_microblocks, ctx->hashcnt_per_slot-max_remaining_ticks_or_microblocks, 0UL );
    1523             : 
    1524           0 :   ulong min_hashcnt = ctx->hashcnt;
    1525             : 
    1526           0 :   if( FD_LIKELY( !low_power_mode ) ) {
    1527             :     /* Recall that there are two kinds of events that will get published
    1528             :        to the shredder,
    1529             : 
    1530             :          (a) Ticks. These occur every 62,500 (hashcnt_per_tick) hashcnts,
    1531             :              and there will be 64 (ticks_per_slot) of them in each slot.
    1532             : 
    1533             :              Ticks must not have any transactions mixed into the hash.
    1534             :              This is not strictly needed in theory, but is required by the
    1535             :              current consensus protocol.  They get published here in
    1536             :              after_credit.
    1537             : 
    1538             :          (b) Microblocks.  These can occur at any other hashcnt, as long
    1539             :              as it is not a tick.  Microblocks cannot be empty, and must
    1540             :              have at least one transaction mixed in.  These get
    1541             :              published in after_frag.
    1542             : 
    1543             :        If hashcnt_per_tick is 1, then we are in low power mode and the
    1544             :        following does not apply, since we can mix in transactions at any
    1545             :        time.
    1546             : 
    1547             :        In the normal, non-low-power mode, though, we have to be careful
    1548             :        to make sure that we do not publish microblocks on tick
    1549             :        boundaries.  To do that, we need to obey two rules:
    1550             :          (i)  after_credit must not leave hashcnt one before a tick
    1551             :               boundary
    1552             :          (ii) if after_credit begins one before a tick boundary, it must
    1553             :               advance hashcnt and publish the tick
    1554             : 
    1555             :        There's some interplay between min_hashcnt and restricted_hashcnt
    1556             :        here, and we need to show that there's always a value of
    1557             :        target_hashcnt we can pick such that
    1558             :            min_hashcnt <= target_hashcnt <= restricted_hashcnt.
    1559             :        We'll prove this by induction for current_slot==0 and
    1560             :        is_leader==true, since all other slots should be the same.
    1561             : 
    1562             :        Let m_j and r_j be the min_hashcnt and restricted_hashcnt
    1563             :        (respectively) for the jth call to after_credit in a slot.  We
    1564             :        want to show that for all values of j, it's possible to pick a
    1565             :        value h_j, the value of target_hashcnt for the jth call to
    1566             :        after_credit (which is also the value of hashcnt after
    1567             :        after_credit has completed) such that m_j<=h_j<=r_j.
    1568             : 
    1569             :        Additionally, let T be hashcnt_per_tick and N be ticks_per_slot.
    1570             : 
    1571             :        Starting with the base case, j==0.  m_j=0, and
    1572             :          r_0 = N*T - max_microblocks_per_slot
    1573             :                    - ceil(max_microblocks_per_slot/(T-1)).
    1574             : 
    1575             :        This is monotonic decreasing in max_microblocks_per_slot, so it
    1576             :        achieves its minimum when max_microblocks_per_slot is its
    1577             :        maximum.
    1578             :            r_0 >= N*T - N*(T-1) - ceil( (N*(T-1))/(T-1))
    1579             :                 = N*T - N*(T-1)-N = 0.
    1580             :        Thus, m_0 <= r_0, as desired.
    1581             : 
    1582             : 
    1583             : 
    1584             :        Then, for the inductive step, assume there exists h_j such that
    1585             :        m_j<=h_j<=r_j, and we want to show that there exists h_{j+1},
    1586             :        which is the same as showing m_{j+1}<=r_{j+1}.
    1587             : 
    1588             :        Let a_j be 1 if we had a microblock immediately following the jth
    1589             :        call to after_credit, and 0 otherwise.  Then hashcnt at the start
    1590             :        of the (j+1)th call to after_credit is h_j+a_j.
    1591             :        Also, set b_{j+1}=1 if we are in the case covered by rule (ii)
    1592             :        above during the (j+1)th call to after_credit, i.e. if
    1593             :        (h_j+a_j)%T==T-1.  Thus, m_{j+1} = h_j + a_j + b_{j+1}.
    1594             : 
    1595             :        If we received an additional microblock, then
    1596             :        max_remaining_microblocks goes down by 1, and
    1597             :        max_remaining_ticks_or_microblocks goes down by either 1 or 2,
    1598             :        which means restricted_hashcnt goes up by either 1 or 2.  In
    1599             :        particular, it goes up by 2 if the new value of
    1600             :        max_remaining_microblocks (at the start of the (j+1)th call to
    1601             :        after_credit) is congruent to 0 mod T-1.  Let b'_{j+1} be 1 if
    1602             :        this condition is met and 0 otherwise.  If we receive a
    1603             :        done_packing message, restricted_hashcnt can go up by more, but
    1604             :        we can ignore that case, since it is less restrictive.
    1605             :        Thus, r_{j+1}=r_j+a_j+b'_{j+1}.
    1606             : 
    1607             :        If h_j < r_j (strictly less), then h_j+a_j < r_j+a_j.  And thus,
    1608             :        since b_{j+1}<=b'_{j+1}+1, just by virtue of them both being
    1609             :        binary,
    1610             :              h_j + a_j + b_{j+1} <  r_j + a_j + b'_{j+1} + 1,
    1611             :        which is the same (for integers) as
    1612             :              h_j + a_j + b_{j+1} <= r_j + a_j + b'_{j+1},
    1613             :                  m_{j+1}         <= r_{j+1}
    1614             : 
    1615             :        On the other hand, if h_j==r_j, this is easy unless b_{j+1}==1,
    1616             :        which can also only happen if a_j==1.  Then (h_j+a_j)%T==T-1,
    1617             :        which means there's an integer k such that
    1618             : 
    1619             :              h_j+a_j==(ticks_per_slot-k)*T-1
    1620             :              h_j    ==ticks_per_slot*T -  k*(T-1)-1  - k-1
    1621             :                     ==ticks_per_slot*T - (k*(T-1)+1) - ceil( (k*(T-1)+1)/(T-1) )
    1622             : 
    1623             :        Since h_j==r_j in this case, and
    1624             :        r_j==(ticks_per_slot*T) - max_remaining_microblocks_j - ceil(max_remaining_microblocks_j/(T-1)),
    1625             :        we can see that the value of max_remaining_microblocks at the
    1626             :        start of the jth call to after_credit is k*(T-1)+1.  Again, since
    1627             :        a_j==1, then the value of max_remaining_microblocks at the start
    1628             :        of the j+1th call to after_credit decreases by 1 to k*(T-1),
    1629             :        which means b'_{j+1}=1.
    1630             : 
    1631             :        Thus, h_j + a_j + b_{j+1} == r_j + a_j + b'_{j+1}, so, in
    1632             :        particular, h_{j+1}<=r_{j+1} as desired. */
    1633           0 :      min_hashcnt += (ulong)(min_hashcnt%ctx->hashcnt_per_tick == (ctx->hashcnt_per_tick-1UL)); /* add b_{j+1}, enforcing rule (ii) */
    1634           0 :   }
    1635             :   /* Now figure out how many hashes are needed to "catch up" the hash
    1636             :      count to the current system clock, and clamp it to the allowed
    1637             :      range. */
    1638           0 :   long now = fd_log_wallclock();
    1639           0 :   ulong target_hashcnt;
    1640           0 :   if( FD_LIKELY( !is_leader ) ) {
    1641           0 :     target_hashcnt = (ulong)((double)(now - ctx->reset_slot_start_ns) / ctx->hashcnt_duration_ns) - (ctx->slot-ctx->reset_slot)*ctx->hashcnt_per_slot;
    1642           0 :   } else {
    1643             :     /* We might have gotten very behind on hashes, but if we are leader
    1644             :        we want to catch up gradually over the remainder of our leader
    1645             :        slot, not all at once right now.  This helps keep the tile from
    1646             :        being oversubscribed and taking a long time to process incoming
    1647             :        microblocks. */
    1648           0 :     long expected_slot_start_ns = ctx->reset_slot_start_ns + (long)((double)(ctx->slot-ctx->reset_slot)*ctx->slot_duration_ns);
    1649           0 :     double actual_slot_duration_ns = ctx->slot_duration_ns<(double)(ctx->leader_bank_start_ns - expected_slot_start_ns) ? 0.0 : ctx->slot_duration_ns - (double)(ctx->leader_bank_start_ns - expected_slot_start_ns);
    1650           0 :     double actual_hashcnt_duration_ns = actual_slot_duration_ns / (double)ctx->hashcnt_per_slot;
    1651           0 :     target_hashcnt = fd_ulong_if( actual_hashcnt_duration_ns==0.0, restricted_hashcnt, (ulong)((double)(now - ctx->leader_bank_start_ns) / actual_hashcnt_duration_ns) );
    1652           0 :   }
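                     : 
                     :   /* Worked example (illustrative): if the bank arrived 100ms into a
                     :      400ms slot, actual_slot_duration_ns is 300ms, so the remaining
                     :      hashes of the slot are spread over those 300ms instead of being
                     :      burned through immediately. */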
    1653             :   /* Clamp to [min_hashcnt, restricted_hashcnt] as above */
    1654           0 :   target_hashcnt = fd_ulong_max( fd_ulong_min( target_hashcnt, restricted_hashcnt ), min_hashcnt );
    1655             : 
    1656             :   /* The above proof showed that it was always possible to pick a value
    1657             :      of target_hashcnt, but we still have a lot of freedom in how to
    1658             :      pick it.  It simplifies the code a lot if we don't keep going after
    1659             :      a tick in this function.  In particular, we want to publish at most
    1660             :      1 tick in this call, since otherwise we could consume infinite
    1661             :      credits to publish here.  The credits are set so that we should
    1662             :      only ever publish one tick during this loop.  Also, all the extra
    1663             :      stuff (leader transitions, publishing ticks, etc.) we have to do
    1664             :      happens at tick boundaries, so this lets us consolidate all those
    1665             :      cases.
    1666             : 
    1667             :      Mathematically, since the current value of hashcnt is h_j+a_j, the
    1668             :      next tick (advancing a full tick if we're currently at a tick) is
    1669             :      t_{j+1} = T*(floor( (h_j+a_j)/T )+1).  We need to show that if we set
    1670             :      h'_{j+1} = min( h_{j+1}, t_{j+1} ), it is still valid.
    1671             : 
    1672             :      First, h'_{j+1} <= h_{j+1} <= r_{j+1}, so we're okay in that
    1673             :      direction.
    1674             : 
    1675             :      Next, observe that t_{j+1}>=h_j + a_j + 1, and recall that b_{j+1}
    1676             :      is 0 or 1. So then,
    1677             :                     t_{j+1} >= h_j+a_j+b_{j+1} = m_{j+1}.
    1678             : 
    1679             :      We know h_{j+1} >= m_{j+1} from before, so then h'_{j+1} >=
    1680             :      m_{j+1}, as desired. */
    1681             : 
    1682           0 :   ulong next_tick_hashcnt = ctx->hashcnt_per_tick * (1UL+(ctx->hashcnt/ctx->hashcnt_per_tick));
    1683           0 :   target_hashcnt = fd_ulong_min( target_hashcnt, next_tick_hashcnt );
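                     :   /* For example, with T=62,500: at hashcnt 0 the next tick boundary
                     :      is 62,500; exactly on a tick at 62,500 it advances a full tick
                     :      to 125,000; and mid-tick at 62,501 it is still 125,000.  The
                     :      boundary is always strictly ahead of the current hashcnt, as
                     :      the proof requires. */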
    1684             : 
    1685             :   /* We still need to enforce rule (i). We know that min_hashcnt%T !=
    1686             :      T-1 because of rule (ii).  That means that if target_hashcnt%T ==
    1687             :      T-1 at this point, target_hashcnt > min_hashcnt (notice the
    1688             :      strict), so target_hashcnt-1 >= min_hashcnt and is thus still a
    1689             :      valid choice for target_hashcnt. */
    1690           0 :   target_hashcnt -= (ulong)( (!low_power_mode) & ((target_hashcnt%ctx->hashcnt_per_tick)==(ctx->hashcnt_per_tick-1UL)) );
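                     :   /* E.g. with T=62,500 and low power mode off: if the clamp landed
                     :      us at target_hashcnt==124,999 (one hash short of the second
                     :      tick), we step back to 124,998.  Rule (ii) on min_hashcnt
                     :      guarantees this is still >= min_hashcnt. */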
    1691             : 
    1692           0 :   FD_TEST( target_hashcnt >= ctx->hashcnt       );
    1693           0 :   FD_TEST( target_hashcnt >= min_hashcnt        );
    1694           0 :   FD_TEST( target_hashcnt <= restricted_hashcnt );
    1695             : 
    1696           0 :   if( FD_UNLIKELY( ctx->hashcnt==target_hashcnt ) ) return; /* Nothing to do, don't publish a tick twice */
    1697             : 
    1698           0 :   *charge_busy = 1;
    1699             : 
    1700           0 :   if( FD_LIKELY( ctx->hashcnt<target_hashcnt ) ) {
    1701           0 :     fd_sha256_hash_32_repeated( ctx->hash, ctx->hash, target_hashcnt-ctx->hashcnt );
    1702           0 :     ctx->hashcnt = target_hashcnt;
    1703           0 :   }
    1704             : 
    1705           0 :   if( FD_UNLIKELY( ctx->hashcnt==ctx->hashcnt_per_slot ) ) {
    1706           0 :     ctx->slot++;
    1707           0 :     ctx->hashcnt = 0UL;
    1708           0 :   }
    1709             : 
    1710           0 :   if( FD_UNLIKELY( !is_leader && !(ctx->hashcnt%ctx->hashcnt_per_tick ) ) ) {
    1711             :     /* We finished a tick while not leader... save the current hash so
    1712             :        it can be played back into the bank when we become the leader. */
    1713           0 :     ulong tick_idx = (ctx->slot*ctx->ticks_per_slot+ctx->hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
    1714           0 :     fd_memcpy( ctx->skipped_tick_hashes[ tick_idx ], ctx->hash, 32UL );
    1715             : 
    1716           0 :     ulong initial_tick_idx = (ctx->last_slot*ctx->ticks_per_slot+ctx->last_hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
    1717           0 :     if( FD_UNLIKELY( tick_idx==initial_tick_idx ) ) FD_LOG_ERR(( "Too many skipped ticks from slot %lu to slot %lu, chain must halt", ctx->last_slot, ctx->slot ));
    1718           0 :   }
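                     :   /* The modular indexing above makes skipped_tick_hashes a ring
                     :      covering only the most recent MAX_SKIPPED_TICKS ticks.  E.g.
                     :      with a hypothetical MAX_SKIPPED_TICKS of 8 and 4 ticks per
                     :      slot, tick 3 of slot 5 lands at index (5*4+3)%8==7; if the
                     :      ring wraps all the way back to the tick we started from, the
                     :      check against initial_tick_idx fires and the tile halts. */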
    1719             : 
    1720           0 :   if( FD_UNLIKELY( is_leader && !(ctx->hashcnt%ctx->hashcnt_per_tick) ) ) {
    1721             :     /* We ticked while leader... tell the leader bank. */
    1722           0 :     fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->hash );
    1723             : 
    1724             :     /* And send an empty microblock (a tick) to the shred tile. */
    1725           0 :     publish_tick( ctx, stem, ctx->hash, 0 );
    1726           0 :   }
    1727             : 
    1728           0 :   if( FD_UNLIKELY( !is_leader && ctx->slot>=ctx->next_leader_slot ) ) {
    1729             :     /* We ticked while not leader and are now leader... transition
    1730             :        the state machine. */
    1731           0 :     publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->reset_slot );
    1732           0 :     FD_LOG_INFO(( "fd_poh_ticked_into_leader(slot=%lu, reset_slot=%lu)", ctx->next_leader_slot, ctx->reset_slot ));
    1733           0 :   }
    1734             : 
    1735           0 :   if( FD_UNLIKELY( is_leader && ctx->slot>ctx->next_leader_slot ) ) {
    1736             :     /* We ticked while leader and are no longer leader... transition
    1737             :        the state machine. */
    1738           0 :     FD_TEST( !max_remaining_microblocks );
    1739           0 :     publish_plugin_slot_end( ctx, ctx->next_leader_slot, ctx->cus_used );
    1740           0 :     FD_LOG_INFO(( "fd_poh_ticked_outof_leader(slot=%lu)", ctx->next_leader_slot ));
    1741             : 
    1742           0 :     no_longer_leader( ctx );
    1743           0 :     ctx->expect_sequential_leader_slot = ctx->slot;
    1744             : 
    1745           0 :     double tick_per_ns = fd_tempo_tick_per_ns( NULL );
    1746           0 :     fd_histf_sample( ctx->slot_done_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
    1747           0 :     ctx->next_leader_slot = next_leader_slot( ctx );
    1748             : 
    1749           0 :     if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
    1750             :       /* We finished a leader slot, and are immediately leader for the
    1751             :          following slot... transition. */
    1752           0 :       publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->next_leader_slot-1UL );
    1753           0 :       FD_LOG_INFO(( "fd_poh_ticked_into_leader(slot=%lu, reset_slot=%lu)", ctx->next_leader_slot, ctx->next_leader_slot-1UL ));
    1754           0 :     }
    1755           0 :   }
    1756           0 : }
    1757             : 
    1758             : static inline void
    1759           0 : during_housekeeping( fd_poh_ctx_t * ctx ) {
    1760           0 :   if( FD_UNLIKELY( maybe_change_identity( ctx, 0 ) ) ) {
    1761           0 :     ctx->next_leader_slot = next_leader_slot( ctx );
    1762           0 :     FD_LOG_INFO(( "fd_poh_identity_changed(next_leader_slot=%lu)", ctx->next_leader_slot ));
    1763             : 
    1764             :     /* Signal replay to check if we are leader again, in case it's
    1765             :        stuck because everything has already been replayed. */
    1766           0 :     FD_COMPILER_MFENCE();
    1767           0 :     fd_ext_poh_signal_leader_change( ctx->signal_leader_change );
    1768           0 :   }
    1769           0 : }
    1770             : 
    1771             : static inline void
    1772           0 : metrics_write( fd_poh_ctx_t * ctx ) {
    1773           0 :   FD_MHIST_COPY( POH, BEGIN_LEADER_DELAY_SECONDS,      ctx->begin_leader_delay     );
    1774           0 :   FD_MHIST_COPY( POH, FIRST_MICROBLOCK_DELAY_SECONDS,  ctx->first_microblock_delay );
    1775           0 :   FD_MHIST_COPY( POH, SLOT_DONE_DELAY_SECONDS,         ctx->slot_done_delay        );
    1776           0 :   FD_MHIST_COPY( POH, BUNDLE_INITIALIZE_DELAY_SECONDS, ctx->bundle_init_delay      );
    1777           0 : }
    1778             : 
    1779             : static int
    1780             : before_frag( fd_poh_ctx_t * ctx,
    1781             :              ulong          in_idx,
    1782             :              ulong          seq,
    1783           0 :              ulong          sig ) {
    1784           0 :   (void)seq;
    1785             : 
    1786           0 :   if( FD_LIKELY( ctx->in_kind[ in_idx ]!=IN_KIND_BANK && ctx->in_kind[ in_idx ]!=IN_KIND_PACK ) ) return 0;
    1787             : 
    1788           0 :   uint pack_idx = (uint)fd_disco_bank_sig_pack_idx( sig );
    1789           0 :   FD_TEST( ((int)(pack_idx-ctx->expect_pack_idx))>=0L );
    1790           0 :   if( FD_UNLIKELY( pack_idx!=ctx->expect_pack_idx ) ) return -1;
    1791           0 :   ctx->expect_pack_idx++;
    1792             : 
    1793           0 :   return 0;
    1794           0 : }
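                     : 
                     : /* A minimal sketch (illustrative only, not used by the tile) of the
                     :    wraparound-safe ordering test in before_frag above: the unsigned
                     :    subtraction followed by a signed cast treats two indices as
                     :    ordered even across a 2^32 wrap, so long as they are less than
                     :    half the uint range apart.  E.g. pack_idx 0U with expected
                     :    UINT_MAX gives (int)(0U-UINT_MAX)==1, i.e. one frag ahead. */
                     : 
                     : static inline int
                     : pack_idx_is_current_or_ahead( uint pack_idx,
                     :                               uint expect_pack_idx ) {
                     :   return ((int)(pack_idx-expect_pack_idx))>=0;
                     : }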
    1795             : 
    1796             : static inline void
    1797             : during_frag( fd_poh_ctx_t * ctx,
    1798             :              ulong          in_idx,
    1799             :              ulong          seq FD_PARAM_UNUSED,
    1800             :              ulong          sig,
    1801             :              ulong          chunk,
    1802             :              ulong          sz,
    1803           0 :              ulong          ctl FD_PARAM_UNUSED ) {
    1804           0 :   ctx->skip_frag = 0;
    1805             : 
    1806           0 :   if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_STAKE ) ) {
    1807           0 :     if( FD_UNLIKELY( chunk<ctx->in[ in_idx ].chunk0 || chunk>ctx->in[ in_idx ].wmark ) )
    1808           0 :       FD_LOG_ERR(( "chunk %lu %lu corrupt, not in range [%lu,%lu]", chunk, sz,
    1809           0 :             ctx->in[ in_idx ].chunk0, ctx->in[ in_idx ].wmark ));
    1810             : 
    1811           0 :     uchar const * dcache_entry = fd_chunk_to_laddr_const( ctx->in[ in_idx ].mem, chunk );
    1812           0 :     fd_multi_epoch_leaders_stake_msg_init( ctx->mleaders, fd_type_pun_const( dcache_entry ) );
    1813           0 :     return;
    1814           0 :   }
    1815             : 
    1816           0 :   ulong slot;
    1817           0 :   switch( ctx->in_kind[ in_idx ] ) {
    1818           0 :     case IN_KIND_BANK:
    1819           0 :     case IN_KIND_PACK: {
    1820           0 :       slot = fd_disco_bank_sig_slot( sig );
    1821           0 :       break;
    1822           0 :     }
    1823           0 :     default:
    1824           0 :       FD_LOG_ERR(( "unexpected in_kind %d", ctx->in_kind[ in_idx ] ));
    1825           0 :   }
    1826             : 
    1827             :   /* The following sequence is possible...
    1828             : 
    1829             :       1. We become leader in slot 10
    1830             :       2. While leader, we switch to a fork that is on slot 8, where
    1831             :           we are leader
    1832             :       3. We get the in-flight microblocks for slot 10
    1833             : 
    1834             :     These in-flight microblocks need to be dropped, so we check
    1835             :     against the high water mark (highwater_leader_slot) rather than
    1836             :     the current hashcnt here when determining what to drop.
    1837             : 
    1838             :     We know that if the slot is lower than the high water mark, it is
    1839             :     from a stale leader slot: we never become leader for the same slot
    1840             :     twice, even after a reset back in time (that would duplicate blocks). */
    1841           0 :   int is_frag_for_prior_leader_slot = slot<ctx->highwater_leader_slot;
    1842             : 
    1843           0 :   if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_PACK ) ) {
    1844             :     /* We now know the exact number of microblocks pack published, so
    1845             :        set an exact bound for when we have received them all. */
    1846           0 :     ctx->skip_frag = 1;
    1847           0 :     if( FD_UNLIKELY( is_frag_for_prior_leader_slot ) ) return;
    1848             : 
    1849           0 :     FD_TEST( ctx->microblocks_lower_bound<=ctx->max_microblocks_per_slot );
    1850           0 :     fd_done_packing_t const * done_packing = fd_chunk_to_laddr( ctx->in[ in_idx ].mem, chunk );
    1851           0 :     FD_LOG_INFO(( "done_packing(slot=%lu,seen_microblocks=%lu,microblocks_in_slot=%lu)",
    1852           0 :                   ctx->slot,
    1853           0 :                   ctx->microblocks_lower_bound,
    1854           0 :                   done_packing->microblocks_in_slot ));
    1855           0 :     ctx->slot_done = 1;
    1856           0 :     ctx->microblocks_lower_bound += ctx->max_microblocks_per_slot - done_packing->microblocks_in_slot;
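                     :     /* E.g. if max_microblocks_per_slot were 8,192 (hypothetical)
                     :        and pack reported 100 microblocks in the slot, the lower
                     :        bound jumps by 8,092; as the remaining in-flight microblocks
                     :        arrive it reaches the max exactly. */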
    1857           0 :     return;
    1858           0 :   } else {
    1859           0 :     if( FD_UNLIKELY( chunk<ctx->in[ in_idx ].chunk0 || chunk>ctx->in[ in_idx ].wmark || sz>USHORT_MAX ) )
    1860           0 :       FD_LOG_ERR(( "chunk %lu %lu corrupt, not in range [%lu,%lu]", chunk, sz, ctx->in[ in_idx ].chunk0, ctx->in[ in_idx ].wmark ));
    1861             : 
    1862           0 :     uchar * src = (uchar *)fd_chunk_to_laddr( ctx->in[ in_idx ].mem, chunk );
    1863             : 
    1864           0 :     fd_memcpy( ctx->_txns, src, sz-sizeof(fd_microblock_trailer_t) );
    1865           0 :     fd_memcpy( ctx->_microblock_trailer, src+sz-sizeof(fd_microblock_trailer_t), sizeof(fd_microblock_trailer_t) );
    1866             : 
    1867           0 :     ctx->skip_frag = is_frag_for_prior_leader_slot;
    1868           0 :   }
    1869           0 : }
    1870             : 
    1871             : static void
    1872             : publish_microblock( fd_poh_ctx_t *      ctx,
    1873             :                     fd_stem_context_t * stem,
    1874             :                     ulong               slot,
    1875             :                     ulong               hashcnt_delta,
    1876           0 :                     ulong               txn_cnt ) {
    1877           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
    1878           0 :   FD_TEST( slot>=ctx->reset_slot );
    1879           0 :   fd_entry_batch_meta_t * meta = (fd_entry_batch_meta_t *)dst;
    1880           0 :   meta->parent_offset = 1UL+slot-ctx->reset_slot;
    1881           0 :   meta->reference_tick = (ctx->hashcnt/ctx->hashcnt_per_tick) % ctx->ticks_per_slot;
    1882           0 :   meta->block_complete = !ctx->hashcnt;
    1883             : 
    1884             :   /* Refer to publish_tick() for details on meta->parent_block_id_valid. */
    1885           0 :   meta->parent_block_id_valid = ctx->parent_slot == (slot-meta->parent_offset);
    1886           0 :   if( FD_LIKELY( meta->parent_block_id_valid ) ) {
    1887           0 :     fd_memcpy( meta->parent_block_id, ctx->parent_block_id, 32UL );
    1888           0 :   }
    1889             : 
    1890           0 :   dst += sizeof(fd_entry_batch_meta_t);
    1891           0 :   fd_entry_batch_header_t * header = (fd_entry_batch_header_t *)dst;
    1892           0 :   header->hashcnt_delta = hashcnt_delta;
    1893           0 :   fd_memcpy( header->hash, ctx->hash, 32UL );
    1894             : 
    1895           0 :   dst += sizeof(fd_entry_batch_header_t);
    1896           0 :   ulong payload_sz = 0UL;
    1897           0 :   ulong included_txn_cnt = 0UL;
    1898           0 :   for( ulong i=0UL; i<txn_cnt; i++ ) {
    1899           0 :     fd_txn_p_t * txn = (fd_txn_p_t *)(ctx->_txns + i*sizeof(fd_txn_p_t));
    1900           0 :     if( FD_UNLIKELY( !(txn->flags & FD_TXN_P_FLAGS_EXECUTE_SUCCESS) ) ) continue;
    1901             : 
    1902           0 :     fd_memcpy( dst, txn->payload, txn->payload_sz );
    1903           0 :     payload_sz += txn->payload_sz;
    1904           0 :     dst        += txn->payload_sz;
    1905           0 :     included_txn_cnt++;
    1906           0 :   }
    1907           0 :   header->txn_cnt = included_txn_cnt;
    1908             : 
    1909             :   /* We always have credits to publish here: the stem burst value
    1910             :      reserves credits such that at most one publish_tick() and one
    1911             :      publish_became_leader() can have happened before this point,
    1912             :      always leaving a credit to publish the microblock. */
    1913           0 :   ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
    1914           0 :   ulong sz = sizeof(fd_entry_batch_meta_t)+sizeof(fd_entry_batch_header_t)+payload_sz;
    1915           0 :   ulong new_sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_MICROBLOCK, 0UL );
    1916           0 :   fd_stem_publish( stem, ctx->shred_out->idx, new_sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
    1917           0 :   ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
    1918           0 :   ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
    1919           0 : }
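                     : 
                     : /* For reference, the frag published by publish_microblock above is
                     :    laid out as
                     : 
                     :      [ fd_entry_batch_meta_t ][ fd_entry_batch_header_t ][ payloads ]
                     : 
                     :    where header->txn_cnt counts only the transactions actually
                     :    copied, i.e. those with FD_TXN_P_FLAGS_EXECUTE_SUCCESS set. */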
    1920             : 
    1921             : static inline void
    1922             : after_frag( fd_poh_ctx_t *      ctx,
    1923             :             ulong               in_idx,
    1924             :             ulong               seq,
    1925             :             ulong               sig,
    1926             :             ulong               sz,
    1927             :             ulong               tsorig,
    1928             :             ulong               tspub,
    1929           0 :             fd_stem_context_t * stem ) {
    1930           0 :   (void)in_idx;
    1931           0 :   (void)seq;
    1932           0 :   (void)tsorig;
    1933           0 :   (void)tspub;
    1934             : 
    1935           0 :   if( FD_UNLIKELY( ctx->skip_frag ) ) return;
    1936             : 
    1937           0 :   if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_STAKE ) ) {
    1938           0 :     fd_multi_epoch_leaders_stake_msg_fini( ctx->mleaders );
    1939             :     /* It might seem like we do not need to do state transitions in and
    1940             :        out of being the leader here, since leader schedule updates are
    1941             :        always one epoch in advance (whether we are leader or not would
    1942             :        never change for the currently executing slot) but this is not
    1943             :        true for new ledgers when the validator first boots.  We will
    1944             :        likely be the leader in slot 1, and get notified of the leader
    1945             :        schedule for that slot while we are still in it.
    1946             : 
    1947             :        For safety we just handle both transitions, in and out, although
    1948             :        the only one possible should be into leader. */
    1949           0 :     ulong next_leader_slot_after_frag = next_leader_slot( ctx );
    1950             : 
    1951           0 :     int currently_leader  = ctx->slot>=ctx->next_leader_slot;
    1952           0 :     int leader_after_frag = ctx->slot>=next_leader_slot_after_frag;
    1953             : 
    1954           0 :     FD_LOG_INFO(( "stake_update(before_leader=%lu,after_leader=%lu)",
    1955           0 :                   ctx->next_leader_slot,
    1956           0 :                   next_leader_slot_after_frag ));
    1957             : 
    1958           0 :     ctx->next_leader_slot = next_leader_slot_after_frag;
    1959           0 :     if( FD_UNLIKELY( currently_leader && !leader_after_frag ) ) {
    1960             :       /* Shouldn't ever happen, otherwise we need to do a state
    1961             :          transition out of being leader. */
    1962           0 :       FD_LOG_ERR(( "stake update caused us to no longer be leader in an active slot" ));
    1963           0 :     }
    1964             : 
    1965             :     /* Nothing to do if we transition into being leader, since it
    1966             :        will just get picked up by the regular tick loop. */
    1967           0 :     if( FD_UNLIKELY( !currently_leader && leader_after_frag ) ) {
    1968           0 :       publish_plugin_slot_start( ctx, next_leader_slot_after_frag, ctx->reset_slot );
    1969           0 :     }
    1970             : 
    1971           0 :     return;
    1972           0 :   }
    1973             : 
    1974           0 :   if( FD_UNLIKELY( !ctx->microblocks_lower_bound ) ) {
    1975           0 :     double tick_per_ns = fd_tempo_tick_per_ns( NULL );
    1976           0 :     fd_histf_sample( ctx->first_microblock_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
    1977           0 :   }
    1978             : 
    1979           0 :   ulong target_slot = fd_disco_bank_sig_slot( sig );
    1980             : 
    1981           0 :   if( FD_UNLIKELY( target_slot!=ctx->next_leader_slot || target_slot!=ctx->slot ) ) {
    1982           0 :     FD_LOG_ERR(( "packed too early or late target_slot=%lu, current_slot=%lu. highwater_leader_slot=%lu",
    1983           0 :                  target_slot, ctx->slot, ctx->highwater_leader_slot ));
    1984           0 :   }
    1985             : 
    1986           0 :   FD_TEST( ctx->current_leader_bank );
    1987           0 :   FD_TEST( ctx->microblocks_lower_bound<ctx->max_microblocks_per_slot );
    1988           0 :   ctx->microblocks_lower_bound += 1UL;
    1989             : 
    1990           0 :   ulong txn_cnt = (sz-sizeof(fd_microblock_trailer_t))/sizeof(fd_txn_p_t);
    1991           0 :   fd_txn_p_t * txns = (fd_txn_p_t *)(ctx->_txns);
    1992           0 :   ulong executed_txn_cnt = 0UL;
    1993           0 :   ulong cus_used         = 0UL;
    1994           0 :   for( ulong i=0UL; i<txn_cnt; i++ ) {
    1995             :     /* It's important that we check if a transaction is included in the
    1996             :        block with FD_TXN_P_FLAGS_EXECUTE_SUCCESS since
    1997             :        actual_consumed_cus may have a nonzero value for excluded
    1998             :        transactions used for monitoring purposes */
    1999           0 :     if( FD_LIKELY( txns[ i ].flags & FD_TXN_P_FLAGS_EXECUTE_SUCCESS ) ) {
    2000           0 :       executed_txn_cnt++;
    2001           0 :       cus_used += txns[ i ].bank_cu.actual_consumed_cus;
    2002           0 :     }
    2003           0 :   }
    2004             : 
    2005             :   /* We don't publish transactions that fail to execute.  If all the
    2006             :      transactions failed to execute, the microblock would be empty,
    2007             :      causing agave to think it's a tick and complain.  Instead, we just
    2008             :      skip the microblock and don't hash or update the hashcnt. */
    2009           0 :   if( FD_UNLIKELY( !executed_txn_cnt ) ) return;
    2010             : 
    2011           0 :   uchar data[ 64 ];
    2012           0 :   fd_memcpy( data, ctx->hash, 32UL );
    2013           0 :   fd_memcpy( data+32UL, ctx->_microblock_trailer->hash, 32UL );
    2014           0 :   fd_sha256_hash( data, 64UL, ctx->hash );
    2015             : 
    2016           0 :   ctx->hashcnt++;
    2017           0 :   FD_TEST( ctx->hashcnt>ctx->last_hashcnt );
    2018           0 :   ulong hashcnt_delta = ctx->hashcnt - ctx->last_hashcnt;
    2019             : 
    2020             :   /* The hashing loop above will never leave us exactly one away from
    2021             :      crossing a tick boundary, so this increment will never cause the
    2022             :      current tick (or the slot) to change, except in low power mode
    2023             :      for development, in which case we do need to register the tick
    2024             :      with the leader bank.  We don't need to publish the tick since
    2025             :      sending the microblock below is the publishing action. */
    2026           0 :   if( FD_UNLIKELY( !(ctx->hashcnt%ctx->hashcnt_per_slot ) ) ) {
    2027           0 :     ctx->slot++;
    2028           0 :     ctx->hashcnt = 0UL;
    2029           0 :   }
    2030             : 
    2031           0 :   ctx->last_slot    = ctx->slot;
    2032           0 :   ctx->last_hashcnt = ctx->hashcnt;
    2033             : 
    2034           0 :   ctx->cus_used += cus_used;
    2035             : 
    2036           0 :   if( FD_UNLIKELY( !(ctx->hashcnt%ctx->hashcnt_per_tick ) ) ) {
    2037           0 :     fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->hash );
    2038           0 :     if( FD_UNLIKELY( ctx->slot>ctx->next_leader_slot ) ) {
    2039             :       /* We ticked while leader and are no longer leader... transition
    2040             :          the state machine. */
    2041           0 :       publish_plugin_slot_end( ctx, ctx->next_leader_slot, ctx->cus_used );
    2042             : 
    2043           0 :       no_longer_leader( ctx );
    2044             : 
    2045           0 :       if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
    2046             :         /* We finished a leader slot, and are immediately leader for the
    2047             :            following slot... transition. */
    2048           0 :         publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->next_leader_slot-1UL );
    2049           0 :       }
    2050           0 :     }
    2051           0 :   }
    2052             : 
    2053           0 :   publish_microblock( ctx, stem, target_slot, hashcnt_delta, txn_cnt );
    2054           0 : }
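                     : 
                     : /* A self-contained sketch of the mixin step performed in after_frag
                     :    above, assuming only the fd_sha256_hash() helper already used in
                     :    this file: the 32-byte running PoH hash absorbs a 32-byte
                     :    microblock hash in one compression, which is why the mixin
                     :    advances hashcnt by exactly one. */
                     : 
                     : static inline void
                     : poh_mixin_sketch( uchar       poh_hash[ 32 ],        /* in/out: running PoH hash */
                     :                   uchar const microblock_hash[ 32 ] ) {
                     :   uchar data[ 64 ];
                     :   fd_memcpy( data,      poh_hash,        32UL );
                     :   fd_memcpy( data+32UL, microblock_hash, 32UL );
                     :   fd_sha256_hash( data, 64UL, poh_hash );
                     : }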
    2055             : 
    2056             : static void
    2057             : privileged_init( fd_topo_t *      topo,
    2058           0 :                  fd_topo_tile_t * tile ) {
    2059           0 :   void * scratch = fd_topo_obj_laddr( topo, tile->tile_obj_id );
    2060             : 
    2061           0 :   FD_SCRATCH_ALLOC_INIT( l, scratch );
    2062           0 :   fd_poh_ctx_t * ctx = FD_SCRATCH_ALLOC_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
    2063             : 
    2064           0 :   if( FD_UNLIKELY( !strcmp( tile->poh.identity_key_path, "" ) ) )
    2065           0 :     FD_LOG_ERR(( "identity_key_path not set" ));
    2066             : 
    2067           0 :   const uchar * identity_key = fd_keyload_load( tile->poh.identity_key_path, /* pubkey only: */ 1 );
    2068           0 :   fd_memcpy( ctx->identity_key.uc, identity_key, 32UL );
    2069             : 
    2070           0 :   if( FD_UNLIKELY( !tile->poh.bundle.vote_account_path[0] ) ) {
    2071           0 :     tile->poh.bundle.enabled = 0;
    2072           0 :   }
    2073           0 :   if( FD_UNLIKELY( tile->poh.bundle.enabled ) ) {
    2074           0 :     if( FD_UNLIKELY( !fd_base58_decode_32( tile->poh.bundle.vote_account_path, ctx->bundle.vote_account.uc ) ) ) {
    2075           0 :       const uchar * vote_key = fd_keyload_load( tile->poh.bundle.vote_account_path, /* pubkey only: */ 1 );
    2076           0 :       fd_memcpy( ctx->bundle.vote_account.uc, vote_key, 32UL );
    2077           0 :     }
    2078           0 :   }
    2079           0 : }
    2080             : 
    2081             : /* The Agave client needs to communicate to the shred tile what the
    2082             :    shred version is on boot, but the shred tile does not live in the
    2083             :    same address space, so we have the PoH tile pass the value through
    2084             :    via a shared memory ulong. */
    2085             : 
    2086             : static volatile ulong * fd_shred_version;
    2087             : 
    2088             : void
    2089           0 : fd_ext_shred_set_shred_version( ulong shred_version ) {
    2090           0 :   while( FD_UNLIKELY( !fd_shred_version ) ) FD_SPIN_PAUSE();
    2091           0 :   *fd_shred_version = shred_version;
    2092           0 : }
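                     : 
                     : /* Hypothetical consumer-side counterpart, sketched here for
                     :    illustration (the real shred tile code lives elsewhere): a reader
                     :    joined to the same fseq would spin until the Agave client has
                     :    stored a nonzero shred version. */
                     : 
                     : static inline ulong
                     : wait_shred_version_sketch( volatile ulong const * shred_version ) {
                     :   ulong v;
                     :   for(;;) {
                     :     v = *shred_version;
                     :     if( FD_LIKELY( v ) ) break;
                     :     FD_SPIN_PAUSE();
                     :   }
                     :   return v;
                     : }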
    2093             : 
    2094             : void
    2095             : fd_ext_poh_publish_gossip_vote( uchar * data,
    2096           0 :                                 ulong   data_len ) {
    2097           0 :   poh_link_publish( &gossip_dedup, 1UL, data, data_len );
    2098           0 : }
    2099             : 
    2100             : void
    2101             : fd_ext_poh_publish_leader_schedule( uchar * data,
    2102           0 :                                     ulong   data_len ) {
    2103           0 :   poh_link_publish( &stake_out, 2UL, data, data_len );
    2104           0 : }
    2105             : 
    2106             : void
    2107             : fd_ext_poh_publish_cluster_info( uchar * data,
    2108           0 :                                  ulong   data_len ) {
    2109           0 :   poh_link_publish( &crds_shred, 2UL, data, data_len );
    2110           0 : }
    2111             : 
    2112             : void
    2113           0 : fd_ext_poh_publish_executed_txn( uchar const * data  ) {
    2114           0 :   static int lock = 0;
    2115             : 
    2116             :   /* Need to lock since the link publisher is not concurrent, and replay
    2117             :      happens on a thread pool. */
    2118           0 :   for(;;) {
    2119           0 :     if( FD_LIKELY( FD_ATOMIC_CAS( &lock, 0, 1 )==0 ) ) break;
    2120           0 :     FD_SPIN_PAUSE();
    2121           0 :   }
    2122             : 
    2123           0 :   FD_COMPILER_MFENCE();
    2124           0 :   poh_link_publish( &executed_txn, 0UL, data, 64UL );
    2125           0 :   FD_COMPILER_MFENCE();
    2126             : 
    2127           0 :   FD_VOLATILE(lock) = 0;
    2128           0 : }
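                     : 
                     : /* The locking pattern above, pulled out as an illustrative sketch:
                     :    acquire with a compare-and-swap spin, fence, publish, fence, then
                     :    release with a volatile store.  The fences keep the published
                     :    bytes ordered against the lock word. */
                     : 
                     : static inline void
                     : poh_spin_lock_sketch( int * lock ) {
                     :   for(;;) {
                     :     if( FD_LIKELY( FD_ATOMIC_CAS( lock, 0, 1 )==0 ) ) break;
                     :     FD_SPIN_PAUSE();
                     :   }
                     :   FD_COMPILER_MFENCE();
                     : }
                     : 
                     : static inline void
                     : poh_spin_unlock_sketch( int * lock ) {
                     :   FD_COMPILER_MFENCE();
                     :   FD_VOLATILE( *lock ) = 0;
                     : }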
    2129             : 
    2130             : void
    2131             : fd_ext_plugin_publish_replay_stage( ulong   sig,
    2132             :                                     uchar * data,
    2133           0 :                                     ulong   data_len ) {
    2134           0 :   poh_link_publish( &replay_plugin, sig, data, data_len );
    2135           0 : }
    2136             : 
    2137             : void
    2138             : fd_ext_plugin_publish_genesis_hash( ulong   sig,
    2139             :                                     uchar * data,
    2140           0 :                                     ulong   data_len ) {
    2141           0 :   poh_link_publish( &replay_plugin, sig, data, data_len );
    2142           0 : }
    2143             : 
    2144             : void
    2145             : fd_ext_plugin_publish_start_progress( ulong   sig,
    2146             :                                       uchar * data,
    2147           0 :                                       ulong   data_len ) {
    2148           0 :   poh_link_publish( &start_progress_plugin, sig, data, data_len );
    2149           0 : }
    2150             : 
    2151             : void
    2152             : fd_ext_plugin_publish_vote_listener( ulong   sig,
    2153             :                                      uchar * data,
    2154           0 :                                      ulong   data_len ) {
    2155           0 :   poh_link_publish( &vote_listener_plugin, sig, data, data_len );
    2156           0 : }
    2157             : 
    2158             : void
    2159             : fd_ext_plugin_publish_validator_info( ulong   sig,
    2160             :                                       uchar * data,
    2161           0 :                                       ulong   data_len ) {
    2162           0 :   poh_link_publish( &validator_info_plugin, sig, data, data_len );
    2163           0 : }
    2164             : 
    2165             : void
    2166             : fd_ext_plugin_publish_periodic( ulong   sig,
    2167             :                                 uchar * data,
    2168           0 :                                 ulong   data_len ) {
    2169           0 :   poh_link_publish( &gossip_plugin, sig, data, data_len );
    2170           0 : }
    2171             : 
    2172             : void
    2173             : fd_ext_resolv_publish_root_bank( uchar * data,
    2174           0 :                                  ulong   data_len ) {
    2175           0 :   poh_link_publish( &replay_resolv, 0UL, data, data_len );
    2176           0 : }
    2177             : 
    2178             : void
    2179             : fd_ext_resolv_publish_completed_blockhash( uchar * data,
    2180           0 :                                            ulong   data_len ) {
    2181           0 :   poh_link_publish( &replay_resolv, 1UL, data, data_len );
    2182           0 : }
    2183             : 
    2184             : static inline fd_poh_out_ctx_t
    2185             : out1( fd_topo_t const *      topo,
    2186             :       fd_topo_tile_t const * tile,
    2187           0 :       char const *           name ) {
    2188           0 :   ulong idx = ULONG_MAX;
    2189             : 
    2190           0 :   for( ulong i=0UL; i<tile->out_cnt; i++ ) {
    2191           0 :     fd_topo_link_t const * link = &topo->links[ tile->out_link_id[ i ] ];
    2192           0 :     if( !strcmp( link->name, name ) ) {
    2193           0 :       if( FD_UNLIKELY( idx!=ULONG_MAX ) ) FD_LOG_ERR(( "tile %s:%lu had multiple output links named %s but expected one", tile->name, tile->kind_id, name ));
    2194           0 :       idx = i;
    2195           0 :     }
    2196           0 :   }
    2197             : 
    2198           0 :   if( FD_UNLIKELY( idx==ULONG_MAX ) ) FD_LOG_ERR(( "tile %s:%lu had no output link named %s", tile->name, tile->kind_id, name ));
    2199             : 
    2200           0 :   void * mem = topo->workspaces[ topo->objs[ topo->links[ tile->out_link_id[ idx ] ].dcache_obj_id ].wksp_id ].wksp;
    2201           0 :   ulong chunk0 = fd_dcache_compact_chunk0( mem, topo->links[ tile->out_link_id[ idx ] ].dcache );
    2202           0 :   ulong wmark  = fd_dcache_compact_wmark ( mem, topo->links[ tile->out_link_id[ idx ] ].dcache, topo->links[ tile->out_link_id[ idx ] ].mtu );
    2203             : 
    2204           0 :   return (fd_poh_out_ctx_t){ .idx = idx, .mem = mem, .chunk0 = chunk0, .wmark = wmark, .chunk = chunk0 };
    2205           0 : }
    2206             : 
    2207             : static void
    2208             : unprivileged_init( fd_topo_t *      topo,
    2209           0 :                    fd_topo_tile_t * tile ) {
    2210           0 :   void * scratch = fd_topo_obj_laddr( topo, tile->tile_obj_id );
    2211             : 
    2212           0 :   FD_SCRATCH_ALLOC_INIT( l, scratch );
    2213           0 :   fd_poh_ctx_t * ctx = FD_SCRATCH_ALLOC_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
    2214           0 :   void * sha256   = FD_SCRATCH_ALLOC_APPEND( l, FD_SHA256_ALIGN,                  FD_SHA256_FOOTPRINT                );
    2215             : 
    2216           0 : #define NONNULL( x ) (__extension__({                                        \
    2217           0 :       __typeof__((x)) __x = (x);                                             \
    2218           0 :       if( FD_UNLIKELY( !__x ) ) FD_LOG_ERR(( #x " was unexpectedly NULL" )); \
    2219           0 :       __x; }))
    2220             : 
    2221           0 :   ctx->mleaders = NONNULL( fd_multi_epoch_leaders_join( fd_multi_epoch_leaders_new( ctx->mleaders_mem ) ) );
    2222           0 :   ctx->sha256   = NONNULL( fd_sha256_join( fd_sha256_new( sha256 ) ) );
    2223           0 :   ctx->current_leader_bank = NULL;
    2224           0 :   ctx->signal_leader_change = NULL;
    2225             : 
    2226           0 :   ctx->shred_seq = ULONG_MAX;
    2227           0 :   ctx->halted_switching_key = 0;
    2228           0 :   ctx->keyswitch = fd_keyswitch_join( fd_topo_obj_laddr( topo, tile->keyswitch_obj_id ) );
    2229           0 :   FD_TEST( ctx->keyswitch );
    2230             : 
    2231           0 :   ctx->slot                  = 0UL;
    2232           0 :   ctx->hashcnt               = 0UL;
    2233           0 :   ctx->last_hashcnt          = 0UL;
    2234           0 :   ctx->highwater_leader_slot = ULONG_MAX;
    2235           0 :   ctx->next_leader_slot      = ULONG_MAX;
    2236           0 :   ctx->reset_slot            = ULONG_MAX;
    2237             : 
    2238           0 :   ctx->lagged_consecutive_leader_start = tile->poh.lagged_consecutive_leader_start;
    2239           0 :   ctx->expect_sequential_leader_slot = ULONG_MAX;
    2240             : 
    2241           0 :   ctx->slot_done               = 1;
    2242           0 :   ctx->expect_pack_idx         = 0U;
    2243           0 :   ctx->microblocks_lower_bound = 0UL;
    2244             : 
    2245           0 :   ctx->max_active_descendant = 0UL;
    2246             : 
    2247           0 :   if( FD_UNLIKELY( tile->poh.bundle.enabled ) ) {
    2248           0 :     ctx->bundle.enabled = 1;
    2249           0 :     NONNULL( fd_bundle_crank_gen_init( ctx->bundle.gen, (fd_acct_addr_t const *)tile->poh.bundle.tip_distribution_program_addr,
    2250           0 :              (fd_acct_addr_t const *)tile->poh.bundle.tip_payment_program_addr,
    2251           0 :              (fd_acct_addr_t const *)ctx->bundle.vote_account.uc,
    2252           0 :              (fd_acct_addr_t const *)ctx->bundle.vote_account.uc, "NAN", 0UL ) ); /* last three arguments are properly bogus */
    2253           0 :   } else {
    2254           0 :     ctx->bundle.enabled = 0;
    2255           0 :   }
    2256             : 
    2257           0 :   ulong poh_shred_obj_id = fd_pod_query_ulong( topo->props, "poh_shred", ULONG_MAX );
    2258           0 :   FD_TEST( poh_shred_obj_id!=ULONG_MAX );
    2259             : 
    2260           0 :   fd_shred_version = fd_fseq_join( fd_topo_obj_laddr( topo, poh_shred_obj_id ) );
    2261           0 :   FD_TEST( fd_shred_version );
    2262             : 
    2263           0 :   poh_link_init( &gossip_dedup,          topo, tile, out1( topo, tile, "gossip_dedup" ).idx );
    2264           0 :   poh_link_init( &stake_out,             topo, tile, out1( topo, tile, "stake_out"    ).idx );
    2265           0 :   poh_link_init( &crds_shred,            topo, tile, out1( topo, tile, "crds_shred"   ).idx );
    2266           0 :   poh_link_init( &replay_resolv,         topo, tile, out1( topo, tile, "replay_resol" ).idx );
    2267           0 :   poh_link_init( &executed_txn,          topo, tile, out1( topo, tile, "executed_txn" ).idx );
    2268             : 
    2269           0 :   if( FD_LIKELY( tile->poh.plugins_enabled ) ) {
    2270           0 :     poh_link_init( &replay_plugin,         topo, tile, out1( topo, tile, "replay_plugi" ).idx );
    2271           0 :     poh_link_init( &gossip_plugin,         topo, tile, out1( topo, tile, "gossip_plugi" ).idx );
    2272           0 :     poh_link_init( &start_progress_plugin, topo, tile, out1( topo, tile, "startp_plugi" ).idx );
    2273           0 :     poh_link_init( &vote_listener_plugin,  topo, tile, out1( topo, tile, "votel_plugin" ).idx );
    2274           0 :     poh_link_init( &validator_info_plugin, topo, tile, out1( topo, tile, "valcfg_plugi" ).idx );
    2275           0 :   } else {
    2276             :     /* Mark these mcaches as "available" so the system boots, but the
    2277             :        memory is not set, so nothing will actually get published via
    2278             :        the links. */
    2279           0 :     FD_COMPILER_MFENCE();
    2280           0 :     replay_plugin.mcache = (fd_frag_meta_t*)1;
    2281           0 :     gossip_plugin.mcache = (fd_frag_meta_t*)1;
    2282           0 :     start_progress_plugin.mcache = (fd_frag_meta_t*)1;
    2283           0 :     vote_listener_plugin.mcache = (fd_frag_meta_t*)1;
    2284           0 :     validator_info_plugin.mcache = (fd_frag_meta_t*)1;
    2285           0 :     FD_COMPILER_MFENCE();
    2286           0 :   }
    2287             : 
    2288           0 :   FD_LOG_INFO(( "PoH waiting to be initialized by Agave client... %lu %lu", fd_poh_waiting_lock, fd_poh_returned_lock ));
    2289           0 :   FD_VOLATILE( fd_poh_global_ctx ) = ctx;
    2290           0 :   FD_COMPILER_MFENCE();
    2291           0 :   for(;;) {
    2292           0 :     if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_waiting_lock ) ) ) break;
    2293           0 :     FD_SPIN_PAUSE();
    2294           0 :   }
    2295           0 :   FD_VOLATILE( fd_poh_waiting_lock ) = 0UL;
    2296           0 :   FD_VOLATILE( fd_poh_returned_lock ) = 1UL;
    2297           0 :   FD_COMPILER_MFENCE();
    2298           0 :   for(;;) {
    2299           0 :     if( FD_UNLIKELY( !FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
    2300           0 :     FD_SPIN_PAUSE();
    2301           0 :   }
    2302           0 :   FD_COMPILER_MFENCE();
    2303             : 
    2304           0 :   if( FD_UNLIKELY( ctx->reset_slot==ULONG_MAX ) ) FD_LOG_ERR(( "PoH was not initialized by Agave client" ));
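                     : 
                     :   /* The other half of this handshake is not shown in this section,
                     :      but from the loop above it must: store 1 to fd_poh_waiting_lock,
                     :      spin until fd_poh_returned_lock reads 1, initialize the context
                     :      (including ctx->reset_slot) through fd_poh_global_ctx, and
                     :      finally store 0 to fd_poh_returned_lock to release this tile. */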
    2305             : 
    2306           0 :   fd_histf_join( fd_histf_new( ctx->begin_leader_delay, FD_MHIST_SECONDS_MIN( POH, BEGIN_LEADER_DELAY_SECONDS ),
    2307           0 :                                                         FD_MHIST_SECONDS_MAX( POH, BEGIN_LEADER_DELAY_SECONDS ) ) );
    2308           0 :   fd_histf_join( fd_histf_new( ctx->first_microblock_delay, FD_MHIST_SECONDS_MIN( POH, FIRST_MICROBLOCK_DELAY_SECONDS  ),
    2309           0 :                                                             FD_MHIST_SECONDS_MAX( POH, FIRST_MICROBLOCK_DELAY_SECONDS  ) ) );
    2310           0 :   fd_histf_join( fd_histf_new( ctx->slot_done_delay, FD_MHIST_SECONDS_MIN( POH, SLOT_DONE_DELAY_SECONDS  ),
    2311           0 :                                                      FD_MHIST_SECONDS_MAX( POH, SLOT_DONE_DELAY_SECONDS  ) ) );
    2312             : 
    2313           0 :   fd_histf_join( fd_histf_new( ctx->bundle_init_delay, FD_MHIST_SECONDS_MIN( POH, BUNDLE_INITIALIZE_DELAY_SECONDS  ),
    2314           0 :                                                        FD_MHIST_SECONDS_MAX( POH, BUNDLE_INITIALIZE_DELAY_SECONDS  ) ) );
    2315             : 
    2316           0 :   for( ulong i=0UL; i<tile->in_cnt; i++ ) {
    2317           0 :     fd_topo_link_t * link = &topo->links[ tile->in_link_id[ i ] ];
    2318           0 :     fd_topo_wksp_t * link_wksp = &topo->workspaces[ topo->objs[ link->dcache_obj_id ].wksp_id ];
    2319             : 
    2320           0 :     ctx->in[ i ].mem    = link_wksp->wksp;
    2321           0 :     ctx->in[ i ].chunk0 = fd_dcache_compact_chunk0( ctx->in[ i ].mem, link->dcache );
    2322           0 :     ctx->in[ i ].wmark  = fd_dcache_compact_wmark ( ctx->in[ i ].mem, link->dcache, link->mtu );
    2323             : 
    2324           0 :     if(        !strcmp( link->name, "stake_out" ) ) {
    2325           0 :       ctx->in_kind[ i ] = IN_KIND_STAKE;
    2326           0 :     } else if( !strcmp( link->name, "pack_poh" ) ) {
    2327           0 :       ctx->in_kind[ i ] = IN_KIND_PACK;
    2328           0 :     } else if( !strcmp( link->name, "bank_poh"  ) ) {
    2329           0 :       ctx->in_kind[ i ] = IN_KIND_BANK;
    2330           0 :     } else {
    2331           0 :       FD_LOG_ERR(( "unexpected input link name %s", link->name ));
    2332           0 :     }
    2333           0 :   }
    2334             : 
    2335           0 :   *ctx->shred_out = out1( topo, tile, "poh_shred" );
    2336           0 :   *ctx->pack_out  = out1( topo, tile, "poh_pack" );
    2337           0 :   ctx->plugin_out->mem = NULL;
    2338           0 :   if( FD_LIKELY( tile->poh.plugins_enabled ) ) {
    2339           0 :     *ctx->plugin_out = out1( topo, tile, "poh_plugin" );
    2340           0 :   }
    2341             : 
    2342           0 :   ctx->features_activation_avail = 0UL;
    2343           0 :   for( ulong i=0UL; i<FD_SHRED_FEATURES_ACTIVATION_SLOT_CNT; i++ )
    2344           0 :     ctx->features_activation->slots[i] = FD_SHRED_FEATURES_ACTIVATION_SLOT_DISABLED;
    2345             : 
    2346           0 :   ulong scratch_top = FD_SCRATCH_ALLOC_FINI( l, 1UL );
    2347           0 :   if( FD_UNLIKELY( scratch_top > (ulong)scratch + scratch_footprint( tile ) ) )
    2348           0 :     FD_LOG_ERR(( "scratch overflow %lu %lu %lu", scratch_top - (ulong)scratch - scratch_footprint( tile ), scratch_top, (ulong)scratch + scratch_footprint( tile ) ));
    2349           0 : }
    2350             : 
    2351             : /* One tick, one microblock, one plugin slot end, one plugin slot start,
    2352             :    one leader update, and one features activation. */
    2353           0 : #define STEM_BURST (6UL)
    2354             : 
    2355             : /* See explanation in fd_pack */
    2356           0 : #define STEM_LAZY  (128L*3000L)
    2357             : 
    2358           0 : #define STEM_CALLBACK_CONTEXT_TYPE  fd_poh_ctx_t
    2359           0 : #define STEM_CALLBACK_CONTEXT_ALIGN alignof(fd_poh_ctx_t)
    2360             : 
    2361           0 : #define STEM_CALLBACK_DURING_HOUSEKEEPING during_housekeeping
    2362           0 : #define STEM_CALLBACK_METRICS_WRITE       metrics_write
    2363           0 : #define STEM_CALLBACK_AFTER_CREDIT        after_credit
    2364           0 : #define STEM_CALLBACK_BEFORE_FRAG         before_frag
    2365           0 : #define STEM_CALLBACK_DURING_FRAG         during_frag
    2366           0 : #define STEM_CALLBACK_AFTER_FRAG          after_frag
    2367             : 
    2368             : #include "../../disco/stem/fd_stem.c"
    2369             : 
    2370             : fd_topo_run_tile_t fd_tile_poh = {
    2371             :   .name                     = "poh",
    2372             :   .populate_allowed_seccomp = NULL,
    2373             :   .populate_allowed_fds     = NULL,
    2374             :   .scratch_align            = scratch_align,
    2375             :   .scratch_footprint        = scratch_footprint,
    2376             :   .privileged_init          = privileged_init,
    2377             :   .unprivileged_init        = unprivileged_init,
    2378             :   .run                      = stem_run,
    2379             : };

Generated by: LCOV version 1.14