#define _GNU_SOURCE

/* Let's say there was a computer, the "leader" computer, that acted as
   a bank.  Users could send it messages saying they wanted to deposit
   money, or transfer it to someone else.

   That's how, for example, Bank of America works but there are problems
   with it.  One simple problem is: the bank can set your balance to
   zero if they don't like you.

   You could try to fix this by having the bank periodically publish the
   list of all account balances and transactions.  If the customers add
   unforgeable signatures to their deposit slips and transfers, then
   the bank cannot zero a balance without it being obvious to everyone.

   There are still problems.  The bank can't lie about your balance now
   or take your money, but it can just not accept deposits on your
   behalf by ignoring you.

   You could fix this by getting a few independent banks together, let's
   say Bank of America, Bank of England, and Westpac, and having them
   rotate who operates the leader computer periodically.  If one bank
   ignores your deposits, you can just wait and send them to the next
   one.

   This is Solana.

   There are still problems of course, but they are largely technical.
   How do the banks agree who is leader?  How do you recover if a leader
   misbehaves?  How do customers verify the transactions aren't forged?
   How do banks receive and publish and verify each other's work
   quickly?  These are the main technical innovations that enable
   Solana to work well.

   What about Proof of History?

   One particular niche problem is about the leader schedule.  When the
   leader computer is moving from one bank to another, the new bank must
   wait for the old bank to say it's done and provide a final list of
   balances that it can start working off of.  But: what if the computer
   at the old bank crashes and never says it's done?

   Does the new leader just take over at some point?  What if the new
   leader is malicious, and says the past thousand leaders crashed, and
   there have been no transactions for days?  How do you check?

   This is what Proof of History solves.  Each bank in the network must
   constantly do a lot of busywork (compute hashes), even when it is not
   leader.

   If the prior thousand leaders crashed, and no transactions happened
   in an hour, the new leader would have to show they did about an hour
   of busywork for everyone else to believe them.

   A better name for this is proof of skipping.  If a leader is skipping
   slots (building off of a slot that is not the direct parent), it must
   prove that it waited a good amount of time to do so.

   It's not a perfect solution.  For one thing, some banks have really
   fast computers and can compute a lot of busywork in a short amount of
   time, allowing them to skip prior slot(s) anyway.  But: there is a
   social component that prevents validators from skipping the prior
   leader slot.  It is easy to detect when this happens and the network
   could respond by ignoring their votes or stake.

   You could come up with other schemes: for example, the network could
   just use wall clock time.  If a new leader publishes a block without
   waiting 400 milliseconds for the prior slot to complete, then there
   is no "proof of skipping" and the nodes ignore the slot.

   These schemes have a problem in that they are not deterministic
   across the network (different computers have different clocks), and
   so they will cause frequent forks which are very expensive to
   resolve.  Even though the proof of history scheme is not perfect,
   it is better than any alternative which is not deterministic.

   With all that background, we can now describe at a high level what
   this PoH tile actually does:

    (1) Whenever any other leader in the network finishes a slot, and
        the slot is determined to be the best one to build off of, this
        tile gets "reset" onto that block, the so called "reset slot".

    (2) The tile is constantly doing busy work, hash(hash(hash(...))) on
        top of the last reset slot, even when it is not leader.

    (3) When the tile becomes leader, it continues hashing from where it
        was.  Typically, the prior leader finishes their slot, so the
        reset slot will be the parent one, and this tile only publishes
        hashes for its own slot.  But if prior slots were skipped, then
        there might be a whole chain already waiting.

    That's pretty much it.  When we are leader, in addition to doing
    busywork, we publish ticks and microblocks to the shred tile.  A
    microblock is a non-empty group of transactions whose hashes are
    mixed-in to the chain, while a tick is a periodic stamp of the
    current hash, with no transactions (nothing mixed in).  We need
    to send both to the shred tile, as ticks are important for other
    validators to verify in parallel.
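
    Schematically, the two chain update rules look like the following
    (a sketch of the idea, not the exact code in this file; both steps
    advance the hashcnt by one):

      tick:  state = sha256( state )
      mixin: state = sha256( state || microblock_hash )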

    As well, the tile should never become leader for a slot that it has
    published anything for, otherwise it may create a duplicate block.

    Some particularly common misunderstandings:

     - PoH is critical to security.

       This largely isn't true.  The target hash rate of the network is
       so slow (1 hash per 100 nanoseconds) that a malicious leader can
       easily catch up if they start from an old hash, and the only
       practical attack prevented is the proof of skipping.  Most of the
       long range attacks in the Solana whitepaper are not relevant.

     - PoH keeps passage of time.

       This is also not true.  The way the network keeps time so it can
       decide who is leader is that each leader uses their operating
       system clock to time 400 milliseconds and publishes their block
       when this timer expires.

       If a leader just hashed as fast as they could, they could publish
       a block in tens of milliseconds, and the rest of the network
       would happily accept it.  This is why the Solana "clock" as
       determined by PoH is not accurate and drifts over time.

     - PoH prevents transaction reordering by the leader.

       The leader can, in theory, wait until the very end of their
       leader slot to publish anything at all to the network.  They can,
       in particular, hold all received transactions for 400
       milliseconds and then reorder and publish some right at the end
       to advantage certain transactions.

    You might be wondering... if all the PoH chain is helping us do is
    prove that slots were skipped correctly, why do we need to "mix in"
    transactions to the hash value?  Or do anything at all for slots
    where we don't skip the prior slot?

    It's a good question, and the answer is that this behavior is not
    necessary.  An ideal implementation of PoH would have no concept of
    ticks or mixins, and would not be part of the TPU pipeline at all.
    Instead, there would be a simple field "skip_proof" on the last
    shred we send for a slot, the hash(hash(...)) value.  This field
    would only be filled in (and only verified by replayers) in cases
    where the slot actually skipped a parent.

    Then what is the "clock"?  In Solana, time is constructed as
    follows:

    HASHES

        The base unit of time is a hash.  Hereafter, any values whose
        units are in hashes are called a "hashcnt" to distinguish them
        from actual hashed values.

        Agave generally defines a constant duration for each tick
        (see below) and then varies the number of hashcnts per tick,
        but as we consider the hashcnt the base unit of time, Firedancer
        and this PoH implementation define everything in terms of
        hashcnt duration instead.

        In mainnet-beta, testnet, and devnet the hashcnt ticks over
        (increments) every 100 nanoseconds.  The hashcnt rate is
        specified as 500 nanoseconds according to the genesis, but there
        are several features which increase the number of hashes per
        tick while keeping tick duration constant, which make the time
        per hashcnt lower.  These features up to and including the
        `update_hashes_per_tick6` feature are activated on mainnet-beta,
        devnet, and testnet, and are described in the TICKS section
        below.

        Other chains and development environments might have a different
        hashcnt rate in the genesis, or they might not have activated
        the features which increase the rate yet, which we also support.

        In practice, although each validator follows a hashcnt rate of
        100 nanoseconds, the overall observed hashcnt rate of the
        network is a little slower than once every 100 nanoseconds,
        mostly because there are gaps and clock synchronization issues
        during handoff between leaders.  This is referred to as clock
        drift.

    TICKS

        The leader needs to periodically checkpoint the hash value
        associated with a given hashcnt so that they can publish it to
        other nodes for verification.

        On mainnet-beta, testnet, and devnet this occurs once every
        62,500 hashcnts, or approximately once every 6.25 milliseconds.
        This value is determined at genesis time and by the features
        described below, and could be different in development
        environments or on other chains which we support.
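
        For example, with the mainnet-beta values above, the tick
        arithmetic works out as:

          62,500 hashcnts/tick * 100 ns/hashcnt = 6,250,000 ns
                                                = 6.25 ms per tick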

        Due to protocol limitations, transactions cannot be mixed in to
        the proof-of-history chain on a tick boundary (but they can be
        mixed in at any other hashcnt).

        Ticks exist mainly so that verification can happen in parallel.
        A verifier computer, rather than needing to do hash(hash(...))
        all in sequence to verify a proof-of-history chain, can do,

         Core 0: hash(hash(...))
         Core 1: hash(hash(...))
         Core 2: hash(hash(...))
         Core 3: hash(hash(...))
         ...

        between each pair of tick boundaries.

        Solana sometimes calls the current tick the "tick height",
        although it makes more sense to think of it as a counter from
        zero: it is just the number of ticks since the genesis hash.
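
        For example, on mainnet-beta (64 ticks per slot, see SLOTS
        below), a tick height of 6,400 means 100 slots worth of ticks
        have been produced since genesis (6,400 / 64 = 100).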

        There is a set of features which increase the number of hashcnts
        per tick.  These are all deployed on mainnet-beta, devnet, and
        testnet.

           name:             update_hashes_per_tick
           id:               3uFHb9oKdGfgZGJK9EHaAXN4USvnQtAFC13Fh5gGFS5B
           hashes per tick:  12,500
           hashcnt duration: 500 nanos

           name:             update_hashes_per_tick2
           id:               EWme9uFqfy1ikK1jhJs8fM5hxWnK336QJpbscNtizkTU
           hashes per tick:  17,500
           hashcnt duration: 357.142857143 nanos

           name:             update_hashes_per_tick3
           id:               8C8MCtsab5SsfammbzvYz65HHauuUYdbY2DZ4sznH6h5
           hashes per tick:  27,500
           hashcnt duration: 227.272727273 nanos

           name:             update_hashes_per_tick4
           id:               8We4E7DPwF2WfAN8tRTtWQNhi98B99Qpuj7JoZ3Aikgg
           hashes per tick:  47,500
           hashcnt duration: 131.578947368 nanos

           name:             update_hashes_per_tick5
           id:               BsKLKAn1WM4HVhPRDsjosmqSg2J8Tq5xP2s2daDS6Ni4
           hashes per tick:  57,500
           hashcnt duration: 108.695652174 nanos

           name:             update_hashes_per_tick6
           id:               FKu1qYwLQSiehz644H6Si65U5ZQ2cp9GxsyFUfYcuADv
           hashes per tick:  62,500
           hashcnt duration: 100 nanos

        In development environments, there is a way to configure the
        hashcnt per tick to be "none" during genesis, for a so-called
        "low power" tick producer.  The idea is not to spin cores during
        development.  This is equivalent to setting the hashcnt per tick
        to be 1, and increasing the hashcnt duration to the desired tick
        duration.
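
        For reference, the derived clock values computed by
        fd_ext_poh_initialize below work out as follows for current
        mainnet-beta parameters:

          hashcnt_duration_ns = tick_duration_ns / hashcnt_per_tick
                              = 6,250,000 / 62,500 = 100 ns
          hashcnt_per_slot    = ticks_per_slot * hashcnt_per_tick
                              = 64 * 62,500 = 4,000,000
          slot_duration_ns    = ticks_per_slot * tick_duration_ns
                              = 64 * 6,250,000 ns = 400 ms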

    SLOTS

        Each leader needs to be leader for a fixed amount of time, which
        is called a slot.  During a slot, a leader has an opportunity to
        receive transactions and produce a block for the network,
        although they may miss ("skip") the slot if they are offline or
        not behaving.

        In mainnet-beta, testnet, and devnet a slot is 64 ticks, or
        4,000,000 hashcnts, or approximately 400 milliseconds.

        Due to the way the leader schedule is constructed, each leader
        is always given at least four (4) consecutive slots in the
        schedule.  This means when becoming leader you will be leader
        for at least 4 slots, or 1.6 seconds.

        It is rare, although it can happen, that a leader gets more than
        4 consecutive slots (e.g. 8 or 12), if they are lucky with the
        leader schedule generation.

        The number of ticks in a slot is fixed at genesis time, and
        could be different for development or other chains, which we
        support.  There is nothing special about 4 leader slots in a
        row, and this might be changed in future, and the proof of
        history makes no assumptions that this is the case.

    EPOCHS

        Infrequently, the network needs to do certain housekeeping,
        mainly things like collecting rent and deciding on the leader
        schedule.  The length of an epoch is fixed on mainnet-beta,
        devnet and testnet at 432,000 slots, or around two (2.0) days.
        This value is fixed at genesis time, and could be different for
        other chains including development, which we support.  Typically
        in development, epochs are every 8,192 slots, or around ~1 hour
        (54.61 minutes), although it depends on the number of ticks per
        slot and the target hashcnt rate of the genesis as well.
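
        Both figures follow from the 400 millisecond slot:

          432,000 slots * 0.4 s = 172,800 s = 48 hours = 2.0 days
            8,192 slots * 0.4 s = 3,276.8 s ~= 54.61 minutes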

        In development, epochs need not be a fixed length either.  There
        is a "warmup" option, where epochs start short and grow, which
        is useful for quickly warming up stake during development.

        The epoch is important because it is the only time the leader
        schedule is updated.  The leader schedule is a list of which
        leader is leader for which slot, and is generated by a special
        algorithm that is deterministic and known to all nodes.

        The leader schedule is computed one epoch in advance, so that
        at slot T, we always know who will be leader up until the end
        of slot T+EPOCH_LENGTH.  Specifically, the leader schedule for
        epoch N is computed during the epoch boundary crossing from
        N-2 to N-1.  For mainnet-beta, the slots per epoch is fixed and
        will always be 432,000. */

#include "../bank/fd_bank_abi.h"

#include "../../disco/tiles.h"
#include "../../disco/plugin/fd_bundle_crank.h"
#include "../../disco/pack/fd_pack.h"
#include "../../ballet/sha256/fd_sha256.h"
#include "../../disco/metrics/fd_metrics.h"
#include "../../disco/topo/fd_pod_format.h"
#include "../../disco/shred/fd_shredder.h"
#include "../../disco/shred/fd_stake_ci.h"
#include "../../disco/keyguard/fd_keyload.h"
#include "../../disco/keyguard/fd_keyswitch.h"
#include "../../disco/metrics/generated/fd_metrics_poh.h"
#include "../../disco/plugin/fd_plugin.h"
#include "../../flamenco/leaders/fd_leaders.h"

#include <string.h>

/* The maximum number of microblocks that pack is allowed to pack into a
   single slot.  This is not consensus critical, and pack could, if we
   let it, produce as many microblocks as it wants, and the slot would
   still be valid.

   We have this here instead so that PoH can estimate slot completion,
   and keep the hashcnt up to date as pack progresses through packing
   the slot.  If this upper bound was not enforced, PoH could tick to
   the last hash of the slot and have no hashes left to mixin incoming
   microblocks from pack, so this upper bound is a coordination
   mechanism so that PoH can progress hashcnts while the slot is active,
   and know that pack will not need those hashcnts later to do mixins. */
#define MAX_MICROBLOCKS_PER_SLOT (32768UL)

/* When we are hashing in the background in case a prior leader skips
   their slot, we need to store the result of each tick hash so we can
   publish them when we become leader.  The network requires at least
   one leader slot to publish in each epoch for the leader schedule to
   generate, so in the worst case we might need two full epochs of slots
   to store the hashes.  (E.g., if epoch T only had a published slot in
   position 0 and epoch T+1 only had a published slot right at the end).

   There is a tighter bound: the block data limit of mainnet-beta is
   currently FD_PACK_MAX_DATA_PER_BLOCK, or 27,332,342 bytes per slot.
   At 48 bytes per tick, it is not possible to publish a slot that skips
   569,424 or more prior slots. */
#define MAX_SKIPPED_TICKS (1UL+(FD_PACK_MAX_DATA_PER_BLOCK/48UL))

#define IN_KIND_BANK  (0)
#define IN_KIND_PACK  (1)
#define IN_KIND_STAKE (2)


typedef struct {
  fd_wksp_t * mem;
  ulong       chunk0;
  ulong       wmark;
} fd_poh_in_ctx_t;

typedef struct {
  ulong       idx;
  fd_wksp_t * mem;
  ulong       chunk0;
  ulong       wmark;
  ulong       chunk;
} fd_poh_out_ctx_t;

typedef struct {
  fd_stem_context_t * stem;

  /* Static configuration determined at genesis creation time.  See
     long comment above for more information. */
  ulong  tick_duration_ns;
  ulong  hashcnt_per_tick;
  ulong  ticks_per_slot;

  /* Derived from the above configuration, but we precompute it. */
  double slot_duration_ns;
  double hashcnt_duration_ns;
  ulong  hashcnt_per_slot;
  /* Constant, fixed at initialization.  The maximum number of
     microblocks that the pack tile can publish in each slot. */
  ulong max_microblocks_per_slot;

  /* The current slot and hashcnt within that slot of the proof of
     history, including hashes we have been producing in the background
     while waiting for our next leader slot. */
  ulong slot;
  ulong hashcnt;
  ulong cus_used;

  /* When we send a microblock on to the shred tile, we need to tell
     it how many hashes there have been since the last microblock, so
     this tracks the hashcnt of the last published microblock.

     If we are skipping slots prior to our leader slot, the last_slot
     will be quite old, and potentially much larger than the number of
     hashcnts in one slot. */
  ulong last_slot;
  ulong last_hashcnt;

  /* If we have published a tick or a microblock for a particular slot
     to the shred tile, we should never become leader for that slot
     again, otherwise we could publish a duplicate block.

     This value tracks the max slot that we have published a tick or
     microblock for so we can prevent this. */
  ulong highwater_leader_slot;

  /* See how this field is used below.  If we have sequential leader
     slots, we don't reset the expected slot end time between the two,
     to prevent clock drift.  If we didn't do this, our 2nd slot would
     end 400ms + `time_for_replay_to_move_slot_and_reset_poh` after
     our 1st, rather than just strictly 400ms. */
  int  lagged_consecutive_leader_start;
  ulong expect_sequential_leader_slot;

  /* There's a race condition ... let's say two banks A and B, bank A
     processes some transactions, then releases the account locks, and
     sends the microblock to PoH to be stamped.  Pack now re-packs the
     same accounts with a new microblock, sends to bank B, bank B
     executes and sends the microblock to PoH, and this all happens fast
     enough that PoH picks the 2nd block to stamp before the 1st.  The
     accounts database changes now are misordered with respect to PoH so
     replay could fail.

     To prevent this race, we order all microblocks and only process
     them in PoH in the order they are produced by pack.  This is a
     little bit over-strict, we just need to ensure that microblocks
     with conflicting accounts execute in order, but this is easiest to
     implement for now. */
  ulong expect_microblock_idx;

  /* The PoH tile must never drop microblocks that get committed by the
     bank, so it needs to always be able to mixin a microblock hash.
     Mixing in requires incrementing the hashcnt, so we need to ensure
     at all times that there are enough hashcnts left in the slot to
     mixin whatever future microblocks pack might produce for it.

     This value tracks that.  At any time, max_microblocks_per_slot
     - microblocks_lower_bound is an upper bound on the maximum number
     of microblocks that might still be received in this slot. */
  ulong microblocks_lower_bound;

  uchar __attribute__((aligned(32UL))) reset_hash[ 32 ];
  uchar __attribute__((aligned(32UL))) hash[ 32 ];

  /* When we are not leader, we need to save the hashes that were
     produced in case the prior leader skips.  If they skip, we will
     replay these skipped hashes into our next leader bank so that
     the slot hashes sysvar can be updated correctly, and also publish
     them to peer nodes as part of our outgoing shreds. */
  uchar skipped_tick_hashes[ MAX_SKIPPED_TICKS ][ 32 ];
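
  /* Note: with the mainnet-beta pack limit quoted above, this array is
     roughly 569,424 ticks * 32 bytes ~= 18 MB of the tile footprint. */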

  /* The timestamp in nanoseconds of when the reset slot was received.
     This is the timestamp we are building on top of to determine when
     our next leader slot starts. */
  long reset_slot_start_ns;

  /* The timestamp in nanoseconds of when we got the bank for the
     current leader slot. */
  long leader_bank_start_ns;

  /* The slot of the most recent reset, i.e. one above the last good
     (unskipped) slot we are building on top of. */
  ulong reset_slot;

  /* The slot at which our next leader slot begins, or ULONG_MAX if
     we have no known next leader slot. */
  ulong next_leader_slot;

  /* If an in progress frag should be skipped */
  int skip_frag;

  ulong max_active_descendant;

  /* If we are currently the leader according to the clock AND we have
     received the leader bank for the slot from the replay stage,
     this value will be non-NULL.

     Note that we might be inside our leader slot, but not have a bank
     yet, in which case this will still be NULL.

     It will be NULL for a brief race period between consecutive leader
     slots, as we ping-pong back to replay stage waiting for a new bank.

     Agave refers to this as the "working bank". */
  void const * current_leader_bank;

  fd_sha256_t * sha256;

  fd_stake_ci_t * stake_ci;

  /* The last sequence number of an outgoing fragment to the shred tile,
     or ULONG_MAX if no such fragment.  See fd_keyswitch.h for details
     of how this is used. */
  ulong shred_seq;

  int halted_switching_key;

  fd_keyswitch_t * keyswitch;
  fd_pubkey_t identity_key;

  /* We need a few pieces of information to compute the right addresses
     for bundle crank information that we need to send to pack. */
  struct {
    int enabled;
    fd_pubkey_t vote_account;
    fd_bundle_crank_gen_t gen[1];
  } bundle;


  /* The Agave client needs to be notified when the leader changes, so
     that it can resume the replay stage if it was suspended waiting. */
  void * signal_leader_change;

  /* These are temporarily set in during_frag so they can be used in
     after_frag once the frag has been validated as not overrun. */
  uchar _txns[ USHORT_MAX ];
  fd_microblock_trailer_t _microblock_trailer[ 1 ];

  int in_kind[ 64 ];
  fd_poh_in_ctx_t in[ 64 ];

  fd_poh_out_ctx_t shred_out[ 1 ];
  fd_poh_out_ctx_t pack_out[ 1 ];
  fd_poh_out_ctx_t plugin_out[ 1 ];

  fd_histf_t begin_leader_delay[ 1 ];
  fd_histf_t first_microblock_delay[ 1 ];
  fd_histf_t slot_done_delay[ 1 ];
  fd_histf_t bundle_init_delay[ 1 ];
} fd_poh_ctx_t;

/* The PoH recorder is implemented in Firedancer but for now needs to
   work with Agave, so we have a locking scheme for them to
   co-operate.

   This is because the PoH tile lives in the Agave memory address
   space and their version of concurrency is locking the PoH recorder
   and reading arbitrary fields.

   So we allow them to lock the PoH tile, although with a very bad (for
   them) locking scheme.  By default, the tile has full and exclusive
   access to the data.  If part of Agave wishes to read/write they
   can either,

     1. Rewrite their concurrency to message passing based on mcache
        (preferred, but not feasible).
     2. Signal to the tile they wish to acquire the lock, by setting
        fd_poh_waiting_lock to 1.

   During after_credit, the tile will check if the waiting lock is set
   to 1, and if so, set the returned lock to 1, indicating to the waiter
   that they may now proceed.

   When the waiter is done reading and writing, they restore the
   returned lock value back to zero, and the PoH tile continues with its
   day. */
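
/* A minimal sketch of the tile-side half of this handshake (the real
   check lives in after_credit, further down in this file; exact
   details may differ):

     if( FD_UNLIKELY( fd_poh_waiting_lock ) ) {
       fd_poh_waiting_lock  = 0UL;
       fd_poh_returned_lock = 1UL;  // grant the lock to the waiter
       FD_COMPILER_MFENCE();
       while( fd_poh_returned_lock ) FD_SPIN_PAUSE();  // waiter working
       FD_COMPILER_MFENCE();        // waiter is done, carry on
     } */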

static fd_poh_ctx_t * fd_poh_global_ctx;

static volatile ulong fd_poh_waiting_lock __attribute__((aligned(128UL)));
static volatile ulong fd_poh_returned_lock __attribute__((aligned(128UL)));

/* Agave also needs to write to some mcaches, so we trampoline
   that via the PoH tile as well. */

struct poh_link {
  fd_frag_meta_t * mcache;
  ulong            depth;
  ulong            tx_seq;

  void *           mem;
  void *           dcache;
  ulong            chunk0;
  ulong            wmark;
  ulong            chunk;

  ulong            cr_avail;
  ulong            rx_cnt;
  ulong *          rx_fseqs[ 32UL ];
};

typedef struct poh_link poh_link_t;

poh_link_t gossip_dedup;
poh_link_t stake_out;
poh_link_t crds_shred;
poh_link_t replay_resolv;

poh_link_t replay_plugin;
poh_link_t gossip_plugin;
poh_link_t start_progress_plugin;
poh_link_t vote_listener_plugin;
poh_link_t validator_info_plugin;

static void
poh_link_wait_credit( poh_link_t * link ) {
  if( FD_LIKELY( link->cr_avail ) ) return;

  while( 1 ) {
    ulong cr_query = ULONG_MAX;
    for( ulong i=0UL; i<link->rx_cnt; i++ ) {
      ulong const * _rx_seq = link->rx_fseqs[ i ];
      ulong rx_seq = FD_VOLATILE_CONST( *_rx_seq );
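      /* Credits available for this receiver are the mcache depth minus
         the frags still in flight to it.  E.g. with depth=128,
         tx_seq=100, and rx_seq=90, 10 frags are in flight, so 118 more
         can be published before overwriting unconsumed entries. */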
      ulong rx_cr_query = (ulong)fd_long_max( (long)link->depth - fd_long_max( fd_seq_diff( link->tx_seq, rx_seq ), 0L ), 0L );
      cr_query = fd_ulong_min( rx_cr_query, cr_query );
    }
    if( FD_LIKELY( cr_query>0UL ) ) {
      link->cr_avail = cr_query;
      break;
    }
    FD_SPIN_PAUSE();
  }
}

static void
poh_link_publish( poh_link_t *  link,
                  ulong         sig,
                  uchar const * data,
                  ulong         data_sz ) {
  while( FD_UNLIKELY( !FD_VOLATILE_CONST( link->mcache ) ) ) FD_SPIN_PAUSE();
  if( FD_UNLIKELY( !link->mem ) ) return; /* link not enabled, don't publish */
  poh_link_wait_credit( link );

  uchar * dst = (uchar *)fd_chunk_to_laddr( link->mem, link->chunk );
  fd_memcpy( dst, data, data_sz );
  ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
  fd_mcache_publish( link->mcache, link->depth, link->tx_seq, sig, link->chunk, data_sz, 0UL, 0UL, tspub );
  link->chunk = fd_dcache_compact_next( link->chunk, data_sz, link->chunk0, link->wmark );
  link->cr_avail--;
  link->tx_seq++;
}

static void
poh_link_init( poh_link_t *     link,
               fd_topo_t *      topo,
               fd_topo_tile_t * tile,
               ulong            out_idx ) {
  fd_topo_link_t * topo_link = &topo->links[ tile->out_link_id[ out_idx ] ];
  fd_topo_wksp_t * wksp = &topo->workspaces[ topo->objs[ topo_link->dcache_obj_id ].wksp_id ];

  link->mem      = wksp->wksp;
  link->depth    = fd_mcache_depth( topo_link->mcache );
  link->tx_seq   = 0UL;
  link->dcache   = topo_link->dcache;
  link->chunk0   = fd_dcache_compact_chunk0( wksp->wksp, topo_link->dcache );
  link->wmark    = fd_dcache_compact_wmark ( wksp->wksp, topo_link->dcache, topo_link->mtu );
  link->chunk    = link->chunk0;
  link->cr_avail = 0UL;
  link->rx_cnt   = 0UL;
  for( ulong i=0UL; i<topo->tile_cnt; i++ ) {
    fd_topo_tile_t * _tile = &topo->tiles[ i ];
    for( ulong j=0UL; j<_tile->in_cnt; j++ ) {
      if( _tile->in_link_id[ j ]==topo_link->id && _tile->in_link_reliable[ j ] ) {
        FD_TEST( link->rx_cnt<32UL );
        link->rx_fseqs[ link->rx_cnt++ ] = _tile->in_link_fseq[ j ];
        break;
      }
    }
  }
  FD_COMPILER_MFENCE();
  link->mcache = topo_link->mcache;
  FD_COMPILER_MFENCE();
  FD_TEST( link->mcache );
}

/* To help show correctness, functions that might be called from
   Rust, either directly or indirectly, have this fake "attribute"
   CALLED_FROM_RUST, which is actually nothing.  Calls from Rust
   typically execute on threads that did not call fd_boot, so they do
   not have the typical FD_TL variables.  In particular, they cannot
   use normal metrics, and their log messages don't have full context.
   Additionally, Rust functions marked CALLED_FROM_RUST cannot call back
   into a C fd_ext function without causing a deadlock (although the
   other Rust fd_ext functions have a similar problem).

   To prevent the annotation from polluting the whole codebase, calls to
   functions outside this file are manually checked and marked as being
   safe at each call rather than annotated. */
#define CALLED_FROM_RUST

static CALLED_FROM_RUST fd_poh_ctx_t *
fd_ext_poh_write_lock( void ) {
  for(;;) {
    /* Acquire the waiter lock to make sure we are the first writer in the queue. */
    if( FD_LIKELY( !FD_ATOMIC_CAS( &fd_poh_waiting_lock, 0UL, 1UL) ) ) break;
    FD_SPIN_PAUSE();
  }
  FD_COMPILER_MFENCE();
  for(;;) {
    /* Now wait for the tile to tell us we can proceed. */
    if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
    FD_SPIN_PAUSE();
  }
  FD_COMPILER_MFENCE();
  return fd_poh_global_ctx;
}

static CALLED_FROM_RUST void
fd_ext_poh_write_unlock( void ) {
  FD_COMPILER_MFENCE();
  FD_VOLATILE( fd_poh_returned_lock ) = 0UL;
}
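
/* Typical usage of the lock pair from the waiter side follows the
   pattern in fd_ext_poh_acquire_leader_bank below:

     fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
     ... read or write ctx fields ...
     fd_ext_poh_write_unlock(); */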

/* The PoH tile needs to interact with the Agave address space to
   do certain operations that Firedancer hasn't reimplemented yet,
   namely transaction execution.  We have Agave export some wrapper
   functions that we call into during regular tile execution.  These do
   not need any locking, since they are called serially from the single
   PoH tile. */

extern CALLED_FROM_RUST void fd_ext_bank_acquire( void const * bank );
extern CALLED_FROM_RUST void fd_ext_bank_release( void const * bank );
extern CALLED_FROM_RUST void fd_ext_poh_signal_leader_change( void * sender );
extern                  void fd_ext_poh_register_tick( void const * bank, uchar const * hash );

/* fd_ext_poh_initialize is called by Agave on startup to
   initialize the PoH tile with some static configuration, and the
   initial reset slot and hash which it retrieves from a snapshot.

   This function is called by some random Agave thread, but
   it blocks booting of the PoH tile.  The tile will spin until it
   determines that this initialization has happened.

   signal_leader_change is an opaque Rust object that is used to
   tell the replay stage that the leader has changed.  It is a
   Box::into_raw(Arc::increment_strong(crossbeam::Sender)), so it
   has infinite lifetime unless this C code releases the refcnt.

   It can be used with `fd_ext_poh_signal_leader_change` which
   will just issue a nonblocking send on the channel. */

CALLED_FROM_RUST void
fd_ext_poh_initialize( ulong         tick_duration_ns,    /* See clock comments above, will be 6.25 milliseconds for mainnet-beta. */
                       ulong         hashcnt_per_tick,    /* See clock comments above, will be 62,500 for mainnet-beta. */
                       ulong         ticks_per_slot,      /* See clock comments above, will almost always be 64. */
                       ulong         tick_height,         /* The counter (height) of the tick to start hashing on top of. */
                       uchar const * last_entry_hash,     /* Points to start of a 32 byte region of memory, the hash itself at the tick height. */
                       void *        signal_leader_change /* See comment above. */ ) {
  FD_COMPILER_MFENCE();
  for(;;) {
    /* Make sure the ctx is initialized before trying to take the lock. */
    if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_global_ctx ) ) ) break;
    FD_SPIN_PAUSE();
  }
  fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();

  ctx->slot                = tick_height/ticks_per_slot;
  ctx->hashcnt             = 0UL;
  ctx->cus_used            = 0UL;
  ctx->last_slot           = ctx->slot;
  ctx->last_hashcnt        = 0UL;
  ctx->reset_slot          = ctx->slot;
  ctx->reset_slot_start_ns = fd_log_wallclock(); /* safe to call from Rust */

  memcpy( ctx->reset_hash, last_entry_hash, 32UL );
  memcpy( ctx->hash, last_entry_hash, 32UL );

  ctx->signal_leader_change = signal_leader_change;

  /* Static configuration about the clock. */
  ctx->tick_duration_ns = tick_duration_ns;
  ctx->hashcnt_per_tick = hashcnt_per_tick;
  ctx->ticks_per_slot   = ticks_per_slot;

  /* Recompute derived information about the clock. */
  ctx->slot_duration_ns    = (double)ticks_per_slot*(double)tick_duration_ns;
  ctx->hashcnt_duration_ns = (double)tick_duration_ns/(double)hashcnt_per_tick;
  ctx->hashcnt_per_slot    = ticks_per_slot*hashcnt_per_tick;

  if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
    /* Low power producer, maximum of one microblock per tick in the slot */
    ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
  } else {
    /* See the long comment in after_credit for this limit */
    ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
  }

  fd_ext_poh_write_unlock();
}
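
/* For illustration, a hypothetical call with current mainnet-beta clock
   values (the snapshot-derived arguments are placeholders):

     uchar last_hash[ 32 ];  // tick hash loaded from the snapshot
     fd_ext_poh_initialize( 6250000UL,   // 6.25 ms per tick
                            62500UL,     // hashes per tick
                            64UL,        // ticks per slot
                            tick_height, // from the snapshot
                            last_hash,
                            sender );    // crossbeam sender, see above */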

/* fd_ext_poh_acquire_leader_bank gets the current leader bank if there
   is one currently active.  PoH might think we are leader without
   having a leader bank if the replay stage has not yet noticed we are
   leader.

   The bank that is returned is owned by the caller, and must be
   converted to an Arc<Bank> by calling Arc::from_raw() on it.  PoH
   increments the reference count before returning the bank, so that it
   can also keep its internal copy.

   If there is no leader bank, NULL is returned.  In this case, the
   caller should not call `Arc::from_raw()`. */

CALLED_FROM_RUST void const *
fd_ext_poh_acquire_leader_bank( void ) {
  fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
  void const * bank = NULL;
  if( FD_LIKELY( ctx->current_leader_bank ) ) {
    /* Clone refcount before we release the lock. */
    fd_ext_bank_acquire( ctx->current_leader_bank );
    bank = ctx->current_leader_bank;
  }
  fd_ext_poh_write_unlock();
  return bank;
}

/* fd_ext_poh_reset_slot returns the slot height one above the last good
   (unskipped) slot we are building on top of.  This is always a good
   known value, and will not be ULONG_MAX. */

CALLED_FROM_RUST ulong
fd_ext_poh_reset_slot( void ) {
  fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
  ulong reset_slot = ctx->reset_slot;
  fd_ext_poh_write_unlock();
  return reset_slot;
}

CALLED_FROM_RUST void
fd_ext_poh_update_active_descendant( ulong max_active_descendant ) {
  fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
  ctx->max_active_descendant = max_active_descendant;
  fd_ext_poh_write_unlock();
}

/* fd_ext_poh_reached_leader_slot returns 1 if we have reached a slot
   where we are leader.  This is used by the replay stage to determine
   if it should create a new leader bank descendant of the prior reset
   slot block.

   Sometimes, even when we reach our slot we do not return 1, as we are
   giving a grace period to the prior leader to finish publishing their
   block.

   out_leader_slot is the slot height of the leader slot we reached, and
   reset_slot is the slot height of the last good (unskipped) slot we
   are building on top of. */

CALLED_FROM_RUST int
fd_ext_poh_reached_leader_slot( ulong * out_leader_slot,
                                ulong * out_reset_slot ) {
  fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();

  *out_leader_slot = ctx->next_leader_slot;
  *out_reset_slot  = ctx->reset_slot;

  if( FD_UNLIKELY( ctx->next_leader_slot==ULONG_MAX ||
                   ctx->slot<ctx->next_leader_slot ) ) {
    /* Didn't reach our leader slot yet. */
    fd_ext_poh_write_unlock();
    return 0;
  }

  if( FD_UNLIKELY( ctx->halted_switching_key ) ) {
    /* Reached our leader slot, but the leader pipeline is halted
       because we are switching identity key. */
    fd_ext_poh_write_unlock();
    return 0;
  }

  if( FD_LIKELY( ctx->reset_slot==ctx->next_leader_slot ) ) {
    /* We were reset onto our leader slot, because the prior leader
       completed theirs, so we should start immediately, no need for a
       grace period. */
    fd_ext_poh_write_unlock();
    return 1;
  }

  long now_ns = fd_log_wallclock();
  long expected_start_time_ns = ctx->reset_slot_start_ns + (long)((double)(ctx->next_leader_slot-ctx->reset_slot)*ctx->slot_duration_ns);
     878             : 
      879             :    /* If a prior leader is still in the process of publishing their slot,
      880             :       delay ours to let them finish ... unless they are so delayed that
      881             :       we risk getting skipped by the leader following us.  1.2 seconds
      882             :       (three 400ms slot durations, hence the 3.0 below) is a reasonable
      883             :       default, although any value between 0 and 1.6 seconds could be
      884             :       considered reasonable; the exact value is chosen by intuition. */
     885             : 
     886           0 :   if( FD_UNLIKELY( now_ns<expected_start_time_ns+(long)(3.0*ctx->slot_duration_ns) ) ) {
      887             :     /* If the max_active_descendant is >= next_leader_slot, we waited
      888             :        too long and a leader after us started publishing to try to skip
      889             :        us.  Just start our leader slot immediately; we might win ... */
     890             : 
     891           0 :     if( FD_LIKELY( ctx->max_active_descendant>=ctx->reset_slot && ctx->max_active_descendant<ctx->next_leader_slot ) ) {
     892             :       /* If one of the leaders between the reset slot and our leader
     893             :          slot is in the process of publishing (they have a descendant
     894             :          bank that is in progress of being replayed), then keep waiting.
     895             :          We probably wouldn't get a leader slot out before they
     896             :          finished.
     897             : 
     898             :          Unless... we are past the deadline to start our slot by more
     899             :          than 1.2 seconds, in which case we should probably start it to
     900             :          avoid getting skipped by the leader behind us. */
     901           0 :       fd_ext_poh_write_unlock();
     902           0 :       return 0;
     903           0 :     }
     904           0 :   }
     905             : 
     906           0 :   fd_ext_poh_write_unlock();
     907           0 :   return 1;
     908           0 : }
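
/* The grace period decision above, restated as a standalone pure
   function over plain scalars.  This is an illustrative sketch only:
   it omits the halted_switching_key check and the write lock, and the
   parameter names are mine, not part of this tile. */

static int
reached_leader_slot_sketch( ulong slot,                  /* current PoH slot */
                            ulong next_leader_slot,      /* ULONG_MAX if no upcoming leader slot */
                            ulong reset_slot,
                            ulong max_active_descendant,
                            long  now_ns,
                            long  expected_start_ns,     /* when our leader slot should begin */
                            long  slot_duration_ns ) {
  if( next_leader_slot==ULONG_MAX || slot<next_leader_slot ) return 0; /* not our turn yet */
  if( reset_slot==next_leader_slot ) return 1; /* reset directly onto our slot, start immediately */

  if( now_ns<expected_start_ns+3L*slot_duration_ns ) {
    /* Inside the grace window: keep waiting only if a leader between
       the reset slot and our slot is actively publishing. */
    if( max_active_descendant>=reset_slot && max_active_descendant<next_leader_slot ) return 0;
  }
  return 1;
}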
     909             : 
     910             : CALLED_FROM_RUST static inline void
     911             : publish_plugin_slot_start( fd_poh_ctx_t * ctx,
     912             :                            ulong          slot,
     913           0 :                            ulong          parent_slot ) {
     914           0 :   if( FD_UNLIKELY( !ctx->plugin_out->mem ) ) return;
     915             : 
     916           0 :   fd_plugin_msg_slot_start_t * slot_start = (fd_plugin_msg_slot_start_t *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
     917           0 :   *slot_start = (fd_plugin_msg_slot_start_t){ .slot = slot, .parent_slot = parent_slot };
     918           0 :   fd_stem_publish( ctx->stem, ctx->plugin_out->idx, FD_PLUGIN_MSG_SLOT_START, ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_start_t), 0UL, 0UL, 0UL );
     919           0 :   ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_start_t), ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
     920           0 : }
     921             : 
     922             : CALLED_FROM_RUST static inline void
     923             : publish_plugin_slot_end( fd_poh_ctx_t * ctx,
     924             :                          ulong          slot,
     925           0 :                          ulong          cus_used ) {
     926           0 :   if( FD_UNLIKELY( !ctx->plugin_out->mem ) ) return;
     927             : 
     928           0 :   fd_plugin_msg_slot_end_t * slot_end = (fd_plugin_msg_slot_end_t *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
     929           0 :   *slot_end = (fd_plugin_msg_slot_end_t){ .slot = slot, .cus_used = cus_used };
     930           0 :   fd_stem_publish( ctx->stem, ctx->plugin_out->idx, FD_PLUGIN_MSG_SLOT_END, ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_end_t), 0UL, 0UL, 0UL );
     931           0 :   ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_end_t), ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
     932           0 : }
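
/* Both plugin publishers above follow the same three-step,
   single-producer dcache pattern: write the payload at the current
   chunk, publish the frag via stem, then advance the chunk.  A generic
   sketch (the helper name and memcpy-based payload handling are
   illustrative, not part of this tile): */

static inline void
publish_plugin_msg_sketch( fd_poh_ctx_t * ctx,
                           ulong          msg_type,   /* e.g. FD_PLUGIN_MSG_SLOT_START */
                           void const *   payload,
                           ulong          payload_sz ) {
  if( FD_UNLIKELY( !ctx->plugin_out->mem ) ) return; /* plugin output not wired up */

  /* 1. Write the payload into the dcache at the current chunk. */
  uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
  fd_memcpy( dst, payload, payload_sz );

  /* 2. Publish the frag so downstream consumers see it. */
  fd_stem_publish( ctx->stem, ctx->plugin_out->idx, msg_type, ctx->plugin_out->chunk, payload_sz, 0UL, 0UL, 0UL );

  /* 3. Advance the chunk past the payload for the next message. */
  ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, payload_sz, ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
}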
     933             : 
     934             : extern int
     935             : fd_ext_bank_load_account( void const *  bank,
     936             :                           int           fixed_root,
     937             :                           uchar const * addr,
     938             :                           uchar *       owner,
     939             :                           uchar *       data,
     940             :                           ulong *       data_sz );
     941             : 
     942             : CALLED_FROM_RUST static void
     943             : publish_became_leader( fd_poh_ctx_t * ctx,
     944             :                        ulong          slot,
     945           0 :                        ulong          epoch ) {
     946           0 :   double tick_per_ns = fd_tempo_tick_per_ns( NULL );
     947           0 :   fd_histf_sample( ctx->begin_leader_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
     948             : 
     949           0 :   if( FD_UNLIKELY( ctx->lagged_consecutive_leader_start ) ) {
     950             :     /* If we are mirroring Agave behavior, the wall clock gets reset
     951             :        here so we don't count time spent waiting for a bank to freeze
     952             :        or replay stage to actually start the slot towards our 400ms.
     953             : 
     954             :        See extended comments in the config file on this option. */
     955           0 :     ctx->reset_slot_start_ns = fd_log_wallclock() - (long)((double)(slot-ctx->reset_slot)*ctx->slot_duration_ns);
     956           0 :   }
     957             : 
     958           0 :   fd_bundle_crank_tip_payment_config_t config[1]             = { 0 };
     959           0 :   fd_acct_addr_t                       tip_receiver_owner[1] = { 0 };
     960             : 
     961           0 :   if( FD_UNLIKELY( ctx->bundle.enabled ) ) {
     962           0 :     long bundle_time = -fd_tickcount();
     963           0 :     fd_acct_addr_t tip_payment_config[1];
     964           0 :     fd_acct_addr_t tip_receiver[1];
     965           0 :     fd_bundle_crank_get_addresses( ctx->bundle.gen, epoch, tip_payment_config, tip_receiver );
     966             : 
     967           0 :     fd_acct_addr_t _dummy[1];
     968           0 :     uchar          dummy[1];
     969             : 
     970           0 :     void const * bank = ctx->current_leader_bank;
     971             : 
      972             :     /* Calling Rust from a C function that is CALLED_FROM_RUST risks
      973             :        deadlock.  In this case, I checked the load_account function and
      974             :        ensured it never calls any C functions that acquire the lock. */
     975           0 :     ulong sz1 = sizeof(config), sz2 = 1UL;
     976           0 :     int found1 = fd_ext_bank_load_account( bank, 0, tip_payment_config->b, _dummy->b,             (uchar *)config, &sz1 );
     977           0 :     int found2 = fd_ext_bank_load_account( bank, 0, tip_receiver->b,       tip_receiver_owner->b,          dummy,  &sz2 );
      978             :     /* The bundle crank code detects whether the accounts were found by
      979             :        whether they have non-zero values (a found-but-uninitialized
      980             :        account should be treated the same as a missing one), so we don't
      981             :        actually care about the value of found{1,2}. */
     982           0 :     (void)found1; (void)found2;
     983           0 :     bundle_time += fd_tickcount();
     984           0 :     fd_histf_sample( ctx->bundle_init_delay, (ulong)bundle_time );
     985           0 :   }
     986             : 
     987           0 :   long slot_start_ns = ctx->reset_slot_start_ns + (long)((double)(slot-ctx->reset_slot)*ctx->slot_duration_ns);
     988             : 
      989             :   /* No need to check flow control: there are always credits available
      990             :      when we become leader, and we will not "become" leader again until
      991             :      we are done, so at most one frag is in flight at a time. */
     992             : 
     993           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->pack_out->mem, ctx->pack_out->chunk );
     994             : 
     995           0 :   fd_became_leader_t * leader = (fd_became_leader_t *)dst;
     996           0 :   leader->slot_start_ns           = slot_start_ns;
     997           0 :   leader->slot_end_ns             = (long)((double)slot_start_ns + ctx->slot_duration_ns);
     998           0 :   leader->bank                    = ctx->current_leader_bank;
     999           0 :   leader->max_microblocks_in_slot = ctx->max_microblocks_per_slot;
    1000           0 :   leader->ticks_per_slot          = ctx->ticks_per_slot;
    1001           0 :   leader->total_skipped_ticks     = ctx->ticks_per_slot*(slot-ctx->reset_slot);
    1002           0 :   leader->epoch                   = epoch;
    1003           0 :   leader->bundle->config[0]       = config[0];
    1004             : 
    1005           0 :   memcpy( leader->bundle->last_blockhash,     ctx->reset_hash,    32UL );
    1006           0 :   memcpy( leader->bundle->tip_receiver_owner, tip_receiver_owner, 32UL );
    1007             : 
    1008           0 :   if( FD_UNLIKELY( leader->ticks_per_slot+leader->total_skipped_ticks>=MAX_SKIPPED_TICKS ) )
    1009           0 :     FD_LOG_ERR(( "Too many skipped ticks %lu for slot %lu, chain must halt", leader->ticks_per_slot+leader->total_skipped_ticks, slot ));
    1010             : 
    1011           0 :   ulong sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_BECAME_LEADER, 0UL );
    1012           0 :   fd_stem_publish( ctx->stem, ctx->pack_out->idx, sig, ctx->pack_out->chunk, sizeof(fd_became_leader_t), 0UL, 0UL, 0UL );
    1013           0 :   ctx->pack_out->chunk = fd_dcache_compact_next( ctx->pack_out->chunk, sizeof(fd_became_leader_t), ctx->pack_out->chunk0, ctx->pack_out->wmark );
    1014           0 : }
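
/* A worked example of the slot timing arithmetic in
   publish_became_leader, using the network's nominal 400ms slot
   duration (all concrete numbers here are illustrative only).  If the
   reset slot is 100 and we became leader for slot 103, our slot
   nominally starts three slot durations after the reset slot did: */

static void
slot_timing_example( void ) {
  double slot_duration_ns    = 400e6;        /* 400ms, nominal */
  long   reset_slot_start_ns = 1000000000L;  /* arbitrary wallclock origin */
  ulong  reset_slot          = 100UL;
  ulong  slot                = 103UL;

  long slot_start_ns = reset_slot_start_ns + (long)((double)(slot-reset_slot)*slot_duration_ns);
  long slot_end_ns   = (long)((double)slot_start_ns + slot_duration_ns);
  /* slot_start_ns == reset_slot_start_ns + 1,200,000,000 (three slots elapsed)
     slot_end_ns   == slot_start_ns       +   400,000,000 */
  (void)slot_end_ns;
}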
    1015             : 
    1016             : /* The PoH tile knows when it should become leader by waiting for its
     1017             :    leader slot (with the operating system clock).  This function exists
     1018             :    so that, when it becomes the leader, the replay stage can tell it
     1019             :    what the leader bank is.  See the notes in the long comment above for
    1020             :    more on how this works. */
    1021             : 
    1022             : CALLED_FROM_RUST void
    1023             : fd_ext_poh_begin_leader( void const * bank,
    1024             :                          ulong        slot,
    1025             :                          ulong        epoch,
    1026           0 :                          ulong        hashcnt_per_tick ) {
    1027           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
    1028             : 
    1029           0 :   FD_TEST( !ctx->current_leader_bank );
    1030             : 
    1031           0 :   if( FD_UNLIKELY( slot!=ctx->slot ) )             FD_LOG_ERR(( "Trying to begin leader slot %lu but we are now on slot %lu", slot, ctx->slot ));
    1032           0 :   if( FD_UNLIKELY( slot!=ctx->next_leader_slot ) ) FD_LOG_ERR(( "Trying to begin leader slot %lu but next leader slot is %lu", slot, ctx->next_leader_slot ));
    1033             : 
    1034           0 :   if( FD_UNLIKELY( ctx->hashcnt_per_tick!=hashcnt_per_tick ) ) {
    1035           0 :     FD_LOG_WARNING(( "hashes per tick changed from %lu to %lu", ctx->hashcnt_per_tick, hashcnt_per_tick ));
    1036             : 
    1037             :     /* Recompute derived information about the clock. */
    1038           0 :     ctx->hashcnt_duration_ns = (double)ctx->tick_duration_ns/(double)hashcnt_per_tick;
    1039           0 :     ctx->hashcnt_per_slot = ctx->ticks_per_slot*hashcnt_per_tick;
    1040           0 :     ctx->hashcnt_per_tick = hashcnt_per_tick;
    1041             : 
    1042           0 :     if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
    1043             :       /* Low power producer, maximum of one microblock per tick in the slot */
    1044           0 :       ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
    1045           0 :     } else {
    1046             :       /* See the long comment in after_credit for this limit */
    1047           0 :       ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
    1048           0 :     }
    1049             : 
    1050             :     /* Discard any ticks we might have done in the interim.  They will
    1051             :        have the wrong number of hashes per tick.  We can just catch back
    1052             :        up quickly if not too many slots were skipped and hopefully
    1053             :        publish on time.  Note that tick production and verification of
    1054             :        skipped slots is done for the eventual bank that publishes a
    1055             :        slot, for example:
    1056             : 
    1057             :         Reset Slot:            998
    1058             :         Epoch Transition Slot: 1000
    1059             :         Leader Slot:           1002
    1060             : 
    1061             :        In this case, if a feature changing the hashcnt_per_tick is
    1062             :        activated in slot 1000, and we are publishing empty ticks for
    1063             :        slots 998, 999, 1000, and 1001, they should all have the new
    1064             :        hashes_per_tick number of hashes, rather than the older one, or
    1065             :        some combination. */
    1066             : 
    1067           0 :     FD_TEST( ctx->last_slot==ctx->reset_slot );
    1068           0 :     FD_TEST( !ctx->last_hashcnt );
    1069           0 :     ctx->slot = ctx->reset_slot;
    1070           0 :     ctx->hashcnt = 0UL;
    1071           0 :   }
    1072             : 
    1073           0 :   ctx->current_leader_bank     = bank;
    1074           0 :   ctx->microblocks_lower_bound = 0UL;
    1075           0 :   ctx->cus_used                = 0UL;
    1076           0 :   ctx->expect_microblock_idx   = 0UL;
    1077             : 
    1078             :   /* We are about to start publishing to the shred tile for this slot
    1079             :      so update the highwater mark so we never republish in this slot
    1080             :      again.  Also check that the leader slot is greater than the
    1081             :      highwater, which should have been ensured earlier. */
    1082             : 
    1083           0 :   FD_TEST( ctx->highwater_leader_slot==ULONG_MAX || slot>=ctx->highwater_leader_slot );
    1084           0 :   ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), slot );
    1085             : 
    1086           0 :   publish_became_leader( ctx, slot, epoch );
    1087           0 :   FD_LOG_INFO(( "fd_ext_poh_begin_leader(slot=%lu, highwater_leader_slot=%lu, last_slot=%lu, last_hashcnt=%lu)", slot, ctx->highwater_leader_slot, ctx->last_slot, ctx->last_hashcnt ));
    1088             : 
    1089           0 :   fd_ext_poh_write_unlock();
    1090           0 : }
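
/* A sketch of how the replay stage might drive the handshake between
   fd_ext_poh_reached_leader_slot and fd_ext_poh_begin_leader.  The
   helpers create_leader_bank and epoch_of stand in for Agave-side
   logic and are hypothetical, as is the example hashcnt_per_tick of
   62500: */

extern void const * create_leader_bank( ulong slot, ulong parent_slot ); /* hypothetical */
extern ulong        epoch_of( ulong slot );                              /* hypothetical */

static void
replay_poll_sketch( void ) {
  ulong leader_slot, reset_slot;
  if( fd_ext_poh_reached_leader_slot( &leader_slot, &reset_slot ) ) {
    /* Create a bank descending from the reset slot's block ... */
    void const * bank = create_leader_bank( leader_slot, reset_slot );
    /* ... then hand it to the PoH tile so leader hashing can begin. */
    fd_ext_poh_begin_leader( bank, leader_slot, epoch_of( leader_slot ), 62500UL );
  }
}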
    1091             : 
     1092             : /* Determine the next slot in the leader schedule in which we are
     1093             :    leader, including the current slot.  If we are not leader in what
     1094             :    remains of the current and next epoch, return ULONG_MAX. */
    1095             : 
    1096             : static inline CALLED_FROM_RUST ulong
    1097           0 : next_leader_slot( fd_poh_ctx_t * ctx ) {
    1098             :   /* If we have published anything in a particular slot, then we
    1099             :      should never become leader for that slot again. */
    1100           0 :   ulong min_leader_slot = fd_ulong_max( ctx->slot, fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ) );
    1101             : 
    1102           0 :   for(;;) {
    1103           0 :     fd_epoch_leaders_t * leaders = fd_stake_ci_get_lsched_for_slot( ctx->stake_ci, min_leader_slot ); /* Safe to call from Rust */
    1104           0 :     if( FD_UNLIKELY( !leaders ) ) break;
    1105             : 
    1106           0 :     while( min_leader_slot<(leaders->slot0+leaders->slot_cnt) ) {
    1107           0 :       fd_pubkey_t const * leader = fd_epoch_leaders_get( leaders, min_leader_slot ); /* Safe to call from Rust */
    1108           0 :       if( FD_UNLIKELY( !memcmp( leader->key, ctx->identity_key.key, 32UL ) ) ) return min_leader_slot;
    1109           0 :       min_leader_slot++;
    1110           0 :     }
    1111           0 :   }
    1112             : 
    1113           0 :   return ULONG_MAX;
    1114           0 : }
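
/* The scan above asks, for each slot at or after min_leader_slot,
   whether the scheduled leader's pubkey is ours, walking epoch by
   epoch until stake information runs out.  A toy equivalent over a
   plain array (schedule, slot0, and slot_cnt are illustrative
   stand-ins for one epoch's leader schedule): */

static ulong
next_leader_slot_toy( fd_pubkey_t const * schedule,  /* leaders for slots [slot0,slot0+slot_cnt) */
                      ulong               slot0,
                      ulong               slot_cnt,
                      fd_pubkey_t const * identity,
                      ulong               min_slot ) {
  for( ulong s=fd_ulong_max( min_slot, slot0 ); s<slot0+slot_cnt; s++ ) {
    if( FD_UNLIKELY( !memcmp( schedule[ s-slot0 ].key, identity->key, 32UL ) ) ) return s;
  }
  return ULONG_MAX; /* not leader in this epoch */
}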
    1115             : 
    1116             : extern int
    1117             : fd_ext_admin_rpc_set_identity( uchar const * identity_keypair,
    1118             :                                int           require_tower );
    1119             : 
    1120             : static inline int FD_FN_SENSITIVE
    1121             : maybe_change_identity( fd_poh_ctx_t * ctx,
    1122           0 :                        int            definitely_not_leader ) {
    1123           0 :   if( FD_UNLIKELY( ctx->halted_switching_key && fd_keyswitch_state_query( ctx->keyswitch )==FD_KEYSWITCH_STATE_UNHALT_PENDING ) ) {
    1124           0 :     ctx->halted_switching_key = 0;
    1125           0 :     fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_COMPLETED );
    1126           0 :     return 1;
    1127           0 :   }
    1128             : 
     1129             :   /* Cannot change identity while in the middle of a leader slot, else
     1130             :      the PoH state machine would become corrupt. */
    1131             : 
    1132           0 :   int is_leader = !definitely_not_leader && ctx->next_leader_slot!=ULONG_MAX && ctx->slot>=ctx->next_leader_slot;
    1133           0 :   if( FD_UNLIKELY( is_leader ) ) return 0;
    1134             : 
    1135           0 :   if( FD_UNLIKELY( fd_keyswitch_state_query( ctx->keyswitch )==FD_KEYSWITCH_STATE_SWITCH_PENDING ) ) {
    1136           0 :     int failed = fd_ext_admin_rpc_set_identity( ctx->keyswitch->bytes, fd_keyswitch_param_query( ctx->keyswitch )==1 );
    1137           0 :     explicit_bzero( ctx->keyswitch->bytes, 32UL );
    1138           0 :     FD_COMPILER_MFENCE();
    1139           0 :     if( FD_UNLIKELY( failed==-1 ) ) {
    1140           0 :       fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_FAILED );
    1141           0 :       return 0;
    1142           0 :     }
    1143             : 
    1144           0 :     memcpy( ctx->identity_key.uc, ctx->keyswitch->bytes+32UL, 32UL );
    1145           0 :     fd_stake_ci_set_identity( ctx->stake_ci, &ctx->identity_key );
    1146             : 
    1147             :     /* When we switch key, we might have ticked part way through a slot
    1148             :        that we are now leader in.  This violates the contract of the
    1149             :        tile, that when we become leader, we have not ticked in that slot
    1150             :        at all.  To see why this would be bad, consider the case where we
    1151             :        have ticked almost to the end, and there isn't enough space left
    1152             :        to reserve the minimum amount of microblocks needed by pack.
    1153             : 
    1154             :        To resolve this, we just reset PoH back to the reset slot, and
    1155             :        let it try to catch back up quickly. This is OK since the network
    1156             :        rarely skips. */
    1157           0 :     ctx->slot    = ctx->reset_slot;
    1158           0 :     ctx->hashcnt = 0UL;
    1159           0 :     memcpy( ctx->hash, ctx->reset_hash, 32UL );
    1160             : 
    1161           0 :     ctx->halted_switching_key = 1;
    1162           0 :     ctx->keyswitch->result    = ctx->shred_seq;
    1163           0 :     fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_COMPLETED );
    1164           0 :   }
    1165             : 
    1166           0 :   return 0;
    1167           0 : }
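
/* Putting the two paths above together, the keyswitch state machine as
   this tile drives it (reconstructed from the code above, so treat as
   descriptive rather than authoritative):

     SWITCH_PENDING --set_identity fails--> FAILED
     SWITCH_PENDING --set_identity ok-----> COMPLETED  (halted_switching_key=1,
                                                        PoH rewound to reset slot)
     UNHALT_PENDING ----------------------> COMPLETED  (halted_switching_key=0,
                                                        returns 1 so callers
                                                        recompute next_leader_slot)

   While halted_switching_key is set, fd_ext_poh_reached_leader_slot
   returns 0, so we never take a leader slot mid-switch. */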
    1168             : 
    1169             : static CALLED_FROM_RUST void
    1170           0 : no_longer_leader( fd_poh_ctx_t * ctx ) {
    1171           0 :   if( FD_UNLIKELY( ctx->current_leader_bank ) ) fd_ext_bank_release( ctx->current_leader_bank );
     1172             :   /* If we stop being leader in a slot, we can never become leader in
     1173             :      that slot again, and all in-flight microblocks for that slot
     1174             :      should be dropped. */
    1175           0 :   ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), ctx->slot );
    1176           0 :   ctx->current_leader_bank = NULL;
    1177           0 :   int identity_changed = maybe_change_identity( ctx, 1 );
    1178           0 :   ctx->next_leader_slot = next_leader_slot( ctx );
    1179           0 :   if( FD_UNLIKELY( identity_changed ) ) {
    1180           0 :     FD_LOG_INFO(( "fd_poh_identity_changed(next_leader_slot=%lu)", ctx->next_leader_slot ));
    1181           0 :   }
    1182             : 
    1183           0 :   FD_COMPILER_MFENCE();
    1184           0 :   fd_ext_poh_signal_leader_change( ctx->signal_leader_change );
    1185           0 :   FD_LOG_INFO(( "no_longer_leader(next_leader_slot=%lu)", ctx->next_leader_slot ));
    1186           0 : }
    1187             : 
    1188             : /* fd_ext_poh_reset is called by the Agave client when a slot on
    1189             :    the active fork has finished a block and we need to reset our PoH to
    1190             :    be ticking on top of the block it produced. */
    1191             : 
    1192             : CALLED_FROM_RUST void
    1193             : fd_ext_poh_reset( ulong         completed_bank_slot, /* The slot that successfully produced a block */
    1194             :                   uchar const * reset_blockhash,     /* The hash of the last tick in the produced block */
    1195           0 :                   ulong         hashcnt_per_tick     /* The hashcnt per tick of the bank that completed */ ) {
    1196           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
    1197             : 
    1198           0 :   ulong slot_before_reset = ctx->slot;
    1199           0 :   int leader_before_reset = ctx->slot>=ctx->next_leader_slot;
    1200           0 :   if( FD_UNLIKELY( leader_before_reset && ctx->current_leader_bank ) ) {
     1201             :     /* If we were in the middle of a leader slot that we notified pack
     1202             :        to start packing for, we can never publish into that slot again,
     1203             :        so mark all in-flight microblocks to be dropped. */
    1204           0 :     ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), 1UL+ctx->slot );
    1205           0 :   }
    1206             : 
    1207           0 :   ctx->leader_bank_start_ns = fd_log_wallclock(); /* safe to call from Rust */
    1208           0 :   if( FD_UNLIKELY( ctx->expect_sequential_leader_slot==(completed_bank_slot+1UL) ) ) {
    1209             :     /* If we are being reset onto a slot, it means some block was fully
    1210             :        processed, so we reset to build on top of it.  Typically we want
    1211             :        to update the reset_slot_start_ns to the current time, because
    1212             :        the network will give the next leader 400ms to publish,
    1213             :        regardless of how long the prior leader took.
    1214             : 
     1215             :        But: if we were leader in the prior slot, and the block was our
     1216             :        own, we can do better.  We know that the next slot should start
    1217             :        exactly 400ms after the prior one started, so we can use that as
    1218             :        the reset slot start time instead. */
    1219           0 :     ctx->reset_slot_start_ns = ctx->reset_slot_start_ns + (long)((double)((completed_bank_slot+1UL)-ctx->reset_slot)*ctx->slot_duration_ns);
    1220           0 :   } else {
    1221           0 :     ctx->reset_slot_start_ns = ctx->leader_bank_start_ns;
    1222           0 :   }
    1223           0 :   ctx->expect_sequential_leader_slot = ULONG_MAX;
    1224             : 
    1225           0 :   memcpy( ctx->reset_hash, reset_blockhash, 32UL );
    1226           0 :   memcpy( ctx->hash, reset_blockhash, 32UL );
    1227           0 :   ctx->slot         = completed_bank_slot+1UL;
    1228           0 :   ctx->hashcnt      = 0UL;
    1229           0 :   ctx->last_slot    = ctx->slot;
    1230           0 :   ctx->last_hashcnt = 0UL;
    1231           0 :   ctx->reset_slot   = ctx->slot;
    1232             : 
    1233           0 :   if( FD_UNLIKELY( ctx->hashcnt_per_tick!=hashcnt_per_tick ) ) {
    1234           0 :     FD_LOG_WARNING(( "hashes per tick changed from %lu to %lu", ctx->hashcnt_per_tick, hashcnt_per_tick ));
    1235             : 
    1236             :     /* Recompute derived information about the clock. */
    1237           0 :     ctx->hashcnt_duration_ns = (double)ctx->tick_duration_ns/(double)hashcnt_per_tick;
    1238           0 :     ctx->hashcnt_per_slot = ctx->ticks_per_slot*hashcnt_per_tick;
    1239           0 :     ctx->hashcnt_per_tick = hashcnt_per_tick;
    1240             : 
    1241           0 :     if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
    1242             :       /* Low power producer, maximum of one microblock per tick in the slot */
    1243           0 :       ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
    1244           0 :     } else {
    1245             :       /* See the long comment in after_credit for this limit */
    1246           0 :       ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
    1247           0 :     }
    1248           0 :   }
    1249             : 
    1250             :   /* When we reset, we need to allow PoH to tick freely again rather
    1251             :      than being constrained.  If we are leader after the reset, this
    1252             :      is OK because we won't tick until we get a bank, and the lower
    1253             :      bound will be reset with the value from the bank. */
    1254           0 :   ctx->microblocks_lower_bound = ctx->max_microblocks_per_slot;
    1255             : 
    1256           0 :   if( FD_UNLIKELY( leader_before_reset ) ) {
     1257             :     /* We no longer have a leader bank if we are reset.  Replay stage
     1258             :        will call back again to give us a new one if we should become
     1259             :        leader for the reset slot.
     1260             : 
     1261             :        The order is important here: ctx->hashcnt must be updated before
     1262             :        calling no_longer_leader. */
    1263           0 :     no_longer_leader( ctx );
    1264           0 :   }
    1265           0 :   ctx->next_leader_slot = next_leader_slot( ctx );
    1266           0 :   FD_LOG_INFO(( "fd_ext_poh_reset(slot=%lu,next_leader_slot=%lu)", ctx->reset_slot, ctx->next_leader_slot ));
    1267             : 
    1268           0 :   if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
    1269             :     /* We are leader after the reset... two cases: */
    1270           0 :     if( FD_LIKELY( ctx->slot==slot_before_reset ) ) {
    1271             :       /* 1. We are reset onto the same slot we are already leader on.
    1272             :             This is a common case when we have two leader slots in a
    1273             :             row, replay stage will reset us to our own slot.  No need to
    1274             :             do anything here, we already sent a SLOT_START. */
    1275           0 :       FD_TEST( leader_before_reset );
    1276           0 :     } else {
    1277             :       /* 2. We are reset onto a different slot. If we were leader
    1278             :             before, we should first end that slot, then begin the new
    1279             :             one if we are newly leader now. */
    1280           0 :       if( FD_LIKELY( leader_before_reset ) ) publish_plugin_slot_end( ctx, slot_before_reset, ctx->cus_used );
    1281           0 :       else                                   publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->reset_slot );
    1282           0 :     }
    1283           0 :   } else {
    1284           0 :     if( FD_UNLIKELY( leader_before_reset ) ) publish_plugin_slot_end( ctx, slot_before_reset, ctx->cus_used );
    1285           0 :   }
    1286             : 
    1287           0 :   fd_ext_poh_write_unlock();
    1288           0 : }
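
/* A worked example of the reset timing above (numbers illustrative).
   Suppose reset_slot was 10 and we are reset onto completed_bank_slot
   11, so the new reset slot is 12.  If slot 11 was our own
   (expect_sequential_leader_slot==12), the new reset_slot_start_ns is
   computed rather than observed: */

static void
reset_timing_example( void ) {
  long   reset_slot_start_ns = 5000000000L;  /* when slot 10 started, arbitrary */
  double slot_duration_ns    = 400e6;
  ulong  reset_slot          = 10UL;
  ulong  completed_bank_slot = 11UL;

  /* Sequential case: slot 12 should start exactly two slot durations
     after slot 10 did, regardless of how long replay actually took. */
  long sequential_start_ns = reset_slot_start_ns
                           + (long)((double)((completed_bank_slot+1UL)-reset_slot)*slot_duration_ns);
  /* sequential_start_ns == 5,000,000,000 + 800,000,000.  Otherwise we
     would just use fd_log_wallclock() at the time of the reset, giving
     the next leader its full 400ms from "now". */
  (void)sequential_start_ns;
}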
    1289             : 
     1290             : /* Since a C function can't easily return an Option<Pubkey> to Rust,
     1291             :    return 1 for Some and 0 for None. */
    1292             : CALLED_FROM_RUST int
    1293             : fd_ext_poh_get_leader_after_n_slots( ulong n,
    1294           0 :                                      uchar out_pubkey[ static 32 ] ) {
    1295           0 :   fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
    1296           0 :   ulong slot = ctx->slot + n;
    1297           0 :   fd_epoch_leaders_t * leaders = fd_stake_ci_get_lsched_for_slot( ctx->stake_ci, slot ); /* Safe to call from Rust */
    1298             : 
    1299           0 :   int copied = 0;
    1300           0 :   if( FD_LIKELY( leaders ) ) {
    1301           0 :     fd_pubkey_t const * leader = fd_epoch_leaders_get( leaders, slot ); /* Safe to call from Rust */
    1302           0 :     if( FD_LIKELY( leader ) ) {
    1303           0 :       memcpy( out_pubkey, leader, 32UL );
    1304           0 :       copied = 1;
    1305           0 :     }
    1306           0 :   }
    1307           0 :   fd_ext_poh_write_unlock();
    1308           0 :   return copied;
    1309           0 : }
    1310             : 
    1311             : FD_FN_CONST static inline ulong
    1312           3 : scratch_align( void ) {
    1313           3 :   return 128UL;
    1314           3 : }
    1315             : 
    1316             : FD_FN_PURE static inline ulong
    1317           3 : scratch_footprint( fd_topo_tile_t const * tile ) {
    1318           3 :   (void)tile;
    1319           3 :   ulong l = FD_LAYOUT_INIT;
    1320           3 :   l = FD_LAYOUT_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
    1321           3 :   l = FD_LAYOUT_APPEND( l, fd_stake_ci_align(), fd_stake_ci_footprint() );
    1322           3 :   l = FD_LAYOUT_APPEND( l, FD_SHA256_ALIGN, FD_SHA256_FOOTPRINT );
    1323           3 :   return FD_LAYOUT_FINI( l, scratch_align() );
    1324           3 : }
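
/* For reference, the FD_LAYOUT helpers above build a packed layout:
   each APPEND aligns the running offset up to the member's alignment
   and adds its size, and FINI aligns the total up to scratch_align().
   An equivalent by-hand sketch, assuming fd_ulong_align_up and the
   align/footprint helpers behave as their names suggest: */

static inline ulong
scratch_footprint_by_hand( void ) {
  ulong l = 0UL;
  l = fd_ulong_align_up( l, alignof( fd_poh_ctx_t ) ) + sizeof( fd_poh_ctx_t );
  l = fd_ulong_align_up( l, fd_stake_ci_align()     ) + fd_stake_ci_footprint();
  l = fd_ulong_align_up( l, FD_SHA256_ALIGN         ) + FD_SHA256_FOOTPRINT;
  return fd_ulong_align_up( l, 128UL ); /* scratch_align() */
}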
    1325             : 
    1326             : static void
    1327             : publish_tick( fd_poh_ctx_t *      ctx,
    1328             :               fd_stem_context_t * stem,
    1329             :               uchar               hash[ static 32 ],
    1330           0 :               int                 is_skipped ) {
    1331           0 :   ulong hashcnt = ctx->hashcnt_per_tick*(1UL+(ctx->last_hashcnt/ctx->hashcnt_per_tick));
    1332             : 
    1333           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
    1334             : 
    1335           0 :   FD_TEST( ctx->last_slot>=ctx->reset_slot );
    1336           0 :   fd_entry_batch_meta_t * meta = (fd_entry_batch_meta_t *)dst;
    1337           0 :   if( FD_UNLIKELY( is_skipped ) ) {
     1338             :     /* We are publishing ticks for a skipped slot, so the reference tick
     1339             :        and block complete flags should always be zero. */
    1340           0 :     meta->reference_tick = 0UL;
    1341           0 :     meta->block_complete = 0;
    1342           0 :   } else {
    1343           0 :     meta->reference_tick = hashcnt/ctx->hashcnt_per_tick;
    1344           0 :     meta->block_complete = hashcnt==ctx->hashcnt_per_slot;
    1345           0 :   }
    1346             : 
    1347           0 :   ulong slot = fd_ulong_if( meta->block_complete, ctx->slot-1UL, ctx->slot );
    1348           0 :   meta->parent_offset = 1UL+slot-ctx->reset_slot;
    1349             : 
    1350           0 :   FD_TEST( hashcnt>ctx->last_hashcnt );
    1351           0 :   ulong hash_delta = hashcnt-ctx->last_hashcnt;
    1352             : 
    1353           0 :   dst += sizeof(fd_entry_batch_meta_t);
    1354           0 :   fd_entry_batch_header_t * tick = (fd_entry_batch_header_t *)dst;
    1355           0 :   tick->hashcnt_delta = hash_delta;
    1356           0 :   fd_memcpy( tick->hash, hash, 32UL );
    1357           0 :   tick->txn_cnt = 0UL;
    1358             : 
    1359           0 :   ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
    1360           0 :   ulong sz = sizeof(fd_entry_batch_meta_t)+sizeof(fd_entry_batch_header_t);
    1361           0 :   ulong sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_MICROBLOCK, 0UL );
    1362           0 :   fd_stem_publish( stem, ctx->shred_out->idx, sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
    1363           0 :   ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
    1364           0 :   ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
    1365             : 
    1366           0 :   if( FD_UNLIKELY( hashcnt==ctx->hashcnt_per_slot ) ) {
    1367           0 :     ctx->last_slot++;
    1368           0 :     ctx->last_hashcnt = 0UL;
    1369           0 :   } else {
    1370           0 :     ctx->last_hashcnt = hashcnt;
    1371           0 :   }
    1372           0 : }
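
/* The hashcnt computed at the top of publish_tick rounds last_hashcnt
   up to the next strictly-greater tick boundary.  A tiny worked
   example with 62,500 hashes per tick (per the comment in after_credit
   below):

     last_hashcnt = 0      -> hashcnt =  62500  (end of tick 1)
     last_hashcnt = 62499  -> hashcnt =  62500  (end of tick 1)
     last_hashcnt = 62500  -> hashcnt = 125000  (end of tick 2)

   Or, as an illustrative helper: */

static inline ulong
next_tick_boundary_sketch( ulong last_hashcnt,
                           ulong hashcnt_per_tick ) {
  return hashcnt_per_tick*(1UL+(last_hashcnt/hashcnt_per_tick));
}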
    1373             : 
    1374             : static inline void
    1375             : after_credit( fd_poh_ctx_t *      ctx,
    1376             :               fd_stem_context_t * stem,
    1377             :               int *               opt_poll_in,
    1378           0 :               int *               charge_busy ) {
    1379           0 :   ctx->stem = stem;
    1380             : 
    1381           0 :   FD_COMPILER_MFENCE();
    1382           0 :   if( FD_UNLIKELY( fd_poh_waiting_lock ) )  {
    1383           0 :     FD_VOLATILE( fd_poh_returned_lock ) = 1UL;
    1384           0 :     FD_COMPILER_MFENCE();
    1385           0 :     for(;;) {
    1386           0 :       if( FD_UNLIKELY( !FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
    1387           0 :       FD_SPIN_PAUSE();
    1388           0 :     }
    1389           0 :     FD_COMPILER_MFENCE();
    1390           0 :     FD_VOLATILE( fd_poh_waiting_lock ) = 0UL;
    1391           0 :     *opt_poll_in = 0;
    1392           0 :     *charge_busy = 1;
    1393           0 :     return;
    1394           0 :   }
    1395           0 :   FD_COMPILER_MFENCE();
    1396             : 
    1397           0 :   int is_leader = ctx->next_leader_slot!=ULONG_MAX && ctx->slot>=ctx->next_leader_slot;
    1398           0 :   if( FD_UNLIKELY( is_leader && !ctx->current_leader_bank ) ) {
    1399             :     /* If we are the leader, but we didn't yet learn what the leader
    1400             :        bank object is from the replay stage, do not do any hashing.
    1401             : 
    1402             :        This is not ideal, but greatly simplifies the control flow. */
    1403           0 :     return;
    1404           0 :   }
    1405             : 
    1406             :   /* If we have skipped ticks pending because we skipped some slots to
    1407             :      become leader, register them now one at a time. */
    1408           0 :   if( FD_UNLIKELY( is_leader && ctx->last_slot<ctx->slot ) ) {
    1409           0 :     ulong publish_hashcnt = ctx->last_hashcnt+ctx->hashcnt_per_tick;
    1410           0 :     ulong tick_idx = (ctx->last_slot*ctx->ticks_per_slot+publish_hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
    1411             : 
    1412           0 :     fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->skipped_tick_hashes[ tick_idx ] );
    1413           0 :     publish_tick( ctx, stem, ctx->skipped_tick_hashes[ tick_idx ], 1 );
    1414             : 
    1415             :     /* If we are catching up now and publishing a bunch of skipped
    1416             :        ticks, we do not want to process any incoming microblocks until
    1417             :        all the skipped ticks have been published out; otherwise we would
    1418             :        intersperse skipped tick messages with microblocks. */
    1419           0 :     *opt_poll_in = 0;
    1420           0 :     *charge_busy = 1;
    1421           0 :     return;
    1422           0 :   }
    1423             : 
    1424           0 :   int low_power_mode = ctx->hashcnt_per_tick==1UL;
    1425             : 
    1426             :   /* If we are the leader, always leave enough capacity in the slot so
    1427             :      that we can mixin any potential microblocks still coming from the
    1428             :      pack tile for this slot. */
    1429           0 :   ulong max_remaining_microblocks = ctx->max_microblocks_per_slot - ctx->microblocks_lower_bound;
    1430             :   /* With hashcnt_per_tick hashes per tick, we actually get
    1431             :      hashcnt_per_tick-1 chances to mixin a microblock.  For each tick
    1432             :      span that we need to reserve, we also need to reserve the hashcnt
    1433             :      for the tick, hence the +
    1434             :      max_remaining_microblocks/(hashcnt_per_tick-1) rounded up.
    1435             : 
    1436             :      However, if hashcnt_per_tick is 1 because we're in low power mode,
    1437             :      this should probably just be max_remaining_microblocks. */
    1438           0 :   ulong max_remaining_ticks_or_microblocks = max_remaining_microblocks;
    1439           0 :   if( FD_LIKELY( !low_power_mode ) ) max_remaining_ticks_or_microblocks += (max_remaining_microblocks+ctx->hashcnt_per_tick-2UL)/(ctx->hashcnt_per_tick-1UL);
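  /* Worked example (illustrative numbers): with hashcnt_per_tick
     T=62500 and max_remaining_microblocks M=3, the reservation is
     M + ceil(M/(T-1)) = 3 + (3+62498)/62499 = 3 + 1 = 4 hashcnts:
     three mixin opportunities plus the one tick hash they fall
     under. */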
    1440             : 
    1441           0 :   ulong restricted_hashcnt = fd_ulong_if( ctx->hashcnt_per_slot>=max_remaining_ticks_or_microblocks, ctx->hashcnt_per_slot-max_remaining_ticks_or_microblocks, 0UL );
    1442             : 
    1443           0 :   ulong min_hashcnt = ctx->hashcnt;
    1444             : 
    1445           0 :   if( FD_LIKELY( !low_power_mode ) ) {
    1446             :     /* Recall that there are two kinds of events that will get published
    1447             :        to the shredder,
    1448             : 
    1449             :          (a) Ticks. These occur every 62,500 (hashcnt_per_tick) hashcnts,
    1450             :              and there will be 64 (ticks_per_slot) of them in each slot.
    1451             : 
    1452             :              Ticks must not have any transactions mixed into the hash.
    1453             :              This is not strictly needed in theory, but is required by the
    1454             :              current consensus protocol.  They get published here in
    1455             :              after_credit.
    1456             : 
    1457             :          (b) Microblocks.  These can occur at any other hashcnt, as long
    1458             :              as it is not a tick.  Microblocks cannot be empty, and must
     1459             :              have at least one transaction mixed in.  These get
    1460             :              published in after_frag.
    1461             : 
    1462             :        If hashcnt_per_tick is 1, then we are in low power mode and the
    1463             :        following does not apply, since we can mix in transactions at any
    1464             :        time.
    1465             : 
    1466             :        In the normal, non-low-power mode, though, we have to be careful
    1467             :        to make sure that we do not publish microblocks on tick
    1468             :        boundaries.  To do that, we need to obey two rules:
    1469             :          (i)  after_credit must not leave hashcnt one before a tick
    1470             :               boundary
    1471             :          (ii) if after_credit begins one before a tick boundary, it must
    1472             :               advance hashcnt and publish the tick
    1473             : 
    1474             :        There's some interplay between min_hashcnt and restricted_hashcnt
    1475             :        here, and we need to show that there's always a value of
    1476             :        target_hashcnt we can pick such that
    1477             :            min_hashcnt <= target_hashcnt <= restricted_hashcnt.
    1478             :        We'll prove this by induction for current_slot==0 and
    1479             :        is_leader==true, since all other slots should be the same.
    1480             : 
    1481             :        Let m_j and r_j be the min_hashcnt and restricted_hashcnt
    1482             :        (respectively) for the jth call to after_credit in a slot.  We
    1483             :        want to show that for all values of j, it's possible to pick a
    1484             :        value h_j, the value of target_hashcnt for the jth call to
    1485             :        after_credit (which is also the value of hashcnt after
    1486             :        after_credit has completed) such that m_j<=h_j<=r_j.
    1487             : 
    1488             :        Additionally, let T be hashcnt_per_tick and N be ticks_per_slot.
    1489             : 
    1490             :        Starting with the base case, j==0.  m_j=0, and
    1491             :          r_0 = N*T - max_microblocks_per_slot
    1492             :                    - ceil(max_microblocks_per_slot/(T-1)).
    1493             : 
    1494             :        This is monotonic decreasing in max_microblocks_per_slot, so it
    1495             :        achieves its minimum when max_microblocks_per_slot is its
    1496             :        maximum.
    1497             :            r_0 >= N*T - N*(T-1) - ceil( (N*(T-1))/(T-1))
    1498             :                 = N*T - N*(T-1)-N = 0.
    1499             :        Thus, m_0 <= r_0, as desired.
     1500             : 
    1503             :        Then, for the inductive step, assume there exists h_j such that
    1504             :        m_j<=h_j<=r_j, and we want to show that there exists h_{j+1},
    1505             :        which is the same as showing m_{j+1}<=r_{j+1}.
    1506             : 
    1507             :        Let a_j be 1 if we had a microblock immediately following the jth
    1508             :        call to after_credit, and 0 otherwise.  Then hashcnt at the start
    1509             :        of the (j+1)th call to after_frag is h_j+a_j.
    1510             :        Also, set b_{j+1}=1 if we are in the case covered by rule (ii)
    1511             :        above during the (j+1)th call to after_credit, i.e. if
    1512             :        (h_j+a_j)%T==T-1.  Thus, m_{j+1} = h_j + a_j + b_{j+1}.
    1513             : 
    1514             :        If we received an additional microblock, then
    1515             :        max_remaining_microblocks goes down by 1, and
    1516             :        max_remaining_ticks_or_microblocks goes down by either 1 or 2,
    1517             :        which means restricted_hashcnt goes up by either 1 or 2.  In
    1518             :        particular, it goes up by 2 if the new value of
    1519             :        max_remaining_microblocks (at the start of the (j+1)th call to
    1520             :        after_credit) is congruent to 0 mod T-1.  Let b'_{j+1} be 1 if
    1521             :        this condition is met and 0 otherwise.  If we receive a
    1522             :        done_packing message, restricted_hashcnt can go up by more, but
    1523             :        we can ignore that case, since it is less restrictive.
    1524             :        Thus, r_{j+1}=r_j+a_j+b'_{j+1}.
    1525             : 
    1526             :        If h_j < r_j (strictly less), then h_j+a_j < r_j+a_j.  And thus,
    1527             :        since b_{j+1}<=b'_{j+1}+1, just by virtue of them both being
    1528             :        binary,
    1529             :              h_j + a_j + b_{j+1} <  r_j + a_j + b'_{j+1} + 1,
    1530             :        which is the same (for integers) as
    1531             :              h_j + a_j + b_{j+1} <= r_j + a_j + b'_{j+1},
    1532             :                  m_{j+1}         <= r_{j+1}
    1533             : 
    1534             :        On the other hand, if h_j==r_j, this is easy unless b_{j+1}==1,
    1535             :        which can also only happen if a_j==1.  Then (h_j+a_j)%T==T-1,
    1536             :        which means there's an integer k such that
    1537             : 
     1538             :              h_j+a_j==(ticks_per_slot-k)*T-1
     1539             :              h_j    ==ticks_per_slot*T - (k*(T-1)+1) - (k+1)
     1540             :                     ==ticks_per_slot*T - (k*(T-1)+1) - ceil( (k*(T-1)+1)/(T-1) )
    1541             : 
    1542             :        Since h_j==r_j in this case, and
    1543             :        r_j==(ticks_per_slot*T) - max_remaining_microblocks_j - ceil(max_remaining_microblocks_j/(T-1)),
    1544             :        we can see that the value of max_remaining_microblocks at the
    1545             :        start of the jth call to after_credit is k*(T-1)+1.  Again, since
    1546             :        a_j==1, then the value of max_remaining_microblocks at the start
    1547             :        of the j+1th call to after_credit decreases by 1 to k*(T-1),
    1548             :        which means b'_{j+1}=1.
    1549             : 
    1550             :        Thus, h_j + a_j + b_{j+1} == r_j + a_j + b'_{j+1}, so, in
    1551             :        particular, h_{j+1}<=r_{j+1} as desired. */
    1552           0 :      min_hashcnt += (ulong)(min_hashcnt%ctx->hashcnt_per_tick == (ctx->hashcnt_per_tick-1UL)); /* add b_{j+1}, enforcing rule (ii) */
    1553           0 :   }
    1554             :   /* Now figure out how many hashes are needed to "catch up" the hash
    1555             :      count to the current system clock, and clamp it to the allowed
    1556             :      range. */
    1557           0 :   long now = fd_log_wallclock();
    1558           0 :   ulong target_hashcnt;
    1559           0 :   if( FD_LIKELY( !is_leader ) ) {
    1560           0 :     target_hashcnt = (ulong)((double)(now - ctx->reset_slot_start_ns) / ctx->hashcnt_duration_ns) - (ctx->slot-ctx->reset_slot)*ctx->hashcnt_per_slot;
    1561           0 :   } else {
    1562             :     /* We might have gotten very behind on hashes, but if we are leader
    1563             :        we want to catch up gradually over the remainder of our leader
    1564             :        slot, not all at once right now.  This helps keep the tile from
    1565             :        being oversubscribed and taking a long time to process incoming
    1566             :        microblocks. */
    1567           0 :     long expected_slot_start_ns = ctx->reset_slot_start_ns + (long)((double)(ctx->slot-ctx->reset_slot)*ctx->slot_duration_ns);
    1568           0 :     double actual_slot_duration_ns = ctx->slot_duration_ns<(double)(ctx->leader_bank_start_ns - expected_slot_start_ns) ? 0.0 : ctx->slot_duration_ns - (double)(ctx->leader_bank_start_ns - expected_slot_start_ns);
    1569           0 :     double actual_hashcnt_duration_ns = actual_slot_duration_ns / (double)ctx->hashcnt_per_slot;
    1570           0 :     target_hashcnt = fd_ulong_if( actual_hashcnt_duration_ns==0.0, restricted_hashcnt, (ulong)((double)(now - ctx->leader_bank_start_ns) / actual_hashcnt_duration_ns) );
    1571           0 :   }
    1572             :   /* Clamp to [min_hashcnt, restricted_hashcnt] as above */
    1573           0 :   target_hashcnt = fd_ulong_max( fd_ulong_min( target_hashcnt, restricted_hashcnt ), min_hashcnt );
    1574             : 
    1575             :   /* The above proof showed that it was always possible to pick a value
    1576             :      of target_hashcnt, but we still have a lot of freedom in how to
    1577             :      pick it.  It simplifies the code a lot if we don't keep going after
    1578             :      a tick in this function.  In particular, we want to publish at most
    1579             :      1 tick in this call, since otherwise we could consume infinite
    1580             :      credits to publish here.  The credits are set so that we should
    1581             :      only ever publish one tick during this loop.  Also, all the extra
    1582             :      stuff (leader transitions, publishing ticks, etc.) we have to do
    1583             :      happens at tick boundaries, so this lets us consolidate all those
    1584             :      cases.
    1585             : 
    1586             :      Mathematically, since the current value of hashcnt is h_j+a_j, the
    1587             :      next tick (advancing a full tick if we're currently at a tick) is
    1588             :      t_{j+1} = T*(floor( (h_j+a_j)/T )+1).  We need to show that if we set
    1589             :      h'_{j+1} = min( h_{j+1}, t_{j+1} ), it is still valid.
    1590             : 
    1591             :      First, h'_{j+1} <= h_{j+1} <= r_{j+1}, so we're okay in that
    1592             :      direction.
    1593             : 
    1594             :      Next, observe that t_{j+1}>=h_j + a_j + 1, and recall that b_{j+1}
    1595             :      is 0 or 1. So then,
    1596             :                     t_{j+1} >= h_j+a_j+b_{j+1} = m_{j+1}.
    1597             : 
     1598             :      We know h_{j+1} >= m_{j+1} from before, so then h'_{j+1} >=
    1599             :      m_{j+1}, as desired. */
    1600             : 
    1601           0 :   ulong next_tick_hashcnt = ctx->hashcnt_per_tick * (1UL+(ctx->hashcnt/ctx->hashcnt_per_tick));
    1602           0 :   target_hashcnt = fd_ulong_min( target_hashcnt, next_tick_hashcnt );
    1603             : 
    1604             :   /* We still need to enforce rule (i). We know that min_hashcnt%T !=
    1605             :      T-1 because of rule (ii).  That means that if target_hashcnt%T ==
    1606             :      T-1 at this point, target_hashcnt > min_hashcnt (notice the
    1607             :      strict), so target_hashcnt-1 >= min_hashcnt and is thus still a
    1608             :      valid choice for target_hashcnt. */
    1609           0 :   target_hashcnt -= (ulong)( (!low_power_mode) & ((target_hashcnt%ctx->hashcnt_per_tick)==(ctx->hashcnt_per_tick-1UL)) );
    1610             : 
    1611           0 :   FD_TEST( target_hashcnt >= ctx->hashcnt       );
    1612           0 :   FD_TEST( target_hashcnt >= min_hashcnt        );
    1613           0 :   FD_TEST( target_hashcnt <= restricted_hashcnt );
    1614             : 
    1615           0 :   if( FD_UNLIKELY( ctx->hashcnt==target_hashcnt ) ) return; /* Nothing to do, don't publish a tick twice */
    1616             : 
    1617           0 :   *charge_busy = 1;
    1618             : 
    1619           0 :   while( ctx->hashcnt<target_hashcnt ) {
    1620           0 :     fd_sha256_hash( ctx->hash, 32UL, ctx->hash );
    1621           0 :     ctx->hashcnt++;
    1622           0 :   }
    1623             : 
    1624           0 :   if( FD_UNLIKELY( ctx->hashcnt==ctx->hashcnt_per_slot ) ) {
    1625           0 :     ctx->slot++;
    1626           0 :     ctx->hashcnt = 0UL;
    1627           0 :   }
    1628             : 
    1629           0 :   if( FD_UNLIKELY( !is_leader && !(ctx->hashcnt%ctx->hashcnt_per_tick ) ) ) {
    1630             :     /* We finished a tick while not leader... save the current hash so
    1631             :        it can be played back into the bank when we become the leader. */
    1632           0 :     ulong tick_idx = (ctx->slot*ctx->ticks_per_slot+ctx->hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
    1633           0 :     fd_memcpy( ctx->skipped_tick_hashes[ tick_idx ], ctx->hash, 32UL );
    1634             : 
    1635           0 :     ulong initial_tick_idx = (ctx->last_slot*ctx->ticks_per_slot+ctx->last_hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
    1636           0 :     if( FD_UNLIKELY( tick_idx==initial_tick_idx ) ) FD_LOG_ERR(( "Too many skipped ticks from slot %lu to slot %lu, chain must halt", ctx->last_slot, ctx->slot ));
    1637           0 :   }
    1638             : 
    1639           0 :   if( FD_UNLIKELY( is_leader && !(ctx->hashcnt%ctx->hashcnt_per_tick) ) ) {
    1640             :     /* We ticked while leader... tell the leader bank. */
    1641           0 :     fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->hash );
    1642             : 
    1643             :     /* And send an empty microblock (a tick) to the shred tile. */
    1644           0 :     publish_tick( ctx, stem, ctx->hash, 0 );
    1645           0 :   }
    1646             : 
    1647           0 :   if( FD_UNLIKELY( !is_leader && ctx->slot>=ctx->next_leader_slot ) ) {
    1648             :     /* We ticked while not leader and are now leader... transition
    1649             :        the state machine. */
    1650           0 :     publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->reset_slot );
    1651           0 :     FD_LOG_INFO(( "fd_poh_ticked_into_leader(slot=%lu, reset_slot=%lu)", ctx->next_leader_slot, ctx->reset_slot ));
    1652           0 :   }
    1653             : 
    1654           0 :   if( FD_UNLIKELY( is_leader && ctx->slot>ctx->next_leader_slot ) ) {
    1655             :     /* We ticked while leader and are no longer leader... transition
    1656             :        the state machine. */
    1657           0 :     FD_TEST( !max_remaining_microblocks );
    1658           0 :     publish_plugin_slot_end( ctx, ctx->next_leader_slot, ctx->cus_used );
    1659           0 :     FD_LOG_INFO(( "fd_poh_ticked_outof_leader(slot=%lu)", ctx->next_leader_slot ));
    1660             : 
    1661           0 :     no_longer_leader( ctx );
    1662           0 :     ctx->expect_sequential_leader_slot = ctx->slot;
    1663             : 
    1664           0 :     double tick_per_ns = fd_tempo_tick_per_ns( NULL );
    1665           0 :     fd_histf_sample( ctx->slot_done_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
    1666           0 :     ctx->next_leader_slot = next_leader_slot( ctx );
    1667             : 
    1668           0 :     if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
    1669             :       /* We finished a leader slot, and are immediately leader for the
    1670             :          following slot... transition. */
    1671           0 :       publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->next_leader_slot-1UL );
    1672           0 :       FD_LOG_INFO(( "fd_poh_ticked_into_leader(slot=%lu, reset_slot=%lu)", ctx->next_leader_slot, ctx->next_leader_slot-1UL ));
    1673           0 :     }
    1674           0 :   }
    1675           0 : }
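
/* The non-leader catch-up target in after_credit, as standalone
   arithmetic.  An illustrative sketch: given the wallclock time since
   the reset slot started, how many hashes into the current slot should
   we be? */

static inline ulong
non_leader_target_hashcnt_sketch( long   now_ns,
                                  long   reset_slot_start_ns,
                                  double hashcnt_duration_ns,
                                  ulong  slot,        /* current slot */
                                  ulong  reset_slot,
                                  ulong  hashcnt_per_slot ) {
  /* Total hashes the chain should have done since the reset slot
     started, minus the hashes attributed to already-elapsed slots. */
  ulong total = (ulong)((double)(now_ns-reset_slot_start_ns)/hashcnt_duration_ns);
  return total - (slot-reset_slot)*hashcnt_per_slot;
}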
    1676             : 
    1677             : static inline void
    1678           0 : during_housekeeping( fd_poh_ctx_t * ctx ) {
    1679           0 :   if( FD_UNLIKELY( maybe_change_identity( ctx, 0 ) ) ) {
    1680           0 :     ctx->next_leader_slot = next_leader_slot( ctx );
    1681           0 :     FD_LOG_INFO(( "fd_poh_identity_changed(next_leader_slot=%lu)", ctx->next_leader_slot ));
    1682             : 
    1683             :     /* Signal replay to check if we are leader again, in case it's
    1684             :        stuck because everything has already replayed. */
    1685           0 :     FD_COMPILER_MFENCE();
    1686           0 :     fd_ext_poh_signal_leader_change( ctx->signal_leader_change );
    1687           0 :   }
    1688           0 : }
    1689             : 
    1690             : static inline void
    1691           0 : metrics_write( fd_poh_ctx_t * ctx ) {
    1692           0 :   FD_MHIST_COPY( POH, BEGIN_LEADER_DELAY_SECONDS,      ctx->begin_leader_delay     );
    1693           0 :   FD_MHIST_COPY( POH, FIRST_MICROBLOCK_DELAY_SECONDS,  ctx->first_microblock_delay );
    1694           0 :   FD_MHIST_COPY( POH, SLOT_DONE_DELAY_SECONDS,         ctx->slot_done_delay        );
    1695           0 :   FD_MHIST_COPY( POH, BUNDLE_INITIALIZE_DELAY_SECONDS, ctx->bundle_init_delay      );
    1696           0 : }
    1697             : 
    1698             : static int
    1699             : before_frag( fd_poh_ctx_t * ctx,
    1700             :              ulong          in_idx,
    1701             :              ulong          seq,
    1702           0 :              ulong          sig ) {
    1703           0 :   (void)seq;
    1704             : 
    1705           0 :   if( FD_LIKELY( ctx->in_kind[ in_idx ]==IN_KIND_BANK ) ) {
    1706           0 :     ulong microblock_idx = fd_disco_bank_sig_microblock_idx( sig );
    1707           0 :     FD_TEST( microblock_idx>=ctx->expect_microblock_idx );
    1708             : 
    1709             :     /* Return the fragment to stem so we can process it later, if it's
    1710             :        not next in the sequence. */
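    /* For example: with expect_microblock_idx==3, if frags tagged 5, 4,
       and 3 arrive in that order, 5 and 4 are handed back to stem for
       redelivery and only 3 is consumed, so after_frag always observes
       bank microblocks in index order. */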
    1711           0 :     if( FD_UNLIKELY( microblock_idx>ctx->expect_microblock_idx ) ) return -1;
    1712             : 
    1713           0 :     ctx->expect_microblock_idx++;
    1714           0 :   }
    1715             : 
    1716           0 :   return 0;
    1717           0 : }
    1718             : 
    1719             : static inline void
    1720             : during_frag( fd_poh_ctx_t * ctx,
    1721             :              ulong          in_idx,
    1722             :              ulong          seq FD_PARAM_UNUSED,
    1723             :              ulong          sig,
    1724             :              ulong          chunk,
    1725             :              ulong          sz,
    1726           0 :              ulong          ctl FD_PARAM_UNUSED ) {
    1727             : 
    1728           0 :   ctx->skip_frag = 0;
    1729             : 
    1730           0 :   if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_STAKE ) ) {
    1731           0 :     if( FD_UNLIKELY( chunk<ctx->in[ in_idx ].chunk0 || chunk>ctx->in[ in_idx ].wmark ) )
    1732           0 :       FD_LOG_ERR(( "chunk %lu %lu corrupt, not in range [%lu,%lu]", chunk, sz,
    1733           0 :             ctx->in[ in_idx ].chunk0, ctx->in[ in_idx ].wmark ));
    1734             : 
    1735           0 :     uchar const * dcache_entry = fd_chunk_to_laddr_const( ctx->in[ in_idx ].mem, chunk );
    1736           0 :     fd_stake_ci_stake_msg_init( ctx->stake_ci, dcache_entry );
    1737           0 :     return;
    1738           0 :   }
    1739             : 
    1740           0 :   ulong pkt_type;
    1741           0 :   ulong slot;
    1742           0 :   switch( ctx->in_kind[ in_idx ] ) {
    1743           0 :     case IN_KIND_BANK: {
    1744           0 :       pkt_type = POH_PKT_TYPE_MICROBLOCK;
    1745           0 :       slot = fd_disco_bank_sig_slot( sig );
    1746           0 :       break;
    1747           0 :     }
    1748           0 :     case IN_KIND_PACK: {
    1749           0 :       pkt_type = fd_disco_poh_sig_pkt_type( sig );
    1750           0 :       slot = fd_disco_poh_sig_slot( sig );
    1751           0 :       break;
    1752           0 :     }
    1753           0 :     default:
    1754           0 :       FD_LOG_ERR(( "unexpected in_kind %d", ctx->in_kind[ in_idx ] ));
    1755           0 :   }
    1756             : 
    1757           0 :   int is_frag_for_prior_leader_slot = 0;
    1758           0 :   if( FD_LIKELY( pkt_type==POH_PKT_TYPE_DONE_PACKING || pkt_type==POH_PKT_TYPE_MICROBLOCK ) ) {
    1759             :     /* The following sequence is possible...
    1760             : 
    1761             :         1. We become leader in slot 10
    1762             :         2. While leader, we switch to a fork that is on slot 8, where
    1763             :             we are leader
    1764             :         3. We get the in-flight microblocks for slot 10
    1765             : 
    1766             :       These in-flight microblocks need to be dropped, so we check
    1767             :       against the high water mark (highwater_leader_slot) rather than
    1768             :       the current hashcnt here when determining what to drop.
    1769             : 
    1770             :       We know that if the slot is lower than the high water mark, it is
    1771             :       from a stale leader slot: we never become leader for the same slot
    1772             :       twice, even if we are reset back in time (to prevent duplicate blocks). */
    1773           0 :     is_frag_for_prior_leader_slot = slot<ctx->highwater_leader_slot;
    1774           0 :   }
    1775             : 
    1776           0 :   if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_PACK ) ) {
    1777             :     /* We now know the real number of microblocks published, so we can
    1778             :        set an exact bound for when we have received them all. */
    1779           0 :     ctx->skip_frag = 1;
    1780           0 :     if( pkt_type==POH_PKT_TYPE_DONE_PACKING ) {
    1781           0 :       if( FD_UNLIKELY( is_frag_for_prior_leader_slot ) ) return;
    1782             : 
    1783           0 :       FD_TEST( ctx->microblocks_lower_bound<=ctx->max_microblocks_per_slot );
    1784           0 :       fd_done_packing_t const * done_packing = fd_chunk_to_laddr( ctx->in[ in_idx ].mem, chunk );
    1785           0 :       FD_LOG_INFO(( "done_packing(slot=%lu,seen_microblocks=%lu,microblocks_in_slot=%lu)",
    1786           0 :                     ctx->slot,
    1787           0 :                     ctx->microblocks_lower_bound,
    1788           0 :                     done_packing->microblocks_in_slot ));
    1789           0 :       ctx->microblocks_lower_bound += ctx->max_microblocks_per_slot - done_packing->microblocks_in_slot;
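      /* Worked example with illustrative numbers: if
         max_microblocks_per_slot is 4096 and pack reports
         microblocks_in_slot=100 when we have seen 40 of them so far,
         the bound becomes 40 + (4096-100) = 4036, and it reaches
         max_microblocks_per_slot exactly when the remaining 60
         microblocks arrive. */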
    1790           0 :     }
    1791           0 :     return;
    1792           0 :   } else {
    1793           0 :     if( FD_UNLIKELY( chunk<ctx->in[ in_idx ].chunk0 || chunk>ctx->in[ in_idx ].wmark || sz>USHORT_MAX ) )
    1794           0 :       FD_LOG_ERR(( "chunk %lu %lu corrupt, not in range [%lu,%lu]", chunk, sz, ctx->in[ in_idx ].chunk0, ctx->in[ in_idx ].wmark ));
    1795             : 
    1796           0 :     uchar * src = (uchar *)fd_chunk_to_laddr( ctx->in[ in_idx ].mem, chunk );
    1797             : 
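    /* A bank frag is laid out as [ fd_txn_p_t x N ][ fd_microblock_trailer_t ],
       so split it into the transaction array and the trailer; N is
       recovered in after_frag as
       (sz-sizeof(fd_microblock_trailer_t))/sizeof(fd_txn_p_t). */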
    1798           0 :     fd_memcpy( ctx->_txns, src, sz-sizeof(fd_microblock_trailer_t) );
    1799           0 :     fd_memcpy( ctx->_microblock_trailer, src+sz-sizeof(fd_microblock_trailer_t), sizeof(fd_microblock_trailer_t) );
    1800             : 
    1801           0 :     ctx->skip_frag = is_frag_for_prior_leader_slot;
    1802           0 :   }
    1803           0 : }
    1804             : 
    1805             : static void
    1806             : publish_microblock( fd_poh_ctx_t *      ctx,
    1807             :                     fd_stem_context_t * stem,
    1808             :                     ulong               slot,
    1809             :                     ulong               hashcnt_delta,
    1810           0 :                     ulong               txn_cnt ) {
    1811           0 :   uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
    1812           0 :   FD_TEST( slot>=ctx->reset_slot );
    1813           0 :   fd_entry_batch_meta_t * meta = (fd_entry_batch_meta_t *)dst;
    1814           0 :   meta->parent_offset = 1UL+slot-ctx->reset_slot;
    1815           0 :   meta->reference_tick = (ctx->hashcnt/ctx->hashcnt_per_tick) % ctx->ticks_per_slot;
    1816           0 :   meta->block_complete = !ctx->hashcnt;
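  /* ctx->hashcnt is reset to zero exactly when the slot rolls over, so
     a zero hashcnt here means this entry closed out the block. */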
    1817             : 
    1818           0 :   dst += sizeof(fd_entry_batch_meta_t);
    1819           0 :   fd_entry_batch_header_t * header = (fd_entry_batch_header_t *)dst;
    1820           0 :   header->hashcnt_delta = hashcnt_delta;
    1821           0 :   fd_memcpy( header->hash, ctx->hash, 32UL );
    1822             : 
    1823           0 :   dst += sizeof(fd_entry_batch_header_t);
    1824           0 :   ulong payload_sz = 0UL;
    1825           0 :   ulong included_txn_cnt = 0UL;
    1826           0 :   for( ulong i=0UL; i<txn_cnt; i++ ) {
    1827           0 :     fd_txn_p_t * txn = (fd_txn_p_t *)(ctx->_txns + i*sizeof(fd_txn_p_t));
    1828           0 :     if( FD_UNLIKELY( !(txn->flags & FD_TXN_P_FLAGS_EXECUTE_SUCCESS) ) ) continue;
    1829             : 
    1830           0 :     fd_memcpy( dst, txn->payload, txn->payload_sz );
    1831           0 :     payload_sz += txn->payload_sz;
    1832           0 :     dst        += txn->payload_sz;
    1833           0 :     included_txn_cnt++;
    1834           0 :   }
    1835           0 :   header->txn_cnt = included_txn_cnt;
    1836             : 
    1837             :   /* We always have credits to publish here, because we have a burst
    1838             :      value of 5 credits (see STEM_BURST below), and by this point we
    1839             :      have used at most one credit each for the tick, leader update, and
    1840             :      plugin slot notifications, leaving one to publish the microblock. */
    1841           0 :   ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
    1842           0 :   ulong sz = sizeof(fd_entry_batch_meta_t)+sizeof(fd_entry_batch_header_t)+payload_sz;
    1843           0 :   ulong new_sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_MICROBLOCK, 0UL );
    1844           0 :   fd_stem_publish( stem, ctx->shred_out->idx, new_sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
    1845           0 :   ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
    1846           0 :   ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
    1847           0 : }
    1848             : 
    1849             : static inline void
    1850             : after_frag( fd_poh_ctx_t *      ctx,
    1851             :             ulong               in_idx,
    1852             :             ulong               seq,
    1853             :             ulong               sig,
    1854             :             ulong               sz,
    1855             :             ulong               tsorig,
    1856             :             ulong               tspub,
    1857           0 :             fd_stem_context_t * stem ) {
    1858           0 :   (void)in_idx;
    1859           0 :   (void)seq;
    1860           0 :   (void)tsorig;
    1861           0 :   (void)tspub;
    1862             : 
    1863           0 :   if( FD_UNLIKELY( ctx->skip_frag ) ) return;
    1864             : 
    1865           0 :   if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_STAKE ) ) {
    1866           0 :     fd_stake_ci_stake_msg_fini( ctx->stake_ci );
    1867             :     /* It might seem like we do not need to do state transitions in and
    1868             :        out of being the leader here, since leader schedule updates are
    1869             :        always one epoch in advance (whether we are leader or not would
    1870             :        never change for the currently executing slot), but this is not
    1871             :        true for new ledgers when the validator first boots.  We will
    1872             :        likely be the leader in slot 1, and get notified of the leader
    1873             :        schedule for that slot while we are still in it.
    1874             : 
    1875             :        For safety we just handle both transitions, in and out, although
    1876             :        the only one possible should be into leader. */
    1877           0 :     ulong next_leader_slot_after_frag = next_leader_slot( ctx );
    1878             : 
    1879           0 :     int currently_leader  = ctx->slot>=ctx->next_leader_slot;
    1880           0 :     int leader_after_frag = ctx->slot>=next_leader_slot_after_frag;
    1881             : 
    1882           0 :     FD_LOG_INFO(( "stake_update(before_leader=%lu,after_leader=%lu)",
    1883           0 :                   ctx->next_leader_slot,
    1884           0 :                   next_leader_slot_after_frag ));
    1885             : 
    1886           0 :     ctx->next_leader_slot = next_leader_slot_after_frag;
    1887           0 :     if( FD_UNLIKELY( currently_leader && !leader_after_frag ) ) {
    1888             :       /* Shouldn't ever happen, otherwise we need to do a state
    1889             :          transition out of being leader. */
    1890           0 :       FD_LOG_ERR(( "stake update caused us to no longer be leader in an active slot" ));
    1891           0 :     }
    1892             : 
    1893             :     /* Nothing to do if we transition into being leader, since it
    1894             :        will just get picked up by the regular tick loop. */
    1895           0 :     if( FD_UNLIKELY( !currently_leader && leader_after_frag ) ) {
    1896           0 :       publish_plugin_slot_start( ctx, next_leader_slot_after_frag, ctx->reset_slot );
    1897           0 :     }
    1898             : 
    1899           0 :     return;
    1900           0 :   }
    1901             : 
    1902           0 :   if( FD_UNLIKELY( !ctx->microblocks_lower_bound ) ) {
    1903           0 :     double tick_per_ns = fd_tempo_tick_per_ns( NULL );
    1904           0 :     fd_histf_sample( ctx->first_microblock_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
    1905           0 :   }
    1906             : 
    1907           0 :   ulong target_slot = fd_disco_bank_sig_slot( sig );
    1908             : 
    1909           0 :   if( FD_UNLIKELY( target_slot!=ctx->next_leader_slot || target_slot!=ctx->slot ) ) {
    1910           0 :     FD_LOG_ERR(( "packed too early or late, target_slot=%lu, current_slot=%lu, highwater_leader_slot=%lu",
    1911           0 :                  target_slot, ctx->slot, ctx->highwater_leader_slot ));
    1912           0 :   }
    1913             : 
    1914           0 :   FD_TEST( ctx->current_leader_bank );
    1915           0 :   FD_TEST( ctx->microblocks_lower_bound<ctx->max_microblocks_per_slot );
    1916           0 :   ctx->microblocks_lower_bound += 1UL;
    1917             : 
    1918           0 :   ulong txn_cnt = (sz-sizeof(fd_microblock_trailer_t))/sizeof(fd_txn_p_t);
    1919           0 :   fd_txn_p_t * txns = (fd_txn_p_t *)(ctx->_txns);
    1920           0 :   ulong executed_txn_cnt = 0UL;
    1921           0 :   ulong cus_used         = 0UL;
    1922           0 :   for( ulong i=0UL; i<txn_cnt; i++ ) {
    1923           0 :     if( FD_LIKELY( txns[ i ].flags & FD_TXN_P_FLAGS_EXECUTE_SUCCESS ) ) {
    1924           0 :       executed_txn_cnt++;
    1925           0 :       cus_used += txns[ i ].bank_cu.actual_consumed_cus;
    1926           0 :     }
    1927           0 :   }
    1928             : 
    1929             :   /* We don't publish transactions that fail to execute.  If all the
    1930             :      transactions failed to execute, the microblock would be empty,
    1931             :      causing Agave to think it's a tick and complain.  Instead, we just
    1932             :      skip the microblock and don't hash or update the hashcnt. */
    1933           0 :   if( FD_UNLIKELY( !executed_txn_cnt ) ) return;
    1934             : 
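  /* The PoH mixin step: with H = SHA-256 and m the microblock hash from
     the trailer, the chain advances one step as

       hash <- H( hash || m )

     consuming exactly one hashcnt, which is accounted for below. */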
    1935           0 :   uchar data[ 64 ];
    1936           0 :   fd_memcpy( data, ctx->hash, 32UL );
    1937           0 :   fd_memcpy( data+32UL, ctx->_microblock_trailer->hash, 32UL );
    1938           0 :   fd_sha256_hash( data, 64UL, ctx->hash );
    1939             : 
    1940           0 :   ctx->hashcnt++;
    1941           0 :   FD_TEST( ctx->hashcnt>ctx->last_hashcnt );
    1942           0 :   ulong hashcnt_delta = ctx->hashcnt - ctx->last_hashcnt;
    1943             : 
    1944             :   /* The hashing loop above will never leave us exactly one away from
    1945             :      crossing a tick boundary, so this increment will never cause the
    1946             :      current tick (or the slot) to change, except in low power mode
    1947             :      for development, in which case we do need to register the tick
    1948             :      with the leader bank.  We don't need to publish the tick since
    1949             :      sending the microblock below is the publishing action. */
    1950           0 :   if( FD_UNLIKELY( !(ctx->hashcnt%ctx->hashcnt_per_slot ) ) ) {
    1951           0 :     ctx->slot++;
    1952           0 :     ctx->hashcnt = 0UL;
    1953           0 :   }
    1954             : 
    1955           0 :   ctx->last_slot    = ctx->slot;
    1956           0 :   ctx->last_hashcnt = ctx->hashcnt;
    1957             : 
    1958           0 :   ctx->cus_used += cus_used;
    1959             : 
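  /* (In low power mode hashcnt_per_tick is presumably 1, so the mixin
     above always lands on a tick boundary and the branch below fires on
     every microblock.) */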
    1960           0 :   if( FD_UNLIKELY( !(ctx->hashcnt%ctx->hashcnt_per_tick ) ) ) {
    1961           0 :     fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->hash );
    1962           0 :     if( FD_UNLIKELY( ctx->slot>ctx->next_leader_slot ) ) {
    1963             :       /* We ticked while leader and are no longer leader... transition
    1964             :          the state machine. */
    1965           0 :       publish_plugin_slot_end( ctx, ctx->next_leader_slot, ctx->cus_used );
    1966             : 
    1967           0 :       no_longer_leader( ctx );
    1968             : 
    1969           0 :       if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
    1970             :         /* We finished a leader slot, and are immediately leader for the
    1971             :            following slot... transition. */
    1972           0 :         publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->next_leader_slot-1UL );
    1973           0 :       }
    1974           0 :     }
    1975           0 :   }
    1976             : 
    1977           0 :   publish_microblock( ctx, stem, target_slot, hashcnt_delta, txn_cnt );
    1978           0 : }
    1979             : 
    1980             : static void
    1981             : privileged_init( fd_topo_t *      topo,
    1982           0 :                  fd_topo_tile_t * tile ) {
    1983           0 :   void * scratch = fd_topo_obj_laddr( topo, tile->tile_obj_id );
    1984             : 
    1985           0 :   FD_SCRATCH_ALLOC_INIT( l, scratch );
    1986           0 :   fd_poh_ctx_t * ctx = FD_SCRATCH_ALLOC_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
    1987             : 
    1988           0 :   if( FD_UNLIKELY( !strcmp( tile->poh.identity_key_path, "" ) ) )
    1989           0 :     FD_LOG_ERR(( "identity_key_path not set" ));
    1990             : 
    1991           0 :   const uchar * identity_key = fd_keyload_load( tile->poh.identity_key_path, /* pubkey only: */ 1 );
    1992           0 :   fd_memcpy( ctx->identity_key.uc, identity_key, 32UL );
    1993             : 
    1994           0 :   if( FD_UNLIKELY( tile->poh.bundle.enabled ) ) {
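    /* bundle.vote_account_path may hold either a base58-encoded pubkey
       or a path to a keyfile; if it does not decode as base58, fall
       back to loading the pubkey from the file. */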
    1995           0 :     if( FD_UNLIKELY( !fd_base58_decode_32( tile->poh.bundle.vote_account_path, ctx->bundle.vote_account.uc ) ) ) {
    1996           0 :       const uchar * vote_key = fd_keyload_load( tile->poh.bundle.vote_account_path, /* pubkey only: */ 1 );
    1997           0 :       fd_memcpy( ctx->bundle.vote_account.uc, vote_key, 32UL );
    1998           0 :     }
    1999           0 :   }
    2000           0 : }
    2001             : 
    2002             : /* The Agave client needs to communicate to the shred tile what the
    2003             :    shred version is on boot, but the shred tile does not live in the
    2004             :    same address space, so we have the PoH tile pass the value through
    2005             :    via a shared memory ulong. */
    2006             : 
    2007             : static volatile ulong * fd_shred_version;
    2008             : 
    2009             : void
    2010           0 : fd_ext_shred_set_shred_version( ulong shred_version ) {
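  /* fd_shred_version is only joined in unprivileged_init below, and the
     Agave client may call in before that has happened, so spin until
     the fseq is available. */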
    2011           0 :   while( FD_UNLIKELY( !fd_shred_version ) ) FD_SPIN_PAUSE();
    2012           0 :   *fd_shred_version = shred_version;
    2013           0 : }
    2014             : 
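/* The fd_ext_* publishers below are entry points called by the Agave
   client; each simply forwards its payload onto the corresponding link
   via poh_link_publish. */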
    2015             : void
    2016             : fd_ext_poh_publish_gossip_vote( uchar * data,
    2017           0 :                                 ulong   data_len ) {
    2018           0 :   poh_link_publish( &gossip_dedup, 1UL, data, data_len );
    2019           0 : }
    2020             : 
    2021             : void
    2022             : fd_ext_poh_publish_leader_schedule( uchar * data,
    2023           0 :                                     ulong   data_len ) {
    2024           0 :   poh_link_publish( &stake_out, 2UL, data, data_len );
    2025           0 : }
    2026             : 
    2027             : void
    2028             : fd_ext_poh_publish_cluster_info( uchar * data,
    2029           0 :                                  ulong   data_len ) {
    2030           0 :   poh_link_publish( &crds_shred, 2UL, data, data_len );
    2031           0 : }
    2032             : 
    2033             : void
    2034             : fd_ext_plugin_publish_replay_stage( ulong   sig,
    2035             :                                     uchar * data,
    2036           0 :                                     ulong   data_len ) {
    2037           0 :   poh_link_publish( &replay_plugin, sig, data, data_len );
    2038           0 : }
    2039             : 
    2040             : void
    2041             : fd_ext_plugin_publish_genesis_hash( ulong   sig,
    2042             :                                     uchar * data,
    2043           0 :                                     ulong   data_len ) {
    2044           0 :   poh_link_publish( &replay_plugin, sig, data, data_len );
    2045           0 : }
    2046             : 
    2047             : void
    2048             : fd_ext_plugin_publish_start_progress( ulong   sig,
    2049             :                                       uchar * data,
    2050           0 :                                       ulong   data_len ) {
    2051           0 :   poh_link_publish( &start_progress_plugin, sig, data, data_len );
    2052           0 : }
    2053             : 
    2054             : void
    2055             : fd_ext_plugin_publish_vote_listener( ulong   sig,
    2056             :                                      uchar * data,
    2057           0 :                                      ulong   data_len ) {
    2058           0 :   poh_link_publish( &vote_listener_plugin, sig, data, data_len );
    2059           0 : }
    2060             : 
    2061             : void
    2062             : fd_ext_plugin_publish_validator_info( ulong   sig,
    2063             :                                       uchar * data,
    2064           0 :                                       ulong   data_len ) {
    2065           0 :   poh_link_publish( &validator_info_plugin, sig, data, data_len );
    2066           0 : }
    2067             : 
    2068             : void
    2069             : fd_ext_plugin_publish_periodic( ulong   sig,
    2070             :                                 uchar * data,
    2071           0 :                                 ulong   data_len ) {
    2072           0 :   poh_link_publish( &gossip_plugin, sig, data, data_len );
    2073           0 : }
    2074             : 
    2075             : void
    2076             : fd_ext_resolv_publish_root_bank( uchar * data,
    2077           0 :                                  ulong   data_len ) {
    2078           0 :   poh_link_publish( &replay_resolv, 0UL, data, data_len );
    2079           0 : }
    2080             : 
    2081             : void
    2082             : fd_ext_resolv_publish_completed_blockhash( uchar * data,
    2083           0 :                                            ulong   data_len ) {
    2084           0 :   poh_link_publish( &replay_resolv, 1UL, data, data_len );
    2085           0 : }
    2086             : 
    2087             : static inline fd_poh_out_ctx_t
    2088             : out1( fd_topo_t const *      topo,
    2089             :       fd_topo_tile_t const * tile,
    2090           0 :       char const *           name ) {
    2091           0 :   ulong idx = ULONG_MAX;
    2092             : 
    2093           0 :   for( ulong i=0UL; i<tile->out_cnt; i++ ) {
    2094           0 :     fd_topo_link_t const * link = &topo->links[ tile->out_link_id[ i ] ];
    2095           0 :     if( !strcmp( link->name, name ) ) {
    2096           0 :       if( FD_UNLIKELY( idx!=ULONG_MAX ) ) FD_LOG_ERR(( "tile %s:%lu had multiple output links named %s but expected one", tile->name, tile->kind_id, name ));
    2097           0 :       idx = i;
    2098           0 :     }
    2099           0 :   }
    2100             : 
    2101           0 :   if( FD_UNLIKELY( idx==ULONG_MAX ) ) FD_LOG_ERR(( "tile %s:%lu had no output link named %s", tile->name, tile->kind_id, name ));
    2102             : 
    2103           0 :   void * mem = topo->workspaces[ topo->objs[ topo->links[ tile->out_link_id[ idx ] ].dcache_obj_id ].wksp_id ].wksp;
    2104           0 :   ulong chunk0 = fd_dcache_compact_chunk0( mem, topo->links[ tile->out_link_id[ idx ] ].dcache );
    2105           0 :   ulong wmark  = fd_dcache_compact_wmark ( mem, topo->links[ tile->out_link_id[ idx ] ].dcache, topo->links[ tile->out_link_id[ idx ] ].mtu );
    2106             : 
    2107           0 :   return (fd_poh_out_ctx_t){ .idx = idx, .mem = mem, .chunk0 = chunk0, .wmark = wmark, .chunk = chunk0 };
    2108           0 : }
    2109             : 
    2110             : static void
    2111             : unprivileged_init( fd_topo_t *      topo,
    2112           0 :                    fd_topo_tile_t * tile ) {
    2113           0 :   void * scratch = fd_topo_obj_laddr( topo, tile->tile_obj_id );
    2114             : 
    2115           0 :   FD_SCRATCH_ALLOC_INIT( l, scratch );
    2116           0 :   fd_poh_ctx_t * ctx = FD_SCRATCH_ALLOC_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
    2117           0 :   void * stake_ci = FD_SCRATCH_ALLOC_APPEND( l, fd_stake_ci_align(),              fd_stake_ci_footprint()            );
    2118           0 :   void * sha256   = FD_SCRATCH_ALLOC_APPEND( l, FD_SHA256_ALIGN,                  FD_SHA256_FOOTPRINT                );
    2119             : 
    2120           0 : #define NONNULL( x ) (__extension__({                                        \
    2121           0 :       __typeof__((x)) __x = (x);                                             \
    2122           0 :       if( FD_UNLIKELY( !__x ) ) FD_LOG_ERR(( #x " was unexpectedly NULL" )); \
    2123           0 :       __x; }))
    2124             : 
    2125           0 :   ctx->stake_ci = NONNULL( fd_stake_ci_join( fd_stake_ci_new( stake_ci, &ctx->identity_key ) ) );
    2126           0 :   ctx->sha256 = NONNULL( fd_sha256_join( fd_sha256_new( sha256 ) ) );
    2127           0 :   ctx->current_leader_bank = NULL;
    2128           0 :   ctx->signal_leader_change = NULL;
    2129             : 
    2130           0 :   ctx->shred_seq = ULONG_MAX;
    2131           0 :   ctx->halted_switching_key = 0;
    2132           0 :   ctx->keyswitch = fd_keyswitch_join( fd_topo_obj_laddr( topo, tile->keyswitch_obj_id ) );
    2133           0 :   FD_TEST( ctx->keyswitch );
    2134             : 
    2135           0 :   ctx->slot                  = 0UL;
    2136           0 :   ctx->hashcnt               = 0UL;
    2137           0 :   ctx->last_hashcnt          = 0UL;
    2138           0 :   ctx->highwater_leader_slot = ULONG_MAX;
    2139           0 :   ctx->next_leader_slot      = ULONG_MAX;
    2140           0 :   ctx->reset_slot            = ULONG_MAX;
    2141             : 
    2142           0 :   ctx->lagged_consecutive_leader_start = tile->poh.lagged_consecutive_leader_start;
    2143           0 :   ctx->expect_sequential_leader_slot = ULONG_MAX;
    2144             : 
    2145           0 :   ctx->microblocks_lower_bound = 0UL;
    2146             : 
    2147           0 :   ctx->max_active_descendant = 0UL;
    2148             : 
    2149           0 :   if( FD_UNLIKELY( tile->poh.bundle.enabled ) ) {
    2150           0 :     ctx->bundle.enabled = 1;
    2151           0 :     NONNULL( fd_bundle_crank_gen_init( ctx->bundle.gen, (fd_acct_addr_t const *)tile->poh.bundle.tip_distribution_program_addr,
    2152           0 :              (fd_acct_addr_t const *)tile->poh.bundle.tip_payment_program_addr,
    2153           0 :              (fd_acct_addr_t const *)ctx->bundle.vote_account.uc,
    2154           0 :              (fd_acct_addr_t const *)ctx->bundle.vote_account.uc, 0UL ) ); /* last two arguments are properly bogus */
    2155           0 :   } else {
    2156           0 :     ctx->bundle.enabled = 0;
    2157           0 :   }
    2158             : 
    2159           0 :   ulong poh_shred_obj_id = fd_pod_query_ulong( topo->props, "poh_shred", ULONG_MAX );
    2160           0 :   FD_TEST( poh_shred_obj_id!=ULONG_MAX );
    2161             : 
    2162           0 :   fd_shred_version = fd_fseq_join( fd_topo_obj_laddr( topo, poh_shred_obj_id ) );
    2163           0 :   FD_TEST( fd_shred_version );
    2164             : 
    2165           0 :   poh_link_init( &gossip_dedup,          topo, tile, out1( topo, tile, "gossip_dedup" ).idx );
    2166           0 :   poh_link_init( &stake_out,             topo, tile, out1( topo, tile, "stake_out"    ).idx );
    2167           0 :   poh_link_init( &crds_shred,            topo, tile, out1( topo, tile, "crds_shred"   ).idx );
    2168           0 :   poh_link_init( &replay_resolv,         topo, tile, out1( topo, tile, "replay_resol" ).idx );
    2169             : 
    2170           0 :   if( FD_LIKELY( tile->poh.plugins_enabled ) ) {
    2171           0 :     poh_link_init( &replay_plugin,         topo, tile, out1( topo, tile, "replay_plugi" ).idx );
    2172           0 :     poh_link_init( &gossip_plugin,         topo, tile, out1( topo, tile, "gossip_plugi" ).idx );
    2173           0 :     poh_link_init( &start_progress_plugin, topo, tile, out1( topo, tile, "startp_plugi" ).idx );
    2174           0 :     poh_link_init( &vote_listener_plugin,  topo, tile, out1( topo, tile, "votel_plugin" ).idx );
    2175           0 :     poh_link_init( &validator_info_plugin, topo, tile, out1( topo, tile, "valcfg_plugi" ).idx );
    2176           0 :   } else {
    2177             :     /* Mark these mcaches as "available", so the system boots, but the
    2178             :        memory is not set so nothing will actually get published via
    2179             :        the links. */
    2180           0 :     FD_COMPILER_MFENCE();
    2181           0 :     replay_plugin.mcache = (fd_frag_meta_t*)1;
    2182           0 :     gossip_plugin.mcache = (fd_frag_meta_t*)1;
    2183           0 :     start_progress_plugin.mcache = (fd_frag_meta_t*)1;
    2184           0 :     vote_listener_plugin.mcache = (fd_frag_meta_t*)1;
    2185           0 :     validator_info_plugin.mcache = (fd_frag_meta_t*)1;
    2186           0 :     FD_COMPILER_MFENCE();
    2187           0 :   }
    2188             : 
    2189           0 :   FD_LOG_INFO(( "PoH waiting to be initialized by Agave client... %lu %lu", fd_poh_waiting_lock, fd_poh_returned_lock ));
    2190           0 :   FD_VOLATILE( fd_poh_global_ctx ) = ctx;
    2191           0 :   FD_COMPILER_MFENCE();
    2192           0 :   for(;;) {
    2193           0 :     if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_waiting_lock ) ) ) break;
    2194           0 :     FD_SPIN_PAUSE();
    2195           0 :   }
    2196           0 :   FD_VOLATILE( fd_poh_waiting_lock ) = 0UL;
    2197           0 :   FD_VOLATILE( fd_poh_returned_lock ) = 1UL;
    2198           0 :   FD_COMPILER_MFENCE();
    2199           0 :   for(;;) {
    2200           0 :     if( FD_UNLIKELY( !FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
    2201           0 :     FD_SPIN_PAUSE();
    2202           0 :   }
    2203           0 :   FD_COMPILER_MFENCE();
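  /* A hedged sketch of the counterpart sequence on the Agave side,
     inferred from the protocol above (the actual initializer lives
     outside this file):

       FD_VOLATILE( fd_poh_waiting_lock ) = 1UL;   // request the ctx
       while( !FD_VOLATILE_CONST( fd_poh_returned_lock ) ) FD_SPIN_PAUSE();
       ... initialize ctx fields, including ctx->reset_slot ...
       FD_VOLATILE( fd_poh_returned_lock ) = 0UL;  // hand control back
  */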
    2204             : 
    2205           0 :   if( FD_UNLIKELY( ctx->reset_slot==ULONG_MAX ) ) FD_LOG_ERR(( "PoH was not initialized by Agave client" ));
    2206             : 
    2207           0 :   fd_histf_join( fd_histf_new( ctx->begin_leader_delay, FD_MHIST_SECONDS_MIN( POH, BEGIN_LEADER_DELAY_SECONDS ),
    2208           0 :                                                         FD_MHIST_SECONDS_MAX( POH, BEGIN_LEADER_DELAY_SECONDS ) ) );
    2209           0 :   fd_histf_join( fd_histf_new( ctx->first_microblock_delay, FD_MHIST_SECONDS_MIN( POH, FIRST_MICROBLOCK_DELAY_SECONDS  ),
    2210           0 :                                                             FD_MHIST_SECONDS_MAX( POH, FIRST_MICROBLOCK_DELAY_SECONDS  ) ) );
    2211           0 :   fd_histf_join( fd_histf_new( ctx->slot_done_delay, FD_MHIST_SECONDS_MIN( POH, SLOT_DONE_DELAY_SECONDS  ),
    2212           0 :                                                      FD_MHIST_SECONDS_MAX( POH, SLOT_DONE_DELAY_SECONDS  ) ) );
    2213             : 
    2214           0 :   fd_histf_join( fd_histf_new( ctx->bundle_init_delay, FD_MHIST_SECONDS_MIN( POH, BUNDLE_INITIALIZE_DELAY_SECONDS  ),
    2215           0 :                                                        FD_MHIST_SECONDS_MAX( POH, BUNDLE_INITIALIZE_DELAY_SECONDS  ) ) );
    2216             : 
    2217           0 :   for( ulong i=0UL; i<tile->in_cnt; i++ ) {
    2218           0 :     fd_topo_link_t * link = &topo->links[ tile->in_link_id[ i ] ];
    2219           0 :     fd_topo_wksp_t * link_wksp = &topo->workspaces[ topo->objs[ link->dcache_obj_id ].wksp_id ];
    2220             : 
    2221           0 :     ctx->in[ i ].mem    = link_wksp->wksp;
    2222           0 :     ctx->in[ i ].chunk0 = fd_dcache_compact_chunk0( ctx->in[ i ].mem, link->dcache );
    2223           0 :     ctx->in[ i ].wmark  = fd_dcache_compact_wmark ( ctx->in[ i ].mem, link->dcache, link->mtu );
    2224             : 
    2225           0 :     if( FD_UNLIKELY( !strcmp( link->name, "stake_out" ) ) ) {
    2226           0 :       ctx->in_kind[ i ] = IN_KIND_STAKE;
    2227           0 :     } else if( FD_UNLIKELY( !strcmp( link->name, "pack_bank" ) ) ) {
    2228           0 :       ctx->in_kind[ i ] = IN_KIND_PACK;
    2229           0 :     } else if( FD_LIKELY( !strcmp( link->name, "bank_poh" ) ) ) {
    2230           0 :       ctx->in_kind[ i ] = IN_KIND_BANK;
    2231           0 :     } else {
    2232           0 :       FD_LOG_ERR(( "unexpected input link name %s", link->name ));
    2233           0 :     }
    2234           0 :   }
    2235             : 
    2236           0 :   *ctx->shred_out = out1( topo, tile, "poh_shred" );
    2237           0 :   *ctx->pack_out  = out1( topo, tile, "poh_pack" );
    2238           0 :   ctx->plugin_out->mem = NULL;
    2239           0 :   if( FD_LIKELY( tile->poh.plugins_enabled ) ) {
    2240           0 :     *ctx->plugin_out = out1( topo, tile, "poh_plugin" );
    2241           0 :   }
    2242             : 
    2243           0 :   ulong scratch_top = FD_SCRATCH_ALLOC_FINI( l, 1UL );
    2244           0 :   if( FD_UNLIKELY( scratch_top > (ulong)scratch + scratch_footprint( tile ) ) )
    2245           0 :     FD_LOG_ERR(( "scratch overflow %lu %lu %lu", scratch_top - (ulong)scratch - scratch_footprint( tile ), scratch_top, (ulong)scratch + scratch_footprint( tile ) ));
    2246           0 : }
    2247             : 
    2248             : /* One tick, one microblock, one plugin slot end, one plugin slot start,
    2249             :    and one leader update. */
    2250           0 : #define STEM_BURST (5UL)
    2251             : 
    2252             : /* See explanation in fd_pack */
    2253           0 : #define STEM_LAZY  (128L*3000L)
    2254             : 
    2255           0 : #define STEM_CALLBACK_CONTEXT_TYPE  fd_poh_ctx_t
    2256           0 : #define STEM_CALLBACK_CONTEXT_ALIGN alignof(fd_poh_ctx_t)
    2257             : 
    2258           0 : #define STEM_CALLBACK_DURING_HOUSEKEEPING during_housekeeping
    2259           0 : #define STEM_CALLBACK_METRICS_WRITE       metrics_write
    2260           0 : #define STEM_CALLBACK_AFTER_CREDIT        after_credit
    2261           0 : #define STEM_CALLBACK_BEFORE_FRAG         before_frag
    2262           0 : #define STEM_CALLBACK_DURING_FRAG         during_frag
    2263           0 : #define STEM_CALLBACK_AFTER_FRAG          after_frag
    2264             : 
    2265             : #include "../../disco/stem/fd_stem.c"
    2266             : 
    2267             : fd_topo_run_tile_t fd_tile_poh = {
    2268             :   .name                     = "poh",
    2269             :   .populate_allowed_seccomp = NULL,
    2270             :   .populate_allowed_fds     = NULL,
    2271             :   .scratch_align            = scratch_align,
    2272             :   .scratch_footprint        = scratch_footprint,
    2273             :   .privileged_init          = privileged_init,
    2274             :   .unprivileged_init        = unprivileged_init,
    2275             :   .run                      = stem_run,
    2276             : };

Generated by: LCOV version 1.14