Line data Source code
1 : #define _GNU_SOURCE
2 :
3 : /* Let's say there was a computer, the "leader" computer, that acted as
4 : a bank. Users could send it messages saying they wanted to deposit
5 : money, or transfer it to someone else.
6 :
7 : That's how, for example, Bank of America works but there are problems
8 : with it. One simple problem is: the bank can set your balance to
9 : zero if they don't like you.
10 :
11 : You could try to fix this by having the bank periodically publish the
12 : list of all account balances and transactions. If the customers add
13 : unforgeable signatures to their deposit slips and transfers, then
14 : the bank cannot zero a balance without it being obvious to everyone.
15 :
16 : There are still problems. The bank can't lie about your balance now
17 : or take your money, but it can just refuse to accept deposits on
18 : your behalf by ignoring you.
19 :
20 : You could fix this by getting a few independent banks together, let's
21 : say Bank of America, Bank of England, and Westpac, and having them
22 : rotate who operates the leader computer periodically. If one bank
23 : ignores your deposits, you can just wait and send them to the next
24 : one.
25 :
26 : This is Solana.
27 :
28 : There are still problems of course, but they are largely technical. How
29 : do the banks agree who is leader? How do you recover if a leader
30 : misbehaves? How do customers verify the transactions aren't forged?
31 : How do banks receive, publish, and verify each other's work quickly?
32 : These are the main technical innovations that enable Solana to work
33 : well.
34 :
35 : What about Proof of History?
36 :
37 : One particular niche problem is about the leader schedule. When the
38 : leader computer is moving from one bank to another, the new bank must
39 : wait for the old bank to say it's done and provide a final list of
40 : balances that it can start working off of. But: what if the computer
41 : at the old bank crashes and never says it's done?
42 :
43 : Does the new leader just take over at some point? What if the new
44 : leader is malicious, and says the past thousand leaders crashed, and
45 : there have been no transactions for days? How do you check?
46 :
47 : This is what Proof of History solves. Each bank in the network must
48 : constantly do a lot of busywork (compute hashes), even when it is not
49 : leader.
50 :
51 : If the prior thousand leaders crashed, and no transactions happened
52 : in an hour, the new leader would have to show they did about an hour
53 : of busywork for everyone else to believe them.
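
   (As a rough back-of-the-envelope check, using the mainnet-beta rate
   described later in this comment of roughly one hash every 100
   nanoseconds, an hour of busywork is about

      3,600 s / 100 ns per hash = 36,000,000,000 sequential hashes.)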
54 :
55 : A better name for this is proof of skipping. If a leader is skipping
56 : slots (building off of a slot that is not the direct parent), it must
57 : prove that it waited a good amount of time to do so.
58 :
59 : It's not a perfect solution. For one thing, some banks have really
60 : fast computers and can compute a lot of busywork in a short amount of
61 : time, allowing them to skip prior slot(s) anyway. But: there is a
62 : social component that prevents validators from skipping the prior
63 : leader slot. It is easy to detect when this happens and the network
64 : could respond by ignoring their votes or stake.
65 :
66 : You could come up with other schemes: for example, the network could
67 : just use wall clock time. If a new leader publishes a block without
68 : waiting 400 milliseconds for the prior slot to complete, then there
69 : is no "proof of skipping" and the nodes ignore the slot.
70 :
71 : These schemes have a problem in that they are not deterministic
72 : across the network (different computers have different clocks), and
73 : so they will cause frequent forks which are very expensive to
74 : resolve. Even though the proof of history scheme is not perfect,
75 : it is better than any alternative which is not deterministic.
76 :
77 : With all that background, we can now describe at a high level what
78 : this PoH tile actually does:
79 :
80 : (1) Whenever any other leader in the network finishes a slot, and
81 : the slot is determined to be the best one to build off of, this
82 : tile gets "reset" onto that block, the so called "reset slot".
83 :
84 : (2) The tile is constantly doing busy work, hash(hash(hash(...))) on
85 : top of the last reset slot, even when it is not leader.
86 :
87 : (3) When the tile becomes leader, it continues hashing from where it
88 : was. Typically, the prior leader finishes their slot, so the
89 : reset slot will be the parent one, and this tile only publishes
90 : hashes for its own slot. But if prior slots were skipped, then
91 : there might be a whole chain already waiting.
92 :
93 : That's pretty much it. When we are leader, in addition to doing
94 : busywork, we publish ticks and microblocks to the shred tile. A
95 : microblock is a non-empty group of transactions whose hashes are
96 : mixed-in to the chain, while a tick is a periodic stamp of the
97 : current hash, with no transactions (nothing mixed in). We need
98 : to send both to the shred tile, as ticks are important for other
99 : validators to verify in parallel.
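
   (Loosely, and only as a sketch of the idea rather than the exact
   on-chain encoding, the two kinds of entry extend the chain as

      tick:        hash_n = sha256( hash_{n-1} )              no data
      microblock:  hash_n = sha256( hash_{n-1} || mixin )

   where mixin is a 32-byte digest summarizing the microblock's
   transactions.)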
100 :
101 : As well, the tile should never become leader for a slot that it has
102 : published anything for, otherwise it may create a duplicate block.
103 :
104 : Some particularly common misunderstandings:
105 :
106 : - PoH is critical to security.
107 :
108 : This largely isn't true. The target hash rate of the network is
109 : so slow (1 hash per 100 nanoseconds) that a malicious leader can
110 : easily catch up if they start from an old hash, and the only
111 : practical attack prevented is the proof of skipping. Most of the
112 : long range attacks in the Solana whitepaper are not relevant.
113 :
114 : - PoH keeps passage of time.
115 :
116 : This is also not true. The way the network keeps time so it can
117 : decide who is leader is that each leader uses their operating
118 : system clock to time 400 milliseconds and publishes their block
119 : when this timer expires.
120 :
121 : If a leader just hashed as fast as they could, they could publish
122 : a block in tens of milliseconds, and the rest of the network
123 : would happily accept it. This is why the Solana "clock" as
124 : determined by PoH is not accurate and drifts over time.
125 :
126 : - PoH prevents transaction reordering by the leader.
127 :
128 : The leader can, in theory, wait until the very end of their
129 : leader slot to publish anything at all to the network. They can,
130 : in particular, hold all received transactions for 400
131 : milliseconds and then reorder and publish some right at the end
132 : to advantage certain transactions.
133 :
134 : You might be wondering... if all the PoH chain is helping us do is
135 : prove that slots were skipped correctly, why do we need to "mix in"
136 : transactions to the hash value? Or do anything at all for slots
137 : where we don't skip the prior slot?
138 :
139 : It's a good question, and the answer is that this behavior is not
140 : necessary. An ideal implementation of PoH would have no concept of
141 : ticks or mixins, and would not be part of the TPU pipeline at all.
142 : Instead, there would be a simple field "skip_proof" on the last
143 : shred we send for a slot, the hash(hash(...)) value. This field
144 : would only be filled in (and only verified by replayers) in cases
145 : where the slot actually skipped a parent.
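
   (Purely as a hypothetical illustration of that idea, and not a field
   that exists today, such a proof might look like

      struct skip_proof {
        ulong skipped_slot_cnt; /* how many parent slots were skipped */
        uchar hash[ 32 ];       /* hash(hash(...)) spanning the gap   */
      };

   attached to the final shred of the slot and verified only when
   skipped_slot_cnt is non-zero.)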
146 :
147 : Then what is the "clock"? In Solana, time is constructed as follows:
148 :
149 : HASHES
150 :
151 : The base unit of time is a hash. Hereafter, any values whose
152 : units are in hashes are called a "hashcnt" to distinguish them
153 : from actual hashed values.
154 :
155 : Agave generally defines a constant duration for each tick
156 : (see below) and then varies the number of hashcnt per tick, but
157 : as we consider the hashcnt the base unit of time, Firedancer and
158 : this PoH implementation define everything in terms of hashcnt
159 : duration instead.
160 :
161 : In mainnet-beta, testnet, and devnet the hashcnt ticks over
162 : (increments) every 100 nanoseconds. The hashcnt rate is
163 : specified as 500 nanoseconds according to the genesis, but there
164 : are several features which increase the number of hashes per
165 : tick while keeping tick duration constant, which make the time
166 : per hashcnt lower. These features up to and including the
167 : `update_hashes_per_tick6` feature are activated on mainnet-beta,
168 : devnet, and testnet, and are described in the TICKS section
169 : below.
170 :
171 : Other chains and development environments might have a different
172 : hashcnt rate in the genesis, or they might not have activated
173 : the features which increase the rate yet, which we also support.
174 :
175 : In practice, although each validator follows a hashcnt rate of
176 : 100 nanoseconds, the overall observed hashcnt rate of the
177 : network is a little slower than once every 100 nanoseconds,
178 : mostly because there are gaps and clock synchronization issues
179 : during handoff between leaders. This is referred to as clock
180 : drift.
181 :
182 : TICKS
183 :
184 : The leader needs to periodically checkpoint the hash value
185 : associated with a given hashcnt so that they can publish it to
186 : other nodes for verification.
187 :
188 : On mainnet-beta, testnet, and devnet this occurs once every
189 : 62,500 hashcnts, or approximately once every 6.25 milliseconds.
190 : This value is determined at genesis time (and by the features
191 : described below), and could be different in development
192 : environments or on other chains which we support.
193 :
194 : Due to protocol limitations, mixing transactions into the
195 : proof-of-history chain cannot occur on a tick boundary (but
196 : can occur at any other hashcnt).
197 :
198 : Ticks exist mainly so that verification can happen in parallel.
199 : A verifier computer, rather than needing to do hash(hash(...))
200 : all in sequence to verify a proof-of-history chain, can do,
201 :
202 : Core 0: hash(hash(...))
203 : Core 1: hash(hash(...))
204 : Core 2: hash(hash(...))
205 : Core 3: hash(hash(...))
206 : ...
207 :
208 : Between each pair of tick boundaries.
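
      (A minimal sketch of one core's work, assuming the checkpointed
      tick hashes tick_hash[ i ] and the hash count per tick are known,
      and ignoring mixins; this is illustrative only and not the
      verifier used in this codebase:

         uchar h[ 32 ];
         memcpy( h, tick_hash[ i ], 32UL );
         for( ulong j=0UL; j<hashcnt_per_tick; j++ ) fd_sha256_hash( h, 32UL, h );
         int ok = !memcmp( h, tick_hash[ i+1UL ], 32UL );

      Each core checks a disjoint range of ticks this way, which is what
      makes verification embarrassingly parallel.)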
209 :
210 : Solana sometimes calls the current tick the "tick height",
211 : although it makes more sense to think of it as a counter from
212 : zero: it's just the number of ticks since the genesis hash.
213 :
214 : There is a set of features which increase the number of hashcnts
215 : per tick. These are all deployed on mainnet-beta, devnet, and
216 : testnet.
217 :
218 : name: update_hashes_per_tick
219 : id: 3uFHb9oKdGfgZGJK9EHaAXN4USvnQtAFC13Fh5gGFS5B
220 : hashes per tick: 12,500
221 : hashcnt duration: 500 nanos
222 :
223 : name: update_hashes_per_tick2
224 : id: EWme9uFqfy1ikK1jhJs8fM5hxWnK336QJpbscNtizkTU
225 : hashes per tick: 17,500
226 : hashcnt duration: 357.142857143 nanos
227 :
228 : name: update_hashes_per_tick3
229 : id: 8C8MCtsab5SsfammbzvYz65HHauuUYdbY2DZ4sznH6h5
230 : hashes per tick: 27,500
231 : hashcnt duration: 227.272727273 nanos
232 :
233 : name: update_hashes_per_tick4
234 : id: 8We4E7DPwF2WfAN8tRTtWQNhi98B99Qpuj7JoZ3Aikgg
235 : hashes per tick: 47,500
236 : hashcnt duration: 131.578947368 nanos
237 :
238 : name: update_hashes_per_tick5
239 : id: BsKLKAn1WM4HVhPRDsjosmqSg2J8Tq5xP2s2daDS6Ni4
240 : hashes per tick: 57,500
241 : hashcnt duration: 108.695652174 nanos
242 :
243 : name: update_hashes_per_tick6
244 : id: FKu1qYwLQSiehz644H6Si65U5ZQ2cp9GxsyFUfYcuADv
245 : hashes per tick: 62,500
246 : hashcnt duration: 100 nanos
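
      (For the currently active value, the 100 nanosecond hashcnt
      duration quoted above falls directly out of the fixed tick
      duration:

         hashcnt duration = tick duration / hashes per tick
                          = 6.25 ms / 62,500
                          = 100 nanoseconds.)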
247 :
248 : In development environments, there is a way to configure the
249 : hashcnt per tick to be "none" during genesis, for a so-called
250 : "low power" tick producer. The idea is not to spin cores during
251 : development. This is equivalent to setting the hashcnt per tick
252 : to be 1, and increasing the hashcnt duration to the desired tick
253 : duration.
254 :
255 : SLOTS
256 :
257 : Each leader needs to be leader for a fixed amount of time, which
258 : is called a slot. During a slot, a leader has an opportunity to
259 : receive transactions and produce a block for the network,
260 : although they may miss ("skip") the slot if they are offline or
261 : not behaving.
262 :
263 : In mainnet-beta, testnet, and devnet a slot is 64 ticks, or
264 : 4,000,000 hashcnts, or approximately 400 milliseconds.
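
      (These numbers are consistent with the TICKS section above:

         64 ticks/slot * 62,500 hashcnts/tick = 4,000,000 hashcnts/slot
         4,000,000 hashcnts * 100 ns/hashcnt  = 400 milliseconds.)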
265 :
266 : Due to the way the leader schedule is constructed, each leader
267 : is always given at least four (4) consecutive slots in the
268 : schedule. This means when becoming leader you will be leader
269 : for at least 4 slots, or 1.6 seconds.
270 :
271 : It is rare, although it can happen, that a leader gets more than
272 : 4 consecutive slots (eg, 8 or 12), if they are lucky with the
273 : leader schedule generation.
274 :
275 : The number of ticks in a slot is fixed at genesis time, and
276 : could be different for development or other chains, which we
277 : support. There is nothing special about 4 leader slots in a
278 : row, and this might be changed in future, and the proof of
279 : history makes no assumptions that this is the case.
280 :
281 : EPOCHS
282 :
283 : Infrequently, the network needs to do certain housekeeping,
284 : mainly things like collecting rent and deciding on the leader
285 : schedule. The length of an epoch is fixed on mainnet-beta,
286 : devnet and testnet at 432,000 slots, or around 2 days.
287 : This value is fixed at genesis time, and could be different for
288 : other chains including development, which we support. Typically
289 : in development, epochs are every 8,192 slots, or around ~1 hour
290 : (54.61 minutes), although it depends on the number of ticks per
291 : slot and the target hashcnt rate of the genesis as well.
292 :
293 : In development, epochs need not be a fixed length either. There
294 : is a "warmup" option, where epochs start short and grow, which
295 : is useful for quickly warming up stake during development.
296 :
297 : The epoch is important because it is the only time the leader
298 : schedule is updated. The leader schedule is a list of which
299 : leader is leader for which slot, and is generated by a special
300 : algorithm that is deterministic and known to all nodes.
301 :
302 : The leader schedule is computed one epoch in advance, so that
303 : at slot T, we always know who will be leader up until the end
304 : of slot T+EPOCH_LENGTH. Specifically, the leader schedule for
305 : epoch N is computed during the epoch boundary crossing from
306 : N-2 to N-1. For mainnet-beta, the number of slots per epoch is
307 : fixed and will always be 432,000. */
308 :
309 : #include "../../disco/tiles.h"
310 : #include "../../disco/fd_txn_m.h"
311 : #include "../../disco/bundle/fd_bundle_crank.h"
312 : #include "../../disco/pack/fd_pack.h"
313 : #include "../../disco/pack/fd_pack_cost.h"
314 : #include "../../ballet/sha256/fd_sha256.h"
315 : #include "../../disco/metrics/fd_metrics.h"
316 : #include "../../util/pod/fd_pod.h"
317 : #include "../../disco/shred/fd_shredder.h"
318 : #include "../../disco/keyguard/fd_keyload.h"
319 : #include "../../disco/keyguard/fd_keyswitch.h"
320 : #include "../../disco/metrics/generated/fd_metrics_poh.h"
321 : #include "../../disco/plugin/fd_plugin.h"
322 : #include "../../flamenco/leaders/fd_multi_epoch_leaders.h"
323 :
324 : #include <string.h>
325 :
326 : /* The maximum number of microblocks that pack is allowed to pack into a
327 : single slot. This is not consensus critical, and pack could, if we
328 : let it, produce as many microblocks as it wants, and the slot would
329 : still be valid.
330 :
331 : We have this here instead so that PoH can estimate slot completion,
332 : and keep the hashcnt up to date as pack progresses through packing
333 : the slot. If this upper bound was not enforced, PoH could tick to
334 : the last hash of the slot and have no hashes left to mixin incoming
335 : microblocks from pack, so this upper bound is a coordination
336 : mechanism so that PoH can progress hashcnts while the slot is active,
337 : and know that pack will not need those hashcnts later to do mixins. */
338 0 : #define MAX_MICROBLOCKS_PER_SLOT (32768UL)
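
/* For intuition only (a rough sanity check, not a consensus rule): with
   the mainnet-beta clock described above, a slot has 64 ticks * 62,500
   hashes = 4,000,000 hashcnts, of which only the 64 tick boundaries are
   unavailable for mixins, so 32,768 microblocks is a small fraction of
   what the chain itself could accommodate. */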
339 :
340 : /* When we are hashing in the background in case a prior leader skips
341 : their slot, we need to store the result of each tick hash so we can
342 : publish them when we become leader. The network requires at least
343 : one leader slot to publish in each epoch for the leader schedule to
344 : generate, so in the worst case we might need two full epochs of slots
345 : to store the hashes. (Eg, if epoch T only had a published slot in
346 : position 0 and epoch T+1 only had a published slot right at the end).
347 :
348 : There is a tighter bound: the block data limit of mainnet-beta is
349 : currently FD_PACK_MAX_DATA_PER_BLOCK, or 27,332,342 bytes per slot.
350 : At 48 bytes per tick, it is not possible to publish a slot whose tick
351 : chain (including ticks for skipped prior slots) has 569,424 or more ticks. */
352 0 : #define MAX_SKIPPED_TICKS (1UL+(FD_PACK_MAX_DATA_PER_BLOCK/48UL))
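
/* Sanity check on the bound above (illustrative arithmetic only):
   27,332,342 bytes / 48 bytes per tick = 569,423 whole ticks, and the
   leading 1UL+ accounts for the partial remainder, giving 569,424. */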
353 :
354 0 : #define IN_KIND_BANK (0)
355 0 : #define IN_KIND_PACK (1)
356 0 : #define IN_KIND_STAKE (2)
357 :
358 :
359 : typedef struct {
360 : fd_wksp_t * mem;
361 : ulong chunk0;
362 : ulong wmark;
363 : } fd_poh_in_ctx_t;
364 :
365 : typedef struct {
366 : ulong idx;
367 : fd_wksp_t * mem;
368 : ulong chunk0;
369 : ulong wmark;
370 : ulong chunk;
371 : } fd_poh_out_ctx_t;
372 :
373 : typedef struct {
374 : fd_stem_context_t * stem;
375 :
376 : /* Static configuration determined at genesis creation time. See
377 : long comment above for more information. */
378 : ulong tick_duration_ns;
379 : ulong hashcnt_per_tick;
380 : ulong ticks_per_slot;
381 :
382 : /* Derived from the above configuration, but we precompute it. */
383 : double slot_duration_ns;
384 : double hashcnt_duration_ns;
385 : ulong hashcnt_per_slot;
386 : /* Constant, fixed at initialization. The maximum number of
387 : microblocks that the pack tile can publish in each slot. */
388 : ulong max_microblocks_per_slot;
389 :
390 : /* Consensus-critical slot cost limits. */
391 : struct {
392 : ulong slot_max_cost;
393 : ulong slot_max_vote_cost;
394 : ulong slot_max_write_cost_per_acct;
395 : } limits;
396 :
397 : /* The current slot and hashcnt within that slot of the proof of
398 : history, including hashes we have been producing in the background
399 : while waiting for our next leader slot. */
400 : ulong slot;
401 : ulong hashcnt;
402 : ulong cus_used;
403 :
404 : /* When we send a microblock on to the shred tile, we need to tell
405 : it how many hashes there have been since the last microblock, so
406 : this tracks the hashcnt of the last published microblock.
407 :
408 : If we are skipping slots prior to our leader slot, the last_slot
409 : will be quite old, and potentially much larger than the number of
410 : hashcnts in one slot. */
411 : ulong last_slot;
412 : ulong last_hashcnt;
413 :
414 : /* If we have published a tick or a microblock for a particular slot
415 : to the shred tile, we should never become leader for that slot
416 : again, otherwise we could publish a duplicate block.
417 :
418 : This value tracks the max slot that we have published a tick or
419 : microblock for so we can prevent this. */
420 : ulong highwater_leader_slot;
421 :
422 : /* See how this field is used below. If we have sequential leader
423 : slots, we don't reset the expected slot end time between the two,
424 : to prevent clock drift. If we didn't do this, our 2nd slot would
425 : end 400ms + `time_for_replay_to_move_slot_and_reset_poh` after
426 : our 1st, rather than just strictly 400ms. */
427 : int lagged_consecutive_leader_start;
428 : ulong expect_sequential_leader_slot;
429 :
430 : /* There's a race condition ... let's say two banks A and B, bank A
431 : processes some transactions, then releases the account locks, and
432 : sends the microblock to PoH to be stamped. Pack now re-packs the
433 : same accounts with a new microblock, sends to bank B, bank B
434 : executes and sends the microblock to PoH, and this all happens fast
435 : enough that PoH picks the 2nd microblock to stamp before the 1st. The
436 : accounts database changes now are misordered with respect to PoH so
437 : replay could fail.
438 :
439 : To prevent this race, we order all microblocks and only process
440 : them in PoH in the order they are produced by pack. This is a
441 : little bit over-strict, we just need to ensure that microblocks
442 : with conflicting accounts execute in order, but this is easiest to
443 : implement for now. */
444 : uint expect_pack_idx;
445 :
446 : /* Whether we have received the slot done message from pack yet. We are
447 : not allowed to fully finish hashing the block until this happens so
448 : that we know which slot the slot_done message is arriving for. */
449 : int slot_done;
450 :
451 : /* Pack and bank tiles need a reference to the bank object with a
452 : slightly different lifetime than current_leader_bank, particularly
453 : when we switch forks in the middle of a leader slot. We need to
454 : make sure we don't free the last reference to the bank while the
455 : pack or bank tiles are still using it. The strange thing is that
456 : bank tiles have no concept of the current slot, but we know they're
457 : done with the bank object when pack's inter-slot bank draining
458 : process is complete. Pack notifies PoH by a frag with
459 : sig==ULONG_MAX on the pack_poh link when the banks are drained, and
460 : the PoH tile must then free the reference on behalf of pack.
461 :
462 : pack_leader_bank is non-NULL when the reference we're holding on
463 : behalf of the pack tile is acquired, and NULL when it is not
464 : acquired. */
465 : void const * pack_leader_bank;
466 :
467 : /* The PoH tile must never drop microblocks that get committed by the
468 : bank, so it needs to always be able to mixin a microblock hash.
469 : Mixing in requires incrementing the hashcnt, so we need to ensure
470 : at all times that there are enough hashcnts left in the slot to
471 : mix in whatever future microblocks pack might produce for it.
472 :
473 : This value tracks that. At any time, max_microblocks_per_slot
474 : - microblocks_lower_bound is an upper bound on the maximum number
475 : of microblocks that might still be received in this slot. */
476 : ulong microblocks_lower_bound;
477 :
478 : uchar __attribute__((aligned(32UL))) reset_hash[ 32 ];
479 : uchar __attribute__((aligned(32UL))) hash[ 32 ];
480 :
481 : /* When we are not leader, we need to save the hashes that were
482 : produced in case the prior leader skips. If they skip, we will
483 : replay these skipped hashes into our next leader bank so that
484 : the slot hashes sysvar can be updated correctly, and also publish
485 : them to peer nodes as part of our outgoing shreds. */
486 : uchar skipped_tick_hashes[ MAX_SKIPPED_TICKS ][ 32 ];
487 :
488 : /* The timestamp in nanoseconds of when the reset slot was received.
489 : This is the timestamp we are building on top of to determine when
490 : our next leader slot starts. */
491 : long reset_slot_start_ns;
492 :
493 : /* The timestamp in nanoseconds of when we got the bank for the
494 : current leader slot. */
495 : long leader_bank_start_ns;
496 :
497 : /* The hashcnt corresponding to the start of the current reset slot. */
498 : ulong reset_slot;
499 :
500 : /* Our next leader slot, or ULONG_MAX if we have no known next
501 : leader slot. */
502 : ulong next_leader_slot;
503 :
504 : /* If an in-progress frag should be skipped. */
505 : int skip_frag;
506 :
507 : ulong max_active_descendant;
508 :
509 : /* If we currently are the leader according to the clock AND we have
510 : received the leader bank for the slot from the replay stage,
511 : this value will be non-NULL.
512 :
513 : Note that we might be inside our leader slot, but not have a bank
514 : yet, in which case this will still be NULL.
515 :
516 : It will be NULL for a brief race period between consecutive leader
517 : slots, as we ping-pong back to replay stage waiting for a new bank.
518 :
519 : Agave refers to this as the "working bank". */
520 : void const * current_leader_bank;
521 :
522 : fd_sha256_t * sha256;
523 :
524 : fd_multi_epoch_leaders_t * mleaders;
525 :
526 : /* The last sequence number of an outgoing fragment to the shred tile,
527 : or ULONG max if no such fragment. See fd_keyswitch.h for details
528 : of how this is used. */
529 : ulong shred_seq;
530 :
531 : int halted_switching_key;
532 :
533 : fd_keyswitch_t * keyswitch;
534 : fd_pubkey_t identity_key;
535 :
536 : /* We need a few pieces of information to compute the right addresses
537 : for bundle crank information that we need to send to pack. */
538 : struct {
539 : int enabled;
540 : fd_pubkey_t vote_account;
541 : fd_bundle_crank_gen_t gen[1];
542 : } bundle;
543 :
544 :
545 : /* The Agave client needs to be notified when the leader changes,
546 : so that they can resume the replay stage if it was suspended waiting. */
547 : void * signal_leader_change;
548 :
549 : /* These are temporarily set in during_frag so they can be used in
550 : after_frag once the frag has been validated as not overrun. */
551 : uchar _txns[ USHORT_MAX ];
552 : fd_microblock_trailer_t _microblock_trailer[ 1 ];
553 :
554 : int in_kind[ 64 ];
555 : fd_poh_in_ctx_t in[ 64 ];
556 :
557 : fd_poh_out_ctx_t shred_out[ 1 ];
558 : fd_poh_out_ctx_t pack_out[ 1 ];
559 : fd_poh_out_ctx_t plugin_out[ 1 ];
560 :
561 : fd_histf_t begin_leader_delay[ 1 ];
562 : fd_histf_t first_microblock_delay[ 1 ];
563 : fd_histf_t slot_done_delay[ 1 ];
564 : fd_histf_t bundle_init_delay[ 1 ];
565 :
566 : ulong features_activation_avail;
567 : fd_shred_features_activation_t features_activation[1];
568 :
569 : ulong parent_slot;
570 : uchar parent_block_id[ 32 ];
571 :
572 : uchar __attribute__((aligned(FD_MULTI_EPOCH_LEADERS_ALIGN))) mleaders_mem[ FD_MULTI_EPOCH_LEADERS_FOOTPRINT ];
573 : } fd_poh_ctx_t;
574 :
575 : /* The PoH recorder is implemented in Firedancer but for now needs to
576 : work with Agave, so we have a locking scheme for them to
577 : co-operate.
578 :
579 : This is because the PoH tile lives in the Agave memory address
580 : space and their version of concurrency is locking the PoH recorder
581 : and reading arbitrary fields.
582 :
583 : So we allow them to lock the PoH tile, although with a very bad (for
584 : them) locking scheme. By default, the tile has full and exclusive
585 : access to the data. If part of Agave wishes to read/write they
586 : can either,
587 :
588 : 1. Rewrite their concurrency to message passing based on mcache
589 : (preferred, but not feasible).
590 : 2. Signal to the tile they wish to acquire the lock, by setting
591 : fd_poh_waiting_lock to 1.
592 :
593 : During after_credit, the tile will check if the waiting lock is set
594 : to 1, and if so, set the returned lock to 1, indicating to the waiter
595 : that they may now proceed.
596 :
597 : When the waiter is done reading and writing, they restore the
598 : returned lock value back to zero, and the POH tile continues with its
599 : day. */
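
/* Purely as an illustration of the handshake described above (the real
   tile-side check lives in after_credit further down this file and may
   differ in detail), the tile's half of the protocol looks roughly
   like:

     if( FD_UNLIKELY( FD_VOLATILE_CONST( fd_poh_waiting_lock ) ) ) {
       FD_VOLATILE( fd_poh_returned_lock ) = 1UL;  // let the waiter proceed
       FD_COMPILER_MFENCE();
       while( FD_VOLATILE_CONST( fd_poh_returned_lock ) ) FD_SPIN_PAUSE();
       FD_COMPILER_MFENCE();
       FD_VOLATILE( fd_poh_waiting_lock ) = 0UL;   // ready for the next waiter
     }
*/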
600 :
601 : static fd_poh_ctx_t * fd_poh_global_ctx;
602 :
603 : static volatile ulong fd_poh_waiting_lock __attribute__((aligned(128UL)));
604 : static volatile ulong fd_poh_returned_lock __attribute__((aligned(128UL)));
605 :
606 : /* Agave also needs to write to some mcaches, so we trampoline
607 : that via the PoH tile as well. */
608 :
609 : struct poh_link {
610 : fd_frag_meta_t * mcache;
611 : ulong depth;
612 : ulong tx_seq;
613 :
614 : void * mem;
615 : void * dcache;
616 : ulong chunk0;
617 : ulong wmark;
618 : ulong chunk;
619 :
620 : ulong cr_avail;
621 : ulong rx_cnt;
622 : ulong * rx_fseqs[ 32UL ];
623 : };
624 :
625 : typedef struct poh_link poh_link_t;
626 :
627 : static poh_link_t gossip_dedup;
628 : static poh_link_t stake_out;
629 : static poh_link_t crds_shred;
630 : static poh_link_t replay_resolv;
631 : static poh_link_t executed_txn;
632 :
633 : static poh_link_t replay_plugin;
634 : static poh_link_t gossip_plugin;
635 : static poh_link_t start_progress_plugin;
636 : static poh_link_t vote_listener_plugin;
637 : static poh_link_t validator_info_plugin;
638 :
639 : static void
640 0 : poh_link_wait_credit( poh_link_t * link ) {
641 0 : if( FD_LIKELY( link->cr_avail ) ) return;
642 :
643 0 : while( 1 ) {
644 0 : ulong cr_query = ULONG_MAX;
645 0 : for( ulong i=0UL; i<link->rx_cnt; i++ ) {
646 0 : ulong const * _rx_seq = link->rx_fseqs[ i ];
647 0 : ulong rx_seq = FD_VOLATILE_CONST( *_rx_seq );
648 0 : ulong rx_cr_query = (ulong)fd_long_max( (long)link->depth - fd_long_max( fd_seq_diff( link->tx_seq, rx_seq ), 0L ), 0L );
649 0 : cr_query = fd_ulong_min( rx_cr_query, cr_query );
650 0 : }
651 0 : if( FD_LIKELY( cr_query>0UL ) ) {
652 0 : link->cr_avail = cr_query;
653 0 : break;
654 0 : }
655 0 : FD_SPIN_PAUSE();
656 0 : }
657 0 : }
658 :
659 : static void
660 : poh_link_publish( poh_link_t * link,
661 : ulong sig,
662 : uchar const * data,
663 0 : ulong data_sz ) {
664 0 : while( FD_UNLIKELY( !FD_VOLATILE_CONST( link->mcache ) ) ) FD_SPIN_PAUSE();
665 0 : if( FD_UNLIKELY( !link->mem ) ) return; /* link not enabled, don't publish */
666 0 : poh_link_wait_credit( link );
667 :
668 0 : uchar * dst = (uchar *)fd_chunk_to_laddr( link->mem, link->chunk );
669 0 : fd_memcpy( dst, data, data_sz );
670 0 : ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
671 0 : fd_mcache_publish( link->mcache, link->depth, link->tx_seq, sig, link->chunk, data_sz, 0UL, 0UL, tspub );
672 0 : link->chunk = fd_dcache_compact_next( link->chunk, data_sz, link->chunk0, link->wmark );
673 0 : link->cr_avail--;
674 0 : link->tx_seq++;
675 0 : }
676 :
677 : static void
678 : poh_link_init( poh_link_t * link,
679 : fd_topo_t * topo,
680 : fd_topo_tile_t * tile,
681 0 : ulong out_idx ) {
682 0 : fd_topo_link_t * topo_link = &topo->links[ tile->out_link_id[ out_idx ] ];
683 0 : fd_topo_wksp_t * wksp = &topo->workspaces[ topo->objs[ topo_link->dcache_obj_id ].wksp_id ];
684 :
685 0 : link->mem = wksp->wksp;
686 0 : link->depth = fd_mcache_depth( topo_link->mcache );
687 0 : link->tx_seq = 0UL;
688 0 : link->dcache = topo_link->dcache;
689 0 : link->chunk0 = fd_dcache_compact_chunk0( wksp->wksp, topo_link->dcache );
690 0 : link->wmark = fd_dcache_compact_wmark ( wksp->wksp, topo_link->dcache, topo_link->mtu );
691 0 : link->chunk = link->chunk0;
692 0 : link->cr_avail = 0UL;
693 0 : link->rx_cnt = 0UL;
694 0 : for( ulong i=0UL; i<topo->tile_cnt; i++ ) {
695 0 : fd_topo_tile_t * _tile = &topo->tiles[ i ];
696 0 : for( ulong j=0UL; j<_tile->in_cnt; j++ ) {
697 0 : if( _tile->in_link_id[ j ]==topo_link->id && _tile->in_link_reliable[ j ] ) {
698 0 : FD_TEST( link->rx_cnt<32UL );
699 0 : link->rx_fseqs[ link->rx_cnt++ ] = _tile->in_link_fseq[ j ];
700 0 : break;
701 0 : }
702 0 : }
703 0 : }
704 0 : FD_COMPILER_MFENCE();
705 0 : link->mcache = topo_link->mcache;
706 0 : FD_COMPILER_MFENCE();
707 0 : FD_TEST( link->mcache );
708 0 : }
709 :
710 : /* To help show correctness, functions that might be called from
711 : Rust, either directly or indirectly, have this fake "attribute"
712 : CALLED_FROM_RUST, which is actually nothing. Calls from Rust
713 : typically execute on threads that did not call fd_boot, so they do not
714 : have the typical FD_TL variables. In particular, they cannot use
715 : normal metrics, and their log messages don't have full context.
716 : Additionally, C functions marked CALLED_FROM_RUST cannot call back
717 : into a Rust fd_ext function without causing a deadlock (although the
718 : other fd_ext functions have a similar problem).
719 :
720 : To prevent annotation from polluting the whole codebase, calls to
721 : functions outside this file are manually checked and marked as being
722 : safe at each call rather than annotated. */
723 : #define CALLED_FROM_RUST
724 :
725 : static CALLED_FROM_RUST fd_poh_ctx_t *
726 0 : fd_ext_poh_write_lock( void ) {
727 0 : for(;;) {
728 : /* Acquire the waiter lock to make sure we are the first writer in the queue. */
729 0 : if( FD_LIKELY( !FD_ATOMIC_CAS( &fd_poh_waiting_lock, 0UL, 1UL) ) ) break;
730 0 : FD_SPIN_PAUSE();
731 0 : }
732 0 : FD_COMPILER_MFENCE();
733 0 : for(;;) {
734 : /* Now wait for the tile to tell us we can proceed. */
735 0 : if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
736 0 : FD_SPIN_PAUSE();
737 0 : }
738 0 : FD_COMPILER_MFENCE();
739 0 : return fd_poh_global_ctx;
740 0 : }
741 :
742 : static CALLED_FROM_RUST void
743 0 : fd_ext_poh_write_unlock( void ) {
744 0 : FD_COMPILER_MFENCE();
745 0 : FD_VOLATILE( fd_poh_returned_lock ) = 0UL;
746 0 : }
747 :
748 : /* The PoH tile needs to interact with the Agave address space to
749 : do certain operations that Firedancer hasn't reimplemented yet, a.k.a.
750 : transaction execution. We have Agave export some wrapper
751 : functions that we call into during regular tile execution. These do
752 : not need any locking, since they are called serially from the single
753 : PoH tile. */
754 :
755 : extern CALLED_FROM_RUST void fd_ext_bank_acquire( void const * bank );
756 : extern CALLED_FROM_RUST void fd_ext_bank_release( void const * bank );
757 : extern CALLED_FROM_RUST void fd_ext_poh_signal_leader_change( void * sender );
758 : extern void fd_ext_poh_register_tick( void const * bank, uchar const * hash );
759 :
760 : /* fd_ext_poh_initialize is called by Agave on startup to
761 : initialize the PoH tile with some static configuration, and the
762 : initial reset slot and hash which it retrieves from a snapshot.
763 :
764 : This function is called by some random Agave thread, but
765 : it blocks booting of the PoH tile. The tile will spin until it
766 : determines that this initialization has happened.
767 :
768 : signal_leader_change is an opaque Rust object that is used to
769 : tell the replay stage that the leader has changed. It is a
770 : Box::into_raw(Arc::increment_strong(crossbeam::Sender)), so it
771 : has infinite lifetime unless this C code releases the refcnt.
772 :
773 : It can be used with `fd_ext_poh_signal_leader_change` which
774 : will just issue a nonblocking send on the channel. */
775 :
776 : CALLED_FROM_RUST void
777 : fd_ext_poh_initialize( ulong tick_duration_ns, /* See clock comments above, will be 6.25 milliseconds (6,250,000 ns) for mainnet-beta. */
778 : ulong hashcnt_per_tick, /* See clock comments above, will be 62,500 for mainnet-beta. */
779 : ulong ticks_per_slot, /* See clock comments above, will almost always be 64. */
780 : ulong tick_height, /* The counter (height) of the tick to start hashing on top of. */
781 : uchar const * last_entry_hash, /* Points to start of a 32 byte region of memory, the hash itself at the tick height. */
782 0 : void * signal_leader_change /* See comment above. */ ) {
783 0 : FD_COMPILER_MFENCE();
784 0 : for(;;) {
785 : /* Make sure the ctx is initialized before trying to take the lock. */
786 0 : if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_global_ctx ) ) ) break;
787 0 : FD_SPIN_PAUSE();
788 0 : }
789 0 : fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
790 :
791 0 : ctx->slot = tick_height/ticks_per_slot;
792 0 : ctx->hashcnt = 0UL;
793 0 : ctx->cus_used = 0UL;
794 0 : ctx->last_slot = ctx->slot;
795 0 : ctx->last_hashcnt = 0UL;
796 0 : ctx->reset_slot = ctx->slot;
797 0 : ctx->reset_slot_start_ns = fd_log_wallclock(); /* safe to call from Rust */
798 :
799 0 : memcpy( ctx->reset_hash, last_entry_hash, 32UL );
800 0 : memcpy( ctx->hash, last_entry_hash, 32UL );
801 :
802 0 : ctx->signal_leader_change = signal_leader_change;
803 :
804 : /* Static configuration about the clock. */
805 0 : ctx->tick_duration_ns = tick_duration_ns;
806 0 : ctx->hashcnt_per_tick = hashcnt_per_tick;
807 0 : ctx->ticks_per_slot = ticks_per_slot;
808 :
809 : /* Recompute derived information about the clock. */
810 0 : ctx->slot_duration_ns = (double)ticks_per_slot*(double)tick_duration_ns;
811 0 : ctx->hashcnt_duration_ns = (double)tick_duration_ns/(double)hashcnt_per_tick;
812 0 : ctx->hashcnt_per_slot = ticks_per_slot*hashcnt_per_tick;
813 :
814 0 : if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
815 : /* Low power producer, maximum of one microblock per tick in the slot */
816 0 : ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
817 0 : } else {
818 : /* See the long comment in after_credit for this limit */
819 0 : ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
820 0 : }
821 :
822 0 : fd_ext_poh_write_unlock();
823 0 : }
824 :
825 : /* fd_ext_poh_acquire_bank gets the current leader bank if there is one
826 : currently active. PoH might think we are leader without having a
827 : leader bank if the replay stage has not yet noticed we are leader.
828 :
829 : The bank that is returned is owned by the caller, and must be converted
830 : to an Arc<Bank> by calling Arc::from_raw() on it. PoH increments the
831 : reference count before returning the bank, so that it can also keep
832 : its internal copy.
833 :
834 : If there is no leader bank, NULL is returned. In this case, the
835 : caller should not call `Arc::from_raw()`. */
836 :
837 : CALLED_FROM_RUST void const *
838 0 : fd_ext_poh_acquire_leader_bank( void ) {
839 0 : fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
840 0 : void const * bank = NULL;
841 0 : if( FD_LIKELY( ctx->current_leader_bank ) ) {
842 : /* Clone refcount before we release the lock. */
843 0 : fd_ext_bank_acquire( ctx->current_leader_bank );
844 0 : bank = ctx->current_leader_bank;
845 0 : }
846 0 : fd_ext_poh_write_unlock();
847 0 : return bank;
848 0 : }
849 :
850 : /* fd_ext_poh_reset_slot returns the slot height one above the last good
851 : (unskipped) slot we are building on top of. This is always a good
852 : known value, and will not be ULONG_MAX. */
853 :
854 : CALLED_FROM_RUST ulong
855 0 : fd_ext_poh_reset_slot( void ) {
856 0 : fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
857 0 : ulong reset_slot = ctx->reset_slot;
858 0 : fd_ext_poh_write_unlock();
859 0 : return reset_slot;
860 0 : }
861 :
862 : CALLED_FROM_RUST void
863 0 : fd_ext_poh_update_active_descendant( ulong max_active_descendant ) {
864 0 : fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
865 0 : ctx->max_active_descendant = max_active_descendant;
866 0 : fd_ext_poh_write_unlock();
867 0 : }
868 :
869 : /* fd_ext_poh_reached_leader_slot returns 1 if we have reached a slot
870 : where we are leader. This is used by the replay stage to determine
871 : if it should create a new leader bank descendant of the prior reset
872 : slot block.
873 :
874 : Sometimes, even when we reach our slot we do not return 1, as we are
875 : giving a grace period to the prior leader to finish publishing their
876 : block.
877 :
878 : out_leader_slot is the slot height of the leader slot we reached, and
879 : reset_slot is the slot height of the last good (unskipped) slot we
880 : are building on top of. */
881 :
882 : CALLED_FROM_RUST int
883 : fd_ext_poh_reached_leader_slot( ulong * out_leader_slot,
884 0 : ulong * out_reset_slot ) {
885 0 : fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
886 :
887 0 : *out_leader_slot = ctx->next_leader_slot;
888 0 : *out_reset_slot = ctx->reset_slot;
889 :
890 0 : if( FD_UNLIKELY( ctx->next_leader_slot==ULONG_MAX ||
891 0 : ctx->slot<ctx->next_leader_slot ) ) {
892 : /* Didn't reach our leader slot yet. */
893 0 : fd_ext_poh_write_unlock();
894 0 : return 0;
895 0 : }
896 :
897 0 : if( FD_UNLIKELY( ctx->halted_switching_key ) ) {
898 : /* Reached our leader slot, but the leader pipeline is halted
899 : because we are switching identity key. */
900 0 : fd_ext_poh_write_unlock();
901 0 : return 0;
902 0 : }
903 :
904 0 : if( FD_LIKELY( ctx->reset_slot==ctx->next_leader_slot ) ) {
905 : /* We were reset onto our leader slot, because the prior leader
906 : completed theirs, so we should start immediately, no need for a
907 : grace period. */
908 0 : fd_ext_poh_write_unlock();
909 0 : return 1;
910 0 : }
911 :
912 0 : long now_ns = fd_log_wallclock();
913 0 : long expected_start_time_ns = ctx->reset_slot_start_ns + (long)((double)(ctx->next_leader_slot-ctx->reset_slot)*ctx->slot_duration_ns);
914 :
915 : /* If a prior leader is still in the process of publishing their slot,
916 : delay ours to let them finish ... unless they are so delayed that
917 : we risk getting skipped by the leader following us. 1.2 seconds
918 : is a reasonable default here, although any value between 0 and 1.6
919 : seconds could be considered reasonable. This is arbitrary and
920 : chosen due to intuition. */
921 :
922 0 : if( FD_UNLIKELY( now_ns<expected_start_time_ns+(long)(3.0*ctx->slot_duration_ns) ) ) {
923 : /* If the max_active_descendant is >= next_leader_slot, we waited
924 : too long and a leader after us started publishing to try and skip
925 : us. Just start our leader slot immediately, we might win ... */
926 :
927 0 : if( FD_LIKELY( ctx->max_active_descendant>=ctx->reset_slot && ctx->max_active_descendant<ctx->next_leader_slot ) ) {
928 : /* If one of the leaders between the reset slot and our leader
929 : slot is in the process of publishing (they have a descendant
930 : bank that is in progress of being replayed), then keep waiting.
931 : We probably wouldn't get a leader slot out before they
932 : finished.
933 :
934 : Unless... we are past the deadline to start our slot by more
935 : than 1.2 seconds, in which case we should probably start it to
936 : avoid getting skipped by the leader behind us. */
937 0 : fd_ext_poh_write_unlock();
938 0 : return 0;
939 0 : }
940 0 : }
941 :
942 0 : fd_ext_poh_write_unlock();
943 0 : return 1;
944 0 : }
945 :
946 : CALLED_FROM_RUST static inline void
947 : publish_plugin_slot_start( fd_poh_ctx_t * ctx,
948 : ulong slot,
949 0 : ulong parent_slot ) {
950 0 : if( FD_UNLIKELY( !ctx->plugin_out->mem ) ) return;
951 :
952 0 : fd_plugin_msg_slot_start_t * slot_start = (fd_plugin_msg_slot_start_t *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
953 0 : *slot_start = (fd_plugin_msg_slot_start_t){ .slot = slot, .parent_slot = parent_slot };
954 0 : fd_stem_publish( ctx->stem, ctx->plugin_out->idx, FD_PLUGIN_MSG_SLOT_START, ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_start_t), 0UL, 0UL, 0UL );
955 0 : ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_start_t), ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
956 0 : }
957 :
958 : CALLED_FROM_RUST static inline void
959 : publish_plugin_slot_end( fd_poh_ctx_t * ctx,
960 : ulong slot,
961 0 : ulong cus_used ) {
962 0 : if( FD_UNLIKELY( !ctx->plugin_out->mem ) ) return;
963 :
964 0 : fd_plugin_msg_slot_end_t * slot_end = (fd_plugin_msg_slot_end_t *)fd_chunk_to_laddr( ctx->plugin_out->mem, ctx->plugin_out->chunk );
965 0 : *slot_end = (fd_plugin_msg_slot_end_t){ .slot = slot, .cus_used = cus_used };
966 0 : fd_stem_publish( ctx->stem, ctx->plugin_out->idx, FD_PLUGIN_MSG_SLOT_END, ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_end_t), 0UL, 0UL, 0UL );
967 0 : ctx->plugin_out->chunk = fd_dcache_compact_next( ctx->plugin_out->chunk, sizeof(fd_plugin_msg_slot_end_t), ctx->plugin_out->chunk0, ctx->plugin_out->wmark );
968 0 : }
969 :
970 : extern int
971 : fd_ext_bank_load_account( void const * bank,
972 : int fixed_root,
973 : uchar const * addr,
974 : uchar * owner,
975 : uchar * data,
976 : ulong * data_sz );
977 :
978 : CALLED_FROM_RUST static void
979 : publish_became_leader( fd_poh_ctx_t * ctx,
980 : ulong slot,
981 0 : ulong epoch ) {
982 0 : double tick_per_ns = fd_tempo_tick_per_ns( NULL );
983 0 : fd_histf_sample( ctx->begin_leader_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
984 :
985 0 : if( FD_UNLIKELY( ctx->lagged_consecutive_leader_start ) ) {
986 : /* If we are mirroring Agave behavior, the wall clock gets reset
987 : here so we don't count time spent waiting for a bank to freeze
988 : or replay stage to actually start the slot towards our 400ms.
989 :
990 : See extended comments in the config file on this option. */
991 0 : ctx->reset_slot_start_ns = fd_log_wallclock() - (long)((double)(slot-ctx->reset_slot)*ctx->slot_duration_ns);
992 0 : }
993 :
994 0 : fd_bundle_crank_tip_payment_config_t config[1] = { 0 };
995 0 : fd_acct_addr_t tip_receiver_owner[1] = { 0 };
996 :
997 0 : if( FD_UNLIKELY( ctx->bundle.enabled ) ) {
998 0 : long bundle_time = -fd_tickcount();
999 0 : fd_acct_addr_t tip_payment_config[1];
1000 0 : fd_acct_addr_t tip_receiver[1];
1001 0 : fd_bundle_crank_get_addresses( ctx->bundle.gen, epoch, tip_payment_config, tip_receiver );
1002 :
1003 0 : fd_acct_addr_t _dummy[1];
1004 0 : uchar dummy[1];
1005 :
1006 0 : void const * bank = ctx->current_leader_bank;
1007 :
1008 : /* Calling rust from a C function that is CALLED_FROM_RUST risks
1009 : deadlock. In this case, I checked the load_account function and
1010 : ensured it never calls any C functions that acquire the lock. */
1011 0 : ulong sz1 = sizeof(config), sz2 = 1UL;
1012 0 : int found1 = fd_ext_bank_load_account( bank, 0, tip_payment_config->b, _dummy->b, (uchar *)config, &sz1 );
1013 0 : int found2 = fd_ext_bank_load_account( bank, 0, tip_receiver->b, tip_receiver_owner->b, dummy, &sz2 );
1014 : /* The bundle crank code detects whether the accounts were found by
1015 : whether they have non-zero values (since found and uninitialized
1016 : should be treated the same), so we actually don't really care
1017 : about the value of found{1,2}. */
1018 0 : (void)found1; (void)found2;
1019 0 : bundle_time += fd_tickcount();
1020 0 : fd_histf_sample( ctx->bundle_init_delay, (ulong)bundle_time );
1021 0 : }
1022 :
1023 0 : long slot_start_ns = ctx->reset_slot_start_ns + (long)((double)(slot-ctx->reset_slot)*ctx->slot_duration_ns);
1024 :
1025 : /* No need to check flow control: there are always credits when we become
1026 : leader, and we will not "become" leader again until we are done, so at
1027 : most one frag in flight at a time. */
1028 :
1029 0 : uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->pack_out->mem, ctx->pack_out->chunk );
1030 :
1031 0 : fd_became_leader_t * leader = (fd_became_leader_t *)dst;
1032 0 : leader->slot_start_ns = slot_start_ns;
1033 0 : leader->slot_end_ns = (long)((double)slot_start_ns + ctx->slot_duration_ns);
1034 0 : leader->bank = ctx->current_leader_bank;
1035 0 : leader->max_microblocks_in_slot = ctx->max_microblocks_per_slot;
1036 0 : leader->ticks_per_slot = ctx->ticks_per_slot;
1037 0 : leader->total_skipped_ticks = ctx->ticks_per_slot*(slot-ctx->reset_slot);
1038 0 : leader->epoch = epoch;
1039 0 : leader->bundle->config[0] = config[0];
1040 :
1041 0 : leader->limits.slot_max_cost = ctx->limits.slot_max_cost;
1042 0 : leader->limits.slot_max_vote_cost = ctx->limits.slot_max_vote_cost;
1043 0 : leader->limits.slot_max_write_cost_per_acct = ctx->limits.slot_max_write_cost_per_acct;
1044 :
1045 0 : memcpy( leader->bundle->last_blockhash, ctx->reset_hash, 32UL );
1046 0 : memcpy( leader->bundle->tip_receiver_owner, tip_receiver_owner, 32UL );
1047 :
1048 0 : if( FD_UNLIKELY( leader->ticks_per_slot+leader->total_skipped_ticks>=MAX_SKIPPED_TICKS ) )
1049 0 : FD_LOG_ERR(( "Too many skipped ticks %lu for slot %lu, chain must halt", leader->ticks_per_slot+leader->total_skipped_ticks, slot ));
1050 :
1051 0 : ulong sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_BECAME_LEADER, 0UL );
1052 0 : fd_stem_publish( ctx->stem, ctx->pack_out->idx, sig, ctx->pack_out->chunk, sizeof(fd_became_leader_t), 0UL, 0UL, fd_frag_meta_ts_comp( fd_tickcount() ) );
1053 0 : ctx->pack_out->chunk = fd_dcache_compact_next( ctx->pack_out->chunk, sizeof(fd_became_leader_t), ctx->pack_out->chunk0, ctx->pack_out->wmark );
1054 :
1055 : /* increment refcount for pack's reference to the current leader bank */
1056 0 : if( FD_UNLIKELY( ctx->current_leader_bank ) ) {
1057 0 : ctx->pack_leader_bank = ctx->current_leader_bank;
1058 0 : fd_ext_bank_acquire( ctx->pack_leader_bank );
1059 0 : }
1060 0 : }
1061 :
1062 : /* The PoH tile knows when it should become leader by waiting for its
1063 : leader slot (with the operating system clock). This function exists so
1064 : that when it becomes the leader, it can be told what the leader bank
1065 : is by the replay stage. See the notes in the long comment above for
1066 : more on how this works. */
1067 :
1068 : CALLED_FROM_RUST void
1069 : fd_ext_poh_begin_leader( void const * bank,
1070 : ulong slot,
1071 : ulong epoch,
1072 : ulong hashcnt_per_tick,
1073 : ulong cus_block_limit,
1074 : ulong cus_vote_cost_limit,
1075 0 : ulong cus_account_cost_limit ) {
1076 0 : fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
1077 :
1078 0 : FD_TEST( !ctx->current_leader_bank );
1079 :
1080 0 : if( FD_UNLIKELY( slot!=ctx->slot ) ) FD_LOG_ERR(( "Trying to begin leader slot %lu but we are now on slot %lu", slot, ctx->slot ));
1081 0 : if( FD_UNLIKELY( slot!=ctx->next_leader_slot ) ) FD_LOG_ERR(( "Trying to begin leader slot %lu but next leader slot is %lu", slot, ctx->next_leader_slot ));
1082 :
1083 0 : if( FD_UNLIKELY( ctx->hashcnt_per_tick!=hashcnt_per_tick ) ) {
1084 0 : FD_LOG_WARNING(( "hashes per tick changed from %lu to %lu", ctx->hashcnt_per_tick, hashcnt_per_tick ));
1085 :
1086 : /* Recompute derived information about the clock. */
1087 0 : ctx->hashcnt_duration_ns = (double)ctx->tick_duration_ns/(double)hashcnt_per_tick;
1088 0 : ctx->hashcnt_per_slot = ctx->ticks_per_slot*hashcnt_per_tick;
1089 0 : ctx->hashcnt_per_tick = hashcnt_per_tick;
1090 :
1091 0 : if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
1092 : /* Low power producer, maximum of one microblock per tick in the slot */
1093 0 : ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
1094 0 : } else {
1095 : /* See the long comment in after_credit for this limit */
1096 0 : ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
1097 0 : }
1098 :
1099 : /* Discard any ticks we might have done in the interim. They will
1100 : have the wrong number of hashes per tick. We can just catch back
1101 : up quickly if not too many slots were skipped and hopefully
1102 : publish on time. Note that tick production and verification of
1103 : skipped slots is done for the eventual bank that publishes a
1104 : slot, for example:
1105 :
1106 : Reset Slot: 998
1107 : Epoch Transition Slot: 1000
1108 : Leader Slot: 1002
1109 :
1110 : In this case, if a feature changing the hashcnt_per_tick is
1111 : activated in slot 1000, and we are publishing empty ticks for
1112 : slots 998, 999, 1000, and 1001, they should all have the new
1113 : hashes_per_tick number of hashes, rather than the older one, or
1114 : some combination. */
1115 :
1116 0 : FD_TEST( ctx->last_slot==ctx->reset_slot );
1117 0 : FD_TEST( !ctx->last_hashcnt );
1118 0 : ctx->slot = ctx->reset_slot;
1119 0 : ctx->hashcnt = 0UL;
1120 0 : }
1121 :
1122 0 : ctx->current_leader_bank = bank;
1123 0 : ctx->slot_done = 0;
1124 0 : ctx->microblocks_lower_bound = 0UL;
1125 0 : ctx->cus_used = 0UL;
1126 :
1127 0 : ctx->limits.slot_max_cost = cus_block_limit;
1128 0 : ctx->limits.slot_max_vote_cost = cus_vote_cost_limit;
1129 0 : ctx->limits.slot_max_write_cost_per_acct = cus_account_cost_limit;
1130 :
1131 : /* clamp and warn if we are underutilizing CUs */
1132 0 : if( FD_UNLIKELY( ctx->limits.slot_max_cost > FD_PACK_MAX_COST_PER_BLOCK_UPPER_BOUND ) ) {
1133 0 : FD_LOG_WARNING(( "Underutilizing protocol slot CU limit. protocol_limit=%lu validator_limit=%lu", ctx->limits.slot_max_cost, FD_PACK_MAX_COST_PER_BLOCK_UPPER_BOUND ));
1134 0 : ctx->limits.slot_max_cost = FD_PACK_MAX_COST_PER_BLOCK_UPPER_BOUND;
1135 0 : }
1136 0 : if( FD_UNLIKELY( ctx->limits.slot_max_vote_cost > FD_PACK_MAX_VOTE_COST_PER_BLOCK_UPPER_BOUND ) ) {
1137 0 : FD_LOG_WARNING(( "Underutilizing protocol vote CU limit. protocol_limit=%lu validator_limit=%lu", ctx->limits.slot_max_vote_cost, FD_PACK_MAX_VOTE_COST_PER_BLOCK_UPPER_BOUND ));
1138 0 : ctx->limits.slot_max_vote_cost = FD_PACK_MAX_VOTE_COST_PER_BLOCK_UPPER_BOUND;
1139 0 : }
1140 0 : if( FD_UNLIKELY( ctx->limits.slot_max_write_cost_per_acct > FD_PACK_MAX_WRITE_COST_PER_ACCT_UPPER_BOUND ) ) {
1141 0 : FD_LOG_WARNING(( "Underutilizing protocol write CU limit. protocol_limit=%lu validator_limit=%lu", ctx->limits.slot_max_write_cost_per_acct, FD_PACK_MAX_WRITE_COST_PER_ACCT_UPPER_BOUND ));
1142 0 : ctx->limits.slot_max_write_cost_per_acct = FD_PACK_MAX_WRITE_COST_PER_ACCT_UPPER_BOUND;
1143 0 : }
1144 :
1145 : /* We are about to start publishing to the shred tile for this slot
1146 : so update the highwater mark so we never republish in this slot
1147 : again. Also check that the leader slot is greater than the
1148 : highwater, which should have been ensured earlier. */
1149 :
1150 0 : FD_TEST( ctx->highwater_leader_slot==ULONG_MAX || slot>=ctx->highwater_leader_slot );
1151 0 : ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), slot );
1152 :
1153 0 : publish_became_leader( ctx, slot, epoch );
1154 0 : FD_LOG_INFO(( "fd_ext_poh_begin_leader(slot=%lu, highwater_leader_slot=%lu, last_slot=%lu, last_hashcnt=%lu)", slot, ctx->highwater_leader_slot, ctx->last_slot, ctx->last_hashcnt ));
1155 :
1156 0 : fd_ext_poh_write_unlock();
1157 0 : }
1158 :
1159 : /* Determine the next slot in the leader schedule in which we are
1160 : leader. Includes the current slot. If we are not leader in what
1161 : remains of the current and next epoch, return ULONG_MAX. */
1162 :
1163 : static inline CALLED_FROM_RUST ulong
1164 0 : next_leader_slot( fd_poh_ctx_t * ctx ) {
1165 : /* If we have published anything in a particular slot, then we
1166 : should never become leader for that slot again. */
1167 0 : ulong min_leader_slot = fd_ulong_max( ctx->slot, fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ) );
1168 0 : return fd_multi_epoch_leaders_get_next_slot( ctx->mleaders, min_leader_slot, &ctx->identity_key );
1169 0 : }
1170 :
1171 : extern int
1172 : fd_ext_admin_rpc_set_identity( uchar const * identity_keypair,
1173 : int require_tower );
1174 :
1175 : static inline int FD_FN_SENSITIVE
1176 : maybe_change_identity( fd_poh_ctx_t * ctx,
1177 0 : int definitely_not_leader ) {
1178 0 : if( FD_UNLIKELY( ctx->halted_switching_key && fd_keyswitch_state_query( ctx->keyswitch )==FD_KEYSWITCH_STATE_UNHALT_PENDING ) ) {
1179 0 : ctx->halted_switching_key = 0;
1180 0 : fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_COMPLETED );
1181 0 : return 1;
1182 0 : }
1183 :
1184 : /* Cannot change identity while in the middle of a leader slot, else
1185 : poh state machine would become corrupt. */
1186 :
1187 0 : int is_leader = !definitely_not_leader && ctx->next_leader_slot!=ULONG_MAX && ctx->slot>=ctx->next_leader_slot;
1188 0 : if( FD_UNLIKELY( is_leader ) ) return 0;
1189 :
1190 0 : if( FD_UNLIKELY( fd_keyswitch_state_query( ctx->keyswitch )==FD_KEYSWITCH_STATE_SWITCH_PENDING ) ) {
1191 0 : int failed = fd_ext_admin_rpc_set_identity( ctx->keyswitch->bytes, fd_keyswitch_param_query( ctx->keyswitch )==1 );
1192 0 : explicit_bzero( ctx->keyswitch->bytes, 32UL );
1193 0 : FD_COMPILER_MFENCE();
1194 0 : if( FD_UNLIKELY( failed==-1 ) ) {
1195 0 : fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_FAILED );
1196 0 : return 0;
1197 0 : }
1198 :
1199 0 : memcpy( ctx->identity_key.uc, ctx->keyswitch->bytes+32UL, 32UL );
1200 :
1201 : /* When we switch key, we might have ticked part way through a slot
1202 : that we are now leader in. This violates the contract of the
1203 : tile, that when we become leader, we have not ticked in that slot
1204 : at all. To see why this would be bad, consider the case where we
1205 : have ticked almost to the end, and there isn't enough space left
1206 : to reserve the minimum amount of microblocks needed by pack.
1207 :
1208 : To resolve this, we just reset PoH back to the reset slot, and
1209 : let it try to catch back up quickly. This is OK since the network
1210 : rarely skips. */
1211 0 : ctx->slot = ctx->reset_slot;
1212 0 : ctx->hashcnt = 0UL;
1213 0 : memcpy( ctx->hash, ctx->reset_hash, 32UL );
1214 :
1215 0 : ctx->halted_switching_key = 1;
1216 0 : ctx->keyswitch->result = ctx->shred_seq;
1217 0 : fd_keyswitch_state( ctx->keyswitch, FD_KEYSWITCH_STATE_COMPLETED );
1218 0 : }
1219 :
1220 0 : return 0;
1221 0 : }
1222 :
1223 : static CALLED_FROM_RUST void
1224 0 : no_longer_leader( fd_poh_ctx_t * ctx ) {
1225 0 : if( FD_UNLIKELY( ctx->current_leader_bank ) ) fd_ext_bank_release( ctx->current_leader_bank );
1226 : /* If we stop being leader in a slot, we can never become leader in
1227 : that slot again, and all in-flight microblocks for that slot
1228 : should be dropped. */
1229 0 : ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), ctx->slot );
1230 0 : ctx->current_leader_bank = NULL;
1231 0 : int identity_changed = maybe_change_identity( ctx, 1 );
1232 0 : ctx->next_leader_slot = next_leader_slot( ctx );
1233 0 : if( FD_UNLIKELY( identity_changed ) ) {
1234 0 : FD_LOG_INFO(( "fd_poh_identity_changed(next_leader_slot=%lu)", ctx->next_leader_slot ));
1235 0 : }
1236 :
1237 0 : FD_COMPILER_MFENCE();
1238 0 : fd_ext_poh_signal_leader_change( ctx->signal_leader_change );
1239 0 : FD_LOG_INFO(( "no_longer_leader(next_leader_slot=%lu)", ctx->next_leader_slot ));
1240 0 : }
1241 :
1242 : /* fd_ext_poh_reset is called by the Agave client when a slot on
1243 : the active fork has finished a block and we need to reset our PoH to
1244 : be ticking on top of the block it produced. */
1245 :
1246 : CALLED_FROM_RUST void
1247 : fd_ext_poh_reset( ulong completed_bank_slot, /* The slot that successfully produced a block */
1248 : uchar const * reset_blockhash, /* The hash of the last tick in the produced block */
1249 : ulong hashcnt_per_tick, /* The hashcnt per tick of the bank that completed */
1250 : uchar const * parent_block_id, /* The block id of the parent block */
1251 0 : ulong const * features_activation /* The activation slot of shred-tile features */ ) {
1252 0 : fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
1253 :
1254 0 : ulong slot_before_reset = ctx->slot;
1255 0 : int leader_before_reset = ctx->slot>=ctx->next_leader_slot;
1256 0 : if( FD_UNLIKELY( leader_before_reset && ctx->current_leader_bank ) ) {
1257 : /* If we were in the middle of a leader slot that we notified pack
1258 : to start packing for, we can never publish into that slot
1259 : again; mark all in-flight microblocks to be dropped. */
1260 0 : ctx->highwater_leader_slot = fd_ulong_max( fd_ulong_if( ctx->highwater_leader_slot==ULONG_MAX, 0UL, ctx->highwater_leader_slot ), 1UL+ctx->slot );
1261 0 : }
1262 :
1263 0 : ctx->leader_bank_start_ns = fd_log_wallclock(); /* safe to call from Rust */
1264 0 : if( FD_UNLIKELY( ctx->expect_sequential_leader_slot==(completed_bank_slot+1UL) ) ) {
1265 : /* If we are being reset onto a slot, it means some block was fully
1266 : processed, so we reset to build on top of it. Typically we want
1267 : to update the reset_slot_start_ns to the current time, because
1268 : the network will give the next leader 400ms to publish,
1269 : regardless of how long the prior leader took.
1270 :
1271 : But: if we were leader in the prior slot, and the block was our
1272 : own, we can do better. We know that the next slot should start
1273 : exactly 400ms after the prior one started, so we can use that as
1274 : the reset slot start time instead. */
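       :
       : /* Illustrative numbers: with reset_slot==10, completed_bank_slot==10
       :    and the nominal 400ms slot duration, the line below advances
       :    reset_slot_start_ns by (11-10)*400ms, i.e. exactly one slot. */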
1275 0 : ctx->reset_slot_start_ns = ctx->reset_slot_start_ns + (long)((double)((completed_bank_slot+1UL)-ctx->reset_slot)*ctx->slot_duration_ns);
1276 0 : } else {
1277 0 : ctx->reset_slot_start_ns = ctx->leader_bank_start_ns;
1278 0 : }
1279 0 : ctx->expect_sequential_leader_slot = ULONG_MAX;
1280 :
1281 0 : memcpy( ctx->reset_hash, reset_blockhash, 32UL );
1282 0 : memcpy( ctx->hash, reset_blockhash, 32UL );
1283 0 : if( FD_LIKELY( parent_block_id!=NULL ) ) {
1284 0 : ctx->parent_slot = completed_bank_slot;
1285 0 : memcpy( ctx->parent_block_id, parent_block_id, 32UL );
1286 0 : }
1287 0 : ctx->slot = completed_bank_slot+1UL;
1288 0 : ctx->hashcnt = 0UL;
1289 0 : ctx->last_slot = ctx->slot;
1290 0 : ctx->last_hashcnt = 0UL;
1291 0 : ctx->reset_slot = ctx->slot;
1292 :
1293 0 : if( FD_UNLIKELY( ctx->hashcnt_per_tick!=hashcnt_per_tick ) ) {
1294 0 : FD_LOG_WARNING(( "hashes per tick changed from %lu to %lu", ctx->hashcnt_per_tick, hashcnt_per_tick ));
1295 :
1296 : /* Recompute derived information about the clock. */
1297 0 : ctx->hashcnt_duration_ns = (double)ctx->tick_duration_ns/(double)hashcnt_per_tick;
1298 0 : ctx->hashcnt_per_slot = ctx->ticks_per_slot*hashcnt_per_tick;
1299 0 : ctx->hashcnt_per_tick = hashcnt_per_tick;
1300 :
1301 0 : if( FD_UNLIKELY( ctx->hashcnt_per_tick==1UL ) ) {
1302 : /* Low power producer, maximum of one microblock per tick in the slot */
1303 0 : ctx->max_microblocks_per_slot = ctx->ticks_per_slot;
1304 0 : } else {
1305 : /* See the long comment in after_credit for this limit */
1306 0 : ctx->max_microblocks_per_slot = fd_ulong_min( MAX_MICROBLOCKS_PER_SLOT, ctx->ticks_per_slot*(ctx->hashcnt_per_tick-1UL) );
1307 0 : }
1308 0 : }
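       :
       : /* Illustrative values for the recomputation above, using the
       :    mainnet-like numbers quoted later in this file (62,500 hashes per
       :    tick, 64 ticks per slot): hashcnt_per_slot becomes 4,000,000 and,
       :    in the non-low-power branch, max_microblocks_per_slot is capped at
       :    min( MAX_MICROBLOCKS_PER_SLOT, 64*62,499 ). */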
1309 :
1310 : /* When we reset, we need to allow PoH to tick freely again rather
1311 : than being constrained. If we are leader after the reset, this
1312 : is OK because we won't tick until we get a bank, and the lower
1313 : bound will be reset with the value from the bank. */
1314 0 : ctx->microblocks_lower_bound = ctx->max_microblocks_per_slot;
1315 :
1316 0 : if( FD_UNLIKELY( leader_before_reset ) ) {
1317 : /* No longer have a leader bank if we are reset. Replay stage will
1318 : call back again to give us a new one if we should become leader
1319 : for the reset slot.
1320 :
1321 : The order is important here: ctx->hashcnt must be updated before
1322 : calling no_longer_leader. */
1323 0 : no_longer_leader( ctx );
1324 0 : }
1325 0 : ctx->next_leader_slot = next_leader_slot( ctx );
1326 0 : FD_LOG_INFO(( "fd_ext_poh_reset(slot=%lu,next_leader_slot=%lu)", ctx->reset_slot, ctx->next_leader_slot ));
1327 :
1328 0 : if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
1329 : /* We are leader after the reset... two cases: */
1330 0 : if( FD_LIKELY( ctx->slot==slot_before_reset ) ) {
1331 : /* 1. We are reset onto the same slot we are already leader on.
1332 : This is a common case when we have two leader slots in a
1333 : row; replay stage will reset us to our own slot. No need to
1334 : do anything here, we already sent a SLOT_START. */
1335 0 : FD_TEST( leader_before_reset );
1336 0 : } else {
1337 : /* 2. We are reset onto a different slot. If we were leader
1338 : before, we should first end that slot, then begin the new
1339 : one if we are newly leader now. */
1340 0 : if( FD_LIKELY( leader_before_reset ) ) publish_plugin_slot_end( ctx, slot_before_reset, ctx->cus_used );
1341 0 : else publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->reset_slot );
1342 0 : }
1343 0 : } else {
1344 0 : if( FD_UNLIKELY( leader_before_reset ) ) publish_plugin_slot_end( ctx, slot_before_reset, ctx->cus_used );
1345 0 : }
1346 :
1347 : /* There is a subset of FD_SHRED_FEATURES_ACTIVATION_... slots that
1348 : the shred tile needs to be aware of. Since their computation
1349 : requires the bank, we are forced (so far) to receive them here
1350 : from the Rust side, before forwarding them to the shred tile as
1351 : POH_PKT_TYPE_FEAT_ACT_SLOT. This is not elegant, and it should
1352 : be revised in the future (TODO), but it provides a "temporary"
1353 : working solution to handle features activation. */
1354 0 : fd_memcpy( ctx->features_activation->slots, features_activation, sizeof(fd_shred_features_activation_t) );
1355 0 : ctx->features_activation_avail = 1UL;
1356 :
1357 0 : fd_ext_poh_write_unlock();
1358 0 : }
1359 :
1360 : /* Since this function can't easily return an Option<Pubkey> to its
1361 : Rust caller, return 1 for Some and 0 for None. */
1362 : CALLED_FROM_RUST int
1363 : fd_ext_poh_get_leader_after_n_slots( ulong n,
1364 0 : uchar out_pubkey[ static 32 ] ) {
1365 0 : fd_poh_ctx_t * ctx = fd_ext_poh_write_lock();
1366 0 : ulong slot = ctx->slot + n;
1367 0 : fd_pubkey_t const * leader = fd_multi_epoch_leaders_get_leader_for_slot( ctx->mleaders, slot );
1368 :
1369 0 : int copied = 0;
1370 0 : if( FD_LIKELY( leader ) ) {
1371 0 : memcpy( out_pubkey, leader, 32UL );
1372 0 : copied = 1;
1373 0 : }
1374 0 : fd_ext_poh_write_unlock();
1375 0 : return copied;
1376 0 : }
1377 :
1378 : FD_FN_CONST static inline ulong
1379 0 : scratch_align( void ) {
1380 0 : return 128UL;
1381 0 : }
1382 :
1383 : FD_FN_PURE static inline ulong
1384 0 : scratch_footprint( fd_topo_tile_t const * tile ) {
1385 0 : (void)tile;
1386 0 : ulong l = FD_LAYOUT_INIT;
1387 0 : l = FD_LAYOUT_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
1388 0 : l = FD_LAYOUT_APPEND( l, FD_SHA256_ALIGN, FD_SHA256_FOOTPRINT );
1389 0 : return FD_LAYOUT_FINI( l, scratch_align() );
1390 0 : }
1391 :
1392 : static void
1393 : publish_tick( fd_poh_ctx_t * ctx,
1394 : fd_stem_context_t * stem,
1395 : uchar hash[ static 32 ],
1396 0 : int is_skipped ) {
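       : /* Round last_hashcnt up to the next tick boundary strictly above it:
       :    e.g. with hashcnt_per_tick==62,500 and last_hashcnt==125,000 this
       :    yields 187,500, giving a reference_tick of 3 below. */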
1397 0 : ulong hashcnt = ctx->hashcnt_per_tick*(1UL+(ctx->last_hashcnt/ctx->hashcnt_per_tick));
1398 :
1399 0 : uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
1400 :
1401 0 : FD_TEST( ctx->last_slot>=ctx->reset_slot );
1402 0 : fd_entry_batch_meta_t * meta = (fd_entry_batch_meta_t *)dst;
1403 0 : if( FD_UNLIKELY( is_skipped ) ) {
1404 : /* We are publishing ticks for a skipped slot, so the reference tick
1405 : and block complete flags should always be zero. */
1406 0 : meta->reference_tick = 0UL;
1407 0 : meta->block_complete = 0;
1408 0 : } else {
1409 0 : meta->reference_tick = hashcnt/ctx->hashcnt_per_tick;
1410 0 : meta->block_complete = hashcnt==ctx->hashcnt_per_slot;
1411 0 : }
1412 :
1413 0 : ulong slot = fd_ulong_if( meta->block_complete, ctx->slot-1UL, ctx->slot );
1414 0 : meta->parent_offset = 1UL+slot-ctx->reset_slot;
1415 :
1416 : /* From poh_reset we received the block_id for ctx->parent_slot.
1417 : Now we're telling shred tile to build on parent: (slot-meta->parent_offset).
1418 : The block_id that we're passing is valid iff the two are the same,
1419 : i.e. ctx->parent_slot == (slot-meta->parent_offset). */
1420 0 : meta->parent_block_id_valid = ctx->parent_slot == (slot-meta->parent_offset);
1421 0 : if( FD_LIKELY( meta->parent_block_id_valid ) ) {
1422 0 : fd_memcpy( meta->parent_block_id, ctx->parent_block_id, 32UL );
1423 0 : }
1424 :
1425 0 : FD_TEST( hashcnt>ctx->last_hashcnt );
1426 0 : ulong hash_delta = hashcnt-ctx->last_hashcnt;
1427 :
1428 0 : dst += sizeof(fd_entry_batch_meta_t);
1429 0 : fd_entry_batch_header_t * tick = (fd_entry_batch_header_t *)dst;
1430 0 : tick->hashcnt_delta = hash_delta;
1431 0 : fd_memcpy( tick->hash, hash, 32UL );
1432 0 : tick->txn_cnt = 0UL;
1433 :
1434 0 : ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
1435 0 : ulong sz = sizeof(fd_entry_batch_meta_t)+sizeof(fd_entry_batch_header_t);
1436 0 : ulong sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_MICROBLOCK, 0UL );
1437 0 : fd_stem_publish( stem, ctx->shred_out->idx, sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
1438 0 : ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
1439 0 : ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
1440 :
1441 0 : if( FD_UNLIKELY( hashcnt==ctx->hashcnt_per_slot ) ) {
1442 0 : ctx->last_slot++;
1443 0 : ctx->last_hashcnt = 0UL;
1444 0 : } else {
1445 0 : ctx->last_hashcnt = hashcnt;
1446 0 : }
1447 0 : }
1448 :
1449 : static inline void
1450 : publish_features_activation( fd_poh_ctx_t * ctx,
1451 0 : fd_stem_context_t * stem ) {
1452 0 : uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
1453 0 : fd_shred_features_activation_t * act_data = (fd_shred_features_activation_t *)dst;
1454 0 : fd_memcpy( act_data, ctx->features_activation, sizeof(fd_shred_features_activation_t) );
1455 :
1456 0 : ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
1457 0 : ulong sz = sizeof(fd_shred_features_activation_t);
1458 0 : ulong sig = fd_disco_poh_sig( ctx->slot, POH_PKT_TYPE_FEAT_ACT_SLOT, 0UL );
1459 0 : fd_stem_publish( stem, ctx->shred_out->idx, sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
1460 0 : ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
1461 0 : ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
1462 0 : }
1463 :
1464 : static inline void
1465 : after_credit( fd_poh_ctx_t * ctx,
1466 : fd_stem_context_t * stem,
1467 : int * opt_poll_in,
1468 0 : int * charge_busy ) {
1469 0 : ctx->stem = stem;
1470 :
1471 0 : FD_COMPILER_MFENCE();
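       : /* If an external (Rust) caller has requested the PoH write lock,
       :    grant it: acknowledge by raising fd_poh_returned_lock and spin
       :    until the caller releases it, at which point it has finished
       :    touching ctx and we can resume ticking. */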
1472 0 : if( FD_UNLIKELY( fd_poh_waiting_lock ) ) {
1473 0 : FD_VOLATILE( fd_poh_returned_lock ) = 1UL;
1474 0 : FD_COMPILER_MFENCE();
1475 0 : for(;;) {
1476 0 : if( FD_UNLIKELY( !FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
1477 0 : FD_SPIN_PAUSE();
1478 0 : }
1479 0 : FD_COMPILER_MFENCE();
1480 0 : FD_VOLATILE( fd_poh_waiting_lock ) = 0UL;
1481 0 : *opt_poll_in = 0;
1482 0 : *charge_busy = 1;
1483 0 : return;
1484 0 : }
1485 0 : FD_COMPILER_MFENCE();
1486 :
1487 0 : if( FD_UNLIKELY( ctx->features_activation_avail ) ) {
1488 : /* If we have received an update on features_activation, then
1489 : forward them to the shred tile. In principle, this should
1490 : happen at most once per slot. */
1491 0 : publish_features_activation( ctx, stem );
1492 0 : ctx->features_activation_avail = 0UL;
1493 0 : }
1494 :
1495 0 : int is_leader = ctx->next_leader_slot!=ULONG_MAX && ctx->slot>=ctx->next_leader_slot;
1496 0 : if( FD_UNLIKELY( is_leader && !ctx->current_leader_bank ) ) {
1497 : /* If we are the leader, but we didn't yet learn what the leader
1498 : bank object is from the replay stage, do not do any hashing.
1499 :
1500 : This is not ideal, but greatly simplifies the control flow. */
1501 0 : return;
1502 0 : }
1503 :
1504 : /* If we have skipped ticks pending because we skipped some slots to
1505 : become leader, register them now one at a time. */
1506 0 : if( FD_UNLIKELY( is_leader && ctx->last_slot<ctx->slot ) ) {
1507 0 : ulong publish_hashcnt = ctx->last_hashcnt+ctx->hashcnt_per_tick;
1508 0 : ulong tick_idx = (ctx->last_slot*ctx->ticks_per_slot+publish_hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
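       : /* skipped_tick_hashes is a ring keyed by absolute tick number mod
       :    MAX_SKIPPED_TICKS; the entry being replayed here was saved by the
       :    non-leader tick path further below when the tick was originally
       :    hashed. */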
1509 :
1510 0 : fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->skipped_tick_hashes[ tick_idx ] );
1511 0 : publish_tick( ctx, stem, ctx->skipped_tick_hashes[ tick_idx ], 1 );
1512 :
1513 : /* If we are catching up now and publishing a bunch of skipped
1514 : ticks, we do not want to process any incoming microblocks until
1515 : all the skipped ticks have been published out; otherwise we would
1516 : intersperse skipped tick messages with microblocks. */
1517 0 : *opt_poll_in = 0;
1518 0 : *charge_busy = 1;
1519 0 : return;
1520 0 : }
1521 :
1522 0 : int low_power_mode = ctx->hashcnt_per_tick==1UL;
1523 :
1524 : /* If we are the leader, always leave enough capacity in the slot so
1525 : that we can mix in any potential microblocks still coming from the
1526 : pack tile for this slot. */
1527 0 : ulong max_remaining_microblocks = ctx->max_microblocks_per_slot - ctx->microblocks_lower_bound;
1528 :
1529 : /* We don't want to tick over (finish) the slot until pack tells us
1530 : it's done. If we're waiting on pack, then we clamp to [0, 1]. */
1531 0 : if( FD_LIKELY( !ctx->slot_done && is_leader ) ) max_remaining_microblocks = fd_ulong_max( fd_ulong_min( 1UL, max_remaining_microblocks ), max_remaining_microblocks );
1532 :
1533 : /* With hashcnt_per_tick hashes per tick, we actually get
1534 : hashcnt_per_tick-1 chances to mixin a microblock. For each tick
1535 : span that we need to reserve, we also need to reserve the hashcnt
1536 : for the tick, hence the +
1537 : max_remaining_microblocks/(hashcnt_per_tick-1) rounded up.
1538 :
1539 : However, if hashcnt_per_tick is 1 because we're in low power mode,
1540 : this should probably just be max_remaining_microblocks. */
1541 0 : ulong max_remaining_ticks_or_microblocks = max_remaining_microblocks;
1542 0 : if( FD_LIKELY( !low_power_mode ) ) max_remaining_ticks_or_microblocks += (max_remaining_microblocks+ctx->hashcnt_per_tick-2UL)/(ctx->hashcnt_per_tick-1UL);
1543 :
1544 0 : ulong restricted_hashcnt = fd_ulong_if( ctx->hashcnt_per_slot>=max_remaining_ticks_or_microblocks, ctx->hashcnt_per_slot-max_remaining_ticks_or_microblocks, 0UL );
1545 :
1546 0 : ulong min_hashcnt = ctx->hashcnt;
1547 :
1548 0 : if( FD_LIKELY( !low_power_mode ) ) {
1549 : /* Recall that there are two kinds of events that will get published
1550 : to the shredder,
1551 :
1552 : (a) Ticks. These occur every 62,500 (hashcnt_per_tick) hashcnts,
1553 : and there will be 64 (ticks_per_slot) of them in each slot.
1554 :
1555 : Ticks must not have any transactions mixed into the hash.
1556 : This is not strictly needed in theory, but is required by the
1557 : current consensus protocol. They get published here in
1558 : after_credit.
1559 :
1560 : (b) Microblocks. These can occur at any other hashcnt, as long
1561 : as it is not a tick. Microblocks cannot be empty, and must
1562 : have at least one transaction mixed in. These get
1563 : published in after_frag.
1564 :
1565 : If hashcnt_per_tick is 1, then we are in low power mode and the
1566 : following does not apply, since we can mix in transactions at any
1567 : time.
1568 :
1569 : In the normal, non-low-power mode, though, we have to be careful
1570 : to make sure that we do not publish microblocks on tick
1571 : boundaries. To do that, we need to obey two rules:
1572 : (i) after_credit must not leave hashcnt one before a tick
1573 : boundary
1574 : (ii) if after_credit begins one before a tick boundary, it must
1575 : advance hashcnt and publish the tick
1576 :
1577 : There's some interplay between min_hashcnt and restricted_hashcnt
1578 : here, and we need to show that there's always a value of
1579 : target_hashcnt we can pick such that
1580 : min_hashcnt <= target_hashcnt <= restricted_hashcnt.
1581 : We'll prove this by induction for current_slot==0 and
1582 : is_leader==true, since all other slots should be the same.
1583 :
1584 : Let m_j and r_j be the min_hashcnt and restricted_hashcnt
1585 : (respectively) for the jth call to after_credit in a slot. We
1586 : want to show that for all values of j, it's possible to pick a
1587 : value h_j, the value of target_hashcnt for the jth call to
1588 : after_credit (which is also the value of hashcnt after
1589 : after_credit has completed) such that m_j<=h_j<=r_j.
1590 :
1591 : Additionally, let T be hashcnt_per_tick and N be ticks_per_slot.
1592 :
1593 : Starting with the base case, j==0. m_j=0, and
1594 : r_0 = N*T - max_microblocks_per_slot
1595 : - ceil(max_microblocks_per_slot/(T-1)).
1596 :
1597 : This is monotonic decreasing in max_microblocks_per_slot, so it
1598 : achieves its minimum when max_microblocks_per_slot is its
1599 : maximum.
1600 : r_0 >= N*T - N*(T-1) - ceil( (N*(T-1))/(T-1))
1601 : = N*T - N*(T-1)-N = 0.
1602 : Thus, m_0 <= r_0, as desired.
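       :
       : (Numeric check of this base case with the 62,500/64 numbers from
       : above: N*T = 4,000,000, and if max_microblocks_per_slot takes its
       : largest possible value N*(T-1) = 3,999,936 then
       : r_0 = 4,000,000 - 3,999,936 - ceil(3,999,936/62,499)
       : = 4,000,000 - 3,999,936 - 64 = 0, matching the bound.)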
1603 :
1604 :
1605 :
1606 : Then, for the inductive step, assume there exists h_j such that
1607 : m_j<=h_j<=r_j, and we want to show that there exists h_{j+1},
1608 : which is the same as showing m_{j+1}<=r_{j+1}.
1609 :
1610 : Let a_j be 1 if we had a microblock immediately following the jth
1611 : call to after_credit, and 0 otherwise. Then hashcnt at the start
1612 : of the (j+1)th call to after_frag is h_j+a_j.
1613 : Also, set b_{j+1}=1 if we are in the case covered by rule (ii)
1614 : above during the (j+1)th call to after_credit, i.e. if
1615 : (h_j+a_j)%T==T-1. Thus, m_{j+1} = h_j + a_j + b_{j+1}.
1616 :
1617 : If we received an additional microblock, then
1618 : max_remaining_microblocks goes down by 1, and
1619 : max_remaining_ticks_or_microblocks goes down by either 1 or 2,
1620 : which means restricted_hashcnt goes up by either 1 or 2. In
1621 : particular, it goes up by 2 if the new value of
1622 : max_remaining_microblocks (at the start of the (j+1)th call to
1623 : after_credit) is congruent to 0 mod T-1. Let b'_{j+1} be 1 if
1624 : this condition is met and 0 otherwise. If we receive a
1625 : done_packing message, restricted_hashcnt can go up by more, but
1626 : we can ignore that case, since it is less restrictive.
1627 : Thus, r_{j+1}=r_j+a_j+b'_{j+1}.
1628 :
1629 : If h_j < r_j (strictly less), then h_j+a_j < r_j+a_j. And thus,
1630 : since b_{j+1}<=b'_{j+1}+1, just by virtue of them both being
1631 : binary,
1632 : h_j + a_j + b_{j+1} < r_j + a_j + b'_{j+1} + 1,
1633 : which is the same (for integers) as
1634 : h_j + a_j + b_{j+1} <= r_j + a_j + b'_{j+1},
1635 : m_{j+1} <= r_{j+1}
1636 :
1637 : On the other hand, if h_j==r_j, this is easy unless b_{j+1}==1,
1638 : which can also only happen if a_j==1. Then (h_j+a_j)%T==T-1,
1639 : which means there's an integer k such that
1640 :
1641 : h_j+a_j==(ticks_per_slot-k)*T-1
1642 : h_j ==ticks_per_slot*T - k*(T-1)-1 - k-1
1643 : ==ticks_per_slot*T - (k*(T-1)+1) - ceil( (k*(T-1)+1)/(T-1) )
1644 :
1645 : Since h_j==r_j in this case, and
1646 : r_j==(ticks_per_slot*T) - max_remaining_microblocks_j - ceil(max_remaining_microblocks_j/(T-1)),
1647 : we can see that the value of max_remaining_microblocks at the
1648 : start of the jth call to after_credit is k*(T-1)+1. Again, since
1649 : a_j==1, then the value of max_remaining_microblocks at the start
1650 : of the j+1th call to after_credit decreases by 1 to k*(T-1),
1651 : which means b'_{j+1}=1.
1652 :
1653 : Thus, h_j + a_j + b_{j+1} == r_j + a_j + b'_{j+1}, so, in
1654 : particular, h_{j+1}<=r_{j+1} as desired. */
1655 0 : min_hashcnt += (ulong)(min_hashcnt%ctx->hashcnt_per_tick == (ctx->hashcnt_per_tick-1UL)); /* add b_{j+1}, enforcing rule (ii) */
1656 0 : }
1657 : /* Now figure out how many hashes are needed to "catch up" the hash
1658 : count to the current system clock, and clamp it to the allowed
1659 : range. */
1660 0 : long now = fd_log_wallclock();
1661 0 : ulong target_hashcnt;
1662 0 : if( FD_LIKELY( !is_leader ) ) {
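       : /* Not leader: target the total number of hashes that should have
       :    elapsed since the reset slot started, minus the hashes already
       :    attributed to the slots between reset_slot and the current slot. */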
1663 0 : target_hashcnt = (ulong)((double)(now - ctx->reset_slot_start_ns) / ctx->hashcnt_duration_ns) - (ctx->slot-ctx->reset_slot)*ctx->hashcnt_per_slot;
1664 0 : } else {
1665 : /* We might have gotten very behind on hashes, but if we are leader
1666 : we want to catch up gradually over the remainder of our leader
1667 : slot, not all at once right now. This helps keep the tile from
1668 : being oversubscribed and taking a long time to process incoming
1669 : microblocks. */
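       : /* Illustrative numbers: if our leader bank arrived 100ms after the
       :    expected start of a nominal 400ms slot, actual_slot_duration_ns
       :    below is 300ms, so the per-hash budget is stretched to spread the
       :    remaining hashes over what is left of the slot instead of catching
       :    up in one burst. */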
1670 0 : long expected_slot_start_ns = ctx->reset_slot_start_ns + (long)((double)(ctx->slot-ctx->reset_slot)*ctx->slot_duration_ns);
1671 0 : double actual_slot_duration_ns = ctx->slot_duration_ns<(double)(ctx->leader_bank_start_ns - expected_slot_start_ns) ? 0.0 : ctx->slot_duration_ns - (double)(ctx->leader_bank_start_ns - expected_slot_start_ns);
1672 0 : double actual_hashcnt_duration_ns = actual_slot_duration_ns / (double)ctx->hashcnt_per_slot;
1673 0 : target_hashcnt = fd_ulong_if( actual_hashcnt_duration_ns==0.0, restricted_hashcnt, (ulong)((double)(now - ctx->leader_bank_start_ns) / actual_hashcnt_duration_ns) );
1674 0 : }
1675 : /* Clamp to [min_hashcnt, restricted_hashcnt] as above */
1676 0 : target_hashcnt = fd_ulong_max( fd_ulong_min( target_hashcnt, restricted_hashcnt ), min_hashcnt );
1677 :
1678 : /* The above proof showed that it was always possible to pick a value
1679 : of target_hashcnt, but we still have a lot of freedom in how to
1680 : pick it. It simplifies the code a lot if we don't keep going after
1681 : a tick in this function. In particular, we want to publish at most
1682 : 1 tick in this call, since otherwise we could consume infinite
1683 : credits to publish here. The credits are set so that we should
1684 : only ever publish one tick during this loop. Also, all the extra
1685 : stuff (leader transitions, publishing ticks, etc.) we have to do
1686 : happens at tick boundaries, so this lets us consolidate all those
1687 : cases.
1688 :
1689 : Mathematically, since the current value of hashcnt is h_j+a_j, the
1690 : next tick (advancing a full tick if we're currently at a tick) is
1691 : t_{j+1} = T*(floor( (h_j+a_j)/T )+1). We need to show that if we set
1692 : h'_{j+1} = min( h_{j+1}, t_{j+1} ), it is still valid.
1693 :
1694 : First, h'_{j+1} <= h_{j+1} <= r_{j+1}, so we're okay in that
1695 : direction.
1696 :
1697 : Next, observe that t_{j+1}>=h_j + a_j + 1, and recall that b_{j+1}
1698 : is 0 or 1. So then,
1699 : t_{j+1} >= h_j+a_j+b_{j+1} = m_{j+1}.
1700 :
1701 : We know h_{j+1} >= m_{j+1} from before, so then h'_{j+1} >=
1702 : m_{j+1}, as desired. */
1703 :
1704 0 : ulong next_tick_hashcnt = ctx->hashcnt_per_tick * (1UL+(ctx->hashcnt/ctx->hashcnt_per_tick));
1705 0 : target_hashcnt = fd_ulong_min( target_hashcnt, next_tick_hashcnt );
1706 :
1707 : /* We still need to enforce rule (i). We know that min_hashcnt%T !=
1708 : T-1 because of rule (ii). That means that if target_hashcnt%T ==
1709 : T-1 at this point, target_hashcnt > min_hashcnt (notice the
1710 : strict), so target_hashcnt-1 >= min_hashcnt and is thus still a
1711 : valid choice for target_hashcnt. */
1712 0 : target_hashcnt -= (ulong)( (!low_power_mode) & ((target_hashcnt%ctx->hashcnt_per_tick)==(ctx->hashcnt_per_tick-1UL)) );
1713 :
1714 0 : FD_TEST( target_hashcnt >= ctx->hashcnt );
1715 0 : FD_TEST( target_hashcnt >= min_hashcnt );
1716 0 : FD_TEST( target_hashcnt <= restricted_hashcnt );
1717 :
1718 0 : if( FD_UNLIKELY( ctx->hashcnt==target_hashcnt ) ) return; /* Nothing to do, don't publish a tick twice */
1719 :
1720 0 : *charge_busy = 1;
1721 :
1722 0 : if( FD_LIKELY( ctx->hashcnt<target_hashcnt ) ) {
1723 0 : fd_sha256_hash_32_repeated( ctx->hash, ctx->hash, target_hashcnt-ctx->hashcnt );
1724 0 : ctx->hashcnt = target_hashcnt;
1725 0 : }
1726 :
1727 0 : if( FD_UNLIKELY( ctx->hashcnt==ctx->hashcnt_per_slot ) ) {
1728 0 : ctx->slot++;
1729 0 : ctx->hashcnt = 0UL;
1730 0 : }
1731 :
1732 0 : if( FD_UNLIKELY( !is_leader && !(ctx->hashcnt%ctx->hashcnt_per_tick ) ) ) {
1733 : /* We finished a tick while not leader... save the current hash so
1734 : it can be played back into the bank when we become the leader. */
1735 0 : ulong tick_idx = (ctx->slot*ctx->ticks_per_slot+ctx->hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
1736 0 : fd_memcpy( ctx->skipped_tick_hashes[ tick_idx ], ctx->hash, 32UL );
1737 :
1738 0 : ulong initial_tick_idx = (ctx->last_slot*ctx->ticks_per_slot+ctx->last_hashcnt/ctx->hashcnt_per_tick)%MAX_SKIPPED_TICKS;
1739 0 : if( FD_UNLIKELY( tick_idx==initial_tick_idx ) ) FD_LOG_ERR(( "Too many skipped ticks from slot %lu to slot %lu, chain must halt", ctx->last_slot, ctx->slot ));
1740 0 : }
1741 :
1742 0 : if( FD_UNLIKELY( is_leader && !(ctx->hashcnt%ctx->hashcnt_per_tick) ) ) {
1743 : /* We ticked while leader... tell the leader bank. */
1744 0 : fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->hash );
1745 :
1746 : /* And send an empty microblock (a tick) to the shred tile. */
1747 0 : publish_tick( ctx, stem, ctx->hash, 0 );
1748 0 : }
1749 :
1750 0 : if( FD_UNLIKELY( !is_leader && ctx->slot>=ctx->next_leader_slot ) ) {
1751 : /* We ticked while not leader and are now leader... transition
1752 : the state machine. */
1753 0 : publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->reset_slot );
1754 0 : FD_LOG_INFO(( "fd_poh_ticked_into_leader(slot=%lu, reset_slot=%lu)", ctx->next_leader_slot, ctx->reset_slot ));
1755 0 : }
1756 :
1757 0 : if( FD_UNLIKELY( is_leader && ctx->slot>ctx->next_leader_slot ) ) {
1758 : /* We ticked while leader and are no longer leader... transition
1759 : the state machine. */
1760 0 : FD_TEST( !max_remaining_microblocks );
1761 0 : publish_plugin_slot_end( ctx, ctx->next_leader_slot, ctx->cus_used );
1762 0 : FD_LOG_INFO(( "fd_poh_ticked_outof_leader(slot=%lu)", ctx->next_leader_slot ));
1763 :
1764 0 : no_longer_leader( ctx );
1765 0 : ctx->expect_sequential_leader_slot = ctx->slot;
1766 :
1767 0 : double tick_per_ns = fd_tempo_tick_per_ns( NULL );
1768 0 : fd_histf_sample( ctx->slot_done_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)*tick_per_ns) );
1769 0 : ctx->next_leader_slot = next_leader_slot( ctx );
1770 :
1771 0 : if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
1772 : /* We finished a leader slot, and are immediately leader for the
1773 : following slot... transition. */
1774 0 : publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->next_leader_slot-1UL );
1775 0 : FD_LOG_INFO(( "fd_poh_ticked_into_leader(slot=%lu, reset_slot=%lu)", ctx->next_leader_slot, ctx->next_leader_slot-1UL ));
1776 0 : }
1777 0 : }
1778 0 : }
1779 :
1780 : static inline void
1781 0 : during_housekeeping( fd_poh_ctx_t * ctx ) {
1782 0 : if( FD_UNLIKELY( maybe_change_identity( ctx, 0 ) ) ) {
1783 0 : ctx->next_leader_slot = next_leader_slot( ctx );
1784 0 : FD_LOG_INFO(( "fd_poh_identity_changed(next_leader_slot=%lu)", ctx->next_leader_slot ));
1785 :
1786 : /* Signal replay to check if we are leader again, in case it's stuck
1787 : because everything already replayed. */
1788 0 : FD_COMPILER_MFENCE();
1789 0 : fd_ext_poh_signal_leader_change( ctx->signal_leader_change );
1790 0 : }
1791 0 : }
1792 :
1793 : static inline void
1794 0 : metrics_write( fd_poh_ctx_t * ctx ) {
1795 0 : FD_MHIST_COPY( POH, BEGIN_LEADER_DELAY_SECONDS, ctx->begin_leader_delay );
1796 0 : FD_MHIST_COPY( POH, FIRST_MICROBLOCK_DELAY_SECONDS, ctx->first_microblock_delay );
1797 0 : FD_MHIST_COPY( POH, SLOT_DONE_DELAY_SECONDS, ctx->slot_done_delay );
1798 0 : FD_MHIST_COPY( POH, BUNDLE_INITIALIZE_DELAY_SECONDS, ctx->bundle_init_delay );
1799 0 : }
1800 :
1801 : static int
1802 : before_frag( fd_poh_ctx_t * ctx,
1803 : ulong in_idx,
1804 : ulong seq,
1805 0 : ulong sig ) {
1806 0 : (void)seq;
1807 :
1808 0 : if( FD_LIKELY( ctx->in_kind[ in_idx ]!=IN_KIND_BANK && ctx->in_kind[ in_idx ]!=IN_KIND_PACK ) ) return 0;
1809 :
1810 0 : if( FD_UNLIKELY( sig==ULONG_MAX ) ) {
1811 : /* Banks are drained; release pack's ownership of the current bank. */
1812 0 : if( FD_UNLIKELY( ctx->pack_leader_bank ) ) fd_ext_bank_release( ctx->pack_leader_bank );
1813 0 : ctx->pack_leader_bank = NULL;
1814 0 : return 1; /* discard */
1815 0 : }
1816 :
1817 0 : uint pack_idx = (uint)fd_disco_bank_sig_pack_idx( sig );
1818 0 : FD_TEST( ((int)(pack_idx-ctx->expect_pack_idx))>=0L );
1819 0 : if( FD_UNLIKELY( pack_idx!=ctx->expect_pack_idx ) ) return -1;
1820 0 : ctx->expect_pack_idx++;
1821 :
1822 0 : return 0;
1823 0 : }
1824 :
1825 : static inline void
1826 : during_frag( fd_poh_ctx_t * ctx,
1827 : ulong in_idx,
1828 : ulong seq FD_PARAM_UNUSED,
1829 : ulong sig,
1830 : ulong chunk,
1831 : ulong sz,
1832 0 : ulong ctl FD_PARAM_UNUSED ) {
1833 0 : ctx->skip_frag = 0;
1834 :
1835 0 : if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_STAKE ) ) {
1836 0 : if( FD_UNLIKELY( chunk<ctx->in[ in_idx ].chunk0 || chunk>ctx->in[ in_idx ].wmark ) )
1837 0 : FD_LOG_ERR(( "chunk %lu %lu corrupt, not in range [%lu,%lu]", chunk, sz,
1838 0 : ctx->in[ in_idx ].chunk0, ctx->in[ in_idx ].wmark ));
1839 :
1840 0 : uchar const * dcache_entry = fd_chunk_to_laddr_const( ctx->in[ in_idx ].mem, chunk );
1841 0 : fd_multi_epoch_leaders_stake_msg_init( ctx->mleaders, fd_type_pun_const( dcache_entry ) );
1842 0 : return;
1843 0 : }
1844 :
1845 0 : ulong slot;
1846 0 : switch( ctx->in_kind[ in_idx ] ) {
1847 0 : case IN_KIND_BANK:
1848 0 : case IN_KIND_PACK: {
1849 0 : slot = fd_disco_bank_sig_slot( sig );
1850 0 : break;
1851 0 : }
1852 0 : default:
1853 0 : FD_LOG_ERR(( "unexpected in_kind %d", ctx->in_kind[ in_idx ] ));
1854 0 : }
1855 :
1856 : /* The following sequence is possible...
1857 :
1858 : 1. We become leader in slot 10
1859 : 2. While leader, we switch to a fork that is on slot 8, where
1860 : we are leader
1861 : 3. We get the in-flight microblocks for slot 10
1862 :
1863 : These in-flight microblocks need to be dropped, so we check
1864 : against the high water mark (highwater_leader_slot) rather than
1865 : the current hashcnt here when determining what to drop.
1866 :
1867 : We know if the slot is lower than the high water mark it's from a stale
1868 : leader slot, because we will not become leader for the same slot twice
1869 : even if we are reset back in time (to prevent duplicate blocks). */
1870 0 : int is_frag_for_prior_leader_slot = slot<ctx->highwater_leader_slot;
1871 :
1872 0 : if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_PACK ) ) {
1873 : /* We now know the real number of microblocks published, so set an
1874 : exact bound for once we receive them. */
1875 0 : ctx->skip_frag = 1;
1876 0 : if( FD_UNLIKELY( is_frag_for_prior_leader_slot ) ) return;
1877 :
1878 0 : FD_TEST( ctx->microblocks_lower_bound<=ctx->max_microblocks_per_slot );
1879 0 : fd_done_packing_t const * done_packing = fd_chunk_to_laddr( ctx->in[ in_idx ].mem, chunk );
1880 0 : FD_LOG_INFO(( "done_packing(slot=%lu,seen_microblocks=%lu,microblocks_in_slot=%lu)",
1881 0 : ctx->slot,
1882 0 : ctx->microblocks_lower_bound,
1883 0 : done_packing->microblocks_in_slot ));
1884 0 : ctx->slot_done = 1;
1885 0 : ctx->microblocks_lower_bound += ctx->max_microblocks_per_slot - done_packing->microblocks_in_slot;
1886 0 : return;
1887 0 : } else {
1888 0 : if( FD_UNLIKELY( chunk<ctx->in[ in_idx ].chunk0 || chunk>ctx->in[ in_idx ].wmark || sz>USHORT_MAX ) )
1889 0 : FD_LOG_ERR(( "chunk %lu %lu corrupt, not in range [%lu,%lu]", chunk, sz, ctx->in[ in_idx ].chunk0, ctx->in[ in_idx ].wmark ));
1890 :
1891 0 : uchar * src = (uchar *)fd_chunk_to_laddr( ctx->in[ in_idx ].mem, chunk );
1892 :
1893 0 : fd_memcpy( ctx->_txns, src, sz-sizeof(fd_microblock_trailer_t) );
1894 0 : fd_memcpy( ctx->_microblock_trailer, src+sz-sizeof(fd_microblock_trailer_t), sizeof(fd_microblock_trailer_t) );
1895 :
1896 0 : ctx->skip_frag = is_frag_for_prior_leader_slot;
1897 0 : }
1898 0 : }
1899 :
1900 : static void
1901 : publish_microblock( fd_poh_ctx_t * ctx,
1902 : fd_stem_context_t * stem,
1903 : ulong slot,
1904 : ulong hashcnt_delta,
1905 0 : ulong txn_cnt ) {
1906 0 : uchar * dst = (uchar *)fd_chunk_to_laddr( ctx->shred_out->mem, ctx->shred_out->chunk );
1907 0 : FD_TEST( slot>=ctx->reset_slot );
1908 0 : fd_entry_batch_meta_t * meta = (fd_entry_batch_meta_t *)dst;
1909 0 : meta->parent_offset = 1UL+slot-ctx->reset_slot;
1910 0 : meta->reference_tick = (ctx->hashcnt/ctx->hashcnt_per_tick) % ctx->ticks_per_slot;
1911 0 : meta->block_complete = !ctx->hashcnt;
1912 :
1913 : /* Refer to publish_tick() for details on meta->parent_block_id_valid. */
1914 0 : meta->parent_block_id_valid = ctx->parent_slot == (slot-meta->parent_offset);
1915 0 : if( FD_LIKELY( meta->parent_block_id_valid ) ) {
1916 0 : fd_memcpy( meta->parent_block_id, ctx->parent_block_id, 32UL );
1917 0 : }
1918 :
1919 0 : dst += sizeof(fd_entry_batch_meta_t);
1920 0 : fd_entry_batch_header_t * header = (fd_entry_batch_header_t *)dst;
1921 0 : header->hashcnt_delta = hashcnt_delta;
1922 0 : fd_memcpy( header->hash, ctx->hash, 32UL );
1923 :
1924 0 : dst += sizeof(fd_entry_batch_header_t);
1925 0 : ulong payload_sz = 0UL;
1926 0 : ulong included_txn_cnt = 0UL;
1927 0 : for( ulong i=0UL; i<txn_cnt; i++ ) {
1928 0 : fd_txn_p_t * txn = (fd_txn_p_t *)(ctx->_txns + i*sizeof(fd_txn_p_t));
1929 0 : if( FD_UNLIKELY( !(txn->flags & FD_TXN_P_FLAGS_EXECUTE_SUCCESS) ) ) continue;
1930 :
1931 0 : fd_memcpy( dst, txn->payload, txn->payload_sz );
1932 0 : payload_sz += txn->payload_sz;
1933 0 : dst += txn->payload_sz;
1934 0 : included_txn_cnt++;
1935 0 : }
1936 0 : header->txn_cnt = included_txn_cnt;
1937 :
1938 : /* We always have credits to publish here, because the stem burst
1939 : value (STEM_BURST below) guarantees that at most we will have done
1940 : publish_tick() once and publish_became_leader() once, leaving at
1941 : least one credit here to publish the microblock. */
1942 0 : ulong tspub = (ulong)fd_frag_meta_ts_comp( fd_tickcount() );
1943 0 : ulong sz = sizeof(fd_entry_batch_meta_t)+sizeof(fd_entry_batch_header_t)+payload_sz;
1944 0 : ulong new_sig = fd_disco_poh_sig( slot, POH_PKT_TYPE_MICROBLOCK, 0UL );
1945 0 : fd_stem_publish( stem, ctx->shred_out->idx, new_sig, ctx->shred_out->chunk, sz, 0UL, 0UL, tspub );
1946 0 : ctx->shred_seq = stem->seqs[ ctx->shred_out->idx ];
1947 0 : ctx->shred_out->chunk = fd_dcache_compact_next( ctx->shred_out->chunk, sz, ctx->shred_out->chunk0, ctx->shred_out->wmark );
1948 0 : }
1949 :
1950 : static inline void
1951 : after_frag( fd_poh_ctx_t * ctx,
1952 : ulong in_idx,
1953 : ulong seq,
1954 : ulong sig,
1955 : ulong sz,
1956 : ulong tsorig,
1957 : ulong tspub,
1958 0 : fd_stem_context_t * stem ) {
1959 0 : (void)in_idx;
1960 0 : (void)seq;
1961 0 : (void)tsorig;
1962 0 : (void)tspub;
1963 :
1964 0 : if( FD_UNLIKELY( ctx->skip_frag ) ) return;
1965 :
1966 0 : if( FD_UNLIKELY( ctx->in_kind[ in_idx ]==IN_KIND_STAKE ) ) {
1967 0 : fd_multi_epoch_leaders_stake_msg_fini( ctx->mleaders );
1968 : /* It might seem like we do not need to do state transitions in and
1969 : out of being the leader here, since leader schedule updates are
1970 : always one epoch in advance (whether we are leader or not would
1971 : never change for the currently executing slot) but this is not
1972 : true for new ledgers when the validator first boots. We will
1973 : likely be the leader in slot 1, and get notified of the leader
1974 : schedule for that slot while we are still in it.
1975 :
1976 : For safety we just handle both transitions, in and out, although
1977 : the only one possible should be into leader. */
1978 0 : ulong next_leader_slot_after_frag = next_leader_slot( ctx );
1979 :
1980 0 : int currently_leader = ctx->slot>=ctx->next_leader_slot;
1981 0 : int leader_after_frag = ctx->slot>=next_leader_slot_after_frag;
1982 :
1983 0 : FD_LOG_INFO(( "stake_update(before_leader=%lu,after_leader=%lu)",
1984 0 : ctx->next_leader_slot,
1985 0 : next_leader_slot_after_frag ));
1986 :
1987 0 : ctx->next_leader_slot = next_leader_slot_after_frag;
1988 0 : if( FD_UNLIKELY( currently_leader && !leader_after_frag ) ) {
1989 : /* Shouldn't ever happen, otherwise we need to do a state
1990 : transition out of being leader. */
1991 0 : FD_LOG_ERR(( "stake update caused us to no longer be leader in an active slot" ));
1992 0 : }
1993 :
1994 : /* Nothing to do if we transition into being leader, since it
1995 : will just get picked up by the regular tick loop. */
1996 0 : if( FD_UNLIKELY( !currently_leader && leader_after_frag ) ) {
1997 0 : publish_plugin_slot_start( ctx, next_leader_slot_after_frag, ctx->reset_slot );
1998 0 : }
1999 :
2000 0 : return;
2001 0 : }
2002 :
2003 0 : if( FD_UNLIKELY( !ctx->microblocks_lower_bound ) ) {
2004 0 : double tick_per_ns = fd_tempo_tick_per_ns( NULL );
2005 0 : fd_histf_sample( ctx->first_microblock_delay, (ulong)((double)(fd_log_wallclock()-ctx->reset_slot_start_ns)/tick_per_ns) );
2006 0 : }
2007 :
2008 0 : ulong target_slot = fd_disco_bank_sig_slot( sig );
2009 :
2010 0 : if( FD_UNLIKELY( target_slot!=ctx->next_leader_slot || target_slot!=ctx->slot ) ) {
2011 0 : FD_LOG_ERR(( "packed too early or late target_slot=%lu, current_slot=%lu. highwater_leader_slot=%lu",
2012 0 : target_slot, ctx->slot, ctx->highwater_leader_slot ));
2013 0 : }
2014 :
2015 0 : FD_TEST( ctx->current_leader_bank );
2016 0 : FD_TEST( ctx->microblocks_lower_bound<ctx->max_microblocks_per_slot );
2017 0 : ctx->microblocks_lower_bound += 1UL;
2018 :
2019 0 : ulong txn_cnt = (sz-sizeof(fd_microblock_trailer_t))/sizeof(fd_txn_p_t);
2020 0 : fd_txn_p_t * txns = (fd_txn_p_t *)(ctx->_txns);
2021 0 : ulong executed_txn_cnt = 0UL;
2022 0 : ulong cus_used = 0UL;
2023 0 : for( ulong i=0UL; i<txn_cnt; i++ ) {
2024 : /* It's important that we check if a transaction is included in the
2025 : block with FD_TXN_P_FLAGS_EXECUTE_SUCCESS, since
2026 : actual_consumed_cus may have a nonzero value even for excluded
2027 : transactions (retained for monitoring purposes). */
2028 0 : if( FD_LIKELY( txns[ i ].flags & FD_TXN_P_FLAGS_EXECUTE_SUCCESS ) ) {
2029 0 : executed_txn_cnt++;
2030 0 : cus_used += txns[ i ].bank_cu.actual_consumed_cus;
2031 0 : }
2032 0 : }
2033 :
2034 : /* We don't publish transactions that fail to execute. If all the
2035 : transactions failed to execute, the microblock would be empty,
2036 : causing agave to think it's a tick and complain. Instead, we just
2037 : skip the microblock and don't hash or update the hashcnt. */
2038 0 : if( FD_UNLIKELY( !executed_txn_cnt ) ) return;
2039 :
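       : /* Mix the microblock into the PoH chain: the new hash is
       :    sha256( current_hash || microblock_hash ), which accounts for
       :    exactly one hashcnt. */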
2040 0 : uchar data[ 64 ];
2041 0 : fd_memcpy( data, ctx->hash, 32UL );
2042 0 : fd_memcpy( data+32UL, ctx->_microblock_trailer->hash, 32UL );
2043 0 : fd_sha256_hash( data, 64UL, ctx->hash );
2044 :
2045 0 : ctx->hashcnt++;
2046 0 : FD_TEST( ctx->hashcnt>ctx->last_hashcnt );
2047 0 : ulong hashcnt_delta = ctx->hashcnt - ctx->last_hashcnt;
2048 :
2049 : /* The hashing loop above will never leave us exactly one away from
2050 : crossing a tick boundary, so this increment will never cause the
2051 : current tick (or the slot) to change, except in low power mode
2052 : for development, in which case we do need to register the tick
2053 : with the leader bank. We don't need to publish the tick since
2054 : sending the microblock below is the publishing action. */
2055 0 : if( FD_UNLIKELY( !(ctx->hashcnt%ctx->hashcnt_per_slot ) ) ) {
2056 0 : ctx->slot++;
2057 0 : ctx->hashcnt = 0UL;
2058 0 : }
2059 :
2060 0 : ctx->last_slot = ctx->slot;
2061 0 : ctx->last_hashcnt = ctx->hashcnt;
2062 :
2063 0 : ctx->cus_used += cus_used;
2064 :
2065 0 : if( FD_UNLIKELY( !(ctx->hashcnt%ctx->hashcnt_per_tick ) ) ) {
2066 0 : fd_ext_poh_register_tick( ctx->current_leader_bank, ctx->hash );
2067 0 : if( FD_UNLIKELY( ctx->slot>ctx->next_leader_slot ) ) {
2068 : /* We ticked while leader and are no longer leader... transition
2069 : the state machine. */
2070 0 : publish_plugin_slot_end( ctx, ctx->next_leader_slot, ctx->cus_used );
2071 :
2072 0 : no_longer_leader( ctx );
2073 :
2074 0 : if( FD_UNLIKELY( ctx->slot>=ctx->next_leader_slot ) ) {
2075 : /* We finished a leader slot, and are immediately leader for the
2076 : following slot... transition. */
2077 0 : publish_plugin_slot_start( ctx, ctx->next_leader_slot, ctx->next_leader_slot-1UL );
2078 0 : }
2079 0 : }
2080 0 : }
2081 :
2082 0 : publish_microblock( ctx, stem, target_slot, hashcnt_delta, txn_cnt );
2083 0 : }
2084 :
2085 : static void
2086 : privileged_init( fd_topo_t * topo,
2087 0 : fd_topo_tile_t * tile ) {
2088 0 : void * scratch = fd_topo_obj_laddr( topo, tile->tile_obj_id );
2089 :
2090 0 : FD_SCRATCH_ALLOC_INIT( l, scratch );
2091 0 : fd_poh_ctx_t * ctx = FD_SCRATCH_ALLOC_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
2092 :
2093 0 : if( FD_UNLIKELY( !strcmp( tile->poh.identity_key_path, "" ) ) )
2094 0 : FD_LOG_ERR(( "identity_key_path not set" ));
2095 :
2096 0 : const uchar * identity_key = fd_keyload_load( tile->poh.identity_key_path, /* pubkey only: */ 1 );
2097 0 : fd_memcpy( ctx->identity_key.uc, identity_key, 32UL );
2098 :
2099 0 : if( FD_UNLIKELY( !tile->poh.bundle.vote_account_path[0] ) ) {
2100 0 : tile->poh.bundle.enabled = 0;
2101 0 : }
2102 0 : if( FD_UNLIKELY( tile->poh.bundle.enabled ) ) {
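       : /* The vote account appears to be accepted either directly as a
       :    base58-encoded pubkey or as a path to a key file: try to decode
       :    the string itself first and only fall back to loading the file if
       :    that fails. */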
2103 0 : if( FD_UNLIKELY( !fd_base58_decode_32( tile->poh.bundle.vote_account_path, ctx->bundle.vote_account.uc ) ) ) {
2104 0 : const uchar * vote_key = fd_keyload_load( tile->poh.bundle.vote_account_path, /* pubkey only: */ 1 );
2105 0 : fd_memcpy( ctx->bundle.vote_account.uc, vote_key, 32UL );
2106 0 : }
2107 0 : }
2108 0 : }
2109 :
2110 : /* The Agave client needs to communicate to the shred tile what
2111 : the shred version is on boot, but shred tile does not live in the
2112 : same address space, so have the PoH tile pass the value through
2113 : via a shared memory ulong. */
2114 :
2115 : static volatile ulong * fd_shred_version;
2116 :
2117 : void
2118 0 : fd_ext_shred_set_shred_version( ulong shred_version ) {
2119 0 : while( FD_UNLIKELY( !fd_shred_version ) ) FD_SPIN_PAUSE();
2120 0 : *fd_shred_version = shred_version;
2121 0 : }
2122 :
2123 : void
2124 : fd_ext_poh_publish_gossip_vote( uchar * data,
2125 : ulong data_len,
2126 : uint source_ipv4,
2127 0 : uchar * pubkey ) {
2128 0 : (void)pubkey;
2129 0 : uchar txn_with_header[ FD_TPU_RAW_MTU ];
2130 0 : fd_txn_m_t * txnm = (fd_txn_m_t *)txn_with_header;
2131 0 : *txnm = (fd_txn_m_t) { 0UL };
2132 0 : txnm->payload_sz = (ushort)data_len;
2133 0 : txnm->source_ipv4 = source_ipv4;
2134 0 : txnm->source_tpu = FD_TXN_M_TPU_SOURCE_GOSSIP;
2135 0 : fd_memcpy(txn_with_header + sizeof(fd_txn_m_t), data, data_len);
2136 0 : poh_link_publish( &gossip_dedup, 1UL, txn_with_header, fd_txn_m_realized_footprint( txnm, 0, 0 ) );
2137 0 : }
2138 :
2139 : void
2140 : fd_ext_poh_publish_leader_schedule( uchar * data,
2141 0 : ulong data_len ) {
2142 0 : poh_link_publish( &stake_out, 2UL, data, data_len );
2143 0 : }
2144 :
2145 : void
2146 : fd_ext_poh_publish_cluster_info( uchar * data,
2147 0 : ulong data_len ) {
2148 0 : poh_link_publish( &crds_shred, 2UL, data, data_len );
2149 0 : }
2150 :
2151 : void
2152 0 : fd_ext_poh_publish_executed_txn( uchar const * data ) {
2153 0 : static int lock = 0;
2154 :
2155 : /* Need to lock since the link publisher is not concurrent, and replay
2156 : happens on a thread pool. */
2157 0 : for(;;) {
2158 0 : if( FD_LIKELY( FD_ATOMIC_CAS( &lock, 0, 1 )==0 ) ) break;
2159 0 : FD_SPIN_PAUSE();
2160 0 : }
2161 :
2162 0 : FD_COMPILER_MFENCE();
2163 0 : poh_link_publish( &executed_txn, 0UL, data, 64UL );
2164 0 : FD_COMPILER_MFENCE();
2165 :
2166 0 : FD_VOLATILE(lock) = 0;
2167 0 : }
2168 :
2169 : void
2170 : fd_ext_plugin_publish_replay_stage( ulong sig,
2171 : uchar * data,
2172 0 : ulong data_len ) {
2173 0 : poh_link_publish( &replay_plugin, sig, data, data_len );
2174 0 : }
2175 :
2176 : void
2177 : fd_ext_plugin_publish_genesis_hash( ulong sig,
2178 : uchar * data,
2179 0 : ulong data_len ) {
2180 0 : poh_link_publish( &replay_plugin, sig, data, data_len );
2181 0 : }
2182 :
2183 : void
2184 : fd_ext_plugin_publish_start_progress( ulong sig,
2185 : uchar * data,
2186 0 : ulong data_len ) {
2187 0 : poh_link_publish( &start_progress_plugin, sig, data, data_len );
2188 0 : }
2189 :
2190 : void
2191 : fd_ext_plugin_publish_vote_listener( ulong sig,
2192 : uchar * data,
2193 0 : ulong data_len ) {
2194 0 : poh_link_publish( &vote_listener_plugin, sig, data, data_len );
2195 0 : }
2196 :
2197 : void
2198 : fd_ext_plugin_publish_validator_info( ulong sig,
2199 : uchar * data,
2200 0 : ulong data_len ) {
2201 0 : poh_link_publish( &validator_info_plugin, sig, data, data_len );
2202 0 : }
2203 :
2204 : void
2205 : fd_ext_plugin_publish_periodic( ulong sig,
2206 : uchar * data,
2207 0 : ulong data_len ) {
2208 0 : poh_link_publish( &gossip_plugin, sig, data, data_len );
2209 0 : }
2210 :
2211 : void
2212 : fd_ext_resolv_publish_root_bank( uchar * data,
2213 0 : ulong data_len ) {
2214 0 : poh_link_publish( &replay_resolv, 0UL, data, data_len );
2215 0 : }
2216 :
2217 : void
2218 : fd_ext_resolv_publish_completed_blockhash( uchar * data,
2219 0 : ulong data_len ) {
2220 0 : poh_link_publish( &replay_resolv, 1UL, data, data_len );
2221 0 : }
2222 :
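       : /* out1 locates the (unique) output link of this tile whose name
       :    matches `name` and returns a publishing context for it: the
       :    workspace base address plus the dcache chunk bounds and initial
       :    chunk. It is a fatal error if the link is missing or appears more
       :    than once. */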
2223 : static inline fd_poh_out_ctx_t
2224 : out1( fd_topo_t const * topo,
2225 : fd_topo_tile_t const * tile,
2226 0 : char const * name ) {
2227 0 : ulong idx = ULONG_MAX;
2228 :
2229 0 : for( ulong i=0UL; i<tile->out_cnt; i++ ) {
2230 0 : fd_topo_link_t const * link = &topo->links[ tile->out_link_id[ i ] ];
2231 0 : if( !strcmp( link->name, name ) ) {
2232 0 : if( FD_UNLIKELY( idx!=ULONG_MAX ) ) FD_LOG_ERR(( "tile %s:%lu had multiple output links named %s but expected one", tile->name, tile->kind_id, name ));
2233 0 : idx = i;
2234 0 : }
2235 0 : }
2236 :
2237 0 : if( FD_UNLIKELY( idx==ULONG_MAX ) ) FD_LOG_ERR(( "tile %s:%lu had no output link named %s", tile->name, tile->kind_id, name ));
2238 :
2239 0 : void * mem = topo->workspaces[ topo->objs[ topo->links[ tile->out_link_id[ idx ] ].dcache_obj_id ].wksp_id ].wksp;
2240 0 : ulong chunk0 = fd_dcache_compact_chunk0( mem, topo->links[ tile->out_link_id[ idx ] ].dcache );
2241 0 : ulong wmark = fd_dcache_compact_wmark ( mem, topo->links[ tile->out_link_id[ idx ] ].dcache, topo->links[ tile->out_link_id[ idx ] ].mtu );
2242 :
2243 0 : return (fd_poh_out_ctx_t){ .idx = idx, .mem = mem, .chunk0 = chunk0, .wmark = wmark, .chunk = chunk0 };
2244 0 : }
2245 :
2246 : static void
2247 : unprivileged_init( fd_topo_t * topo,
2248 0 : fd_topo_tile_t * tile ) {
2249 0 : void * scratch = fd_topo_obj_laddr( topo, tile->tile_obj_id );
2250 :
2251 0 : FD_SCRATCH_ALLOC_INIT( l, scratch );
2252 0 : fd_poh_ctx_t * ctx = FD_SCRATCH_ALLOC_APPEND( l, alignof( fd_poh_ctx_t ), sizeof( fd_poh_ctx_t ) );
2253 0 : void * sha256 = FD_SCRATCH_ALLOC_APPEND( l, FD_SHA256_ALIGN, FD_SHA256_FOOTPRINT );
2254 :
2255 0 : #define NONNULL( x ) (__extension__({ \
2256 0 : __typeof__((x)) __x = (x); \
2257 0 : if( FD_UNLIKELY( !__x ) ) FD_LOG_ERR(( #x " was unexpectedly NULL" )); \
2258 0 : __x; }))
2259 :
2260 0 : ctx->mleaders = NONNULL( fd_multi_epoch_leaders_join( fd_multi_epoch_leaders_new( ctx->mleaders_mem ) ) );
2261 0 : ctx->sha256 = NONNULL( fd_sha256_join( fd_sha256_new( sha256 ) ) );
2262 0 : ctx->current_leader_bank = NULL;
2263 0 : ctx->pack_leader_bank = NULL;
2264 0 : ctx->signal_leader_change = NULL;
2265 :
2266 0 : ctx->shred_seq = ULONG_MAX;
2267 0 : ctx->halted_switching_key = 0;
2268 0 : ctx->keyswitch = fd_keyswitch_join( fd_topo_obj_laddr( topo, tile->keyswitch_obj_id ) );
2269 0 : FD_TEST( ctx->keyswitch );
2270 :
2271 0 : ctx->slot = 0UL;
2272 0 : ctx->hashcnt = 0UL;
2273 0 : ctx->last_hashcnt = 0UL;
2274 0 : ctx->highwater_leader_slot = ULONG_MAX;
2275 0 : ctx->next_leader_slot = ULONG_MAX;
2276 0 : ctx->reset_slot = ULONG_MAX;
2277 :
2278 0 : ctx->lagged_consecutive_leader_start = tile->poh.lagged_consecutive_leader_start;
2279 0 : ctx->expect_sequential_leader_slot = ULONG_MAX;
2280 :
2281 0 : ctx->slot_done = 1;
2282 0 : ctx->expect_pack_idx = 0U;
2283 0 : ctx->microblocks_lower_bound = 0UL;
2284 :
2285 0 : ctx->max_active_descendant = 0UL;
2286 :
2287 0 : if( FD_UNLIKELY( tile->poh.bundle.enabled ) ) {
2288 0 : ctx->bundle.enabled = 1;
2289 0 : NONNULL( fd_bundle_crank_gen_init( ctx->bundle.gen, (fd_acct_addr_t const *)tile->poh.bundle.tip_distribution_program_addr,
2290 0 : (fd_acct_addr_t const *)tile->poh.bundle.tip_payment_program_addr,
2291 0 : (fd_acct_addr_t const *)ctx->bundle.vote_account.uc,
2292 0 : (fd_acct_addr_t const *)ctx->bundle.vote_account.uc, "NAN", 0UL ) ); /* last three arguments are properly bogus */
2293 0 : } else {
2294 0 : ctx->bundle.enabled = 0;
2295 0 : }
2296 :
2297 0 : ulong poh_shred_obj_id = fd_pod_query_ulong( topo->props, "poh_shred", ULONG_MAX );
2298 0 : FD_TEST( poh_shred_obj_id!=ULONG_MAX );
2299 :
2300 0 : fd_shred_version = fd_fseq_join( fd_topo_obj_laddr( topo, poh_shred_obj_id ) );
2301 0 : FD_TEST( fd_shred_version );
2302 :
2303 0 : poh_link_init( &gossip_dedup, topo, tile, out1( topo, tile, "gossip_dedup" ).idx );
2304 0 : poh_link_init( &stake_out, topo, tile, out1( topo, tile, "stake_out" ).idx );
2305 0 : poh_link_init( &crds_shred, topo, tile, out1( topo, tile, "crds_shred" ).idx );
2306 0 : poh_link_init( &replay_resolv, topo, tile, out1( topo, tile, "replay_resol" ).idx );
2307 0 : poh_link_init( &executed_txn, topo, tile, out1( topo, tile, "executed_txn" ).idx );
2308 :
2309 0 : if( FD_LIKELY( tile->poh.plugins_enabled ) ) {
2310 0 : poh_link_init( &replay_plugin, topo, tile, out1( topo, tile, "replay_plugi" ).idx );
2311 0 : poh_link_init( &gossip_plugin, topo, tile, out1( topo, tile, "gossip_plugi" ).idx );
2312 0 : poh_link_init( &start_progress_plugin, topo, tile, out1( topo, tile, "startp_plugi" ).idx );
2313 0 : poh_link_init( &vote_listener_plugin, topo, tile, out1( topo, tile, "votel_plugin" ).idx );
2314 0 : poh_link_init( &validator_info_plugin, topo, tile, out1( topo, tile, "valcfg_plugi" ).idx );
2315 0 : } else {
2316 : /* Mark these mcaches as "available", so the system boots, but the
2317 : memory is not set, so nothing will actually get published via
2318 : the links. */
2319 0 : FD_COMPILER_MFENCE();
2320 0 : replay_plugin.mcache = (fd_frag_meta_t*)1;
2321 0 : gossip_plugin.mcache = (fd_frag_meta_t*)1;
2322 0 : start_progress_plugin.mcache = (fd_frag_meta_t*)1;
2323 0 : vote_listener_plugin.mcache = (fd_frag_meta_t*)1;
2324 0 : validator_info_plugin.mcache = (fd_frag_meta_t*)1;
2325 0 : FD_COMPILER_MFENCE();
2326 0 : }
2327 :
2328 0 : FD_LOG_INFO(( "PoH waiting to be initialized by Agave client... %lu %lu", fd_poh_waiting_lock, fd_poh_returned_lock ));
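       : /* What follows appears to be the boot handshake with the Agave
       :    client: publish our ctx pointer, spin until the client raises
       :    fd_poh_waiting_lock, grant it exclusive access by raising
       :    fd_poh_returned_lock, then spin until the client drops that lock
       :    again, at which point ctx should have been initialized (reset_slot
       :    is checked below as evidence of that). */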
2329 0 : FD_VOLATILE( fd_poh_global_ctx ) = ctx;
2330 0 : FD_COMPILER_MFENCE();
2331 0 : for(;;) {
2332 0 : if( FD_LIKELY( FD_VOLATILE_CONST( fd_poh_waiting_lock ) ) ) break;
2333 0 : FD_SPIN_PAUSE();
2334 0 : }
2335 0 : FD_VOLATILE( fd_poh_waiting_lock ) = 0UL;
2336 0 : FD_VOLATILE( fd_poh_returned_lock ) = 1UL;
2337 0 : FD_COMPILER_MFENCE();
2338 0 : for(;;) {
2339 0 : if( FD_UNLIKELY( !FD_VOLATILE_CONST( fd_poh_returned_lock ) ) ) break;
2340 0 : FD_SPIN_PAUSE();
2341 0 : }
2342 0 : FD_COMPILER_MFENCE();
2343 :
2344 0 : if( FD_UNLIKELY( ctx->reset_slot==ULONG_MAX ) ) FD_LOG_ERR(( "PoH was not initialized by Agave client" ));
2345 :
2346 0 : fd_histf_join( fd_histf_new( ctx->begin_leader_delay, FD_MHIST_SECONDS_MIN( POH, BEGIN_LEADER_DELAY_SECONDS ),
2347 0 : FD_MHIST_SECONDS_MAX( POH, BEGIN_LEADER_DELAY_SECONDS ) ) );
2348 0 : fd_histf_join( fd_histf_new( ctx->first_microblock_delay, FD_MHIST_SECONDS_MIN( POH, FIRST_MICROBLOCK_DELAY_SECONDS ),
2349 0 : FD_MHIST_SECONDS_MAX( POH, FIRST_MICROBLOCK_DELAY_SECONDS ) ) );
2350 0 : fd_histf_join( fd_histf_new( ctx->slot_done_delay, FD_MHIST_SECONDS_MIN( POH, SLOT_DONE_DELAY_SECONDS ),
2351 0 : FD_MHIST_SECONDS_MAX( POH, SLOT_DONE_DELAY_SECONDS ) ) );
2352 :
2353 0 : fd_histf_join( fd_histf_new( ctx->bundle_init_delay, FD_MHIST_SECONDS_MIN( POH, BUNDLE_INITIALIZE_DELAY_SECONDS ),
2354 0 : FD_MHIST_SECONDS_MAX( POH, BUNDLE_INITIALIZE_DELAY_SECONDS ) ) );
2355 :
2356 0 : for( ulong i=0UL; i<tile->in_cnt; i++ ) {
2357 0 : fd_topo_link_t * link = &topo->links[ tile->in_link_id[ i ] ];
2358 0 : fd_topo_wksp_t * link_wksp = &topo->workspaces[ topo->objs[ link->dcache_obj_id ].wksp_id ];
2359 :
2360 0 : ctx->in[ i ].mem = link_wksp->wksp;
2361 0 : ctx->in[ i ].chunk0 = fd_dcache_compact_chunk0( ctx->in[ i ].mem, link->dcache );
2362 0 : ctx->in[ i ].wmark = fd_dcache_compact_wmark ( ctx->in[ i ].mem, link->dcache, link->mtu );
2363 :
2364 0 : if( !strcmp( link->name, "stake_out" ) ) {
2365 0 : ctx->in_kind[ i ] = IN_KIND_STAKE;
2366 0 : } else if( !strcmp( link->name, "pack_poh" ) ) {
2367 0 : ctx->in_kind[ i ] = IN_KIND_PACK;
2368 0 : } else if( !strcmp( link->name, "bank_poh" ) ) {
2369 0 : ctx->in_kind[ i ] = IN_KIND_BANK;
2370 0 : } else {
2371 0 : FD_LOG_ERR(( "unexpected input link name %s", link->name ));
2372 0 : }
2373 0 : }
2374 :
2375 0 : *ctx->shred_out = out1( topo, tile, "poh_shred" );
2376 0 : *ctx->pack_out = out1( topo, tile, "poh_pack" );
2377 0 : ctx->plugin_out->mem = NULL;
2378 0 : if( FD_LIKELY( tile->poh.plugins_enabled ) ) {
2379 0 : *ctx->plugin_out = out1( topo, tile, "poh_plugin" );
2380 0 : }
2381 :
2382 0 : ctx->features_activation_avail = 0UL;
2383 0 : for( ulong i=0UL; i<FD_SHRED_FEATURES_ACTIVATION_SLOT_CNT; i++ )
2384 0 : ctx->features_activation->slots[i] = FD_SHRED_FEATURES_ACTIVATION_SLOT_DISABLED;
2385 :
2386 0 : ulong scratch_top = FD_SCRATCH_ALLOC_FINI( l, 1UL );
2387 0 : if( FD_UNLIKELY( scratch_top > (ulong)scratch + scratch_footprint( tile ) ) )
2388 0 : FD_LOG_ERR(( "scratch overflow %lu %lu %lu", scratch_top - (ulong)scratch - scratch_footprint( tile ), scratch_top, (ulong)scratch + scratch_footprint( tile ) ));
2389 0 : }
2390 :
2391 : /* One tick, one microblock, one plugin slot end, one plugin slot start,
2392 : one leader update, and one features activation. */
2393 0 : #define STEM_BURST (6UL)
2394 :
2395 : /* See explanation in fd_pack */
2396 0 : #define STEM_LAZY (128L*3000L)
2397 :
2398 0 : #define STEM_CALLBACK_CONTEXT_TYPE fd_poh_ctx_t
2399 0 : #define STEM_CALLBACK_CONTEXT_ALIGN alignof(fd_poh_ctx_t)
2400 :
2401 0 : #define STEM_CALLBACK_DURING_HOUSEKEEPING during_housekeeping
2402 0 : #define STEM_CALLBACK_METRICS_WRITE metrics_write
2403 0 : #define STEM_CALLBACK_AFTER_CREDIT after_credit
2404 0 : #define STEM_CALLBACK_BEFORE_FRAG before_frag
2405 0 : #define STEM_CALLBACK_DURING_FRAG during_frag
2406 0 : #define STEM_CALLBACK_AFTER_FRAG after_frag
2407 :
2408 : #include "../../disco/stem/fd_stem.c"
2409 :
2410 : fd_topo_run_tile_t fd_tile_poh = {
2411 : .name = "poh",
2412 : .populate_allowed_seccomp = NULL,
2413 : .populate_allowed_fds = NULL,
2414 : .scratch_align = scratch_align,
2415 : .scratch_footprint = scratch_footprint,
2416 : .privileged_init = privileged_init,
2417 : .unprivileged_init = unprivileged_init,
2418 : .run = stem_run,
2419 : };