Transaction

121d0e4c274666955e5376ebfebf27881cc77e9276d5b7e8dff78774581913a4
Timestamp (UTC)
2024-03-24 06:05:28
Fee Paid
0.00000028 BSV (0.01750353 BSV - 0.01750325 BSV)
Fee Rate
10.24 sat/KB
Version
1
Confirmations
100,760
Size Stats
2,732 B
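The fee and fee-rate figures above follow from the input/output totals and the transaction size. A minimal sketch checking the arithmetic, assuming the explorer's sat/KB figure uses 1 KB = 1000 bytes and truncates rather than rounds:

```python
# Verify the displayed fee and fee rate from the summary values above.
# The 1 KB = 1000 bytes convention and truncation behavior are assumptions
# about how this explorer formats its output.

SATS_PER_BSV = 100_000_000

total_in_bsv = 0.01750353   # sum of inputs
total_out_bsv = 0.01750325  # sum of outputs
size_bytes = 2_732          # serialized transaction size

fee_sats = round((total_in_bsv - total_out_bsv) * SATS_PER_BSV)
fee_rate_sat_per_kb = fee_sats / size_bytes * 1000

print(fee_sats)              # 28, i.e. 0.00000028 BSV
print(fee_rate_sat_per_kb)   # ~10.249 sat/KB; shown truncated as 10.24
```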

2 Outputs

Total Output:
0.01750325 BSV
  • 1LAnZuoQdcKCkpDBKQMCgziGMoPC4VQUckM (OP_RETURN data, text/html)

    Quote from: gavinandresen on August 11, 2010, 04:10:56 PM
    (https://bitcointalk.org/index.php?topic=788.msg8761#msg8761)

    > + have clients tell each other how many transactions per unit of time they're willing to accept. If a client sends you more (within some fuzz factor), drop it. Compile in a default that's based on the estimated number of transactions for a typical user and an estimate of the number of current users.
    >
    > + require some proof-of-work as part of the client-to-client connection process (helps prevent 'Sybil' attacks).

    I agree that eventually the latter will have to be done. It's for the reasons you pointed out that my DHT solution has flaws. Curiously, it's all a side effect of not being able to implement the former constraint.

    If you allow validating nodes to arbitrarily ignore transactions, you risk breaking the key requirement that all validating nodes receive and record all transactions. The current presumption is that all validators try to receive and record all transactions. If a transaction is non-uniformly delayed and missed by the node that completes the block, it is presumed that, statistically, the transaction would be recorded in a subsequent block. However, that requires continually rebroadcasting the transaction to ensure it gets through.

    Let's say throughput was tight and there was an advantage to a node saying, "I'm only willing to take 5 transactions per 10-block period." In that case it still generates blocks that can't be rejected by others, but an increasing number of unrecorded transactions backlogs with each minimal block. This causes additional retransmissions, exacerbating the bandwidth problem.

    In effect, you rely on unrestricted nodes to compensate for a problem caused by restricted nodes. So if the restricted nodes are causing problems and doing less validation and recording work than other nodes, why should they be rewarded equally for generating blocks? That seems counterproductive.

    It would be better to say, "Record all transactions or you can't be a validator!" Fewer validators means less bandwidth usage overall. It also becomes easier to spot abusers.

    ----ps----

    A zero-knowledge proof-of-completeness would be for competing validators to reject a proof-of-work block if it didn't contain 99% of the known outstanding transactions.
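The postscript's completeness rule can be sketched as a simple threshold check. This is a hypothetical illustration only: it shows the 99% acceptance criterion the post describes, not the zero-knowledge construction it alludes to, and it is not an actual Bitcoin/BSV consensus rule. All names are invented for the example.

```python
# Sketch of the proposed rule: a validator rejects a competing
# proof-of-work block unless it includes at least 99% of the
# transactions that validator currently knows to be outstanding.
# Illustrative only; not part of any deployed protocol.

COMPLETENESS_THRESHOLD = 0.99

def block_is_complete(block_txids: set, mempool_txids: set) -> bool:
    """Accept the block only if it records enough of our known backlog."""
    if not mempool_txids:  # nothing outstanding: trivially complete
        return True
    recorded = len(block_txids & mempool_txids)
    return recorded / len(mempool_txids) >= COMPLETENESS_THRESHOLD

# A block that omits 2 of 100 known transactions (98%) is rejected;
# a block containing all of them is accepted.
mempool = {f"tx{i}" for i in range(100)}
partial_block = mempool - {"tx0", "tx1"}
print(block_is_complete(partial_block, mempool))  # False
print(block_is_complete(mempool, mempool))        # True
```

Note that each validator judges completeness against its own mempool, so nodes with different views of the backlog could disagree about the same block; the post does not address that, and a real design would have to.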
    https://whatsonchain.com/tx/121d0e4c274666955e5376ebfebf27881cc77e9276d5b7e8dff78774581913a4