Transaction

90703c2ec1205aa5ee9f2de632ebf23c3db969fa7996147d7861ba4d67678ccb
Timestamp (UTC)
2024-03-23 00:11:05
Fee Paid
0.00000016 BSV (0.03819790 BSV - 0.03819774 BSV)
Fee Rate
10.24 sat/KB
Version
1
Confirmations
102,284
Size Stats
1,561 B
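The fee-rate figure follows directly from the fee and the transaction size. A quick check, assuming "sat/KB" means satoshis per 1,000 bytes (the displayed 10.24 matches the exact value truncated, not rounded, to two decimals):

```python
# Recompute the fee rate shown above from the raw figures.
fee_sats = 16        # 0.00000016 BSV expressed in satoshis
size_bytes = 1561    # transaction size from the Size Stats field

rate = fee_sats / size_bytes * 1000   # satoshis per kilobyte
print(f"{rate:.4f} sat/KB")           # ~10.2498; truncated to 2 dp -> 10.24
```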

2 Outputs

Total Output:
0.03819774 BSV
  • 1LAnZuoQdcKCkpDBKQMCgziGMoPC4VQUckM — embedded data (text/html):

    Maybe someone with a little background in this statistics/math stuff can shed some light on this...

    The way this thing works is it takes a (basically random) block of data and alters a 32-bit field inside it by starting at 1 and incrementing. The block of data also contains a timestamp, and that's incremented occasionally just to keep mixing it up (but the incrementing field isn't restarted when the timestamp is updated). If you get a new block from the network you sort of end up having to start over with the incrementing field at 1 again; however, all the other data changed too, so it's not the same thing you're hashing anyway.

    The way I understand it, since the data being hashed is pretty much random and because the hashing algorithm exhibits the 'avalanche effect', it probably doesn't matter if you keep starting at 1 and incrementing or if you use pseudo-random values instead, but I was wondering if anyone could support this or disprove it.

    Can you increase your likelihood of finding a low numerical-value hash by doing something other than just sequentially incrementing that piece of data in the input? Or is this equivalent to trying to increase your chances of rolling a 6 (with dice) by using your other hand?
    https://whatsonchain.com/tx/90703c2ec1205aa5ee9f2de632ebf23c3db969fa7996147d7861ba4d67678ccb
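The post's intuition can be checked empirically: with a hash function whose output is effectively uniform, each nonce attempt is an independent draw, so sequential and pseudo-random nonce search find low-value hashes at the same rate. A minimal sketch using double SHA-256 as in Bitcoin mining (the header bytes and target here are made up for illustration; a real block header is 80 bytes with the nonce in the last 4):

```python
import hashlib
import random

def below_target(header: bytes, nonce: int, target: int) -> bool:
    """Double SHA-256 of header||nonce, compared to the target as a big integer."""
    payload = header + nonce.to_bytes(4, "little")
    digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
    return int.from_bytes(digest, "big") < target

header = b"stand-in for the fixed header bytes"  # hypothetical payload
target = 2 ** 249     # deliberately easy target: hit rate 1/128 per attempt
budget = 100_000      # same number of attempts for both strategies

# Strategy 1: increment the nonce sequentially, starting from 1 (as the post describes).
seq_hits = sum(below_target(header, n, target) for n in range(1, budget + 1))

# Strategy 2: draw pseudo-random 32-bit nonces instead.
rng = random.Random(0)
rand_hits = sum(below_target(header, rng.getrandbits(32), target) for _ in range(budget))

print(seq_hits, rand_hits)  # both hover around budget/128, i.e. roughly 781
```

Both counts land within ordinary sampling noise of the expected 781, supporting the "other hand" analogy: no nonce-selection order improves the odds, because SHA-256's avalanche effect makes each attempt an independent 1-in-(2^256/target) trial.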