0 sats \ 0 replies \ @027c352e45 18 Mar \ on: I'm losing my faith (II) bitcoin
This is not a religion, so I suggest you be happy in losing your faith.
what China is like for an expat to live in right now.
I left there 10-12 years ago; it was already getting oppressive then, and it's gotten a hundred times worse since. Don't try to go and live in China as a foreigner.
The problem-behind-the-problem here is: can consensus be defined by a written spec? This point is surprisingly controversial. Compare with internet protocols like TLS: they have a written spec to which all clients adhere. Even there it's not cut and dried; there are lots of minor details and extensions where a server-client mismatch is at least possible and connections get dropped. But here's the difference: bitcoin's consensus requires byte-for-byte, bug-for-bug, 100% compatibility between every peer. It is not a question of pairs of nodes agreeing temporarily; it is a question of every node, everywhere and always, agreeing. It's possible that neither written specs, no matter how thorough, nor well-written code can meet that standard. A perfect practical example is the 2013 chain split, which was not caused by any of the C++ consensus code per se, but by unforeseen behaviour in the database software used: not an intentional but an unintentional consensus dependency.
ETH chose from the beginning to base things on a written spec, and to my mind this was not wrong, but I am pessimistic that it actually changed anything. If something genuinely open to interpretation happens, you're arguably better off having one overwhelmingly dominant client, as it acts as a de facto tiebreaker on the debatable point. The counterargument: if the dominant client has a consensus break that's clearly just an error/bug, it's nice to have another client everyone can switch over to in an emergency.
10 sats \ 0 replies \ @027c352e45 2 Mar \ parent \ on: 27 Reading Tips from Nassim Taleb BooksAndArticles
I don't think that captures it. He's pompous and conceited to the point of it making him an idiot sometimes. But he is an intellectual. His own term applies well: IYI (Intellectual Yet Idiot).
I don't see anything wrong with that, many of us are anon, at some point we will not be distinguishable from AI (except by being dumber).
Yes I do have a lot of sympathy for the 'build it concretely first, rather than just theorize in abstract papers' point of view. There's wisdom in that. But for some kinds of system, there can be dangers of insufficient analysis of attacks.
Building it first means you get eyes on it. Analysing it theoretically first means you might spot the attacks before they're executed against innocent users.
I disagree.
Very occasionally, the authors of these papers will write code themselves in support of their proposed algorithm, which I think is fantastic.
The issue is that cryptography is, fundamentally, mathematics, so it should use mathematical formalism to be unambiguous.
That doesn't mean mathematical formalism is a panacea; the field is rife with slight ambiguities in papers leading to terrible mistakes. For one example, look up the 'Frozen Heart' vulnerability.
But replacing mathematical notation with pseudocode will not make that better.
Adding pseudocode where it's relevant, otoh, yeah, that can be a great idea. But it makes more sense one layer up, when you write protocol specs. For example, there was a tradition of using pseudocode in RFCs for internet protocols, e.g. TLS 1.0, RFC 2246.
A witness can be thought of as basically a piece of data that gives a concrete proof for a claim you're making. Like imagine I claim '91 is not prime'. The witness could be (13, 7), because anyone can verify that those are the factors. Note that for some problems there is more than one solution, so multiple witnesses might be valid.
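To make the factoring example concrete, here's a tiny sketch of a witness verifier (the function name and checks are mine, just for illustration): the prover supplies a candidate factorization, and anyone can cheaply verify it without doing any primality testing themselves.

```python
def verify_not_prime_witness(n, witness):
    """Verify a witness (a, b) for the claim 'n is not prime'.

    The witness is valid if a and b are non-trivial factors of n.
    Verification is just one multiplication and a few comparisons,
    regardless of how hard it was to find the factors.
    """
    a, b = witness
    return 1 < a < n and 1 < b < n and a * b == n

# (13, 7) proves that 91 is composite:
print(verify_not_prime_witness(91, (13, 7)))   # True
# Multiple witnesses can be valid for the same claim:
print(verify_not_prime_witness(91, (7, 13)))   # True
# A wrong factorization is rejected:
print(verify_not_prime_witness(91, (9, 10)))   # False
```

The asymmetry here (hard to find, trivial to check) is the essence of a witness, and it's the same shape as a signature authorizing a utxo spend below.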
It's not hard to see that 'I have control of this utxo' and a signature (or more generally a script) fit that same template. And with ECDSA/Schnorr digital signatures, more than one signature can be valid for the same message and key.
About the segregation: this is all about how you uniquely identify transactions. A fully valid transaction includes a list of inputs, a list of outputs, and thirdly a 'witness' (as above) proving that the spending of each input is authorised by its owner.
If you hash all three of those and use that as the txid, you lose uniqueness: by changing the witness (using a different, but still valid, signature) you change the hash, yet the transaction remains valid and accepted by the network.
So instead, segwit removes that third part, the witness, from the hashing. The txid only refers to the inputs and outputs, not the witness.
This means that the txid fixes the actual financial transfer, but is not affected by how it is authorized. This is how it should be.
Finally, three practical points: 1/ the witnesses are stored in a separate part of each transaction, not in a separate part of the block, as some people erroneously say (how would that even work!?), 2/ in ECDSA it is possible for another person, not the key holder, to create a different signature, so txids depending on it really can be a disaster (see 'malleability'), 3/ lightning actually really depends on removing this malleability, it wouldn't work otherwise.
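The txid/witness separation above can be sketched in a few lines. This is a deliberately toy serialization (Python `repr` instead of Bitcoin's real wire format, and the function names are mine), but it shows the key property: swapping one valid signature for another changes the witness-inclusive hash (the wtxid) while leaving the txid untouched.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Double SHA-256, as Bitcoin uses for transaction hashing."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def toy_txid(inputs, outputs) -> str:
    # The txid commits only to inputs and outputs; the witness
    # is excluded, so it is immune to signature malleability.
    return dsha256(repr((inputs, outputs)).encode()).hex()

def toy_wtxid(inputs, outputs, witness) -> str:
    # The wtxid additionally commits to the witness data.
    return dsha256(repr((inputs, outputs, witness)).encode()).hex()

inputs = ["prev_txid:0"]
outputs = [("address_1", 50_000)]
sig_a = "signature_variant_a"  # two different but equally
sig_b = "signature_variant_b"  # valid signatures (stand-ins)

# Same financial transfer, different witness: txid is unchanged...
print(toy_txid(inputs, outputs) == toy_txid(inputs, outputs))          # True
# ...but the witness-inclusive hash differs:
print(toy_wtxid(inputs, outputs, [sig_a]) ==
      toy_wtxid(inputs, outputs, [sig_b]))                             # False
```

In real segwit transactions the witness still travels inside the transaction (point 1 above); only the identifier calculation excludes it.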