@optimism \ stacking since: #879734 \ longest cowboy streak: 40
13 sats \ 1 reply \ @optimism 37m \ on: How the Next Financial Crisis Might Happen (The Economist) econ
Your block clock. No more panels to repurpose.
I dug beneath the surface a bit and see the ECB talking about offline p2p payments and online payments, and apparently those are very different. The former sounds like bearer money, the latter more like a traditional payment network replacement with the central bank as the scheme.
I wonder how they do double-spend protection offline! They've obviously taken some bearer-money ideas from Bitcoin (but not the implementation), and in the online path I see tokenization (replacing PII with a number) taking a form close to what PCI DSS requires, with an additional ApplePay-like re-tokenization between the intermediary bank and the central platform. Offline privacy is by design, online privacy by legal mandate.
Speculation:
Right now I suspect that the offline bearer version - if that even makes it through the political round - would have some form of expiry date before which you'd need to deposit. Any cryptographic keys that provide authenticity need a tightly managed life cycle, so you will need an offramp, through an intermediary that can convert old bearer notes into new ones. I don't see how else they can protect against security incidents and, ultimately, counterfeiting. Of course a bank app will automate this.
If they get to keep the bearer function, the on- and off-ramps are where the friction will be, just like with exchanges right now. But if there is expiry, that is going to be worse than bitcoin or stablecoins through exchanges, because you'll have a deadline by which to subject yourself to the KYC/AML probes. Between the deadlines you'll have more sovereignty than with Tether in its current form, assuming you can't be frozen out of bearer money, but definitely not more than with on-chain bitcoin. A rough sketch of the expiry mechanic I'm imagining follows below.
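To make the speculation concrete, here's a minimal sketch of how such an expiring bearer note could work. Everything in it is my own guess - the note structure, the epoch-keyed tags, the swap-at-the-intermediary step - none of it comes from any ECB document, and a real design would use asymmetric signatures in secure hardware rather than a shared HMAC key:

```python
# Purely illustrative sketch of an *expiring* bearer note (speculation, not ECB design).
import hmac, hashlib, secrets
from dataclasses import dataclass

# Central-bank signing key per key epoch (stand-in; a real system would keep
# asymmetric keys in HSMs, not shared secrets).
EPOCH_KEYS = {1: secrets.token_bytes(32), 2: secrets.token_bytes(32)}
CURRENT_EPOCH = 2

@dataclass(frozen=True)
class BearerNote:
    serial: str
    value_cents: int
    key_epoch: int   # which epoch key vouches for this note
    tag: bytes       # authenticity tag over (serial, value, epoch)

def _tag(serial: str, value_cents: int, epoch: int) -> bytes:
    msg = f"{serial}|{value_cents}|{epoch}".encode()
    return hmac.new(EPOCH_KEYS[epoch], msg, hashlib.sha256).digest()

def mint(value_cents: int) -> BearerNote:
    serial = secrets.token_hex(16)
    return BearerNote(serial, value_cents, CURRENT_EPOCH,
                      _tag(serial, value_cents, CURRENT_EPOCH))

def verify_offline(note: BearerNote) -> bool:
    """Offline acceptance: tag must verify AND the epoch must still be current."""
    if note.key_epoch != CURRENT_EPOCH:   # expired epoch -> no longer spendable p2p
        return False
    return hmac.compare_digest(note.tag, _tag(note.serial, note.value_cents, note.key_epoch))

def swap_at_intermediary(old: BearerNote) -> BearerNote:
    """The on/offramp (and KYC/AML chokepoint): old note in, fresh-epoch note out."""
    assert hmac.compare_digest(old.tag, _tag(old.serial, old.value_cents, old.key_epoch))
    return mint(old.value_cents)          # old serial would be retired server-side
```

The only point of the sketch is the shape: once an epoch key is retired, a note stops being spendable peer-to-peer and the only way forward runs through the intermediary, which is exactly where the friction (and the KYC/AML probe) sits.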
Conclusion:
More than half a billion people will be made to believe that CBDC > Bitcoin. Perhaps bitcoiners can provide some counter pressure to that.
I was going to go with greed, but instead I'm going to do echo chambers.
In my work (I'm a FOSS dev) I deal with a lot of people who "don't go out much", not even on the internet. They are stuck with their group of like-minded people (often Discord or TG, but also X) and feel averse to anything that doesn't carry the confirmation bias of the groupthink. Algos on social media reinforce this and are anti-social by design, because a reinforced thought pattern, even if it's kinda insane, binds someone to the app for the extra dopamine of "being right".
The solution is simple and at the same time comes with massive barriers: go mix with other people, other communities. Don't go shitcoining of course; find something else. There are forums for everything. Don't be afraid to be wrong and you'll get so much stronger.
All good, just know that I'm (normally) not really sensitive to these things and write them off as bad moments, but this one was a "wait, what's happening?" trigger for me. So either more sensitive people will experience it like that, or I'm a moron.
Let's hope for the latter lol
EDIT: apparently I didn't get the joke
Re: SNL. Wait wat!?! Did you guys really promote tipping based on race there? Dayummm....
I'm human race. All my friends are human race. But I'd tip a cat, a shark or a Martian if they'd do something I appreciate. Did it the other day when this cat was nice to the better half, with treats.
Gratuity is universal; it has nothing to do with what anyone's ancestors did centuries ago. I'm all for making up for the errors of the past, but are you really judging whether people deserve to be thanked based on the color of their skin?
w
o
w
I must be old.
I'm sure that when the Germans start borrowing the real moneys to let Rheinmetall build tanks (instead of manufacturing cheap drones) in defense against the Russians, and then 5x more to actually get the manufacturing finished after the inevitable string of fuckups, there will be plenty of debt to buy.
I've been looking into something like that: I envision a work queue where:
- I spin up AWS Inf2 instances on demand that I pack with a large 100B+ instruct-tuned reasoning LLM, or maybe a LAM (but I'd have to learn more about that first). These do decomposition, review and maybe even prompt tuning?
- Local M4 box(es) then run smaller models like devstral or codellama for actual operations.
A rough sketch of that queue shape is below.
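Roughly the shape I have in mind, in runnable but placeholder Python - the function bodies stand in for the remote and local model calls, nothing here is something I've built yet:

```python
# Hypothetical two-tier work queue: a big remote model decomposes, small local
# models execute. Model names, endpoints and the queue itself are placeholders.
import queue
from dataclasses import dataclass, field

@dataclass
class Subtask:
    description: str
    depends_on: list[int] = field(default_factory=list)

def decompose(goal: str) -> list[Subtask]:
    """Would call the large remote model (e.g. on an Inf2 instance) to split
    the goal into small, independently executable steps."""
    return [Subtask(f"step {i} of: {goal}") for i in range(3)]   # placeholder

def execute(sub: Subtask) -> str:
    """Would call a small local model (devstral, codellama, ...) on the M4 box."""
    return f"done: {sub.description}"                            # placeholder

def run(goal: str) -> list[str]:
    work: queue.Queue[Subtask] = queue.Queue()
    for sub in decompose(goal):        # big model plans...
        work.put(sub)
    results = []
    while not work.empty():            # ...small local models grind through the queue
        results.append(execute(work.get()))
    return results

if __name__ == "__main__":
    print(run("add a --json flag to the CLI"))
```

The real versions of decompose and execute would be the remote Inf2 call and the local inference call respectively; the rest is just queue plumbing.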
Even if that money's locked up as bad as Elon's Tesla options... you can still take out tons of loans. So there aren't any excuses in the short term.
The thing is this: US life expectancy plateaued in 2012 (I read that in a Politico interview today and fact-checked it). Which is funny, because the other day I observed that industrial output plateaued in 2012 too while labor costs keep rising (#978413); I suspect both point to a lack of innovation.
So this could mean that researchers are truly assmilking public moneys for dumb research no one wants, needs or uses. I think that could be a valid hypothesis.
Now, if you're Harvard, either you fix the issue and admit you were assmilking, or you disprove the assertion that this was the case and fund it in the interim yourself. Well... that, or you go all spook and try to influence public opinion to give you more money without doing either, which is of course what is happening now.
or Harvard's endowment fund...
Exactly! What else is that fund for than to carry you through tough times while you fix the issue? Every single person I know who got hit by bad policy (no matter who's in the WH, or in the EU... anywhere!) has dipped into savings, if they had to, to sustain "operations" - people with savings ranging from $500 to $50k. But some elite school with at least a million times the savings of the average Joes I know... cannot do this and needs the taxpayer to help them out?
Entitled. Lil. Pricks.
To clarify "instruct it correctly": during task graph decomposition, this is an example of what a small model did locally (it took forever) because of a modeling inefficiency in the JSON output I instructed it to give me:
Now, dependencies: T2, T3, T4, T5, T6 depend on T1. T7 depends on all previous tasks. Parallelizable tasks could be T2, T3, T4, T5, T6 as they are all dependent on T1 but can be done in parallel once T1 is done. However, since they are all after T1, maybe they can't be parallelized until T1 is done. Wait, no. If T1 is done, then T2, T3, T4, T5, T6 can be done in parallel. So parallelizable_tasks would include those. The critical path is T1 followed by T7, but since T2-T6 can be done in parallel, the critical path might be T1, then T7. But the longest path is T1 -> T2 -> T7, T1 -> T3 -> T7, etc. So the critical path is T1, T2, T3, T4, T5, T6, T7. But if T2-T6 can be done in parallel, the total time would be T1 + max(T2, T3, T4, T5, T6) + T7. So the critical path is T1, then T7, but the dependencies are such that T7 depends on all previous tasks. Wait, no. T7 depends on all tasks, but since T2-T6 are parallel, the critical path is T1, then T7. But the actual dependencies are that T7 depends on all tasks, but since T2-T6 are done in parallel, the critical path is T1, then T7. However, the example in the problem shows that the critical path is the sequence of dependent tasks. So in this case, the critical path would be T1, T2, T3, T4, T5, T6, T7, but since T2-T6 can be done in parallel, the critical path is T1, then T7. But the example in the problem shows that the critical path is the sequence of tasks in order. So maybe in this case, the critical path is T1, T2, T3, T4, T5, T6, T7. But with T2-T6 in parallel, the total time would be T1 + (time for T2-T6) + T7. So the critical path is T1, then T7, but the dependencies are that T7 depends on all tasks. Hmm, maybe the critical path is the longest sequence of dependent tasks. So if T7 depends on all previous tasks, then the critical path is T1, T2, T3, T4, T5, T6, T7. But since T2-T6 can be done in parallel, the actual time would be T1 + max(T2, T3, T4, T5, T6) + T7. Therefore, the critical path is T1, then T7, but the dependencies are that T7 depends on all previous tasks. So in the task graph, the critical path would be the sequence from T1 to T7, but with T2-T6 in parallel. The example in the problem's sample output shows that the critical path is the sequence of tasks in order. So perhaps in this case, the critical path is T1, T2, T3, T4, T5, T6, T7. But since T2-T6 can be done in parallel, the actual time is T1 + max(T2, T3, T4, T5, T6) + T7. So the critical path is T1, then T7, but the dependencies are that T7 depends on all previous tasks. Therefore, the critical path is the sequence of tasks in order. So the critical path would be T1, T2, T3, T4, T5, T6, T7. But with T2-T6 in parallel, the actual time is T1 + max(T2, T3, T4, T5, T6) + T7.
Now, the parallelizable tasks are T2, T3, T4, T5, T6, as they can be done in parallel once T1 is done. So the parallelizable_tasks would include those. The critical path is the sequence from T1 to T7, but since T2-T6 are in parallel, the critical path is T1, then T7. But according to the example, the critical path is the sequence of tasks in order. So in the example, the critical path is the sequence of tasks in order. So in this case, the critical path is T1, T2, T3, T4, T5, T6, T7. However, since T2-T6 can be done in parallel, the actual time is T1 + max(T2, T3, T4, T5, T6) + T7. Therefore, the critical path is T1, T7, but the dependencies are that T7 depends on all previous tasks. So the critical path is the sequence of tasks in order. Therefore, the critical path is the sequence from T1 to T7. But the example in the problem shows that the critical path is the sequence of tasks in order, so that's the case here.
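Part of the inefficiency, I suspect, is that my schema asked the model to reason out parallelizable_tasks and the critical path itself. A tighter schema - and this is just a guess at what would have helped, not what I actually ran - would have the model emit only per-task dependencies and leave the graph analysis to ordinary code:

```python
# Hypothetical tighter decomposition schema: the model only lists tasks and their
# dependencies; critical path / parallelism is computed deterministically in code.
import json

model_output = json.loads("""
{"tasks": [
  {"id": "T1", "deps": []},
  {"id": "T2", "deps": ["T1"]}, {"id": "T3", "deps": ["T1"]},
  {"id": "T4", "deps": ["T1"]}, {"id": "T5", "deps": ["T1"]},
  {"id": "T6", "deps": ["T1"]},
  {"id": "T7", "deps": ["T1", "T2", "T3", "T4", "T5", "T6"]}
]}""")

deps = {t["id"]: t["deps"] for t in model_output["tasks"]}

def longest_path(task: str) -> list[str]:
    """Critical path = longest dependency chain ending at this task."""
    if not deps[task]:
        return [task]
    return max((longest_path(d) for d in deps[task]), key=len) + [task]

critical = longest_path("T7")                            # e.g. ['T1', 'T2', 'T7']
parallel = [t for t, d in deps.items() if d == ["T1"]]   # T2..T6 can run side by side
print(critical, parallel)
```

The hope being that the "wait, no" loop above never happens in the model at all; the longest-chain and parallelism questions get answered by code instead.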
Yeah, I saw that a while ago, which was probably why the graph and your mention of the subprime crisis triggered me. They almost doubled MBS at the peak!
I was meaning to dig into what the rationale was, but it's low priority for me and I have a gazillion things to do, so maybe, if I'm ever bored.
I guess a small but good-enough model with lots of unified memory for context could get decently far.
I've been A/B testing large models on venice (because they let me pay one-off with sats) vs small models on a MacBook for agents. Large models are better at task breakdown and code generation, but even qwen3:4b locally can do pretty amazing things if you instruct it correctly. You have to break things down further for smaller models, so you'll be slower, but if you have a 10x better idea that big tech can't steal from you while you're working on it, 10x slower doesn't matter that much: compute sovereignty feels important.
Triggered me to zoom out on the second chart:
Dollars:

Indexed on 2020-01-01:
I guess we're not even dealing with the subprime crisis winding down yet - still dealing with covid printing?
Nice link in there to a Wharton calc of the bill.
Revenue side (- means less tax income, + means more tax income) calc:
Total deficit impact (- means less deficit, + means more deficit), per committee:
Thinking about it, diff must be among the oldest productivity tools I still use every day - for over 30 years now, and I was 20 years late to the party! The paper is nice to read too.