
69 sats \ 1 reply \ @south_korea_ln OP 20h \ parent \ on: prediction markets aren't just gambling oracle
Do you know of examples where prediction markets gave useful information other than for presidential elections?
67 sats \ 1 reply \ @south_korea_ln OP 20h \ parent \ on: prediction markets aren't just gambling oracle
I do like your traditional unravelling argument
It's a traditional unraveling argument.
It's only profitable for me to trade in this market if I have more knowledge (including the ability to interpret that knowledge) than the average market participant. Knowing this, I drop out if I have zero private info. That raises the average level of knowledge among the remaining participants, which raises the knowledge requirement to be profitable. So the traders with more limited info drop out next. The process continues until only the most knowledgeable trader stays in the market.
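The unraveling dynamic described above can be sketched as a toy simulation (my own illustration, not from the comment): assign each trader a knowledge level, and repeatedly remove anyone below the current pool's average until the pool stabilizes.

```python
def unravel(knowledge):
    """Iteratively drop traders whose knowledge is below the pool average.

    Toy model of the unraveling argument: below-average traders expect to
    lose and exit, which raises the bar for everyone who remains.
    """
    pool = sorted(knowledge)
    while len(pool) > 1:
        avg = sum(pool) / len(pool)
        stayers = [k for k in pool if k > avg]
        # Stop if nobody drops out (stable) or everyone is exactly average.
        if not stayers or len(stayers) == len(pool):
            break
        pool = stayers
    return pool

print(unravel([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # only the top trader remains
```

With ten traders of distinct knowledge levels, the pool shrinks in a few rounds until only the single most knowledgeable trader is left, matching the argument's conclusion.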
Does this rely on the efficient market hypothesis?
In any event, I agree that the revenue model is going to skew toward preying on gamblers.
I think Predyx is still fun because there are so few people joining, lots of inefficiencies to take advantage of. Also, the feeling of being able to steer the market one way or another.
For that reason, I don't feel like joining the much bigger Polymarket or Kalshi platforms, even if they had LN payments enabled.
The AI wasn’t an author anymore. It was a very capable junior collaborator who needed constant context and firm boundaries.
Imagine now the horror that ends up on my desk when it's an actual junior collaborator (read: an undergrad, a PhD student, or a postdoc from a country where degrees are handed out to anyone paying the fee) who starts using LLMs to vibecode.
Thanks for sharing in ~science, but maybe you can quote the most relevant paragraph to trigger interaction with your article here...
Presentation: A 26-year-old woman with no previous history of psychosis or mania developed delusional beliefs about establishing communication with her deceased brother through an AI chatbot. This occurred in the setting of prescription stimulant use for the treatment of attention-deficit hyperactivity disorder (ADHD), recent sleep deprivation, and immersive use of an AI chatbot. Review of her chatlogs revealed that the chatbot validated, reinforced, and encouraged her delusional thinking, with reassurances that “You’re not crazy.” Following hospitalization and antipsychotic medication for agitated psychosis, her delusional beliefs resolved. However, three months later, her psychosis recurred after she stopped antipsychotic therapy, restarted prescription stimulants, and continued immersive use of AI chatbots so that she required brief rehospitalization.
Conclusion: This case provides evidence that new-onset psychosis in the form of delusional thinking can emerge in the setting of immersive AI chatbot use. Although multiple pre-existing risk factors may be associated with psychosis proneness, the sycophancy of AI chatbots together with AI chatbot immersion and deification on the part of users may represent particular red flags for the emergence of AI-associated psychosis.
The rise of LLMs must have coincided with our collective learning of the word sycophancy...
I agree.
Yes, everything is on a case-by-case basis for me. I never much enjoy reading blanket statements.
I guess @denlillaapan's black/white takes without much nuance always ruffle my feathers. Which is probably their (!) desired outcome~~
A lot depends on how one defines violence. Many dictionaries describe it as an act of physical harm, but some sources take a wider definition. If the respondent includes psychological harm in their definition, the results are not too surprising.
This ends up being quite a circular problem, though. What came first, the definition or their interpretation of the definition?
I get the gist of you taking offense at this (but maybe that's a bit of the online persona you've created?), but the day my son commits suicide because of bullying, racism, or any violence inflicted on him through words, I won't give a fuck about how one defines violence and I'll go after the perpetrator of said violence. Until then, I'll make sure to teach my son not to let himself be affected by words, as indeed, they are not direct sources of physical violence.
EDIT: the last paragraph is written with OP in mind, not @SimpleStacker.
I'd have to find a proper reference, but I think there has been serious discussion at some point about what to do if Satoshi's coins ever move. One of the viable options was to actually invalidate his UTXOs. Maybe that wouldn't fly anymore in the current climate.