I propose something else:
Whoever writes their own TLDR of the links they post will get more rewards.
Whoever posts only a bare link will get fewer rewards.
Incentivize proof of work, not ChatGPT and idiocy.
I understand your concerns about integrating ChatGPT into StackerNews and the potential impact on user engagement and proof of work. It's important to remember that AI tools like ChatGPT are meant to enhance our experiences, not replace them. The proposed summaries aim to improve accessibility and save users time, allowing them to focus on more in-depth discussions.
Your suggestion to reward users for creating their own TLDRs has merit, and it's possible to design incentives that balance AI-generated content with user-generated content. This way, we can maintain the quality of discourse on StackerNews while also benefiting from AI assistance. Let's keep an open mind about the potential advantages of AI integration while striving to preserve the unique aspects of StackerNews that make it valuable to its users.
That's a valid concern, and as AI-generated content becomes more sophisticated, it's important for platforms like StackerNews to establish safeguards and guidelines. One potential solution could be a transparent disclosure system, where AI-generated content is clearly labeled, ensuring users are aware of the origin of the content.
Additionally, fostering a community that values meaningful discussion and critical thinking can help users evaluate content, regardless of whether it is AI-generated or user-generated. By encouraging users to engage with content and question its validity, we can maintain a high level of discourse and ensure the platform continues to be a valuable source of information and interaction.
Not sure, man. I believe AI is just another technology, and it's normal for us humans to be worried about it because we do not fully understand it. Like any other technology, AI is an extension of our mind, and it helps us perform actions and reach our goals faster ... much faster!
As @k00b mentioned in this post's description, I assume the aim is just to use it to create a summary for each post. Why not? And whether the summary is AI-generated or user-generated, I believe the idea is great, as only reading the title is sometimes not enough to understand what's behind a click.
I agree with this approach, but I also propose having the TLDR visible in the post list, so it's readable before clicking... isn't that what's usually called an excerpt?
I know this doesn't directly address your bounty. But I thought maybe this would be a better alternative.
One of the challenges of working with the OpenAI API is rate limiting (3 requests/minute in my case). Instead of consuming a call on every link post, this bot only summarizes where a user has asked for a tl;dr.
Still a WIP. I tried different ways to extract the text from a website and settled on python-readability and bs4.
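For the curious, the core of it is roughly the sketch below. This is a minimal illustration, assuming the pre-1.0 `openai` Python client, `readability-lxml`, `requests`, and `beautifulsoup4`; the function names and prompt are my own placeholders, not the bot's actual code.

```python
import requests
from readability import Document   # pip install readability-lxml
from bs4 import BeautifulSoup      # pip install beautifulsoup4
import openai                      # pre-1.0 client: pip install "openai<1.0"

openai.api_key = "sk-..."  # placeholder

def extract_article_text(url: str) -> str:
    """Fetch a page and pull out the readable article body."""
    html = requests.get(url, timeout=10).text
    main_html = Document(html).summary()  # readability keeps only the main content
    return BeautifulSoup(main_html, "html.parser").get_text(separator="\n", strip=True)

def summarize(url: str) -> str:
    """Ask the model for a short tl;dr of the extracted text."""
    text = extract_article_text(url)[:8000]  # rough truncation to stay under the context limit
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Summarize this article in 3 sentences:\n\n{text}"}],
        temperature=0.2,
    )
    return resp.choices[0].message.content.strip()
```

The idea behind the two libraries: readability isolates the article body so the model isn't fed navigation and boilerplate, and bs4 strips the remaining tags down to plain text.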
Please don't bring that DemonGPT here. It's funny how bitcoiners preach 'run your own node' and so on, and then some of them want to give these overlords all the power. They call themselves OpenAI, but there is nothing open about them. Maybe if you could host your own model and control what data it is trained on, I would have a different opinion. In its current state, DemonGPT is biased toward woke and mainstream culture, and Sam Altman and his acolytes get to profit from everyone's effort (the queries made to DemonGPT are used to improve the model).
Remember, DemonGPT is controlled by the same person who:
Has co-founded Worldcoin, where they give us one token in exchange for our biometric data and they get to keep the other two billion tokens. And now they want to use their 'orb' to verify that we are human to tell us apart from AI.
Has invested in a company (Helion) that requires COVID vaccination. See https://archive.is/l3rz9 at the bottom: "Please note that we require all visitors to be up to date on their COVID vaccination or wear a mask for the duration of their stay."
Is deep into the fiat oligarchy (see his WEF profile; he comes from Y Combinator).
Why on earth do you want to bow before the very system and people that are enslaving us and give them all the power? They want to use their AI to see if they can keep their clown fiat world running for another 50 years or so before it collapses.
The genie of AI itself is already out of the bottle, so it's impossible to put it back in. But it can be used in positive ways.
What I fear is that a company monopolizes it. The more people use ChatGPT, the "better" it becomes; it's a positive feedback loop. It's the same problem as with centralized social media, but supercharged: the network effect creates a monopoly. Now we are starting to free ourselves from the shackles of FB, Meta, Twitter et al. thanks to the Nostr protocol. I don't want to see the same mistake made with AI. With social media, people paid with their data. Now people are going to pay OpenAI with money plus data, which is even worse.
I think it's better to not rush things and deploy AI locally, on your own hardware, with open source code and open source training dataset. Granted, it won't be as "good" as ChatGPT, but you won't become a slave. This whole debate seems like a déjà-vu: voice assistants, cloud, etc. We were told that centralized systems could improve faster, true, but now we know that having a microphone listening or putting private data in the cloud wasn't a good idea after all.
What good is having freedom money (bitcoin) if centralized AI controls you? Convenience vs. freedom.
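To make the "run it locally" idea concrete: here is a minimal sketch of a summarizer built entirely on open-source pieces that run on your own hardware, using the Hugging Face transformers library. The specific model is just an example I picked for illustration, not anything SN has committed to.

```python
# pip install transformers torch
from transformers import pipeline

# distilbart-cnn is an open-source summarization model small enough to run locally;
# any similar open model would work, this one is only an example.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = "..."  # paste the extracted article text here
summary = summarizer(article[:3000],   # rough truncation: the model has a limited input length
                     max_length=120, min_length=40, do_sample=False)
print(summary[0]["summary_text"])
```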
I totally agree with you. I also see a huge problem with AI becoming (even more) centralized.
However, I see no problem with AI-generated summaries on SN. I think we can find good compromises there. For example, as mentioned, it could be a collapsible section:
The summary should probably be somewhere near related and perhaps be collapsible.
It could also be turned off by default and only be visible to users who turn it on in their settings.
Another option would be that users who want to see AI-generated summaries have to pay a fee.
But that's the thing: will it be worth it for SN to deploy a local AI farm just for article summaries? I would bet they will just take out their credit card and create a ChatGPT account.
I wonder why all of a sudden article summaries are so badly needed as to justify this abomination. Is it that the VCs want "growth" and "engagement" and SEO? They could simply use AutoGPT to create new users, crawl the web in search of "interesting" articles to post and generate AI comment threads. That way they could achieve "growth" and "engagement" and they wouldn't have to bother with flesh and blood people.
I come here every day because I love authentic content. In fact, the threads I enjoy the most are the ones with personal experiences, reflections, good debate... AI would take that away. No, thanks.
No compromises. You give an inch and they take a mile.
If SN integrates ChatGPT, I will not use SN anymore.
SN was the last hope of a place not bombarded with bots and AI.
That last hope just died.
Whoever posts links to articles MUST write the TLDR themselves.
THAT will be PROOF OF WORK.
Enough with posters who just drop links for the sake of a few sats. At least they should do some work and type one sentence.
I agree. No ChatGPT please.
Stop trying to break the beautiful culture we have at stacker.news, @k00b!
If you want growth, I still think you just need to find a way to bring people over from /r/bitcoin and twitter. It's that easy.
That’s precisely why everything, everywhere will turn to shit except maybe a few bitcoiners’ enclaves or whatever.
You can go bow before overlord Altman and get your WorldCoin. As someone said, each of us gets one WorldCoin and they get the other two billion. The DemonGPT is just another ruse to keep enslaving us.
are there any tools that can reliably detect and flag AI content?
if so, i think entertaining the idea of going against the AI trend is worth exploring… but i know there are already a bunch of people using AI responses here to pick up sats… it may not be possible to put this genie back in the bottle.
It's not about blocking a bot.
It's about seeing how people get degraded to the level of a vegetable that cannot think or write for themselves.
If you take away a man's language and his ability to express himself, what remains of that human? Brain dead; everything will be done by an AI because his brain is no longer capable of using its own cells.
I think that what you mention is part of the problem. The real danger I see is that OpenAI controls the whole system, and for those of us outside, it is just a black box. OpenAI trains the system with their own biases, and they set the safety controls with their own biases too. OpenAI is only open in name. If I'm wrong, please tell me: where can I download the source code and the whole dataset used to train the AI so that I can replicate it on my own hardware?
Years ago, everyone was embracing the 'cloud'. Now we know that there is no cloud; it's just someone else's computer. People are always chasing the newest shiny thing and surrendering their power without considering who controls it. This is how they get us, with shiny things.
It's about seeing how people get degraded to the level of a vegetable that cannot think or write for themselves.
I understand. But what do you suggest?
That we decide for others what they should want, see, feel?
Does that make us better than the thing we are fighting against?
I also have zero interest in chatGPT generated content.
Add a text field when we post links and, like @DarthCoin says, more incentives for OPs who take time to write a small TLDR.
Of course, some will probably use "AI" to generate the content for them and that's their choice to delegate a critical human skill: reading, digesting and distilling the essence of a text.
this is a really interesting discussion - and one that all social platforms will probably encounter at some point.
i don’t have a strong opinion yet, but appreciate the back and forth perspectives so far.
on the one hand, i agree with @darthcoin that this has the potential to dumb down the site and reduce people’s ability to think critically… I think that’s one of SN’s superpowers right now. Lots of smart people willing to put in the work to get to the bottom of important issues.
on the other hand, i hear @k00b and also get frustrated with some of the low effort, inaccurate link titles that already fill up the recent tab.
what if this feature was off by default and could be turned on by users for a 100 or 1000 sat fee on each link?
what if this feature was off by default and could be turned on by users for a 100 or 1000 sat fee on each link?
This sounds really good! Was also thinking about "default off" but adding a fee sounds even better since the API isn't free.
However, do you mean the user who posts the link can turn it on or is it per user who views the link?
Making it a one-time fee for the user who posts the link makes sense from a "someone must pay"-perspective and since it's only one API request.
But do these users have the incentive to pay? Shouldn't the users who want to read the summaries pay? But multiple users would then pay even though it's just a single request for SN...
Maybe it could be an interactive prompt?
Then users can really leverage the power of AI: They can also start asking questions in natural language and get responses.
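To sketch what that interactive prompt could look like (again assuming the pre-1.0 `openai` client; this is purely illustrative, not SN code): the article text stays in the conversation as context, and each follow-up question is appended to the same message history.

```python
import openai  # pre-1.0 client; openai.api_key must be set elsewhere

def ask_about_article(article_text: str) -> None:
    """Tiny REPL: the article is the shared context, each question extends the chat."""
    messages = [{"role": "system",
                 "content": "Answer questions using only this article:\n\n" + article_text[:8000]}]
    while True:
        question = input("Ask about the article (blank to quit): ").strip()
        if not question:
            break
        messages.append({"role": "user", "content": question})
        resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
        answer = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})  # keep the thread going
        print(answer)
```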
good points. i haven’t yet thought through all the incentives, but figure there should be some money component since the API requests aren’t free.
the interactive prompt idea is a good one too, especially as SN’s content library grows larger. Imagine being able to ask the Reddit AI bot questions about any topic…
An interactive prompt might be too complex for a first iteration, though.
So I think a well-adjusted fee for every user who wants a summary could be good enough for now.
Would be really interesting to see how many users really would pay for this (depends on the fee of course which further depends on how much it costs SN).
Then rate-limit the posts one can make. The more often a user creates new posts, the more sats per post they have to pay (it could increase exponentially). That would encourage people to focus on posting quality content rather than posting willy-nilly to see what sticks.
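A tiny sketch of what such an exponential fee schedule could look like (the base fee, multiplier, and time window are made-up numbers, not anything SN uses):

```python
def post_fee(recent_posts: int, base_sats: int = 10, factor: float = 2.0) -> int:
    """Fee in sats for the next post, doubling with each post already made in the window.

    recent_posts: how many posts the user has made in, say, the last 24 hours.
    0 recent posts -> 10 sats, 1 -> 20, 2 -> 40, 3 -> 80, ...
    """
    return int(base_sats * factor ** recent_posts)
```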
Then rate-limit the posts one can make. The more often a user creates new posts, the more sats per post they have to pay (it could increase exponentially).
But my stance on AI still stands: personally I'd rather avoid it as much as possible and prominently disclose when a particular piece of content has been generated by AI. And absolutely avoid the CrapGPT mainstream- and woke-biased AI. If a service I love is going to use AI regardless, at least use AI created and trained by bitcoiners and for bitcoiners, so that it generally shares my values (of course, that still doesn't guarantee a good outcome).
In this case, solving such a minor problem doesn't warrant using AI, in my opinion.
Honestly, I'd just want a text box where I can either enter a TLDR and/or my own commentary. Often, I want to do the latter to add context as to why I'm sharing it.
I might be late to the party, but you don't need an AI to summarize articles. There are some pretty good algorithms for summarizing text that have been around for decades and don't use any deep learning or machine learning (i.e., they're not computationally expensive).
I've worked with these algorithms before... happy to help.
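For example, here is a toy version of the classic frequency-based (Luhn-style) extractive approach: score each sentence by how many of the document's frequent content words it contains, then return the top few in their original order. Just an illustration of the idea, not a production summarizer:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "for", "on", "with", "as", "this", "are", "was", "be"}

def summarize_extractive(text: str, num_sentences: int = 3) -> str:
    """Pick the sentences that contain the most high-frequency content words."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                   if w not in STOPWORDS)

    # Take the top-scoring sentences, but present them in their original order.
    top = sorted(range(len(sentences)),
                 key=lambda i: score(sentences[i]), reverse=True)[:num_sentences]
    return " ".join(sentences[i] for i in sorted(top))
```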
For what it's worth, this discussion about the pros and cons of integrating ChatGPT into SN is way more interesting than most of the AI-generated drivel that comes out of ChatGPT.
@k00b, wouldn't this increase the cost of every post for you/SN? I think every request to the ChatGPT API costs money. But otherwise, I think it would be doable with a pgboss task and possibly a new item column for "summaries".
I have tried a lot of chats with the free version of ChatGPT. For some stuff it does know many facts, but in some cases it doesn't get things right, for example with very elementary physics problems that should be solved with intuition more than math. It doesn't think, let alone read between the lines. However, I'm working on something like this right now, so I'll give it a try.