
Been thinking about this because someone asked me to weigh in on whether it should be a requirement for a publication.
I've been writing down my thoughts on this topic, but I'm not fully decided yet. I plan to share them in an article (not AI generated) here once I have a clear mind on it.
There are many ways someone could use AI in writing content for the web. Here are some examples:
  1. Writing an article. I enter "write a tutorial explaining how to upgrade a JavaScript package with npm" into a chatbot (see the sketch after this list).
  2. Proofreading an article. I write an article and paste it into a chatbot and ask it to fix grammar and spelling errors.
  3. Improving an article. I write an article and ask a chatbot to review it and make suggestions for improving it.
  4. I write an article and ask an AI tool to generate an image or chart for the article to improve it.
  5. I ask a chatbot for ideas on what to write about.
  6. I use a chatbot to research a topic that I'm going to write about.
  7. I write an article and ask a chatbot to fact-check the article for mistakes and provide information using primary sources.
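To make example (1) concrete, here's a minimal sketch of the kind of commands such a tutorial would cover (the package name is a hypothetical placeholder):

```bash
# See which installed packages are behind their latest published versions
npm outdated

# Upgrade a package within the semver range declared in package.json
npm update some-package

# Jump to the latest published version and update package.json
# ("some-package" is a placeholder, not a real dependency)
npm install some-package@latest
```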
There are probably more ways one could use AI in the process of writing an article, but I think this makes the point that it's not a simple thing to think about, and I suspect the lines will get even more blurred over time.
It's an interesting thing to think about, and I'm repeatedly underwhelmed by the arguments on both sides. What do you think?
Yep: 50.0%
Nope: 28.6%
Don't know: 7.1%
Don't care: 14.3%
14 votes \ poll ended
173 sats \ 1 reply \ @Scoresby 11 Jul
If someone is able to put words in an order that sets my soul to singing, I don't care how they arrived at it -- unless they got it from someone else and are acting like it is their own.
If llms can be prompted to produce truly insightful writing that changes how I understand the world and the colors I see, then the prompters are welcome to the credit of such writing (because it certainly isn't a one-shot right now and requires thought to make it produce insight).
Given the trajectory of llms, it is likely they will in short order be producing writing that is very difficult to distinguish from genuinely thoughtful writing. It may still be slop, but it will be well-hidden slop -- which is to say it will not be so different from most of the writing currently produced by humans.
If, however, llms get to the point where very little prompting is required to open new fields of discovery in my mind, then we are confronted with a new problem: are such models deserving of some sort of status beyond a dumb tool?
If it is a problem for a person to post llm writing without attribution, it means llms are more than a dumb tool. There has to be a person to whom the attribution needs to be made.
This, for me, is the core of the debate: are llms a tool or are they an entity? If tool, no attribution necessary -- if an entity, attribution should be made - but also !!!!!!
reply
Using just those two options is an oversimplification. It’s not simply a tool, because it doesn’t assist a human in performing a task they couldn’t do — it does it for them. In the intellectual and creative realm, it’s not assisting, it’s doing. Nothing generated by AI from a human prompt is the work of that human. Nor is it an entity — it’s an organized logic that understands our language and how we use it, and it expresses results accordingly. There’s no actual intelligence behind it, just machine learning. Nothing we’ve solved or created as humans has been or will be surpassed by machines, because they can’t create solutions on their own without prior references.
reply
254 sats \ 3 replies \ @k00b 11 Jul
I think any pre- and post-production use of AI doesn't need to be disclosed (which looks to be everything but (1) and (4)). Although, if I were to ask a friend for production help, I'd cite them out of courtesy, and you could argue we owe that to the LLM researchers.
My line is when you begin copying and pasting things from LLM output as if it's your own work, unless it's something formatting related like "make this list a markdown list." I just see it as dishonest to share thoughts/content where it's assumed to be from your own brain when it isn't, and not disclose that. I read most things knowing that I'm taking an attention-risk, and that calculation is affected by how much attention-cost the writer paid to produce a work.
As you say, this may just be where norms are currently. Eventually, maybe no one will think their own thoughts and produce their own content beyond the prompt and there won't be this mismatch of expectations. Instead, real writers will disclose the tools they didn't use.
reply
I'd add (3). Also see #1027214 for an instance where this allegedly happened (tbh I don't really believe the narrative/excuse, that it happened the way it was said): a lawyer gave a draft to a bot and the bot hallucinated cases. This apparently happens a lot; see #1034753.
PS: that the most expensive professionals in the universe partially (or fully) outsource their work to a chatbot is truly appalling to me; but maybe these lawyers all charge normal rates under $150/hr instead of the $2500/hr I'm used to paying.
reply
Eventually, maybe no one will think their own thoughts and produce their own content beyond the prompt and there won't be this mismatch of expectations
I think you could make an argument that a plurality of people already don't think their own thoughts. This was the case before AI.
reply
Thanks. Great perspective.
reply
If it can't be enforced, then I'm not really sure what the point of the rule is.
reply
Indeed. Valid point. But it does set an expectation.
reply
22 sats \ 1 reply \ @OT 12 Jul
I write an article and ask a chatbot to fact-check the article for mistakes and provide information using primary sources.
I don't think I would do this. From what I've heard about current models, they are often wrong but also very confident.
reply
Yes, but that's mostly a problem when using one agent versus multiple agents with focused tasks.
reply
I can’t trust content like this for the simple reason that it’s a work involving more than one hand, and the one that contributed the most can’t even be credited — either out of laziness or the author’s incompetence.
Proofreading an article. I write an article and paste it into a chatbot and ask it to fix grammar and spelling errors.
Improving an article. I write an article and ask a chatbot to review it and make suggestions for improving it.
AI shouldn't be doing revision work — you could do that yourself and ensure the text follows the flow you want. The moment it 'corrects' to improve flow, agreement, and so on, it's no longer your text. Asking for improvements is the same thing, if not worse, since in addition to changing it, it will also add to it.
reply
On example one (Writing an article. I enter "write a tutorial explaining how to upgrade a JavaScript package with npm" into a chatbot): I think it takes someone from the domain to understand the result. Also, there are already a lot of existing chatbot platforms, so the need for new platforms is reduced.
reply
Yep, absolutely.
I ask AI to write me a xxx word post about xxx and then I edit the slop to suit my needs; it ALWAYS needs attention, editing, and change.
I find there is a lot of sentence construction I like, and it would take me xxx time to create that organically.
So if I edit AI slop, is it still slop, or is it original, or is it a hybrid?!
reply