
I've been thinking about this because someone asked me to weigh in on whether AI disclosure should be a requirement for a publication.
I've been writing up my thoughts on the topic, but I'm not fully decided yet. I plan to share them in an article (not AI generated) here once my thinking is clearer.
There are many ways someone could use AI in writing content for the web. Here are some examples.
  1. Writing an article. I enter "write a tutorial explaining how to upgrade a JavaScript package with npm" into a chatbot.
  2. Proofreading an article. I write an article, paste it into a chatbot, and ask it to fix grammar and spelling errors.
  3. Improving an article. I write an article and ask a chatbot to review it and suggest improvements.
  4. Illustrating an article. I write an article and ask an AI tool to generate an image or chart to improve it.
  5. Brainstorming. I ask a chatbot for ideas on what to write about.
  6. Researching. I use a chatbot to research a topic that I'm going to write about.
  7. Fact-checking. I write an article and ask a chatbot to check it for mistakes and provide supporting information from primary sources.
There are probably more ways one could use AI in the process of writing an article, but I think this list makes the point: it's not a simple thing to think about, and I suspect the lines will get even blurrier over time.
It's an interesting question, and I'm repeatedly underwhelmed by the arguments on both sides. What do you think?
Yep: 45.5%
Nope: 27.3%
Don't know: 9.1%
Don't care: 18.2%
11 votes \ 16h left
100 sats \ 0 replies \ @Scoresby 6h
If someone is able to put words in an order that sets my soul to singing, I don't care how they arrived at it -- unless they got it from someone else and are acting like it is their own.
If llms can be prompted to produce truly insightful writing that changes how I understand the world and the colors I see, then the prompters are welcome to the credit of such writing (because it certainly isn't a one-shot right now and requires thought to make it produce insight).
Given the trajectory of llms, it is likely they will in short order be producing writing that is very difficult to distinguish from genuinely thoughtful writing. It may still be slop, but it will be well-hidden slop -- which is to say it will not be so different from most of the writing currently produced by humans.
If, however, llms get to the point where very little prompting is required to open new fields of discovery in my mind, then we are confronted with a new problem: are such models deserving of some sort of status beyond a dumb tool?
If it is a problem for a person to post llm writing without attribution, it means llms are more than a dumb tool. There has to be a person to whom the attribution needs to be made.
This, for me, is the core of the debate: are llms a tool or are they an entity? If tool, no attribution necessary -- if an entity, attribution should be made -- but also !!!!!!
reply
254 sats \ 3 replies \ @k00b 8h
I think any pre- and post-production use of AI doesn't need to be disclosed (which looks to be everything but (1) and (4)). Although, if I were to ask a friend for production help, I'd cite them out of courtesy, and you could argue we owe that to the LLM researchers.
My line is when you begin copying and pasting LLM output as if it's your own work, unless it's something formatting-related like "make this list a markdown list." I just see it as dishonest to share thoughts/content that's assumed to come from your own brain when it doesn't, and not disclose it. I read most things knowing that I'm taking an attention risk, and that calculation is affected by how much attention-cost the writer paid to produce the work.
As you say, this may just be where norms are currently. Eventually, maybe no one will think their own thoughts or produce their own content beyond the prompt, and there won't be this mismatch of expectations. Instead, real writers will disclose the tools they didn't use.
reply
156 sats \ 0 replies \ @optimism 7h
I'd add (3). See #1027214 for an instance where this allegedly happened (tbh I don't really believe the narrative/excuse that it happened the way it was said): a lawyer gave a draft to a bot and the bot hallucinated cases. This apparently happens a lot; see #1034753.
PS: that the most expensive professionals in the universe partially (or fully) outsource their work to a chatbot is truly appalling to me, but maybe these lawyers all charge normal rates under $150/hr instead of the $2500/hr I'm used to paying.
reply
Eventually, maybe no one will think their own thoughts or produce their own content beyond the prompt, and there won't be this mismatch of expectations
I think you could make an argument that a plurality of people already don't think their own thoughts. That was the case even before AI.
reply
Thanks. Great perspective.
reply
0 sats \ 1 reply \ @OT 2h
I write an article and ask a chatbot to check it for mistakes and provide supporting information from primary sources.
I don't think I would do this. From what I've heard, current models are often wrong but also very confident.
reply
Yes, but that's when you're using one agent rather than multiple agents with focused tasks.
reply
Yep, absolutely.
0 sats \ 0 replies \ @stax 6h
I ask AI to write me a xxx-word post about xxx and then I edit the slop to suit my needs. It ALWAYS needs attention, editing, and changes.
I find there is a lot of sentence construction I like, and it would take me xxx time to create that organically.
So if I edit AI slop, is it still slop, is it original, or is it a hybrid?!
reply
If it can't be enforced, then I'm not really sure what the point of the rule is.
reply
Indeed. Valid point. But it does set an expectation.
reply