80 sats \ 2 replies \ @davidw 3 Nov 2023 \ on: Taking a step back to rethink Generative AI meta
Nice reflections. To me, generative AI's long-term role is a single one: to act as our interface with other specialised, narrowly trained AIs, and to help us write better prompts for them.
There’s just no way one general-purpose AI can outperform another that is highly trained for a completely bespoke purpose, on a rich but restricted dataset, at that purpose.
Generative AI can be better than us, sure, but it is not optimal, and it isn’t going to lead to the groundbreaking results we may see from more specialised models.
There’s no way we would trust an average doctor to be our surgeon and fix a very specific problem when the world is full of extremely talented specialists trained to solve exactly the problem(s) we are experiencing. That first doctor’s only role should be to present our options and point us to the best specialist in the business.
Using generative AI (ChatGPT) and expecting it to solve all our problems, to do so better than us, in our own style, and without misleading us, even when it knows we are seeking its advice, is asking a little too much, I feel.
OpenAI is going to fail in its endeavours; I have reached that conclusion recently. The future, in my opinion, is us training our own models on our own datasets, or leveraging models on the market that have already produced the kinds of results we are looking for. No walled garden can outcompete that.
ChatGPT is going to keep being loaded with ballast and sandbags that drag down the quality of its output. You can already see that in their models. For an organisation like theirs, piling on parameters is going to decrease in usefulness, not increase, IMHO, even if it lets you upload docs, interface with databases, etc.
I hope the role of education will be to teach people how to train models, not just how to write prompts. If we want AI to think like us but be better than us, consistently good results depend on taking control of the inputs, not just requesting new outputs.
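As a purely illustrative sketch of what “training our own models on our own datasets” could look like, the Python snippet below fine-tunes a small open model on a local text file with the Hugging Face libraries. The model name, file path and hyperparameters are placeholders I’ve assumed, not a recommendation.

```python
# Minimal sketch (not a production recipe): fine-tune a small open model
# on your own corpus, so the inputs are under your control.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "distilgpt2"                     # any small open base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token     # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# "Taking control of the inputs": the training data is your own notes/posts.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # For causal LM fine-tuning, the labels are the inputs themselves.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```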
Great thoughts here. I particularly like what you say about having realistic expectations of AI. This week, I kept asking ChatGPT to help me generate dingbats for an Escape Room activity, and I felt frustrated not getting what I needed. I thought the problem was with me, not knowing how to prompt accurately. Eventually, I realised that maybe it just didn’t have the capacity to create dingbats yet. It made me shudder: I have been putting ChatGPT on a pedestal and treating it like some kind of infallible God.