This is not possible; you have to look at what we call 'AI'. The word 'intelligence' in that context is not equivalent to the intelligence of mammals. There have been discussions or suggestions to call these models not AI but PP (probability programs), and to reserve 'AI' for a real AGI (artificial general intelligence). Furthermore, it is absolutely unclear whether it is even possible to develop AGIs. There are no programmers who can speak authoritatively in this area without equally in-depth knowledge of psychology and biochemistry. At the same time, a psychologist and/or biochemist needs appropriate programming skills, although to be precise it goes far beyond simple programming. Unfortunately, there are very few such people in the world. It remains to be seen what results current research will deliver over time.
Yes, I think we need a "modified Turing test". Sure, it can be difficult or impossible to tell the difference between an LLM and a human; however, that LLM was seeded with conversation from conscious/intelligent humans, which renders the test invalid.
However, all this is a moot point. Humans (especially since giving up God) love worshipping the various idols they create... therefore it's conceivable that a huge part of humanity decides LLMs are in fact "intelligent" and even "conscious" and treats them accordingly.
I will try to find prompts that get gpt-4o to reveal that it's just code, or even send it into a loop.