Tell me how serious or silly this idea is:
Instead of humans proving that they are human in a CAPTCHA¹, what if the test measured how well you can “pretend to be a computer”? (Let’s ignore the details of such a test for now.)
I imagine this could, in theory, work better because there’s a twist: if you’re actually good at it and you pass, you’re not allowed in.
So a computer would not only need to be good at pretending to be a human, but at pretending to be a human pretending to be a machine. And if it’s too good and passes, it fails too (you know what I mean lol).
A human, however, doesn’t even need to know there’s a twist; they shouldn’t be very good at pretending to be a computer anyway.
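Just to make the twist concrete, here’s a minimal sketch of the gate logic. Everything in it is hypothetical: the `pretend_score` (how convincingly a visitor imitates a computer) and the threshold are made-up stand-ins for whatever such a test would actually measure.

```python
def admits(pretend_score: float, threshold: float = 0.9) -> bool:
    """Hypothetical gate: reject anyone who imitates a computer too well.

    pretend_score: how convincingly the visitor imitated a computer (0..1).
    A genuine human is expected to score poorly and thus be admitted;
    a machine good enough to score highly is rejected for that very reason.
    """
    return pretend_score < threshold

# A human fumbling the imitation gets in; a flawless imitator does not.
print(admits(0.3))   # True  -> admitted
print(admits(0.95))  # False -> rejected
```

The point of the inversion is that passing and being admitted are decoupled: success at the task is itself the disqualifier.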
What do you think?
Footnotes

¹ Completely Automated Public Turing test to tell Computers and Humans Apart