This was very informative.
I now wonder if this is where the "more compute makes better models" / "bigger is better" claim originated, despite a number of clear graphs suggesting it isn't true: beyond roughly 1.5-2x the interpolation threshold, returns diminish when throwing further compute at the problem.
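The interpolation-threshold behavior mentioned above is the "double descent" phenomenon, and it can be reproduced in a few lines. Here's a hypothetical minimal sketch (the random-feature setup, frequencies, and sample counts are all illustrative choices, not from any particular paper): fitting noisy data with minimum-norm least squares, once in the under-parameterized regime, once right at the interpolation threshold (features = training points), and once well past it.

```python
import numpy as np

# Illustrative double-descent sketch: minimum-norm least squares with
# random cosine features on noisy 1-D data. Test error typically spikes
# near the interpolation threshold (features ~= training points) and
# falls again in the over-parameterized regime.
rng = np.random.default_rng(0)
n_train, n_test = 20, 200

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)
    return x, y

x_tr, y_tr = make_data(n_train)
x_te, y_te = make_data(n_test)

# Random-feature map: phi_j(x) = cos(freq_j * x + phase_j)
freqs = rng.uniform(0, 10, 60)
phases = rng.uniform(0, 2 * np.pi, 60)

def features(x, k):
    return np.cos(np.outer(x, freqs[:k]) + phases[:k])

errors = {}
for k in (5, n_train, 60):  # under-, at-, over-parameterized
    Phi_tr = features(x_tr, k)
    Phi_te = features(x_te, k)
    w = np.linalg.pinv(Phi_tr) @ y_tr  # minimum-norm solution
    errors[k] = float(np.mean((Phi_te @ w - y_te) ** 2))

print(errors)
```

With a full sweep over feature counts (rather than just three points), plotting `errors` against `k` gives the characteristic double-descent curve: test error rises toward the threshold, then descends again as capacity grows.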
I'd argue instead that better input = better output. It's just extremely labor-intensive to curate such a dataset, and that's what we're all allergic to. We'd rather hire cheap mass labor to do this than actually invest the time.
Could that attention to detail be retained through digitization?