One of the ideas I’ve shared in the past few years is that chess moves in a game can be viewed purely as information. Players who consistently produce high-quality moves are working from very refined, precise, high-quality information. At the other extreme are players who choose moves essentially at random, so the quality of their information is very low. In this context, a rating can be viewed as a measure of information quality and of its consistency from move to move and game to game (or, conversely, of the lack of both).
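To make the “moves as information” framing a little more concrete, here is a minimal sketch (my own illustration, not anything drawn from an actual rating system) comparing the Shannon entropy of a player who picks uniformly among the legal moves with one whose choices concentrate on a few candidates. The probabilities are made-up numbers; the point is only that more refined information means less uncertainty about which move gets played:

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a move-choice distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical position with 30 legal moves (illustrative number only).
n_moves = 30

# A "random mover": every legal move equally likely.
uniform = [1 / n_moves] * n_moves

# A strong player: most probability mass on a few candidate moves,
# with the leftover 5% spread over everything else.
focused = [0.70, 0.20, 0.05] + [0.05 / (n_moves - 3)] * (n_moves - 3)

print(f"uniform entropy: {entropy(uniform):.2f} bits")  # ~4.91 bits
print(f"focused entropy: {entropy(focused):.2f} bits")  # ~1.49 bits
```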
This approach, btw, explains why we would expect quick ratings to generally be lower than regular ratings. At the very low end, moves are essentially random at either time control, so we would expect the two ratings to be similar. At the very high end, ultra-strong players are generally less affected by faster time controls, so we would expect little difference there either. In the middle, players are hit hardest by faster time controls, which essentially adds more “randomness” to their moves, resulting in quick ratings lower than regular ones. This is in fact what we observe in practice. It is obviously not a proof, but it is an indication.
But when I saw the above quote from another thread, I began to wonder: if we had a computer that generated all the legal moves in any position and selected one at random, what kind of rating would it achieve? Would it floor at 100, or could it do better? And if it does better, what is it that makes some people play worse than random chance? Maybe, in the extreme, we could disprove the old adage that “a bad plan is better than no plan at all.” A quick sketch of such a random mover is below.
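For anyone who wants to experiment, a random mover is only a few lines with the python-chess library (assuming here that you have it installed via `pip install chess`). This sketch plays random-vs-random games and tallies the outcomes:

```python
import random
import chess  # pip install chess (the python-chess library)

def random_move(board: chess.Board) -> chess.Move:
    """Pick uniformly at random among all legal moves."""
    return random.choice(list(board.legal_moves))

def play_random_game(max_plies: int = 500) -> str:
    """Play random vs. random; return the result string, e.g. '1/2-1/2'."""
    board = chess.Board()
    for _ in range(max_plies):
        if board.is_game_over():
            break
        board.push(random_move(board))
    # claim_draw=True treats claimable draws (50-move, repetition) as draws;
    # unfinished games report '*'.
    return board.result(claim_draw=True)

if __name__ == "__main__":
    results = [play_random_game() for _ in range(100)]
    for outcome in ("1-0", "0-1", "1/2-1/2", "*"):
        print(outcome, results.count(outcome))
```

To actually estimate its rating, you would pair it against opponents of known strength and back a rating out of the score; the snippet above only supplies the “no plan at all” side of the experiment.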