Title: Computable prediction
Abstract:
Consider a machine that, given a sequence of 0s and 1s, outputs the probability of each possible next value. Is there a computable function that makes the optimal prediction? If it exists, what properties would it have? These questions have been formulated and studied in the theory of universal induction initiated by Solomonoff. Although there is no optimal computable prediction, an optimal prediction does exist if we broaden the class of predictors to computably enumerable (c.e.) semi-measures, and the properties of this optimal prediction have been the main object of study. Since such a predictor is not computable, however, these results say nothing about the machine learning algorithms that are actually implemented.
In this lecture, we provide a theoretical framework for discussing the quality of computable predictions. Specifically, we define a notion of "better prediction" among computable predictions in terms of reducibility and discuss the properties that "sufficiently good" predictions should have. We focus in particular on the speed of convergence and compare it with that of existing algorithms.
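To make the setting concrete, here is a minimal sketch of one computable prediction in the sense above: Laplace's rule of succession, a classic computable estimator of the next bit. It is offered only as an illustration of the kind of object under discussion; the function name is ours, and the lecture does not single out this predictor.

```python
from fractions import Fraction

def laplace_predict(prefix):
    """Probability that the next bit is 1, given the observed 0/1 prefix.

    Laplace's rule of succession: (number of 1s + 1) / (length + 2).
    This is a total computable function of the prefix, hence a
    computable prediction.
    """
    ones = sum(prefix)
    return Fraction(ones + 1, len(prefix) + 2)

# With no data the predictor is uniform; as the prefix grows it
# converges to the empirical frequency of 1s.
print(laplace_predict([]))            # -> 1/2
print(laplace_predict([1, 0, 1, 1]))  # -> 2/3
```

The question the lecture raises is how to compare such computable predictors with one another, e.g., by how fast their output converges to the true conditional probability.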