Monday, 30 July 2007

Learning Symbolic Models

It is common wisdom that successful generalization is the key to efficient learning in difficult environments. It appears to me that this must be especially true for reinforcement learning.
One potentially very powerful idea for achieving such generalization is to learn symbolic models. Why? Because a symbolic model (almost by definition) allows for very powerful generalizations (e.g. actions with parameters, state representations of environments with a variable number of objects of different types, etc.).
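To make the point concrete, here is a toy sketch of what I mean by a parameterized action over a relational state. The predicate and action names (on, clear, move) are my own illustrative choices in a STRIPS-like style, not taken from any particular paper:

```python
# Toy sketch: a relational state as a set of ground atoms, plus a
# parameterized action schema. The names (on, clear, move) are
# illustrative, not from the paper discussed below.

def move(state, block, src, dst):
    """Apply a STRIPS-style 'move' schema if its preconditions hold."""
    pre = {("clear", block), ("clear", dst), ("on", block, src)}
    if not pre <= state:
        return None  # preconditions not satisfied in this state
    add = {("on", block, dst), ("clear", src)}
    delete = {("on", block, src), ("clear", dst)}
    return (state - delete) | add

# The same schema works unchanged whether the world has 3 blocks or 300:
state = {("on", "a", "b"), ("clear", "a"), ("clear", "c"),
         ("on", "b", "table"), ("on", "c", "table")}
new_state = move(state, "a", "b", "c")  # now ("on", "a", "c") holds
```

One schema generalizes over all bindings of its parameters, which is exactly the kind of transfer a flat (propositional) state representation cannot give you for free.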
JAIR just published a paper on this topic by H. M. Pasula, L. S. Zettlemoyer and L. P. Kaelbling, titled "Learning Symbolic Models of Stochastic Domains". A brief glance reveals that the authors propose a greedy learning method, assuming a particular representation. The learning problem itself was shown earlier to be NP-hard, so a greedy approximation sounds like a reasonable approach.
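For readers unfamiliar with this style of method, here is a generic greedy-selection sketch (my own illustration, not the authors' actual search operators or scoring function): repeatedly add whichever candidate rule most improves a score on the training data, and stop when nothing helps.

```python
# Generic greedy rule-set selection (illustrative only; not the
# authors' algorithm): add the rule that most improves the score,
# stop when no candidate improves it.

def greedy_select(candidates, score):
    """candidates: list of rules; score: callable on a rule set."""
    chosen, best = [], score([])
    improved = True
    while improved:
        improved = False
        for rule in candidates:
            if rule in chosen:
                continue
            s = score(chosen + [rule])
            if s > best:
                chosen, best = chosen + [rule], s
                improved = True
    return chosen

# Toy use: each rule "covers" some training-example ids; the score
# trades coverage against a complexity penalty per rule.
rules = {"r1": {1, 2}, "r2": {2, 3}, "r3": {4}}

def score(rule_set):
    covered = set().union(*(rules[r] for r in rule_set)) if rule_set else set()
    return len(covered) - 0.5 * len(rule_set)

chosen = greedy_select(list(rules), score)
```

Such greedy searches give no optimality guarantee, of course, but for an NP-hard selection problem that is the usual price of tractability.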
However, one thing is badly missing from this approach: learning the representation itself. The authors appear to assume that the state representation of the environment is given in an appropriate symbolic form. This is, in my opinion, a very strong assumption -- at least when it comes to certain real-world problems where only noisy sensory information about the environment is available (think of robotics).
However, I must admit that I am disappointed only because I wrongly assumed (given the title) that the paper would be about the problem of learning representations. Will we ever be able to learn (symbolic) representations in an efficient manner? What are the conditions that allow efficient learning? Do we need the flexibility of symbolic representations to scale up reinforcement learning at all? Here at UofA, several people are interested in these questions (well, who isn't?). More work needs to be done, but hopefully, some day you will hear more about our efforts...