Hierarchical representations of perceptual and sensorimotor information in deep neural networks
Răzvan Valentin Florian
Abstract
Designing artificial intelligent agents that are able to learn multiple tasks autonomously, incrementally, and online, and to discover and solve new goals and tasks on their own, can be framed as a sequential decision problem. Optimal decisions about what actions the agents should perform in the future depend on building predictive models of the environment, which can be shown to be equivalent to generating computer programs that compress the stream of sensorimotor information available to the agent as much as possible (Hutter, 2004). However, it is not clear how to generate such compressors / models in a generic way that is computationally tractable for realistic problems. Recently, deep neural networks have achieved important successes in classifying and generating data such as images, sound and text, which implies that these networks internally build models that represent such data in a compressed fashion. Deep neural networks appear to implicitly exploit heuristics that are relevant for such practical problems because these heuristics fit the laws of physics of our universe (Lin & Tegmark, 2016). The probabilistic framework of Patel et al. (2016) shows how deep neural networks generate increasingly abstract representations of the data they process (e.g., images) and how these representations can be used as generative models that are able to predict or simulate such data. We investigate the representational capabilities of deep neural networks in this probabilistic framework. We also investigate whether such models can represent not only perceptual information, such as images, but also sensorimotor information, by generating hierarchical representations of a mix of perceptions and actions from the agent’s sensorimotor history, and by using these representations to build predictive models that can simulate the future for the purpose of adaptive action selection.
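The abstract does not commit to a specific architecture. Purely as a hypothetical illustration of the final idea, the NumPy sketch below learns a latent representation of concatenated (observation, action) pairs, decodes it into a prediction of the next observation, and selects among candidate actions by simulating their one-step outcomes. All dimensions, the toy environment env_step, the training data and the goal vector are invented placeholders and are not taken from the cited work.

# Illustrative sketch (not the authors' model): a single latent code over
# concatenated (observation, action) pairs, trained to predict the next
# observation, then rolled forward one step to score candidate actions.
import numpy as np

rng = np.random.default_rng(0)

obs_dim, act_dim, latent_dim = 8, 2, 16
W_enc = rng.normal(scale=0.1, size=(latent_dim, obs_dim + act_dim))
W_dec = rng.normal(scale=0.1, size=(obs_dim, latent_dim))

def encode(obs, act):
    """Map a perception-action pair to a hidden representation."""
    return np.tanh(W_enc @ np.concatenate([obs, act]))

def predict_next_obs(obs, act):
    """Decode the representation into a prediction of the next observation."""
    return W_dec @ encode(obs, act)

# Toy stand-in for a sensorimotor history: the next observation is a fixed
# linear function of the current observation and action.
A = rng.normal(scale=0.3, size=(obs_dim, obs_dim))
B = rng.normal(scale=0.3, size=(obs_dim, act_dim))
def env_step(obs, act):
    return A @ obs + B @ act

# Train the predictive model by gradient descent on squared prediction error.
lr = 0.05
for step in range(5000):
    obs = rng.normal(size=obs_dim)
    act = rng.normal(size=act_dim)
    target = env_step(obs, act)

    x = np.concatenate([obs, act])
    h = np.tanh(W_enc @ x)
    pred = W_dec @ h
    err = pred - target                 # dL/dpred for L = 0.5 * ||err||^2

    grad_W_dec = np.outer(err, h)
    grad_h = W_dec.T @ err
    grad_pre = grad_h * (1.0 - h ** 2)  # derivative of tanh
    grad_W_enc = np.outer(grad_pre, x)

    W_dec -= lr * grad_W_dec
    W_enc -= lr * grad_W_enc

# Adaptive action selection: simulate one step ahead for candidate actions
# and pick the one whose predicted observation is closest to a goal.
obs = rng.normal(size=obs_dim)
goal = np.zeros(obs_dim)
candidates = [rng.normal(size=act_dim) for _ in range(32)]
best = min(candidates,
           key=lambda a: np.linalg.norm(predict_next_obs(obs, a) - goal))
print("chosen action:", best)

A hierarchical version in the spirit of the abstract would stack several such encoders, so that higher layers summarize progressively longer stretches of the sensorimotor history; the sketch above only shows the single-layer case.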
References:
Hutter, M. (2004). Universal Artificial Intelligence. Berlin: Springer.
Lin, H. W., & Tegmark, M. (2016). Why does deep and cheap learning work so well? arXiv preprint arXiv:1608.08225.
Patel, A. B., Nguyen, M. T., & Baraniuk, R. G. (2016). A probabilistic framework for deep learning. In Advances in Neural Information Processing Systems (pp. 2558–2566).