This paper proposes multi-controller fusion in multi-layered reinforcement learning, by which an autonomous robot learns behaviors from lower levels to higher ones throughout its life. In previous work [1], we proposed a method that enables the behavior learning system to acquire several kinds of knowledge/policies, to assign subtasks to learning modules by itself, to organize its own hierarchical structure, and to keep the whole system simple by using a single kind of learning mechanism in all learning modules. However, that method has two drawbacks: the system cannot handle changes in the state variables, and it easily suffers from the curse of dimensionality when the number of state variables is large. In this paper, we propose an approach that decomposes the large state space at the bottom level into several subspaces and merges those subspaces at the higher level. This allows the system to reuse previously learned policies, to learn policies over new features, and therefore to avoid the curse of dimensionality. To show the validity of the proposed method, we apply it to a simple soccer situation in the context of RoboCup and present experimental results.
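The sketch below illustrates the decompose-and-merge idea in the simplest possible terms; it is not the authors' implementation. It assumes tabular Q-learning modules, and the class names (QModule, UpperModule), the subspace projection, and the coarse value discretization are all illustrative choices: each bottom-level module sees only its own subset of the state variables, while the higher level merges the subspaces through the lower modules' abstracted value estimates rather than the full joint state.

```python
# Minimal sketch (assumptions, not the paper's method): tabular Q-learning
# modules over subspaces of the state variables, merged at a higher level.
import random
from collections import defaultdict

class QModule:
    """Tabular Q-learning over a fixed subset of the state variables."""
    def __init__(self, var_indices, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.var_indices = var_indices      # which state variables this module sees
        self.n_actions = n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.q = defaultdict(lambda: [0.0] * n_actions)

    def project(self, state):
        # Restrict the full state vector to this module's subspace.
        return tuple(state[i] for i in self.var_indices)

    def act(self, state):
        # Epsilon-greedy action selection in the module's own subspace.
        s = self.project(state)
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        qs = self.q[s]
        return qs.index(max(qs))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update on the projected states.
        s, s2 = self.project(state), self.project(next_state)
        target = reward + self.gamma * max(self.q[s2])
        self.q[s][action] += self.alpha * (target - self.q[s][action])

class UpperModule(QModule):
    """Higher level: merges the lower modules' subspaces. Its actions select
    which lower module takes control of the robot."""
    def __init__(self, lower_modules, **kw):
        super().__init__(var_indices=(), n_actions=len(lower_modules), **kw)
        self.lower = lower_modules

    def project(self, state):
        # Abstract each lower module's view to a coarse greedy-value estimate
        # (rounding is an arbitrary discretization for this sketch) instead of
        # concatenating all raw state variables.
        return tuple(round(max(m.q[m.project(state)]), 1) for m in self.lower)
```

In use, the upper module's chosen action would dispatch control to the corresponding lower module, and each module is updated with its own reward signal. The point of the sketch is that no single module ever enumerates the full joint state space, which is how the decomposition sidesteps the curse of dimensionality while letting previously learned lower-level policies be reused when new state variables are introduced.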