This work describes a robot visual homing model that employs, for the first time, the conjugate gradient Temporal Difference (TD-conj) method. TD-conj has been proved equivalent to a gradient TD method with a variable λ, denoted TD(λ_t^conj), when both are used with function approximation techniques, and the model exploits this fact to improve its performance. Operating on visual input passed through a radial basis function layer, the model takes advantage of the model-free, interactive learning capability of reinforcement learning (RL) by using a whole-image measure to recognize the goal, without the aid of special landmarks. Unlike other models, it therefore refrains from artificially manipulating the environment or assuming a priori knowledge about it, two typical constraints that widely restrict the applicability of existing models in realistic scenarios. An on-policy, on-line control method was used to train a set of neural networks. With the aid of variable eligibility traces, these networks approximate the agent's action-value function, allowing it to take optimal actions to reach its home. The model's effectiveness was verified experimentally: an agent equipped with it efficiently found a goal location with no a priori knowledge of the environment.
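The abstract does not spell out the TD-conj update itself, but the training scheme it names (on-policy on-line control with eligibility traces over a radial basis feature layer, approximating an action-value function) can be illustrated with a standard Sarsa(λ) sketch. The toy one-dimensional homing task, the RBF centers, and all hyperparameters below are assumptions for illustration, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: an agent on a 1-D track must reach its "home" cell.
N_POS, N_ACTIONS = 10, 2          # positions 0..9; actions: 0 = left, 1 = right
HOME = 9
CENTERS = np.linspace(0, N_POS - 1, 6)   # radial basis centers over position
SIGMA = 1.0

def rbf_features(pos):
    """Radial basis layer: Gaussian activations of the (here scalar) input."""
    return np.exp(-((pos - CENTERS) ** 2) / (2 * SIGMA ** 2))

def epsilon_greedy(w, phi, eps=0.1):
    """On-policy action selection: explore with probability eps."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(w @ phi))

def train(episodes=300, alpha=0.1, gamma=0.95, lam=0.8):
    """Sarsa(lambda): linear Q(s, a) = w[a] . phi(s) with eligibility traces."""
    w = np.zeros((N_ACTIONS, len(CENTERS)))
    for _ in range(episodes):
        pos = 0
        phi = rbf_features(pos)
        a = epsilon_greedy(w, phi)
        z = np.zeros_like(w)          # eligibility traces, one per weight
        for _ in range(100):
            pos2 = min(max(pos + (1 if a == 1 else -1), 0), N_POS - 1)
            done = pos2 == HOME
            r = 1.0 if done else -0.01
            phi2 = rbf_features(pos2)
            a2 = epsilon_greedy(w, phi2)
            # TD error for the on-policy (Sarsa) target
            delta = r + (0.0 if done else gamma * w[a2] @ phi2) - w[a] @ phi
            z *= gamma * lam          # decay all traces
            z[a] += phi               # accumulate trace for the taken action
            w += alpha * delta * z
            if done:
                break
            pos, phi, a = pos2, phi2, a2
    return w

w = train()
```

After training, the greedy policy derived from the learned weights drives the agent toward home from any position on the track. The variable λ of TD(λ_t^conj) would replace the fixed `lam` above with a time-varying trace-decay parameter.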