To support the diverse and increasingly computation-intensive applications in vehicular networks, artificial intelligence and intelligent edge computing are being integrated into vehicular networks. By offloading computation tasks to devices close to vehicles, Vehicular Edge Computing (VEC) has emerged as a new computing paradigm to meet these computation demands. Most existing VEC methods simply slice an application into subtasks for offloading without considering the dependencies between subtasks. In practice, this dependency information is critical to the efficiency of offloading strategies: if a subtask requires the computation result of another subtask, the latter must be completed before the former can start. In this paper, we propose a deep reinforcement learning based offloading strategy for multi-vehicle collaborative VEC that takes task dependencies into account. Specifically, we formulate the offloading problem as a Markov Decision Process (MDP) and use a Sequence-to-Sequence (S2S) neural network to represent the policy and value functions of the MDP. We then train the S2S neural network with the Proximal Policy Optimization (PPO) algorithm to obtain an appropriate offloading policy. Simulation results indicate that, by considering task dependencies during offloading, the proposed strategy outperforms existing methods in effectively reducing task offloading latency.
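To make the approach concrete, the sketch below illustrates one plausible shape of such an S2S policy and the PPO clipped surrogate loss; it is not the authors' implementation, and all module names, feature dimensions, and hyperparameters are assumptions for illustration only. The encoder reads the per-subtask features in dependency order, and the decoder emits one offloading decision per subtask.

```python
# Minimal, illustrative sketch (hypothetical, not the paper's code): a
# sequence-to-sequence policy that maps a sequence of subtasks to
# offloading decisions, trained with PPO's clipped surrogate objective.
import torch
import torch.nn as nn

class S2SPolicy(nn.Module):
    """Encoder-decoder policy: encodes subtask features, decodes one
    offloading action (e.g., local / edge-server index) per subtask."""
    def __init__(self, feat_dim=8, hidden=64, n_actions=4):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(n_actions, hidden, batch_first=True)
        self.policy_head = nn.Linear(hidden, n_actions)   # action logits
        self.value_head = nn.Linear(hidden, 1)            # state value
        self.n_actions = n_actions

    def forward(self, task_feats):
        # task_feats: (batch, n_subtasks, feat_dim), ordered so that each
        # subtask appears after the subtasks it depends on.
        _, h = self.encoder(task_feats)
        batch, n_subtasks = task_feats.shape[0], task_feats.shape[1]
        prev = torch.zeros(batch, 1, self.n_actions)       # start token
        logits, values = [], []
        for _ in range(n_subtasks):
            out, h = self.decoder(prev, h)
            step_logits = self.policy_head(out)
            logits.append(step_logits)
            values.append(self.value_head(out))
            # Feed the sampled action back as a one-hot input to the decoder.
            action = torch.distributions.Categorical(logits=step_logits).sample()
            prev = nn.functional.one_hot(action, self.n_actions).float()
        return torch.cat(logits, dim=1), torch.cat(values, dim=1).squeeze(-1)

def ppo_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """PPO clipped surrogate objective over the per-subtask action log-probs."""
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```

In such a setup, the negative of the task completion latency would typically serve as the reward, so minimizing the PPO loss steers the decoded offloading sequence toward lower-latency schedules that respect the subtask dependencies encoded in the input ordering.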