This paper, having a tutorial character, is intended to provide an introduction to the theory of noncooperative differential games. Section 2 reviews the theory of static games. Different concepts of solution are discussed, including Pareto optima, Nash and Stackelberg equilibria, and the co-co (cooperative-competitive) solutions. Section 3 introduces the basic framework of differential games for two players. Open-loop solutions, where the controls implemented by the players depend only on time, are considered in Section 4. These solutions can be computed by solving a two-point boundary value problem for a system of ODEs, derived from the Pontryagin maximum principle. Section 5 deals with solutions in feedback form, where the controls are allowed to depend on time and also on the current state of the system. In this case, the search for Nash equilibrium solutions leads to a highly nonlinear system of Hamilton-Jacobi PDEs. In dimension higher than one, we show that this system is generically not hyperbolic and the Cauchy problem is thus ill posed. Due to this instability, feedback solutions are considered mainly in the special case with linear dynamics and quadratic costs. In Section 6, a game in continuous time is approximated by a finite sequence of static games, via a time discretization. Depending on the type of solution adopted in each static game, one obtains different concepts of solution for the original differential game. Section 7 deals with differential games on an infinite time horizon, with exponentially discounted payoffs. Section 8 contains a simple example of a game with infinitely many players. This is intended to convey a flavor of the newly emerging theory of mean field games. Modeling issues and directions of current research are briefly discussed in Section 9. Finally, the Appendix collects background material on multivalued functions, selections and fixed point theorems, optimal control theory, and hyperbolic PDEs.