Context is an essential part of all cognitive function. However, neural network models have considered this issue only in limited ways, focusing primarily on the conditioning of a system's response by its recent history. This type of context, which we term Type I, is clearly relevant in many situations, but in other cases the system's response over an extended period must be conditioned by stimuli encountered at a specific earlier time. For example, the decision to turn left or right at an intersection in a navigation task depends on the goal set at the beginning of the task. We term this type of context, which sets the "frame of reference" for an entire episode, Type II context. The prefrontal cortex in mammals has been hypothesized to perform this function, but it has proved difficult to incorporate into neural network models. In the present chapter, we describe an approach called latent attractors that allows self-organizing neural systems to incorporate both Type I and Type II context dependency simultaneously. We demonstrate this by applying the approach to a series of problems requiring one or both types of context. We also argue that the latent attractor approach is a general and flexible method for incorporating multi-scale temporal dependence into neural systems, and possibly into other self-organized network models.
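To make the distinction concrete, the following minimal Python sketch contrasts the two kinds of context dependence as defined above. It is an illustrative toy, not the chapter's latent attractor model; all function names and the specific goal/turn mapping are hypothetical. The Type I system's response is a fading trace dominated by recent inputs, while the Type II system latches a cue presented at the start of the episode (the goal) and uses it to condition every later decision.

```python
def type_i_response(stimuli, decay=0.5):
    """Type I context: the response is conditioned by a decaying
    trace of the system's recent input history."""
    trace = 0.0
    for s in stimuli:
        trace = decay * trace + (1.0 - decay) * s
    return trace  # dominated by the most recent stimuli


def type_ii_response(goal, stimuli):
    """Type II context: a stimulus fixed at the start of the episode
    (the goal) sets the frame of reference for the whole episode,
    e.g., which way to turn at each intersection."""
    turns = []
    for s in stimuli:
        if s == "intersection":
            # Hypothetical mapping: the episode-initial goal, not any
            # recent input, determines the response.
            turns.append("left" if goal == "home" else "right")
    return turns


# Type I: the output mainly reflects the last few inputs.
print(type_i_response([0.0, 0.0, 1.0]))
# Type II: the same stimulus sequence yields different decisions
# depending on the goal set before the episode began.
print(type_ii_response("home", ["corridor", "intersection", "corridor", "intersection"]))
print(type_ii_response("work", ["corridor", "intersection", "corridor", "intersection"]))
```

In this caricature, the Type I trace forgets the episode's beginning at a rate set by `decay`, whereas the Type II decision would be identical no matter how long ago the goal was presented; the latent attractor approach described in this chapter is intended to let a single self-organizing system exhibit both behaviors at once.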