Introduced is a new inductive inference paradigm, Dynamic Modeling. Within this learning paradigm, for example, function h learns function g iff, in each iteration i, h and g both produce output, h receives as input the sequence of all of g's outputs from prior iterations, g receives as input the sequence of all of h's outputs from prior iterations, and, from some iteration on, the sequence of h's outputs consists of programs for the output sequence of g. Dynamic Modeling provides an idealization of, for example, a social interaction in which h seeks to discover program models of g's behavior as seen in interacting with g, and h openly discloses to g its sequence of candidate program models to see what g says back. Sample results: every g can be so learned by some h; there are g that can only be learned by an h if g can also learn that h back; there are extremely secretive h which cannot be learned back by any g they learn, but which nonetheless succeed in learning infinitely many g; quadratic-time learnability is strictly more powerful than linear-time learnability. This latter result, as well as others, follows immediately from general correspondence theorems obtained from a unified approach to the paradigms within inductive inference. Many proofs, some sophisticated, employ machine self-reference, a.k.a. recursion theorems.
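To make the success criterion concrete, the following is a minimal formalization sketch; the notation is ours, assuming a standard acceptable programming system with \varphi_p denoting the partial function computed by program p, and the paper's formal definitions govern the precise details. Writing h_i and g_i for the iteration-i outputs, the interaction is

\[
  h_i = h(g_0, \ldots, g_{i-1}), \qquad g_i = g(h_0, \ldots, h_{i-1}),
\]

and h Dynamic-Modeling-learns g iff

\[
  (\exists n)(\forall i \ge n)\; \varphi_{h_i} = \lambda j.\, g_j,
\]

i.e., past some point, every conjecture h_i is a program computing g's entire output sequence.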