Precisely understanding how objects move in 3D is essential for broad scenarios such as video editing, gaming, driving, and athletics. With screen-displayed computer graphics content, users perceive only limited cues for judging object motion from the on-screen optical flow. Conventionally, visual perception is studied in stationary settings with singular objects. In practical applications, however, we, the observers, also move within complex scenes. We must therefore extract object motion from a combined optical flow displayed on screen, which often leads to misestimations due to perceptual ambiguities. We measure and model observers' perceptual accuracy of object motion in dynamic 3D environments, a universal but under-investigated scenario in computer graphics applications. We design and employ a crowdsourcing-based psychophysical study, quantifying the relationships among patterns of scene dynamics, scene content, and the resulting perceptual judgments of object motion direction. The acquired psychophysical data underpins a model for generalized conditions. We then demonstrate the model's ability to guide users toward a significantly better understanding of task object motion in gaming and animation design. With applications in measuring and compensating for object-motion errors in video and rendering, we hope this research establishes a new frontier for understanding and mitigating perceptual errors caused by the gap between screen-displayed graphics and the physical world.