People who use multiple channels at the same time communicate more successfully about spatial problems than those who rely exclusively on either voice or pictures. To achieve a similarly successful interaction between a person and a geographic information system (GIS), we use two concurrent communication channels, graphics and speech, to construct a multimodal spatial query language in which users interact with a geographic database by drawing sketches of the desired configuration while simultaneously talking about the spatial objects and the spatial relations drawn. Through the combined use of graphics and speech, more intuitive and more precise specifications of spatial queries become possible. The key to this interaction is the exploitation of the complementary or redundant information present in the graphical and verbal descriptions of the same spatial scene. A multiple-resolution model of spatial relations captures the essential aspects of a sketch and its corresponding verbal description. The model stresses topological properties, such as containment and neighborhood, and considers metric properties, such as distances and directions, as refinements where necessary. This model enables the retrieval of similar matches between a spatial query and a geographic database, not only exact ones. Such new methods of multimodal spatial querying and spatial similarity retrieval will enable expert as well as novice users to perform spatial searches more easily, ultimately giving new user communities access to spatial databases.
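To make the multiple-resolution idea concrete, the following Python sketch illustrates one plausible reading of it: two scenes are compared first by a coarse topological relation and, only when topology agrees, the score is refined with metric properties such as distance and direction. All names (`Region`, `topological_relation`, `scene_similarity`) and the scoring weights are hypothetical illustrations, not the system described above.

```python
# Illustrative sketch of topology-first, metric-refined scene matching.
# Regions are simplified to axis-aligned rectangles; a real sketch-based
# system would handle arbitrary geometry.

import math
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned rectangle standing in for a sketched spatial object."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def center(self):
        return ((self.xmin + self.xmax) / 2, (self.ymin + self.ymax) / 2)

def topological_relation(a: Region, b: Region) -> str:
    """Coarse topology between two rectangles: disjoint, meet, contains, or overlap."""
    if a.xmax < b.xmin or b.xmax < a.xmin or a.ymax < b.ymin or b.ymax < a.ymin:
        return "disjoint"
    if a.xmax == b.xmin or b.xmax == a.xmin or a.ymax == b.ymin or b.ymax == a.ymin:
        return "meet"
    if (a.xmin <= b.xmin and a.xmax >= b.xmax and
            a.ymin <= b.ymin and a.ymax >= b.ymax):
        return "contains"
    return "overlap"

def metric_refinement(a: Region, b: Region) -> dict:
    """Metric refinements: center-to-center distance and cardinal direction."""
    (ax, ay), (bx, by) = a.center(), b.center()
    dx, dy = bx - ax, by - ay
    direction = ("north" if dy > 0 else "south") if abs(dy) >= abs(dx) \
        else ("east" if dx > 0 else "west")
    return {"distance": math.hypot(dx, dy), "direction": direction}

def scene_similarity(query: tuple, candidate: tuple) -> float:
    """Score in [0, 1]: topology dominates; metric detail refines the match."""
    if topological_relation(*query) != topological_relation(*candidate):
        return 0.0  # topological mismatch: not a similar scene
    q_m, c_m = metric_refinement(*query), metric_refinement(*candidate)
    score = 0.6     # topology agrees (weights are arbitrary for illustration)
    if q_m["direction"] == c_m["direction"]:
        score += 0.2
    # Closer distances raise the score; a ratio near 1 is best.
    ratio = (min(q_m["distance"], c_m["distance"])
             / max(q_m["distance"], c_m["distance"], 1e-9))
    return score + 0.2 * ratio

# Example: a sketched query ("a lake north of a road") against two candidates.
query = (Region(0, 0, 2, 1), Region(0, 3, 2, 4))
close_match = (Region(0, 0, 2, 1), Region(0, 2.5, 2, 3.5))  # same topology, same direction
poor_match = (Region(0, 0, 4, 4), Region(1, 1, 2, 2))       # containment, not disjoint
print(scene_similarity(query, close_match))  # high score (~0.97)
print(scene_similarity(query, poor_match))   # 0.0: topology differs
```

The early return on a topological mismatch mirrors the priority the abstract assigns to topology: metric detail only differentiates scenes that already share the same coarse spatial structure, which is what allows similar (rather than only exact) matches to be retrieved.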