We describe a framework for vision processing (VP) that makes crucial use of a large lexical database automatically derived from machine-readable dictionaries (MRDs). We suggest that MRDs encode much of the information about the physical and common-sense properties of objects that broad-coverage VP requires. Underlying this is the claim that organizing visual processing around a lexical database allows bidirectional mapping between images and linguistic descriptions of those images.