Voluminous medical images are critical assets for clinical decision support systems. Content-based retrieval can help the clinician mine images relevant to the current case from a large database. In this paper we address the problem of retrieving, across modalities, relevant sub-images whose anatomical structures are similar to those of the query image. The images in the database are automatically annotated with the body region depicted in the scan and the organs present, along with their localizing bounding boxes. To this end, a coarse localization of body regions is first performed in the 2D space, taking contextual information into account. Finer localization and verification of organs then follows, using a novel, computationally efficient fuzzy approximation method for constructing 3D texture signatures of the organs of interest. These annotations are indexed with an inverted-file data structure, which enables ranked retrieval of relevant images. Beyond cross-modal sub-image retrieval by image example, the automatic annotation and efficient indexing also allow query by text, limited only by the semantic vocabulary. The algorithm was tested on a database of non-contrast CT and T1-weighted MR volumes, and its performance was quantitatively assessed against a ground-truth database validated by medical experts.
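The inverted-file indexing and ranked retrieval described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class, the volume IDs, and the overlap-count ranking scheme are assumptions chosen for clarity, with each annotation term (organ or body-region label) mapped to the set of volumes carrying it.

```python
from collections import defaultdict


class InvertedIndex:
    """Toy inverted-file index mapping annotation terms (e.g. organ or
    body-region labels) to the IDs of volumes annotated with them."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, volume_id, terms):
        # Register each annotation term for this volume.
        for t in terms:
            self.postings[t].add(volume_id)

    def query(self, terms):
        # Score each volume by how many query terms it matches and
        # return (volume_id, score) pairs, highest overlap first.
        scores = defaultdict(int)
        for t in terms:
            for vid in self.postings.get(t, ()):
                scores[vid] += 1
        return sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))


idx = InvertedIndex()
idx.add("ct_001", ["liver", "kidney", "abdomen"])
idx.add("mr_007", ["liver", "abdomen"])
idx.add("ct_042", ["lung", "thorax"])
print(idx.query(["liver", "abdomen"]))  # ct_001 and mr_007 rank first
```

Because both CT and MR volumes share the same semantic vocabulary of labels, a single text query retrieves matches across modalities, which is what enables the query-by-text mode mentioned above.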