The characteristic melodic motifs (pakad) of a raga in Indian classical music are an important cue to its identity. During performance, however, artists incorporate considerable creative variation within a raga phrase while still preserving its identity to the listener's ear. It is therefore of interest to explore the boundaries of this categorization of phrase identity, given the space of musical variations in the pitch-interval and duration dimensions. Such an endeavour can help better model melodic similarity for music retrieval and pedagogy applications. Our primary research goal is to model the melodic shape corresponding to a raga phrase in a perceptually relevant manner. This work can help develop methods for the automatic discovery of musically meaningful melodic patterns from audio.

In this talk, I shall motivate the relevance of studying music perception within a computational framework. As a signal processing engineer, I started out modelling melodic motifs with a view to being able to ‘classify’ and ‘discover’ characteristic phrases in raga audio recordings. Being a musician myself, I could perceive as similar certain patterns in a concert recording whose pitch contours looked quite different visually. Faced with this gap between vision and audition, the immediate question that came to my mind was: “How different is ‘different’?” That was the point at which I began reverse engineering the problem, collecting human responses to capture the correlation between subjective judgments and objective measures. The broad goal is to define a distance measure best suited to Indian raga music, one that accounts for ‘microtonality’ and ‘improvisation’, the crux of this music tradition.
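As a concrete illustration of the kind of melodic distance at issue, the sketch below compares two pitch contours with dynamic time warping (DTW), a common (though not the only) choice for melodic similarity that tolerates the timing elasticity of improvised phrases. The contours, function name, and pitch values (in cents) are all illustrative assumptions, not material from the talk.

```python
def dtw_distance(a, b):
    """Cumulative DTW cost between two pitch contours (pitch values in cents).

    A small cost means the contours trace a similar melodic shape even if
    one is stretched or compressed in time relative to the other.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimum cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local pitch difference
            cost[i][j] = d + min(cost[i - 1][j],      # skip a sample of a
                                 cost[i][j - 1],      # skip a sample of b
                                 cost[i - 1][j - 1])  # align the two samples
    return cost[n][m]

# Two hypothetical renditions of the "same" phrase: the second elongates
# some notes, as an artist might during improvisation.
phrase_a = [0, 200, 400, 500, 400, 200, 0]
phrase_b = [0, 200, 200, 400, 400, 500, 400, 200, 0]
print(dtw_distance(phrase_a, phrase_b))  # -> 0.0: same shape despite different lengths
```

A plain point-by-point Euclidean distance would fail here (the contours even have different lengths); a perceptually grounded measure for raga music would further need to weight microtonal pitch deviations and ornamentation, which this sketch does not attempt.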