Image matching is a cornerstone technology in many image understanding, augmented reality, and recognition applications. State-of-the-art techniques follow a feature-based approach, extracting interest points and describing them with rotation- or affine-invariant descriptors. However, requiring rotation or affine invariance incurs additional computational cost and can yield inaccurate estimates in some cases, such as out-of-plane rotations. Fortunately, most mobile devices today incorporate 3-D accelerometers that measure acceleration along three axes. In this paper, we propose to employ these acceleration values to calculate the in-plane and tilting rotation angles of the capturing device, thereby alleviating the need for constructing rotationally invariant descriptors. We describe an approach for incorporating the calculated rotation angles into the process of interest point extraction and description. Furthermore, we empirically evaluate the proposed approach in terms of both computational time and accuracy, on standard datasets as well as a dataset collected using a mobile phone. Our results show that the proposed approach reduces computational time while improving accuracy.
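The abstract does not spell out how the rotation angles are derived from the accelerometer readings; a standard approach, when the device is approximately static so the sensor measures only gravity, is to recover the roll (in-plane) and pitch (tilt) angles from the gravity vector. The sketch below is an illustrative assumption, not the paper's method; the function name and axis conventions are hypothetical.

```python
import math

def device_angles(ax, ay, az):
    """Estimate roll (in-plane) and pitch (tilt) angles in radians from a
    3-axis accelerometer reading (ax, ay, az), assuming the device is held
    still so the measured acceleration is dominated by gravity.
    NOTE: illustrative sketch; axis conventions vary across devices."""
    roll = math.atan2(ay, az)                    # rotation about the x-axis
    pitch = math.atan2(-ax, math.hypot(ay, az))  # rotation about the y-axis
    return roll, pitch
```

For example, a device lying flat (gravity along the z-axis, e.g. `device_angles(0.0, 0.0, 9.81)`) yields roll and pitch of zero, while rotating it 90 degrees about its x-axis moves gravity onto the y-axis and yields a roll of pi/2.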