We present the idea of a flexible phased array of microphones for wearable computers and show how such an array can be used for both source localization and signal enhancement. It can thus help solve two fundamental problems for audio input to wearables: determining who is speaking when (user commands vs. nearby speech, turn-taking in a conversation, etc.) and obtaining high-quality audio without a headset microphone. We describe methods for learning the mapping between phase delays and world coordinates without specifying the array geometry and with minimal effort from the user. Finally, we describe an implementation of such an array built from low-cost microphones and present preliminary results for source localization and speaker-change detection.
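To make the array's two roles concrete, here is a minimal sketch (illustrative only, not the implementation described above): it estimates the inter-microphone time delay by cross-correlation, the basic cue behind phase-delay source localization, and then applies a simple delay-and-sum beamformer for signal enhancement. The function names and the synthetic two-channel signal are our own assumptions.

```python
import numpy as np

def estimate_delay(x, y):
    """Estimate the integer-sample delay of channel y relative to
    channel x via full cross-correlation; this lag is the phase-delay
    cue used for source localization."""
    corr = np.correlate(y, x, mode="full")
    return int(np.argmax(corr)) - (len(x) - 1)

def delay_and_sum(signals, delays):
    """Shift each channel back by its estimated delay and average:
    a minimal delay-and-sum beamformer for signal enhancement."""
    aligned = [np.roll(s, -d) for s, d in zip(signals, delays)]
    return np.mean(aligned, axis=0)

# Synthetic example: the source reaches the second mic 5 samples later.
rng = np.random.default_rng(0)
src = rng.standard_normal(256)
mic1 = src.copy()
mic2 = np.roll(src, 5)  # circular shift stands in for propagation delay

d = estimate_delay(mic1, mic2)          # recovers the 5-sample lag
enhanced = delay_and_sum([mic1, mic2], [0, d])
```

With real recordings the delays are fractional and noisy, so practical systems typically interpolate the correlation peak or work in the frequency domain; the sketch above only shows the integer-sample case.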