Performance is a critical component of customer satisfaction with network-based applications. Unfortunately, the performance of collaborative software that operates in highly heterogeneous environments is difficult to evaluate accurately with traditional techniques such as workload modeling or testing in controlled environments. To evaluate an application's performance “in the wild” during development, we deploy early versions of the software and collect performance data from application users for key usage scenarios. Our analysis package produces a range of visualizations that help development teams identify and prioritize performance issues. Our approach has helped teams focus on performance early in the development cycle and enabled them to evaluate their progress, identify defects, and estimate timelines. We present the approach, discuss its deployment and impact, and outline future improvements.