Despite drastically different settings, cultures, incentive systems, and time pressures, we find that the parameters of peer review converge across contemporary software projects. We examine two Google-led projects, Android and Chrome; three Microsoft projects, Bing, Office, and MS SQL; and one project internal to AMD. We contrast our findings with data from traditional software inspection conducted on a Lucent compiler project and from open source software peer review on six projects, including Apache, Linux, and KDE. Our measures include the review interval, the number of developers involved in review, and proxy measures for the number of defects found during review. We also introduce a measure of the degree to which knowledge is shared during review, an aspect of review practice that has traditionally had only experiential support. Our knowledge-sharing measure shows that participating in peer review increases the number of distinct files a developer knows about by 66% to 150%, depending on the project. This paper represents one of the first studies of contemporary review in software firms and the most diverse study of peer review to date. We discuss the practices that converge across projects as well as divergent and anomalous practices.