We present a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. Our technique builds inputs incrementally by randomly selecting a method call to apply and finding arguments from among previously-constructed inputs. As soon as an input is built, it is executed and checked against a set of contracts and filters. The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing tests can be used to ensure that code contracts are preserved across program changes; failing tests (those that violate one or more contracts) point to potential errors that should be corrected. When applied to 14 widely-used libraries comprising 780KLOC, feedback-directed random test generation finds many serious, previously-unknown errors. Compared with both systematic test generation and undirected random test generation, feedback-directed random test generation finds more errors, finds more severe errors, and produces fewer redundant tests.
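The generate-execute-classify loop described above can be sketched as follows. This is an illustrative toy, not the paper's tool: operations on Python lists stand in for method calls on the classes under test, and the contract and operation names are assumptions made for the example.

```python
import random

def contracts_ok(value):
    """Example contract: equality must be reflexive."""
    return value == value

def pop_last(xs):
    ys = list(xs)
    ys.pop()  # raises IndexError on an empty list -> classified as illegal
    return ys

# (name, callable) pairs standing in for the methods under test.
OPERATIONS = [
    ("append_one", lambda xs: xs + [1]),
    ("reverse",    lambda xs: list(reversed(xs))),
    ("pop_last",   pop_last),
]

def generate(steps=200, seed=0):
    pool = [[]]                    # previously built, useful inputs
    seen = {repr([])}              # fingerprints used to flag redundancy
    stats = {"useful": 0, "redundant": 0, "illegal": 0, "violating": 0}
    rng = random.Random(seed)
    for _ in range(steps):
        _name, fn = rng.choice(OPERATIONS)   # pick a method call at random
        arg = rng.choice(pool)               # pick its argument from the pool
        try:
            result = fn(arg)                 # execute the new input right away
        except Exception:
            stats["illegal"] += 1            # discard: not a valid test input
            continue
        if not contracts_ok(result):
            stats["violating"] += 1          # would become a failing unit test
        elif repr(result) in seen:
            stats["redundant"] += 1          # discard: adds nothing new
        else:
            seen.add(repr(result))
            pool.append(result)              # feedback: reuse in future inputs
            stats["useful"] += 1
    return pool, stats

pool, stats = generate()
```

The feedback step is the final `else` branch: only inputs that execute normally, satisfy the contracts, and produce a previously unseen result are fed back into the pool from which future inputs draw their arguments.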