It is common in classification methods to first place data in a vector space and then learn decision boundaries. We propose reversing that process: for fixed decision boundaries, we "learn" the location of the data. This way we (i) do not need a metric (or even stronger structure): pairwise dissimilarities suffice; and additionally (ii) produce low-dimensional embeddings that can be analyzed visually. We achieve this by combining an entropy-based embedding method with an entropy-based version of semi-supervised logistic regression. We present results for clustering and semi-supervised classification.