In this paper, we address the performance problems that arise when word embeddings are used for recommendation. Free-text documents have no structural rules governing their construction and are therefore hard to model, so building an accurate model that conveys all the important information is nontrivial. We convert each document to a numeric structure using word embeddings and test two document representations: one based on the centroid of this numeric representation and another based on a predefined set of topics. We build a free-text recommendation system and study how both representations affect its performance in terms of precision and recommendation time. We then vary the number of topics used to represent documents and examine the tradeoffs inherent in a compact representation: the more compact the representation, the shorter the recommendation time, but the more information is lost in the compaction process. We empirically test different numbers of topics and find an operating point that is 3 times faster than a baseline while being almost as accurate.
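
As a minimal sketch of the centroid-based representation described above (with a toy embedding table standing in for trained word vectors; the function and corpus names are illustrative, not taken from the paper):

```python
import numpy as np

# Toy word-embedding table (illustrative only; a real system would use
# trained embeddings such as word2vec or GloVe vectors).
EMBEDDINGS = {
    "music":  np.array([0.9, 0.1, 0.0]),
    "guitar": np.array([0.8, 0.2, 0.1]),
    "sports": np.array([0.0, 0.9, 0.1]),
    "soccer": np.array([0.1, 0.8, 0.2]),
}

def centroid(doc_tokens):
    """Represent a document as the mean of its word vectors."""
    vecs = [EMBEDDINGS[t] for t in doc_tokens if t in EMBEDDINGS]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(query_tokens, corpus):
    """Rank corpus documents by cosine similarity to the query centroid."""
    q = centroid(query_tokens)
    scored = [(name, cosine(q, centroid(toks)))
              for name, toks in corpus.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

corpus = {
    "doc_music":  ["music", "guitar"],
    "doc_sports": ["sports", "soccer"],
}
print(recommend(["guitar", "music"], corpus))
```

The topic-based representation discussed in the paper would replace the raw centroid with a shorter vector of similarities to a predefined set of topic vectors, trading some precision for a faster similarity computation.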