# nlg-eval

Evaluation code for various unsupervised automated metrics for NLG (Natural Language Generation). It takes as input a hypothesis file and one or more reference files, and outputs values of the metrics. Rows across these files should correspond to the same example.

## Metrics

- BLEU
- METEOR
- ROUGE
- CIDEr
- SkipThought cosine similarity
- Embedding Average cosine similarity
- Vector Extrema cosine similarity
- Greedy Matching score
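To give a feel for the embedding-based metrics above, here is a minimal, self-contained sketch of the idea behind Embedding Average cosine similarity: average the word vectors of the hypothesis and of the reference, then take the cosine of the two averages. The toy embedding table below is purely hypothetical; this is an illustration of the concept, not nlg-eval's implementation, which relies on pretrained word embeddings.

```python
import math

def average_embedding(tokens, embeddings):
    """Mean of the word vectors for tokens found in the lookup table."""
    dim = len(next(iter(embeddings.values())))
    vectors = [embeddings[t] for t in tokens if t in embeddings]
    if not vectors:
        return [0.0] * dim
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def cosine(u, v):
    """Cosine similarity between two vectors; 0.0 if either has zero norm."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Toy 2-dimensional embedding table (hypothetical values, for illustration).
emb = {
    "the": [0.1, 0.3], "cat": [0.9, 0.2],
    "a":   [0.1, 0.2], "dog": [0.8, 0.4],
}

hyp = "the cat".split()
ref = "a dog".split()
score = cosine(average_embedding(hyp, emb), average_embedding(ref, emb))
print(round(score, 3))
```

The other embedding-based metrics listed differ mainly in how they pool word vectors: Vector Extrema keeps the most extreme value per dimension instead of the mean, and Greedy Matching aligns each word in one sentence to its most similar word in the other.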