Can we use the original text document (which we summarized) as a reference in ROUGE?

Traditionally, the reference in ROUGE is a human-written summary, which we compare against the system-generated summary. Now consider this: suppose we generate summaries with different algorithms, e.g. TextRank, LexRank, Luhn, and Gensim. We then take each generated summary as the hypothesis and the original text document as the reference in ROUGE, and calculate recall (R), precision (P), and F1 for each summary. Would these scores tell us which model captures more information from the original text?

For example, given a 250-word summary from each algorithm, we want to test which one contains the most information (representativeness). We evaluate each generated summary against the original text document with ROUGE, and for the algorithm that gets the higher score, we would conclude that its summary is the most representative.
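To make the setup concrete, here is a minimal sketch of ROUGE-1 (unigram overlap) computed by hand, scoring a summary against the full original document as the reference. The document and summary strings are made-up examples; a real evaluation would typically use a package such as `rouge-score` instead.

```python
from collections import Counter

def rouge_1(reference: str, hypothesis: str):
    """Return (recall, precision, F1) from clipped unigram overlap."""
    ref_counts = Counter(reference.lower().split())
    hyp_counts = Counter(hypothesis.lower().split())
    # Clipped overlap: each hypothesis token counts at most as many
    # times as it occurs in the reference.
    overlap = sum((ref_counts & hyp_counts).values())
    recall = overlap / max(sum(ref_counts.values()), 1)
    precision = overlap / max(sum(hyp_counts.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return recall, precision, f1

# Hypothetical data: the original document serves as the ROUGE reference.
document = "the cat sat on the mat and the dog slept on the rug"
summary = "the cat sat on the mat"
r, p, f1 = rouge_1(document, summary)
```

Note what happens here: because the summary is extracted verbatim from the document, precision is trivially 1.0, so with the original text as the reference it is mainly recall that differentiates the algorithms.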

Topic: automatic-summarization, evaluation, nlp

Category: Data Science
