Reviewing a paper: is quoting baseline results from the literature common practice?
I've been asked to review a paper in which the authors compare their new model (let's call it Model A) to other models (B, C, and D), and conclude theirs is superior on some metric (I know, big surprise!).
Here's the problem: in my research, my supervisors always instructed me to code up the competing models and compare my model that way. The paper I'm reviewing, by contrast, just quotes results from previous literature.
To clarify, here's what I would have had to do if I had been these authors:
- Code up model A.
- Code up models B, C, and D.
- Run all four models on the data set and obtain the metrics needed to compare them.
Whereas this is what the authors did:
- Code up model A.
- Look up published results for models B, C, and D on the same data set to obtain their metrics.
- Run the data set through model A and obtain its metric to compare against the quoted results for models B, C, and D (the sketch below illustrates the difference between the two workflows).
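To make the contrast concrete, here is a minimal sketch in Python/scikit-learn. The dataset, the stand-in models, and the quoted baseline numbers are all placeholders I made up for illustration; they are not from the paper under review.

```python
# Sketch: comparing a new model against baselines, two ways.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Placeholder data set and train/test protocol.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Model A": the only model the authors actually implement and run.
model_a = LogisticRegression(max_iter=5000).fit(X_train, y_train)
acc_a = accuracy_score(y_test, model_a.predict(X_test))

# Authors' workflow: compare acc_a against numbers quoted from prior papers
# (hypothetical values; the original experimental protocol may have differed).
published_results = {"Model B": 0.93, "Model C": 0.95, "Model D": 0.94}

# My workflow: re-implement a baseline and re-run it under the same split,
# so every number comes from an identical protocol.
model_b = RandomForestClassifier(random_state=0).fit(X_train, y_train)
acc_b = accuracy_score(y_test, model_b.predict(X_test))

print(f"Model A (run here):      {acc_a:.3f}")
print(f"Model B (re-run here):   {acc_b:.3f}")
print(f"Quoted baselines:        {published_results}")
```

The point of the sketch is only that, in the authors' workflow, model A's number is produced under their own protocol while B, C, and D's numbers come from someone else's, so any protocol differences are invisible in the comparison.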
Is their method incorrect, or somehow unethical? They make no claims regarding training time.