Inter-Annotator Agreement score for NLP?
I have several annotators who labeled strings of text for me in order to train an NER model. The annotation is done in JSON format: each record consists of a string followed by the start and end indices of the named entities, along with their respective entity types. What is the best way to calculate the IAA score in this case? Is there a tool or Python library available? An illustrative example of the format is shown below.
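Here is a minimal sketch of one annotated record, just to make the structure concrete; the field names (`text`, `entities`, `start`, `end`, `label`) and the values are hypothetical, but the shape matches what I described (string plus character offsets plus entity type), and each annotator produces this for the same text.

```python
# Hypothetical example of one annotator's output for a single text.
# Offsets are character indices into "text" (end exclusive).
import json

record = {
    "text": "Acme Corp hired Jane Doe in Berlin.",
    "entities": [
        {"start": 0, "end": 9, "label": "ORG"},    # "Acme Corp"
        {"start": 16, "end": 24, "label": "PER"},  # "Jane Doe"
        {"start": 28, "end": 34, "label": "LOC"},  # "Berlin"
    ],
}

print(json.dumps(record, indent=2))
```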
Topic: annotation, named-entity-recognition
Category: Data Science