Collaborative Annotation of a Large Biomedical Corpus (CALBC)
Start date: Jan 1, 2009
End date: Jun 30, 2011
Project status: Finished
Description
CALBC transforms a large set of documents into a corpus with rich semantic links to biomedical data resources.

The biomedical scientific literature is the key resource for the exchange of scientific facts: researchers write publications for their peer group to propose novel theories and to report groundbreaking, innovative findings. The publishers' new open access policies have removed the barriers that hindered the integration of the literature content into the infrastructure of fact databases. This change has led to a standardization process in which scientific publications are seamlessly connected to the scientific databases.

The CALBC support action will engage the community of biomedical text mining researchers in a challenge that will lead to the exchange of a large set of annotated scientific documents. This community research effort will address a very difficult question: if we take all available semantic resources, for example terminologies, and use them to annotate a large set of documents, what will the documents look like under the best possible conditions? The solutions to this problem will deliver the biomedical literature in a standardized form and will enable more sophisticated retrieval methods for the literature, i.e. with better semantic support. In addition, automatic interlinking of the documents with the biomedical fact databases will become possible.

This project addresses the difficult problem of annotating an unrestricted number of text documents with a large set of semantic types from the biomedical domain. We propose a collaborative approach to this annotation task in the form of an open challenge to the biomedical text mining community. The task is the annotation of named entities in a large biomedical corpus, for a variety of semantic categories. The outcome of the project is a large, collaboratively annotated corpus, marked up with mentions of biomedical entities. This annotated corpus becomes a resource for the community, to be used as a reference for improving text mining applications.

The biomedical text mining research community has a long tradition of organizing such challenges as a way of evaluating techniques, sharing technical knowledge, and improving the results of text mining systems. However, such challenges have typically addressed relatively small corpora in narrow sub-domains, in part because the evaluation of the results is extremely time-consuming and costly. As a result, the generated annotated corpora are too small, and too narrowly annotated, to be useful in a variety of text mining applications.

In contrast, we propose to create a broadly scoped, large annotated corpus (at least 100,000 Medline abstracts annotated with 5-10 semantic types) by integrating the annotations from different named entity recognition systems; metadata will also be added to the corpus. The participating systems have different application scopes and annotation strategies and therefore complement each other, and the annotated corpus will reflect these different scopes and strategies. A secondary goal of this project is to define a standardized format for representing the annotations contributed by the participants and for comparing them effectively; at present, the lack of such a format hinders progress in the evaluation of named entity recognition systems.
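The project text above does not fix the annotation data model or the exact procedure for integrating the outputs of the participating systems. Purely for illustration, the following is a minimal sketch assuming a simple standoff representation (a hypothetical `Mention` record with character offsets and a semantic type) and an exact-match majority vote as the merge rule; the names `Mention`, `harmonise`, and `min_votes` are illustrative and not part of the project's specification.

```python
from dataclasses import dataclass
from collections import Counter
from typing import Iterable

# Hypothetical standoff annotation: character offsets into an abstract plus a
# semantic type (e.g. "protein", "disease"). Not the official CALBC format.
@dataclass(frozen=True)
class Mention:
    doc_id: str       # e.g. a PubMed identifier
    start: int        # offset of the first character of the mention
    end: int          # offset one past the last character
    sem_type: str     # semantic category assigned by the annotating system

def harmonise(system_outputs: Iterable[Iterable[Mention]],
              min_votes: int = 2) -> set[Mention]:
    """Merge annotations from several NER systems into one shared corpus:
    keep every mention that at least `min_votes` systems produced exactly."""
    votes = Counter()
    for output in system_outputs:
        for mention in set(output):   # count each distinct mention once per system
            votes[mention] += 1
    return {mention for mention, n in votes.items() if n >= min_votes}

# Example: two of three systems agree on each mention, so both are kept.
sys_a = [Mention("PMID:123", 10, 18, "protein")]
sys_b = [Mention("PMID:123", 10, 18, "protein"), Mention("PMID:123", 40, 47, "disease")]
sys_c = [Mention("PMID:123", 40, 47, "disease")]
print(harmonise([sys_a, sys_b, sys_c]))
```

In practice the merge rule could be looser (e.g. allowing overlapping rather than identical offsets); the exact-match vote here is only the simplest possible variant.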
The final corpus will also be made available in RDF for exploitation in Semantic Web applications.

The corpus will be used to organize challenges in which participants download the corpus, annotate it with their own text mining solutions, submit the result to a central server, and receive an assessment of their results through a fully automated analysis. Submissions and assessments can be contributed at any time over a half-year period. At the end of that period, all submissions of annotated corpora will be used to generate the next fully annotated corpus, which will then be used for the next round of the challenge.
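The fully automated analysis is not specified further in this description. The sketch below assumes a plain exact-match comparison of a submitted annotation set against the reference annotations, reporting precision, recall and F1; the tuple layout and the `evaluate` helper are illustrative assumptions, not the project's actual evaluation service.

```python
# A mention is represented here as a (doc_id, start, end, semantic_type) tuple,
# mirroring the hypothetical standoff sketch above without extra dependencies.
Mention = tuple[str, int, int, str]

def evaluate(submitted: set[Mention], reference: set[Mention]) -> dict[str, float]:
    """Score a submission against the reference corpus using exact-match
    precision, recall and F1 (one plausible form of automated assessment)."""
    true_positives = len(submitted & reference)
    precision = true_positives / len(submitted) if submitted else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: the submission recovers one of two reference mentions plus one spurious one.
reference = {("PMID:123", 10, 18, "protein"), ("PMID:123", 40, 47, "disease")}
submitted = {("PMID:123", 10, 18, "protein"), ("PMID:123", 60, 65, "species")}
print(evaluate(submitted, reference))  # precision 0.5, recall 0.5, f1 0.5
```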