
Document-level relation extraction (DocRE) is an important task in natural language processing, with applications in knowledge graph construction, question answering, and biomedical text analysis. However, existing approaches to DocRE predict relations between entities from fixed entity representations, which can lead to inaccurate results. In this paper, we propose a novel DocRE model that addresses this limitation through a relation-specific entity representation method and evidence sentence augmentation. Evidence sentence augmentation identifies the top-k evidence sentences for each relation, and the relation-specific entity representation method aggregates entity mentions with an attention mechanism that weights each mention by its importance. Together, these components capture the context of each entity mention with respect to the specific relation being predicted and select evidence sentences that support accurate relation identification. Finally, a relation reordering module re-predicts entity relations from the predicted evidence sentences, producing k sets of relation predictions, and averages these together with the original predictions (k+1 sets in total) to obtain the final relation predictions. Experimental results on the DocRED dataset demonstrate that our proposed model achieves an F1 score of 62.84% and an Ign F1 score of 60.79%, outperforming state-of-the-art methods.
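A minimal formal sketch of the two mechanisms sketched above, under assumed notation (none of these symbols are defined in the abstract itself): let $h_{m_1},\dots,h_{m_n}$ be the embeddings of an entity's mentions, $q_r$ a relation-specific query vector, and $P_0, P_1, \dots, P_k$ the relation score sets obtained from the full document and from each of the $k$ evidence-sentence subsets.

\[
\alpha_i = \frac{\exp\big(q_r^{\top} h_{m_i}\big)}{\sum_{j=1}^{n}\exp\big(q_r^{\top} h_{m_j}\big)}, \qquad
e_r = \sum_{i=1}^{n} \alpha_i\, h_{m_i}, \qquad
P_{\text{final}} = \frac{1}{k+1}\sum_{t=0}^{k} P_t .
\]

Here $e_r$ is one plausible form of a relation-specific entity representation (an attention-weighted sum of mention embeddings), and $P_{\text{final}}$ illustrates the averaging of the k+1 prediction sets; the paper's exact formulation may differ.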