Attention based Recurrent Neural Network for Nepali Text Summarization

Authors

  • Bipin Timalsina Central Department of Computer Science and Information Technology, Tribhuvan University, Kathmandu, Nepal
  • Nawaraj Paudel Central Department of Computer Science and Information Technology, Tribhuvan University, Kathmandu, Nepal
  • Tej Bahadur Shahi Central Department of Computer Science and Information Technology, Tribhuvan University, Kathmandu, Nepal

DOI:

https://doi.org/10.3126/jist.v27i1.46709

Keywords:

Abstractive text summarization, encoder-decoder, long short-term memory, Nepali language processing, recurrent neural network

Abstract

Automatic text summarization remains a challenging problem in natural language processing (NLP), as it demands preserving the important information of a long text while condensing it into a short summary. Extractive and abstractive summarization are the two widely investigated approaches. In extractive summarization, important sentences are selected from the source text and combined to form a summary, whereas abstractive summarization generates a summary focused on the meaning of the text rather than reusing its sentences verbatim. For this reason, abstractive summarization has attracted more attention from researchers in the recent past. However, text summarization is still a largely unexplored topic for the Nepali language. To this end, we propose an abstractive text summarization approach for Nepali text. First, we create a Nepali text dataset by scraping news articles from online news portals. Second, we design a deep learning-based summarization model built on an encoder-decoder recurrent neural network with attention; more precisely, Long Short-Term Memory (LSTM) cells are used in the encoder and decoder layers. Third, we build nine different models by varying hyper-parameters such as the number of hidden layers and the number of hidden units. Finally, we report Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores to evaluate each model. Among the nine models created by adjusting the number of layers and hidden states, the model with a single-layer encoder and 256 hidden states outperformed all others, with F-scores of 15.74, 3.29, and 15.21 for ROUGE-1, ROUGE-2, and ROUGE-L, respectively.
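
To make the described architecture concrete, the following is a minimal TensorFlow/Keras sketch of the best-performing configuration named in the abstract: a single-layer LSTM encoder and decoder with 256 hidden units and dot-product (Luong-style) attention. It is not the authors' released code; the vocabulary sizes, embedding dimension, and sequence lengths are illustrative assumptions.

```python
# Minimal sketch of an attention-based encoder-decoder summarizer.
# Hyper-parameters other than the 256 hidden units are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

SRC_VOCAB = 30000      # assumed source (article) vocabulary size
TGT_VOCAB = 15000      # assumed target (summary) vocabulary size
EMB_DIM = 128          # assumed embedding dimension
HIDDEN = 256           # hidden units of the best model reported in the abstract
MAX_SRC_LEN = 400      # assumed maximum article length (tokens)
MAX_TGT_LEN = 30       # assumed maximum summary length (tokens)

# Encoder: embed the article tokens and run a single LSTM layer.
enc_inputs = layers.Input(shape=(MAX_SRC_LEN,), name="article_tokens")
enc_emb = layers.Embedding(SRC_VOCAB, EMB_DIM, mask_zero=True)(enc_inputs)
enc_outputs, enc_h, enc_c = layers.LSTM(
    HIDDEN, return_sequences=True, return_state=True, name="encoder_lstm"
)(enc_emb)

# Decoder: embed the summary-so-far, initialised with the encoder states.
dec_inputs = layers.Input(shape=(MAX_TGT_LEN,), name="summary_tokens")
dec_emb = layers.Embedding(TGT_VOCAB, EMB_DIM, mask_zero=True)(dec_inputs)
dec_outputs, _, _ = layers.LSTM(
    HIDDEN, return_sequences=True, return_state=True, name="decoder_lstm"
)(dec_emb, initial_state=[enc_h, enc_c])

# Attention: each decoder state attends over all encoder states.
context = layers.Attention(name="luong_attention")([dec_outputs, enc_outputs])
dec_concat = layers.Concatenate()([dec_outputs, context])

# Project onto the target vocabulary at every decoding step.
outputs = layers.Dense(TGT_VOCAB, activation="softmax", name="vocab_dist")(dec_concat)

model = Model([enc_inputs, dec_inputs], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

The attention layer lets the decoder re-weight the encoder states at every generation step instead of relying on a single fixed-length context vector, which is the usual motivation for attention when summarizing long news articles such as those in the scraped Nepali dataset.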


Published

2022-06-30

How to Cite

Timalsina, B., Paudel, N., & Shahi, T. B. (2022). Attention based Recurrent Neural Network for Nepali Text Summarization. Journal of Institute of Science and Technology, 27(1), 141–148. https://doi.org/10.3126/jist.v27i1.46709

Issue

Vol. 27 No. 1 (2022)

Section

Research Articles