Major-minor long short-term memory for word-level language model

Shuang, K., Li, R., Gu, M.Y., Loo, Jonathan ORCID: https://orcid.org/0000-0002-2197-8126 and Su, Sen (2020) Major-minor long short-term memory for word-level language model. IEEE Transactions on Neural Networks and Learning Systems, 31 (10). pp. 3932-3946. ISSN 2162-237X

Loo_etal_IEEE_TNNLS_2019_Major-minor_long_short-term_memory_for_word-level_language_model.pdf - Accepted Version (PDF, 2MB)

Abstract

Language models play an important role in natural language processing (NLP) systems such as machine translation, speech recognition, token-embedding learning, natural language generation, and text classification. Recently, multi-layer Long Short-Term Memory (LSTM) models have been shown to achieve promising performance on word-level language modeling. For each LSTM layer, a larger hidden size usually means more diverse semantic features, which enables the language model to perform better. However, we have observed that once an LSTM layer reaches a sufficiently large scale, the overall improvement slows down as its hidden size increases further. In this paper, we show that an important factor behind this phenomenon is the high correlation between the newly extended hidden states and the original hidden states, which hinders diverse feature expression in the LSTM. As a result, once the scale is large enough, simply lengthening the LSTM hidden states costs a tremendous number of extra parameters while yielding little benefit. We propose a simple yet effective improvement in which each LSTM layer consists of a large-scale Major LSTM and a small-scale Minor LSTM, breaking the high correlation between the two parts of the hidden states; we call this architecture Major-Minor LSTMs (MM-LSTMs). In experiments, we demonstrate that the language model with MM-LSTMs surpasses the existing state-of-the-art models on the Penn Treebank (PTB) and WikiText-2 (WT2) datasets, and outperforms the baseline by 3.3 perplexity points on the WikiText-103 dataset without increasing the model's parameter count.
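A minimal sketch of the idea described in the abstract, assuming each layer's output is the concatenation of a large "Major" LSTM and a small "Minor" LSTM run in parallel over the same input. The class and parameter names here are illustrative, not the paper's exact formulation; the full method is in the article itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM cell step; gate pre-activations stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2 * H])     # forget gate
    o = sigmoid(z[2 * H:3 * H]) # output gate
    g = np.tanh(z[3 * H:])      # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

class MMLSTMLayer:
    """Illustrative Major-Minor layer: two independent LSTMs whose hidden
    states are concatenated, instead of one LSTM of the combined size."""

    def __init__(self, input_size, major_size, minor_size, seed=0):
        rng = np.random.default_rng(seed)

        def init(hidden):
            return (rng.normal(0, 0.1, (4 * hidden, input_size)),
                    rng.normal(0, 0.1, (4 * hidden, hidden)),
                    np.zeros(4 * hidden))

        self.major = init(major_size)   # large-scale Major LSTM parameters
        self.minor = init(minor_size)   # small-scale Minor LSTM parameters
        self.major_size = major_size
        self.minor_size = minor_size

    def forward(self, xs):
        hM = np.zeros(self.major_size); cM = np.zeros(self.major_size)
        hm = np.zeros(self.minor_size); cm = np.zeros(self.minor_size)
        outs = []
        for x in xs:
            # Both LSTMs read the same input; their states stay separate,
            # which avoids correlation between the two parts of the output.
            hM, cM = lstm_step(x, hM, cM, *self.major)
            hm, cm = lstm_step(x, hm, cm, *self.minor)
            outs.append(np.concatenate([hM, hm]))
        return np.stack(outs)

# Tiny usage example: a sequence of 5 inputs of size 8, Major size 16, Minor size 4.
layer = MMLSTMLayer(input_size=8, major_size=16, minor_size=4)
out = layer.forward(np.random.default_rng(1).normal(size=(5, 8)))
```

The output at each step has dimension `major_size + minor_size` (20 here), so a downstream softmax sees the same total width as a single LSTM of that size, while the two blocks evolve independently.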

Item Type: Article
DOI: 10.1109/tnnls.2019.2947563
Additional Information: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This work was supported in part by the National Key Research and Development Program of China (No. 2017YFB1400603).
Keywords: language model, Long Short-Term Memory (LSTM), Natural Language Processing (NLP), shortcut connections.
Subjects: Computing > Intelligent systems
Depositing User: Jonathan Loo
Date Deposited: 30 Oct 2019 14:11
Last Modified: 06 Feb 2024 16:01
URI: https://repository.uwl.ac.uk/id/eprint/6490

