An LSTM-Based Plagiarism Detection via Attention Mechanism and a Population-Based Approach for Pre-training Parameters with Imbalanced Classes
Isfahan University of Technology, Iran.
Hakim Sabzevari University, Iran.
RISE Research Institutes of Sweden, Digitala system, Industriella system; Mälardalen University, Sweden. ORCID iD: 0000-0003-3354-1463
RISE Research Institutes of Sweden, Digitala system, Industriella system. ORCID iD: 0000-0002-1512-0844
2021 (English). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13110 LNCS. Springer Science and Business Media Deutschland GmbH, 2021, pp. 690-701. Conference paper, published paper (peer-reviewed). Open Access.
Abstract [en]

Plagiarism is one of the leading problems in academic and industrial environments; its detection amounts to finding similar items in a given document or source code. This paper proposes an architecture based on Long Short-Term Memory (LSTM) and an attention mechanism, called LSTM-AM-ABC, boosted by a population-based approach for parameter initialization. Gradient-based optimization algorithms such as back-propagation (BP) are widely used in the literature for the learning process in LSTMs, attention mechanisms, and feed-forward neural networks, but they suffer from problems such as getting stuck in local optima. To mitigate this, population-based metaheuristic (PBMH) algorithms can be used. To this end, this paper employs a PBMH algorithm, artificial bee colony (ABC). The proposed algorithm finds the initial values for model learning in the LSTM, attention mechanism, and feed-forward neural network simultaneously; in other words, the ABC algorithm finds a promising starting point for the BP algorithm. For evaluation, we compare the proposed algorithm with both conventional and population-based methods. The results clearly show that the proposed method provides competitive performance.
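The core idea described in the abstract, using a population-based metaheuristic (ABC) to pick promising initial parameters and then handing the best candidate to gradient-based training, can be sketched on a toy problem. This is a minimal illustration only, not the paper's implementation: it uses a simplified ABC (employed-bee and scout phases), a plain quadratic loss in place of the LSTM/attention/feed-forward model, hand-picked hyperparameter values (`n_bees`, `limit`, `iters`), and gradient descent standing in for full back-propagation.

```python
import random


def abc_init(loss, dim, n_bees=10, limit=5, iters=50, bound=1.0, seed=0):
    """Simplified Artificial Bee Colony search for a promising starting point.

    Maintains a population of candidate parameter vectors ("food sources"),
    greedily improves each by local perturbation (employed-bee phase), and
    resets sources that fail to improve `limit` times (scout phase).
    Returns the best candidate found, to be used as an initializer.
    """
    rng = random.Random(seed)
    foods = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_bees)]
    fits = [loss(f) for f in foods]
    trials = [0] * n_bees
    for _ in range(iters):
        # Employed bees: perturb one coordinate toward/away from a random peer.
        for i in range(n_bees):
            k = rng.choice([x for x in range(n_bees) if x != i])
            j = rng.randrange(dim)
            cand = list(foods[i])
            cand[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
            f = loss(cand)
            if f < fits[i]:
                foods[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # Scout bees: abandon exhausted sources and re-sample them at random.
        for i in range(n_bees):
            if trials[i] > limit:
                foods[i] = [rng.uniform(-bound, bound) for _ in range(dim)]
                fits[i] = loss(foods[i])
                trials[i] = 0
    best = min(range(n_bees), key=fits.__getitem__)
    return foods[best]


def gd_refine(w, grad, lr=0.1, steps=100):
    """Gradient-descent stage, standing in for back-propagation."""
    for _ in range(steps):
        w = [wi - lr * gi for wi, gi in zip(w, grad(w))]
    return w


if __name__ == "__main__":
    # Toy stand-in for the model's loss surface: a quadratic with a known optimum.
    target = [0.5, -0.3]
    loss = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
    grad = lambda w: [2.0 * (wi - ti) for wi, ti in zip(w, target)]
    w0 = abc_init(loss, dim=2)   # ABC picks the starting point
    w = gd_refine(w0, grad)      # gradient-based stage refines it
```

The two-stage structure mirrors the abstract: the metaheuristic explores globally so the gradient stage is less likely to start in a poor basin, while the gradient stage provides the fast local convergence the metaheuristic lacks.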

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2021. pp. 690-701
Keywords [en]
Artificial bee colony, Attention mechanism, Back-propagation, LSTM, Long short-term memory, Plagiarism, Plagiarism detection, Feed-forward neural networks, Intellectual property, Learning algorithms, Optimization, Academic environment, Industrial environments, Imbalanced class, Meta-heuristic algorithms, Pre-training, Training parameters
HSV category
Identifiers
URN: urn:nbn:se:ri:diva-57902
DOI: 10.1007/978-3-030-92238-2_57
Scopus ID: 2-s2.0-85121899875
ISBN: 9783030922375 (print)
OAI: oai:DiVA.org:ri-57902
DiVA, id: diva2:1626084
Conference
28th International Conference on Neural Information Processing, ICONIP 2021, Virtual/Online, 8-12 December 2021
Available from: 2022-01-10 Created: 2022-01-10 Last updated: 2025-09-23 Bibliographically checked

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text | Scopus

Person

Helali Moghadam, Mahshid
Saadatmand, Mehrdad
