Please use this identifier to cite or link to this item:
https://ruomoplus.lib.uom.gr/handle/8000/1980

Title: Transfer learning for software vulnerability prediction using Transformer models
Authors: Kalouptsoglou, Ilias; Siavvas, Miltiadis; Ampatzoglou, Apostolos; Kehagias, Dionysios; Chatzigeorgiou, Alexander
Author Department Affiliations: Department of Applied Informatics
Author School Affiliations: School of Information Sciences
Subjects: FRASCATI__Natural sciences; FRASCATI__Natural sciences__Computer and information sciences
Keywords: Deep learning; Software security; Transfer learning; Transformer; Vulnerability prediction
Issue Date: 1-Sep-2025
Publisher: Elsevier
Journal: The Journal of Systems and Software
ISSN: 0164-1212
Volume: 227
Start page: 112448
Abstract: Recently, the software security community has exploited text mining and deep learning methods to identify vulnerabilities. To this end, progress in the field of Natural Language Processing (NLP) has opened a new direction for constructing Vulnerability Prediction (VP) models by employing Transformer-based pre-trained models. This study investigates the capacity of the Generative Pre-trained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT) to enhance the VP process by capturing semantic and syntactic information in the source code. Specifically, we examine different ways of using CodeGPT and CodeBERT to build VP models so as to maximize the benefit of their use for the downstream task of VP. To enhance the performance of the models, we explore fine-tuning as well as word-embedding and sentence-embedding extraction methods. We also compare VP models based on Transformers trained on code from scratch with those pre-trained on natural language first. Furthermore, we compare these architectures to state-of-the-art text mining and graph-based approaches. The results show that training a separate deep learning predictor on pre-trained word embeddings is a more efficient approach to VP than either fine-tuning or extracting sentence-level features. The findings also highlight the importance of context-aware embeddings in the models' attempt to identify vulnerable patterns in the source code.
URI: https://ruomoplus.lib.uom.gr/handle/8000/1980
DOI: 10.1016/j.jss.2025.112448
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
Corresponding Item Departments: Department of Applied Informatics
Appears in Collections: Articles
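
As an illustration of the abstract's headline finding, the sketch below trains a separate predictor on frozen, pre-trained word embeddings instead of fine-tuning the Transformer. It is a minimal sketch, not the paper's implementation: the `microsoft/codebert-base` checkpoint, the BiLSTM head, and all hyperparameters are assumptions chosen for illustration.

```python
# Illustrative sketch (assumed setup, not the paper's exact pipeline):
# frozen CodeBERT token embeddings feed a separate BiLSTM predictor.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
encoder = AutoModel.from_pretrained("microsoft/codebert-base")
encoder.eval()  # encoder stays frozen: embeddings only, no fine-tuning


class VulnerabilityPredictor(nn.Module):
    """Separate deep learning predictor trained on pre-trained embeddings."""

    def __init__(self, embed_dim: int = 768, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # vulnerable vs. clean

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(embeddings)          # (batch, seq_len, 2*hidden)
        return self.head(out.mean(dim=1))       # pool over the token axis


def embed(source_code: str) -> torch.Tensor:
    """Return contextual token (word-level) embeddings for one snippet."""
    tokens = tokenizer(source_code, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():  # no gradients flow into the frozen encoder
        return encoder(**tokens).last_hidden_state  # (1, seq_len, 768)


predictor = VulnerabilityPredictor()
logits = predictor(embed("strcpy(buf, user_input);"))
```

Because the encoder is frozen, only the lightweight predictor's weights are updated during training, which is what makes this route cheaper than fully fine-tuning the Transformer.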
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| JSSOFTWARE-D-24-00688_R1.pdf | | 3.81 MB | Adobe PDF |
Scopus citations: 9 (checked on Apr 18, 2026)
Page views: 1,864 (checked on Apr 18, 2026)
Downloads: 38 (checked on Apr 18, 2026)
This item is licensed under a Creative Commons License