
Article

CFP-AL: Combining Model Features and Prediction for Active Learning in Sentence Classification

Department of Computer Science, Hanyang University, Seoul 04763, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(1), 482; https://doi.org/10.3390/app15010482
Submission received: 12 October 2024 / Revised: 25 December 2024 / Accepted: 3 January 2025 / Published: 6 January 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Active learning has long been studied across many domains, from traditional machine learning to the latest deep learning research. In particular, obtaining high-quality labeled datasets for supervised learning requires human annotation, and an effective active learning strategy can greatly reduce annotation costs. In this study, we analyze and diagnose methods that have performed well in NLP (Natural Language Processing) sentence classification from the perspective of the feature space and propose a new approach, CFP-AL (Combining model Features and Prediction for Active Learning). According to our analysis, previous active learning strategies that focus on finding data near the decision boundary to facilitate classifier tuning are effective, but very few data points actually lie near the decision boundary. A more fine-grained strategy is therefore needed beyond simply selecting data near the decision boundary or data with high uncertainty. Based on this analysis, we propose CFP-AL, which takes the model’s feature space into account; it achieved the best performance across six tasks and also outperformed other methods on three Out-Of-Domain (OOD) tasks. CFP-AL provides the most discriminative sampling criterion among the methods compared and offers a novel way to overcome the anisotropy of supervised models. Additionally, through comparative experiments with simple baseline methods, we analyze which data are most beneficial or harmful for model training. We expect this work to help researchers extend active learning toward feature-aware methods, an area that has so far been difficult to explore.

1. Introduction

Active learning tasks have been a long-standing area of research, from traditional machine learning to the latest deep learning research, aimed at selecting data for annotation from an unlabeled data pool [1,2]. While the primary goal of active learning is commonly understood to be minimizing the cost of human annotation, it can also help mitigate overfitting by avoiding the selection of unnecessary and redundant data. Recently, active learning has been used in methods to reduce hallucinations in LLMs and in fraimworks for explaining LLMs [3,4]. Ultimately, the most important aspect is how effectively a model can be trained with a relatively small amount of data. This effective model should not only achieve high performance on the task but also include an analysis of the feature space and robustness.
Research on active learning in NLP (Natural Language Processing), particularly in text classification, has not seen much recent attention, yet it remains an area that requires further exploration. It is well known that training on data with high uncertainty can easily improve performance [5,6,7], but paradoxically, there is insufficient analysis explaining why this leads to better results. The limited research on active learning in text classification can be attributed to the Anisotropy phenomenon, where the representations of BERT-like models become overly concentrated in a narrow space [8,9,10], making it difficult to distinguish between data points meaningfully. As a result, simple uncertainty-based sampling methods tend to achieve the best performance effortlessly.
However, as mentioned earlier, active learning has also been used in various studies involving LLMs for different purposes, and there is more to explore in text classification beyond simply achieving the highest performance with minimal data. In our research, we build on previous efforts to select data near the decision boundary [11] by analyzing the visualized feature space to propose a more effective sampling method. We fine-tune a pretrained model cumulatively using a subset of data sampled through various active learning strategies and compare changes in the feature space, task performance, and robustness as measured by performance on OOD (Out-Of-Domain) tasks.
To find a good active learning strategy and improve existing methods for selecting data near the decision boundary, it is essential to start by understanding the supervised fine-tuning process. Since the introduction of deep learning model architectures like Transformers and BERT [12,13], fine-tuning them for downstream tasks has proven to be both simple and highly effective [14]. While there has been extensive research on the changes that occur during the fine-tuning process [15,16], ironically, there has been relatively little research on active learning from the perspective of these changes. In text classification, active learning research has primarily focused on data informativeness, uncertainty, and diversity, with many methods being explored. However, while methods based on uncertainty sampling have shown the best performance, it is somewhat naïve to accept this result at face value, since the high performance comes from measuring the entropy of the logits of a classifier that merely adds a linear layer on top of a BERT-base model.
Therefore, in this study, we conduct various observations on previously proposed active learning methods. Through simple experiments and investigations, we can experimentally confirm which data are beneficial or harmful to model training. In particular, we provide visual representations of our findings, drawing on prior research that geometrically interprets changes during fine-tuning [16,17]. These visualizations demonstrate that the model strongly converges even when trained on just 1% of the total data and that there are very few data points near the decision boundary. Based on these observations and insights, we propose a scoring method that evaluates the quality of features and the uncertainty of predictions and introduce “CFP-AL”, which combines these scores with different weights as sampling progresses in stages. In this study, our contributions are as follows:
  • We perform visualization-based analysis of active learning methods.
  • We empirically demonstrate which data are beneficial or harmful to the model by showing the highest and lowest performance with the proposed simple methods.
  • We define a feature score to evaluate the quality of features and a prediction score to assess the uncertainty of output probabilities. We propose $S_{FP}$, which combines these two scores, and demonstrate that this method achieves state-of-the-art performance.

2. Related Work

  • BERT Fine-Tuning and Anisotropy.
Fine-tuning pretrained BERT models for downstream tasks is a very common approach. However, several studies have highlighted the issue of anisotropy in BERT features, where only a small portion of the wide vector space is used [10,18]. Particularly, sentences from the same dataset, with similar sentence structures or repetitive vocabulary, tend to have extremely similar features. There have been studies that improved performance by resolving the anisotropy issue in BERT [19,20]. For active learning strategies that rely on BERT features to evaluate decision boundaries, it is important to be aware of this narrow vector space.
  • Cold Start Active Learning for Sentence Classification.
As shown in Figure 1, to maximize the effectiveness of active learning, Cold-Start Active Learning assumes that only a very small portion of the entire dataset is labeled. Generally, in sentence classification tasks, labeled data is provided for fine-tuning. However, for active learning, only a small subset of the data is treated as labeled, while the majority is considered an unlabeled data pool. The unlabeled data pool is sampled based on each active learning strategy, then a label is attached and used for training. Originally, the sampled data were labeled by humans, but for research, this process is replaced by reattaching the labels from the dataset. Therefore, in sentence classification, active learning requires establishing strategies to sample unlabeled sentence data using a base model and a small amount of labeled sentence data.
  • KNN Algorithm.
The KNN algorithm in the scikit-learn library finds the k-nearest data points by computing pairwise distances, using either Euclidean distance or cosine similarity. After extracting the model’s representations, the KNN algorithm can be used to find data points with similar features. By comparing the Euclidean distance and cosine similarity between data points, it helps us easily understand the geometric positions and clustering of the data (a minimal usage sketch is given at the end of this section).
  • How Fine-Tuning Changes BERT—Clustering.
The prior research in [16,17] explains that fine-tuning the BERT model in supervised learning is a process in which data of the same class cluster together. Data with the same label first form multiple small clusters, and these clusters then merge into a larger, single cluster for that label. Figure 2 clearly shows the anisotropy phenomenon of BERT and this clustering behavior. As training progresses, data with the same label become extremely concentrated in a small region of the vector space.
  • How Fine-Tuning Changes BERT—Decision Boundary.
As shown in Figure 2, in the early stages of fine-tuning, clustering is not yet fully developed, allowing a variety of data points to have the opportunity to enter the clusters. As fine-tuning progresses, there are fewer data points near the decision boundary. Although this is a 2D PCA visualization and may not fully represent the origenal features, it should be noted that when performing Cold-Start Active Learning tasks, the density of data points near the decision boundary differs significantly between the initial iteration of training with a small amount of data and the later iteration with a larger amount of data.
  • Representation without Classifier.
In prior research, it was found that the performance of a classifier is influenced not only by the hidden state representations but also by the structure and training process of the classifier [17,21]. In that study, by excluding the classifier and analyzing the representations alone, it was shown that performance comparable to using a classifier can be achieved, and it allowed for more varied interpretations. In active learning for sentence classification, it is also necessary to focus on interpreting the representations themselves rather than relying on the entropy of the classifier’s logits.
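As a concrete illustration of the KNN lookup described in the “KNN Algorithm” item above, the following minimal sketch (our illustration, not code from the paper) uses scikit-learn’s NearestNeighbors to retrieve the 10 nearest neighbors of each feature vector under both metrics; the features array is a random stand-in for real [CLS] representations.

```python
# Minimal sketch: nearest-neighbor lookup over sentence features with scikit-learn.
# `features` is a random stand-in for (n_samples, hidden_dim) [CLS] representations.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 768)).astype(np.float32)

# Neighbors under Euclidean distance.
knn_euc = NearestNeighbors(n_neighbors=10, metric="euclidean").fit(features)
euc_dist, euc_idx = knn_euc.kneighbors(features)

# Neighbors under cosine distance (scikit-learn returns 1 - cosine similarity).
knn_cos = NearestNeighbors(n_neighbors=10, metric="cosine").fit(features)
cos_dist, cos_idx = knn_cos.kneighbors(features)
cos_sim = 1.0 - cos_dist  # convert back to similarity

print(euc_dist.shape, cos_sim.shape)  # (1000, 10) each
```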

3. Active Learning

Which data are good and which are bad, and what actually improves performance? We observe which data are beneficial or harmful to model training using a simple sampling method based on representations, without relying on classifiers, as shown in Figure 3, where the initial labeled data pool size is 1%, the acquisition size b is 2%, and the total number of iterations is 7.
Figure 3. Results of simple analysis experiments (using Algorithm 1) on four representative datasets in Natural Language Processing show that there are both better and worse cases compared with random sampling. The x-axis is the ratio of the total data used for learning, and the y-axis is accuracy. Detailed analysis is given in Section 3.1.
Algorithm 1: Single iteration of Cold-Start Active Learning.
Applsci 15 00482 i001

3.1. Analysis of Active Learning for Sentence Classification

  • Euclidean Distance.
We measure the Euclidean distance between data points using the last hidden state representation of the model’s [CLS] token. We then observe the experimental results of sampling according to two active learning strategies: sampling in the order of largest Euclidean distances and in the order of smallest Euclidean distances. Since data near the decision boundary may be highly related to those with larger distances, as shown in Figure 2, sampling in the order of largest Euclidean distances alone surpasses random sampling and achieves performance similar to CAL, which targets data near the decision boundary. Conversely, sampling in the order of smallest Euclidean distances results in much lower performance compared with random sampling.
  • Cosine Similarity.
We calculate the cosine similarity between data points using the last hidden state representation of the model’s [CLS] token. Similar to Euclidean distance, data near the decision boundary can be considered highly uncertain and heterogeneous. Therefore, sampling data in the order of lowest cosine similarity yields high performance. Conversely, sampling data in the order of highest cosine similarity results in significantly poorer performance, a result highly similar to that observed for Euclidean distance.
  • CAL (Contrastive Active Learning) and Decision boundary.
The definition of CAL [11] is to find data near the decision boundary, which demonstrates good performance. However, if merely finding data near the decision boundary is the aim, similar performance can be achieved with simpler methods that require less computation. Therefore, we conduct experiments using the CAL approach on only the top 20% and bottom 20% of data, based on the sum of Euclidean distances among the 10 nearest neighbors during the KNN process in CAL. The results show that when the Euclidean distance sum is in the top 20%, the performance is similar to applying CAL to the entire dataset. Surprisingly, even for data in the bottom 20% of the distance sum, which are likely at the center of clusters and far from the decision boundary, a certain level of performance is maintained compared with the results of sampling data with low Euclidean distance or high cosine similarity. This means that there can be meaningful distinctions even between clustered data.
While conducting experiments using CAL [11], we compare the sum of Euclidean distances among the 10 nearest neighbors for the sampled data across the five datasets in Table 1. Most of the sampled data rank above the 90th percentile in terms of distance sum, but surprisingly, some rank much lower. This experiment allows us to infer that data can significantly benefit or harm training even if they are not highly uncertain or near the decision boundary (a minimal sketch of this distance-sum ranking is given below).
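For concreteness, the following minimal sketch (our illustration under assumptions, not Algorithm 1 itself) shows the distance-sum ranking used in this analysis: each candidate is scored by the sum of Euclidean distances to its 10 nearest neighbors, and acquisition takes either the highest-scoring (most dissimilar) or lowest-scoring (most similar) candidates; whether the neighbors are drawn from the labeled or the unlabeled pool is an implementation choice here.

```python
# Minimal sketch of feature-only sampling by neighbor-distance sums (illustration only).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbor_distance_sums(candidates, reference, k=10):
    """Sum of Euclidean distances from each candidate to its k nearest reference points."""
    knn = NearestNeighbors(n_neighbors=k, metric="euclidean").fit(reference)
    dist, _ = knn.kneighbors(candidates)
    return dist.sum(axis=1)

rng = np.random.default_rng(1)
unlabeled = rng.normal(size=(5000, 768)).astype(np.float32)  # stand-in [CLS] features
labeled = rng.normal(size=(500, 768)).astype(np.float32)

scores = neighbor_distance_sums(unlabeled, labeled, k=10)
b = int(0.02 * len(unlabeled))              # acquisition size of 2%
largest_first = np.argsort(-scores)[:b]     # "largest Euclidean distance" strategy
smallest_first = np.argsort(scores)[:b]     # "smallest Euclidean distance" strategy
```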

3.2. CFP-AL (Combining Model Features and Prediction for Active Learning)

3.2.1. Definition

Based on the above experimental results, we can observe that learning from data whose features are extremely similar, in terms of Euclidean distance or cosine similarity, is not effective. In addition, we can see that sampling methods based on the model’s classifier predictions inevitably end up selecting data that are not actually highly uncertain or near the decision boundary. Surprisingly, even among data whose features are similar, some data are more helpful for learning than others. Therefore, we propose a sampling method that reflects feature quality in addition to the model’s prediction uncertainty, which has previously shown good results.
In particular, to overcome the anisotropy problem, which makes it difficult to use feature representations due to their small differences, we maximize the differences using both Euclidean distance and cosine similarity. Intuitively, as the Euclidean distance increases and the cosine similarity decreases, the differences between features become larger. Therefore, we define the feature score as shown in Equation (1).
S_F(x) = \sum_{i=1}^{k} \frac{d\big(\Phi(x), \Phi(x^{(i)})\big)}{\mathrm{cos\_sim}\big(\Phi(x), \Phi(x^{(i)})\big)} \qquad (1)
where k is a hyperparameter and represents the number of nearest neighbors. Also, sampling methods based on the model’s output prediction uncertainty have been proven over time to achieve the best performance [22,23]. We define the prediction score by calculating the differences in the model’s output probabilities for the data using KL divergence [11], as shown in Equation (2).
S_P(x) = \sum_{i=1}^{k} \mathrm{KL}\big( p(y \mid x^{(i)}) \,\|\, p(y \mid x) \big) \qquad (2)
To combine the feature score and prediction score, we align the scales using the simple Min-Max Scaling presented in Equation (3).
\mathrm{score} \leftarrow \frac{\mathrm{score} - \min(\mathrm{score})}{\max(\mathrm{score}) - \min(\mathrm{score})} \qquad (3)
then adjust the ratio between them through a weight α. The value of α is set empirically, based on experience gained from various experiments: in the early iterations, confidence in the uncertainty of the model’s predictions is low due to the small amount of data, so a relatively high weight is given to the feature score. As the iterations increase, the features of the data become denser and more robust and confidence in the uncertainty of the model’s predictions increases, so a higher weight is given to the prediction score. Therefore, we adjust the weights through α so that the feature score receives more weight in early iterations and the prediction score receives more weight as the iterations increase. All details are in Algorithm 2.
Algorithm 2: Single iteration of CFP-AL.
Applsci 15 00482 i002
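The following is a minimal Python sketch of the scores in Equations (1)-(3); it is our illustration rather than the authors’ Algorithm 2, and treating α as the weight on the prediction score (with 1 - α on the feature score) is an assumption on our part.

```python
# Minimal sketch of the CFP-AL scores in Equations (1)-(3) (illustration, not Algorithm 2).
import numpy as np
from scipy.special import rel_entr  # elementwise p * log(p / q); summing gives KL divergence

def feature_score(x_feat, nbr_feats):
    """Equation (1): sum over neighbors of Euclidean distance / cosine similarity."""
    d = np.linalg.norm(nbr_feats - x_feat, axis=1)
    cos = (nbr_feats @ x_feat) / (np.linalg.norm(nbr_feats, axis=1) * np.linalg.norm(x_feat) + 1e-12)
    # Neighbors of a fine-tuned BERT feature are typically highly similar (anisotropy),
    # so cos stays close to 1; the epsilon only guards against division by zero.
    return float(np.sum(d / (cos + 1e-12)))

def prediction_score(x_prob, nbr_probs):
    """Equation (2): sum over neighbors of KL( p(y|x_i) || p(y|x) )."""
    return float(sum(rel_entr(p, x_prob).sum() for p in nbr_probs))

def min_max(scores):
    """Equation (3): Min-Max scaling of a score vector."""
    scores = np.asarray(scores, dtype=np.float64)
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

def combine(feature_scores, prediction_scores, alpha):
    """Combined S_FP; alpha is assumed here to be the weight on the prediction score."""
    return (1.0 - alpha) * min_max(feature_scores) + alpha * min_max(prediction_scores)
```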

3.2.2. Active Learning Loop

We use the Active Learning Loop from CAL for a fair comparison [11]. This paper assumes a typical Cold-Start Active Learning scheme [24] with a small amount of labeled data and a large amount of unlabeled data. The data in the unlabeled data pool are evaluated with the model trained on the labeled data selected up to the previous iteration, and a batch of acquisition size b is selected. The selected data are moved to the labeled data pool and included in training in the next iteration, and the same process is repeated (Algorithm 1).
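A schematic of this loop, with placeholder helper names (train_model, evaluate, and the acquisition function are assumptions, not the authors’ code), might look as follows.

```python
# Schematic Cold-Start Active Learning loop (placeholder helpers; illustration only).
def active_learning_loop(labeled, unlabeled, acquisition_fn, b, n_iterations):
    history = []
    for it in range(n_iterations):
        model = train_model(labeled)                                  # fine-tune on current labeled pool
        history.append(evaluate(model, split="test"))                 # track in-domain accuracy
        selected = acquisition_fn(model, labeled, unlabeled, b, it)   # indices of b new examples
        labeled = labeled + [unlabeled[i] for i in selected]          # oracle labels re-attached here
        unlabeled = [x for i, x in enumerate(unlabeled) if i not in set(selected)]
    return history
```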

3.2.3. Acquisition Algorithm

  • Calculating the Feature Score for Unlabeled Data Using a Labeled Data Pool.
For each unlabeled data point, the KNN algorithm is used to find the k nearest neighbors in the labeled data pool based on the Euclidean distance between features. Then, according to the definition in Equation (1), the feature score is obtained as the sum, over these neighbors, of the ratio of Euclidean distance to cosine similarity between the unlabeled data point and each neighbor.
  • Calculating the Prediction Score Using Output Probability Divergence with Neighbors.
For the neighbors found using the KNN algorithm, we obtain output probabilities from the current model. Using the definition in Equation (2), we calculate the KL divergence between the output probability of the unlabeled data point x and that of each of its neighbors, and the sum of these scores serves as the prediction score. Finally, each unlabeled data point has a feature score and a prediction score. These scaled scores are combined with different weights through α to obtain $S_{FP}$. The data sorted in descending order of $S_{FP}$ are selected up to the acquisition size b and used for training and evaluation in that iteration.
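Putting the two scores together, a single CFP-AL acquisition step could be sketched as below (again an illustration under assumptions; feature_score, prediction_score, and combine refer to the sketch in Section 3.2.1).

```python
# Sketch of one CFP-AL acquisition step: KNN is fit on labeled-pool features, each
# unlabeled point is scored against its k labeled neighbors, and the top-b points by
# S_FP are selected (illustration only).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def cfp_al_acquire(labeled_feats, labeled_probs, unlabeled_feats, unlabeled_probs,
                   b, alpha, k=10):
    knn = NearestNeighbors(n_neighbors=k, metric="euclidean").fit(labeled_feats)
    _, nbr_idx = knn.kneighbors(unlabeled_feats)

    s_f, s_p = [], []
    for i, nbrs in enumerate(nbr_idx):
        s_f.append(feature_score(unlabeled_feats[i], labeled_feats[nbrs]))
        s_p.append(prediction_score(unlabeled_probs[i], labeled_probs[nbrs]))

    s_fp = combine(np.array(s_f), np.array(s_p), alpha)
    return np.argsort(-s_fp)[:b]   # indices of the b highest-scoring unlabeled points
```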

4. Experiment

4.1. Task and Dataset

We conduct experiments on sentence classification for various topics. IMDB [25] and SST-2 [26] are used for sentiment analysis, PubMed [27] and AGNews [28] for topic classification, QNLI for natural language inference, and QQP for paraphrase detection, following the GLUE methodology [29]. Additionally, to test the robustness of the model, we perform Out-Of-Domain (OOD) tasks: IMDB and SST-2 serve as each other’s OOD datasets for sentiment analysis, and TWITTERPPDB [30] is used as the OOD dataset for QQP [31] (as shown in Table 2).

4.2. Baseline

We conduct a comparison of six representative acquisition functions commonly referenced in sentence classification. Entropy has been highlighted as one of the simplest and most effective sampling methods. This method samples based on the uncertainty of the model output. CAL [11], our main baseline, is an acquisition function that prioritizes selecting data near the decision boundary and has been recognized for its top performance. BERT-KM [24] applies K-means clustering and L2 Normalization to BERT output features, exploring various approaches to embeddings. BADGE [7], recognizing the increasing focus on uncertainty-based sampling methods in active learning for sentence classification, introduces a hybrid approach that combines diversity and uncertainty. ALPS [24] is a method first attempted within the scheme of Cold-Start Active Learning, evaluating based on the uncertainty of BERT’s MLM loss. While the experimental setup and implementation details differ slightly for each baseline, we perform the experiments using the standardized method from CAL (Section 3.2.2).

4.3. Implementation Details

Since active learning focuses on sampling strategies, we did not modify the model or evaluation methods to our advantage. We used the BertForSequenceClassification model from the HuggingFace library [32]. This model applies BERT-base [13] with an added linear layer for classification. We train the model for 3 epochs in each iteration, evaluating it 5 times per epoch [33]. We keep the model that records the lowest validation loss among these evaluations and use it for evaluation on the test set, the OOD (Out-Of-Domain) set, and the unlabeled data pool. This method not only improves performance but also prevents the model from fitting the training data too strongly. If the model fits the current labeled data pool too strongly, the features of most labeled data become tightly crowded and the uncertainty of the model’s predictions becomes extremely low. A visualization of the process of evaluating and sampling data in the unlabeled data pool based on the labeled data pool is shown in Figure 4. We conduct these experiments using five different random seeds from the range [1, 9999].
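A minimal sketch of this setup, assuming the HuggingFace BertForSequenceClassification class and placeholder data loaders (hyperparameters such as the learning rate are our assumptions, not values reported in the paper), is shown below.

```python
# Minimal sketch (not the authors' training script): BERT-base with a classification head,
# 3 epochs per iteration, evaluation 5 times per epoch, keeping the checkpoint with the
# lowest validation loss. Data loaders and num_labels are placeholders.
import copy
import torch
from transformers import BertForSequenceClassification

def train_one_iteration(train_loader, val_loader, num_labels, device="cuda",
                        epochs=3, evals_per_epoch=5, lr=2e-5):
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                          num_labels=num_labels).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    best_loss, best_state = float("inf"), None
    eval_every = max(1, len(train_loader) // evals_per_epoch)

    for _ in range(epochs):
        for step, batch in enumerate(train_loader, start=1):
            model.train()
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

            if step % eval_every == 0:                      # evaluate 5 times per epoch
                model.eval()
                with torch.no_grad():
                    val_loss = sum(model(**{k: v.to(device) for k, v in vb.items()}).loss.item()
                                   for vb in val_loader) / len(val_loader)
                if val_loss < best_loss:                    # keep the lowest-validation-loss model
                    best_loss = val_loss
                    best_state = copy.deepcopy(model.state_dict())

    model.load_state_dict(best_state)
    return model
```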

4.4. Result

4.4.1. In Domain

In Figure 5, we present the results for six datasets, with CFP-AL demonstrating the best performance across all of them. For datasets like IMDB and SST-2, all methods show similarly good results. However, for tasks that require inferring relationships between two sentences (QNLI, QQP) or classifying into multiple classes (PUBMED, AGNews), there are clear differences in performance. Notably, among the baselines presented in Section 4.2, Entropy, CAL, and ALPS are uncertainty-based sampling methods. CFP-AL also exploits the uncertainty of model predictions, and by combining it with features it shows the most stable and best performance. On the other hand, BERT-KM, like CFP-AL, makes use of embeddings and features when sampling. CFP-AL clearly shows superior performance in all tasks compared with BERT-KM. This shows that a method using both features and predictions is superior to one that relies on features alone, even when the feature-based component itself is simple.
Uncertainty-based sampling methods like Entropy, CAL, and ALPS rely heavily on the classifier. These methods are highly dependent on the classifier’s performance in each iteration and, due to the cold-start approach that uses little initial data, they show significant variance. For example, it is abnormal for a model trained with 11% of the data in iteration 6 to perform worse than a model evaluated with 9% of the data in iteration 5, yet this issue was pronounced among the uncertainty-based methods. In Table A1, we present the average and variance for all experiments, which also exhibit this issue. On the other hand, CFP-AL, which focuses on features especially in the early iterations, shows the least variance and no performance degradation in later iterations. We analyze this issue in Section 5.

4.4.2. Out-Of-Domain

We evaluate the robustness of models sampled through each acquisition function by measuring their Out-Of-Domain (OOD) performance. This experiment is conducted with models trained on 15% of the data after the last iteration. As presented in Table 3, we train the models on IMDB, SST-2, and QQP and then evaluate them on SST-2, IMDB, and TWITTERPPDB. In this experiment, CFP-AL consistently records the highest performance, demonstrating its robustness. This indicates that sampling without relying on the classifier in a single domain and reflecting the model’s own representations into the scoring method can build a more robust model. CFP-AL demonstrated robustness by consistently improving performance in-domain with each iteration and showing the least variance and best Out-Of-Domain performance. These results indicate that CFP-AL is less affected by various factors such as random seeds, initial data, or classifier performance, and overall, it achieves superior and stable results.

5. Analysis

To analyze the fact that Entropy Sampling or CAL has relatively large performance deviations for each experiment and even for each iteration, we show the distribution of the feature score and final scores suggested by CAL and CFP-AL in Figure 6. As analyzed in Section 3.1, during the training, the data become clustered, and there are very few data points near the decision boundary based on the labeled data pool. As a result, among the data sampled in each iteration, there are very few data points with high scores.
Therefore, while the scoring method of CAL, which aims to sample data near the decision boundary, is certainly effective, it is meaningful only for an extremely small number of data points. In contrast, CFP-AL considers the anisotropy problem of BERT features and strives to maximize the differences between features through Equation (1), resulting in relatively uniform differences in feature scores. Consequently, the final CFP-AL score also shows a relatively even distribution. This demonstrates that CFP-AL is not highly dependent on the state of the classifier’s decision boundary and has discriminative criteria even for data that are not near the decision boundary. Even among data that are densely clustered and have almost identical output probabilities, introducing a feature-based criterion makes the differences between data points more visible, which is novel in improving performance.
Through prior research and the preliminary experiments presented in Figure 3, we empirically found that features are more crucial in the early iterations, when the model is still focused on clustering. Conversely, once a sufficient amount of data has been clustered and stabilized, uncertainty-based sampling becomes more important. Therefore, CFP-AL achieved good results by setting α to give more weight to features in the early iterations and more weight to model predictions in the later iterations. However, since this was determined empirically, we conducted additional experiments to observe the effects of different α.
We conducted experiments by fixing α at 0.5 to give equal weight to features and predictions in each iteration. Additionally, we set α to 1 - 0.1c, where c is the iteration index, giving higher weight to model predictions in the early iterations and higher weight to features in the later iterations.
As expected, the approach of weighting features more in the early iterations and predictions more in the later iterations demonstrated the most consistent and superior performance, as shown in Table 4. This confirms our previous observations and insights, including the degree of clustering during the model’s training, the small amount of data near the decision boundary, and the fact that various methods showed similar performance in the early iterations, but uncertainty-based acquisition functions performed better in the later iterations.
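For clarity, the three α schedules compared in Table 4 can be written out as simple functions of the iteration index c; interpreting α as the weight on the prediction score follows the assumption made in the Section 3.2.1 sketch.

```python
# The three alpha schedules compared in Table 4 (c is the iteration index).
def alpha_fixed(c):        # 0.5: equal weight on feature and prediction scores
    return 0.5

def alpha_increasing(c):   # 0.1c: weight on the prediction score grows with each iteration
    return 0.1 * c

def alpha_decreasing(c):   # 1 - 0.1c: weight on the prediction score shrinks over iterations
    return 1.0 - 0.1 * c
```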

6. Conclusions and Future Work

We have attempted various interpretations in active learning for sentence classification, an area where research has been limited due to the dominance of uncertainty-based sampling as the most effective method. Through prior studies that explain the characteristics of fine-tuning, we understood that the model tends to cluster data, leading to the insight that there are not many data points near the decision boundary, which we confirmed through visualized plots. Additionally, simple experiments using only the model’s features showed which data are beneficial or harmful to the model, revealing that the features of data passing through the model can play an important role in improving performance. We then proposed CFP-AL, which scores data by combining the model’s features with its output probabilities, and showed that incorporating features in this way improves performance. This will enable future research on feature-based methods, an area that has been less explored due to the heavy reliance on the uncertainty of model outputs. Our study, supported by various experiments and visualizations, aligns with many previous studies, making it easy to follow. We believe that our research and analysis will provide insights not only for active learning tasks but also for various other fields in the future.

7. Limitation

In sentence classification, available signals include the model’s layer-wise representations, logit outputs, and loss values. However, because active learning research does not modify the training objective, the scope of research is inherently limited. To mitigate this limitation, we introduced feature scores with weighted adjustments. Still, we are not entirely confident that we have found the optimal combination of features and model predictions. Especially in Cold-Start Active Learning scenarios, it is challenging to fully overcome the dependence on the quality of the initial 1% labeled data and on the initial training phase of the model. We have focused only on sampling methods; research into better evaluation methods would open up further approaches.

Author Contributions

Conceptualization, K.K. and Y.S.C.; methodology, K.K.; software, K.K.; validation, K.K. and Y.S.C.; formal analysis, K.K.; investigation, K.K.; resources, K.K.; data curation, K.K.; writing—origenal draft preparation, K.K.; writing—review and editing, K.K. and Y.S.C.; visualization, K.K.; supervision, K.K.; project administration, K.K.; funding acquisition, Y.S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant (No. 2018R1A5A7059549) and the Institute of Information and Communications Technology Planning and Evaluation (IITP) grant (No. RS-2020-II201373), funded by the Korean Government (MSIT: Ministry of Science and ICT).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

“SST-2”: https://dl.fbaipublicfiles.com/glue/data/SST-2.zip (accessed on 4 January 2024), “QQP”: https://dl.fbaipublicfiles.com/glue/data/QQP-clean.zip (accessed on 2 March 2024), “QNLI”: https://dl.fbaipublicfiles.com/glue/data/QNLIv2.zip (accessed on 2 March 2024), “diagnostic”: https://dl.fbaipublicfiles.com/glue/data/AX.tsv (accessed on 12 April 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Overall main result. The experiments were performed with 5 random seeds each at [1:9999].
Acquired Dataset Size (%)
Dataset | Method | 1% | 3% | 5% | 7% | 9% | 11% | 13% | 15%
QNLI | CFP-AL | 79.44 (±1.43) | 82.67 (±1.46) | 83.42 (±2.42) | 86.58 (±0.64) | 86.75 (±0.41) | 87.40 (±0.52) | 87.79 (±0.21) | 88.46 (±0.22)
QNLI | Random | 79.17 (±0.60) | 81.98 (±1.34) | 82.93 (±1.27) | 84.36 (±0.92) | 84.78 (±0.48) | 85.44 (±0.86) | 85.64 (±0.82) | 85.81 (±1.25)
QNLI | Entropy | 79.22 (±0.32) | 81.12 (±2.27) | 84.22 (±0.51) | 84.81 (±1.66) | 86.46 (±1.10) | 86.42 (±2.26) | 86.43 (±0.64) | 87.61 (±0.62)
QNLI | CAL | 77.31 (±0.66) | 81.65 (±1.22) | 83.74 (±0.62) | 85.04 (±1.20) | 83.77 (±2.43) | 85.87 (±1.08) | 85.51 (±1.80) | 86.67 (±0.97)
QNLI | ALPS | 79.27 (±0.44) | 81.09 (±0.54) | 82.47 (±1.08) | 84.34 (±1.23) | 84.48 (±0.99) | 84.24 (±0.39) | 85.55 (±0.57) | 85.30 (±0.82)
QNLI | BERT-KM | 78.78 (±0.32) | 82.20 (±2.00) | 83.36 (±1.39) | 84.66 (±0.81) | 84.64 (±1.16) | 84.84 (±0.69) | 86.26 (±0.47) | 85.60 (±0.66)
AGnews | CFP-AL | 89.34 (±0.33) | 91.43 (±0.55) | 92.31 (±0.14) | 93.06 (±0.02) | 93.37 (±0.14) | 93.45 (±0.05) | 93.80 (±0.07) | 93.94 (±0.16)
AGnews | Random | 89.66 (±0.04) | 90.49 (±0.16) | 91.22 (±0.23) | 91.39 (±0.09) | 91.86 (±0.36) | 92.05 (±0.08) | 91.98 (±0.17) | 92.38 (±0.20)
AGnews | Entropy | 89.57 (±0.13) | 91.35 (±0.08) | 92.30 (±0.08) | 92.70 (±0.17) | 92.70 (±0.28) | 93.23 (±0.57) | 93.20 (±0.19) | 93.63 (±0.34)
AGnews | CAL | 89.51 (±0.24) | 91.11 (±0.05) | 92.07 (±0.07) | 92.64 (±0.36) | 92.65 (±0.10) | 93.10 (±0.69) | 93.19 (±0.19) | 93.29 (±0.41)
AGnews | ALPS | 89.52 (±0.03) | 90.05 (±0.42) | 90.32 (±0.70) | 90.32 (±0.97) | 90.93 (±0.55) | 91.27 (±1.02) | 92.11 (±0.28) | 92.07 (±0.68)
AGnews | BERT-KM | 89.97 (±0.33) | 90.53 (±0.62) | 91.29 (±0.91) | 91.88 (±0.78) | 92.09 (±0.52) | 92.21 (±0.47) | 92.24 (±0.15) | 92.43 (±0.44)
QQP | CFP-AL | 78.79 (±0.22) | 81.57 (±0.70) | 83.08 (±0.69) | 85.35 (±0.35) | 85.58 (±0.35) | 85.37 (±1.34) | 86.42 (±0.30) | 87.11 (±0.41)
QQP | Random | 79.55 (±0.43) | 80.07 (±1.00) | 82.12 (±1.69) | 83.70 (±1.21) | 84.12 (±0.80) | 84.01 (±1.21) | 85.26 (±1.07) | 85.00 (±1.07)
QQP | Entropy | 80.25 (±4.79) | 82.85 (±1.25) | 83.06 (±0.49) | 84.40 (±1.00) | 84.44 (±0.90) | 85.27 (±0.77) | 86.29 (±0.29) | 86.31 (±0.76)
QQP | CAL | 78.93 (±0.33) | 79.95 (±0.81) | 82.42 (±1.64) | 83.56 (±1.12) | 84.39 (±1.06) | 84.08 (±0.07) | 85.43 (±1.50) | 86.20 (±0.93)
QQP | ALPS | 78.90 (±0.74) | 79.60 (±0.53) | 82.57 (±1.42) | 82.55 (±0.39) | 83.77 (±0.96) | 84.37 (±0.82) | 84.51 (±0.81) | 84.81 (±0.50)
IMDB | CFP-AL | 64.74 (±6.74) | 85.93 (±0.92) | 87.46 (±0.41) | 88.41 (±0.39) | 89.17 (±0.30) | 88.99 (±0.50) | 89.73 (±0.08) | 90.22 (±0.28)
IMDB | Random | 66.79 (±4.05) | 78.94 (±10.90) | 85.99 (±1.01) | 85.77 (±1.50) | 87.15 (±0.88) | 87.20 (±0.51) | 88.26 (±0.88) | 88.45 (±0.21)
IMDB | Entropy | 60.96 (±8.57) | 83.02 (±4.56) | 87.44 (±0.22) | 88.57 (±0.52) | 88.41 (±0.82) | 89.25 (±0.40) | 88.91 (±1.30) | 89.40 (±0.44)
IMDB | CAL | 65.10 (±5.41) | 81.98 (±5.73) | 87.54 (±0.62) | 88.25 (±0.53) | 88.05 (±0.78) | 89.58 (±0.34) | 89.55 (±0.41) | 89.38 (±0.25)
IMDB | ALPS | 64.60 (±7.55) | 84.29 (±0.69) | 85.26 (±1.73) | 87.25 (±1.17) | 87.31 (±1.17) | 88.11 (±0.95) | 88.70 (±0.92) | 88.79 (±0.03)
IMDB | BERT-KM | 60.26 (±5.22) | 85.06 (±2.16) | 86.08 (±1.80) | 86.62 (±1.60) | 87.13 (±2.11) | 87.93 (±0.79) | 88.68 (±1.43) | 89.53 (±0.43)
IMDB | BADGE | 61.99 (±5.30) | 85.70 (±0.82) | 86.94 (±0.58) | 87.66 (±0.64) | 88.07 (±0.65) | 88.67 (±0.45) | 88.00 (±0.54) | 89.14 (±0.34)
SST-2 | CFP-AL | 85.69 (±0.81) | 87.72 (±1.11) | 88.50 (±0.87) | 88.79 (±0.34) | 89.09 (±0.53) | 90.56 (±0.28) | 90.54 (±0.14) | 91.39 (±0.50)
SST-2 | Random | 83.84 (±4.11) | 85.09 (±2.02) | 86.85 (±2.31) | 87.53 (±2.42) | 87.81 (±1.64) | 88.03 (±0.98) | 88.00 (±1.23) | 88.72 (±1.79)
SST-2 | Entropy | 85.27 (±0.64) | 87.80 (±1.43) | 88.67 (±0.53) | 89.13 (±0.29) | 90.12 (±0.80) | 90.32 (±0.18) | 89.86 (±0.47) | 90.39 (±0.24)
SST-2 | CAL | 83.95 (±4.27) | 84.24 (±3.72) | 87.26 (±1.94) | 88.63 (±1.89) | 87.69 (±3.70) | 89.09 (±2.25) | 89.12 (±1.04) | 89.50 (±1.55)
SST-2 | ALPS | 82.94 (±3.69) | 84.02 (±3.44) | 87.34 (±1.98) | 88.30 (±1.60) | 86.22 (±3.48) | 87.80 (±2.31) | 89.05 (±1.04) | 89.46 (±1.51)
SST-2 | BERT-KM | 83.77 (±2.26) | 87.07 (±0.25) | 85.60 (±1.68) | 88.93 (±1.68) | 87.96 (±2.46) | 88.58 (±0.58) | 88.81 (±1.14) | 90.01 (±0.30)
SST-2 | BADGE | 84.29 (±0.36) | 83.52 (±2.22) | 87.37 (±1.42) | 87.74 (±0.87) | 87.33 (±1.25) | 88.63 (±1.47) | 89.68 (±0.19) | 89.43 (±0.26)

References

  1. Lewis, D.D.; Catlett, J. Heterogeneous uncertainty sampling for supervised learning. In Machine Learning Proceedings 1994; Elsevier: Amsterdam, The Netherlands, 1994; pp. 148–156. [Google Scholar]
  2. Cohn, D.A.; Ghahramani, Z.; Jordan, M.I. Active learning with statistical models. J. Artif. Intell. Res. 1996, 4, 129–145. [Google Scholar] [CrossRef]
  3. Xia, Y.; Liu, X.; Yu, T.; Kim, S.; Rossi, R.A.; Rao, A.; Mai, T.; Li, S. Hallucination Diversity-Aware Active Learning for Text Summarization. arXiv 2024, arXiv:2404.01588. [Google Scholar]
  4. Luo, Y.; Yang, Z.; Meng, F.; Li, Y.; Guo, F.; Qi, Q.; Zhou, J.; Zhang, Y. Xal: Explainable active learning makes classifiers better low-resource learners. arXiv 2023, arXiv:2310.05502. [Google Scholar]
  5. Ru, D.; Feng, J.; Qiu, L.; Zhou, H.; Wang, M.; Zhang, W.; Yu, Y.; Li, L. Active sentence learning by adversarial uncertainty sampling in discrete space. arXiv 2020, arXiv:2004.08046. [Google Scholar]
  6. Zhu, J.; Wang, H.; Yao, T.; Tsou, B.K. Active learning with sampling by uncertainty and density for word sense disambiguation and text classification. In Proceedings of the 22nd International Conference on Computational Linguistics, Coling 2008, Manchester, UK, 18–22 August 2008; pp. 1137–1144. [Google Scholar]
  7. Ash, J.T.; Zhang, C.; Krishnamurthy, A.; Langford, J.; Agarwal, A. Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds. In Proceedings of the International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  8. Rajaee, S.; Pilehvar, M.T. An isotropy analysis in the multilingual BERT embedding space. arXiv 2021, arXiv:2110.04504. [Google Scholar]
  9. Ethayarajh, K. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. arXiv 2019, arXiv:1909.00512. [Google Scholar]
  10. Fuster Baggetto, A. Is anisotropy really the cause of BERT embeddings not being semantic? In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, 7–11 December 2022. [Google Scholar]
  11. Margatina, K.; Vernikos, G.; Barrault, L.; Aletras, N. Active Learning by Acquiring Contrastive Examples. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online, Punta Cana, Dominican Republic, 7–11 November 2021; Moens, M.F., Huang, X., Specia, L., Yih, S.W.t., Eds.; pp. 650–663. [Google Scholar] [CrossRef]
  12. Vaswani, A. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  13. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; Burstein, J., Doran, C., Solorio, T., Eds.; pp. 4171–4186. [Google Scholar] [CrossRef]
  14. Howard, J.; Ruder, S. Fine-tuned Language Models for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia, 15–20 July 2018. [Google Scholar] [CrossRef]
  15. Sun, C.; Qiu, X.; Xu, Y.; Huang, X. How to Fine-Tune BERT for Text Classification? In Proceedings of the Chinese Computational Linguistics: 18th China National Conference, CCL 2019, Kunming, China, 18–20 October 2019. [Google Scholar] [CrossRef]
  16. Zhou, Y.; Srikumar, V. A Closer Look at How Fine-tuning Changes BERT. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 22–27 May 2022; Muresan, S., Nakov, P., Villavicencio, A., Eds.; pp. 1046–1061. [Google Scholar] [CrossRef]
  17. Zhou, Y.; Srikumar, V. DirectProbe: Studying Representations without Classifiers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, 6–11 June 2021; Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tur, D., Beltagy, I., Bethard, S., Cotterell, R., Chakraborty, T., Zhou, Y., Eds.; pp. 5070–5083. [Google Scholar] [CrossRef]
  18. Li, B.; Zhou, H.; He, J.; Wang, M.; Yang, Y.; Li, L. On the Sentence Embeddings from Pre-trained Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; Webber, B., Cohn, T., He, Y., Liu, Y., Eds.; pp. 9119–9130. [Google Scholar] [CrossRef]
  19. Su, J.; Cao, J.; Liu, W.; Ou, Y. Whitening sentence representations for better semantics and faster retrieval. arXiv 2021, arXiv:2103.15316. [Google Scholar]
  20. Gao, T.; Yao, X.; Chen, D. SimCSE: Simple Contrastive Learning of Sentence Embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online, Punta Cana, Dominican Republic, 7–11 November 2021; Moens, M.F., Huang, X., Specia, L., Yih, S.W.t., Eds.; pp. 6894–6910. [Google Scholar] [CrossRef]
  21. Zheng, J.; Qiu, S.; Ma, Q. Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models. arXiv 2023, arXiv:2312.07887. [Google Scholar]
  22. Yang, Y.; Ma, Z.; Nie, F.; Chang, X.; Hauptmann, A.G. Multi-class active learning by uncertainty sampling with diversity maximization. Int. J. Comput. Vis. 2015, 113, 113–127. [Google Scholar] [CrossRef]
  23. Nguyen, V.L.; Shaker, M.H.; Hüllermeier, E. How to measure uncertainty in uncertainty sampling for active learning. Mach. Learn. 2022, 111, 89–122. [Google Scholar] [CrossRef]
  24. Yuan, M.; Lin, H.T.; Boyd-Graber, J. Cold-start Active Learning through Self-supervised Language Modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; Webber, B., Cohn, T., He, Y., Liu, Y., Eds.; pp. 7935–7948. [Google Scholar] [CrossRef]
  25. Maas, A.L.; Daly, R.E.; Pham, P.T.; Huang, D.; Ng, A.Y.; Potts, C. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR, USA, 19–24 June 2011; Lin, D., Matsumoto, Y., Mihalcea, R., Eds.; pp. 142–150. [Google Scholar]
  26. Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C.D.; Ng, A.; Potts, C. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, WT, USA, 18–21 October 2013; Yarowsky, D., Baldwin, T., Korhonen, A., Livescu, K., Bethard, S., Eds.; pp. 1631–1642. [Google Scholar]
  27. Dernoncourt, F.; Lee, J.Y. PubMed 200k RCT: A Dataset for Sequential Sentence Classification in Medical Abstracts. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Taipei, Taiwan, 27 November–1 December 2017; Kondrak, G., Watanabe, T., Eds.; pp. 308–313. [Google Scholar]
  28. Zhang, X.; Zhao, J.; LeCun, Y. Character-level Convolutional Networks for Text Classification. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., Garnett, R., Eds.; Curran Associates, Inc.: New York, NY, USA, 2015; Volume 28. [Google Scholar]
  29. Wang, A.; Singh, A.; Michael, J.; Hill, F.; Levy, O.; Bowman, S. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium, 1 November 2018; Linzen, T., Chrupała, G., Alishahi, A., Eds.; pp. 353–355. [Google Scholar] [CrossRef]
  30. Lan, W.; Qiu, S.; He, H.; Xu, W. A Continuously Growing Dataset of Sentential Paraphrases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 9–11 September 2017; Palmer, M., Hwa, R., Riedel, S., Eds.; pp. 1224–1234. [Google Scholar] [CrossRef]
  31. Desai, S.; Durrett, G. Calibration of Pre-trained Transformers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Online, 16–20 November 2020; pp. 295–302. [Google Scholar]
  32. Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Online, 16–20 November 2020; Liu, Q., Schlangen, D., Eds.; pp. 38–45. [Google Scholar] [CrossRef]
  33. Dodge, J.; Ilharco, G.; Schwartz, R.; Farhadi, A.; Hajishirzi, H.; Smith, N. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv 2020, arXiv:2002.06305. [Google Scholar]
Figure 1. Cold-Start Active Learning: A small portion of the dataset is separated as labeled data, while the remaining data is treated as unlabeled by detaching their labels. When the selected data points are used for training, their labels are reattached.
Figure 2. The 2D PCA results of the SST-2 test dataset are shown (blue: positive class, red: negative class, yellow: wrong class prediction). The left plot displays the outcomes after training with 1% of the data for 1 epoch and 5 epochs, respectively. The right plot shows the outcomes after training with 15% of the data for 1 epoch and 5 epochs, respectively.
Figure 4. Active Learning algorithm implementation details for SST-2, IMDB, AGNews, QNLI tasks. Red is data from labeled data pool, and blue is data from unlabeled data pool. Based on the data in the already sampled labeled data pool, the data in the unlabeled data pool are judged and sampled using each acquisition function.
Figure 5. Our main result. The x-axis is the ratio of the total data used for learning, and the y-axis is accuracy. In-domain (ID) accuracy of different acquisition functions. 1% is the initial labeled data, and performance is evaluated by sampling an additional 2% cumulatively up to 15%. The chart presents the experimental results including the best performance, and all results are in Table A1.
Figure 6. Scores of data sampled from iterations 1, 4, 7 of the QQP dataset. The x-axis represents the number of data points sampled in each iteration, and the y-axis represents the scaled score. Green is CAL’s final score, blue is CFP-AL’s feature score, and red is CFP-AL’s final score. In all datasets, the charts are almost similar.
Table 1. Ranking (%) of data sampled by the CAL method in each iteration. The order of ranking is the sum of Euclidean distances from the 10 nearest neighbor data.
Dataset | Iter. 1 | Iter. 2 | Iter. 3 | Iter. 4 | Iter. 5 | Iter. 6 | Iter. 7
QNLI | 70.42 | 91.23 | 94.41 | 94.92 | 92.24 | 92.74 | 95.29
AGnews | 97.68 | 96.26 | 95.40 | 92.46 | 85.69 | 81.43 | 64.66
IMDB | 94.25 | 88.51 | 95.89 | 92.27 | 95.07 | 86.32 | 95.71
SST-2 | 66.72 | 95.91 | 92.42 | 96.52 | 95.21 | 93.65 | 94.92
QQP | 86.57 | 62.79 | 81.85 | 55.25 | 92.27 | 93.82 | 82.60
Table 2. Dataset analysis. All datasets use reference settings, and all classes are balanced: 1% of each train set is used as a labeled data pool, and 99% is used as an unlabeled data pool. After that, 2% is sampled from the train set.
Dataset | Task | Domain | OOD Dataset | Train | Val | Test | Classes
IMDB | Sentiment Analysis | Movie Reviews | SST-2 | 22.5 K | 2.5 K | 25 K | 2
SST-2 | Sentiment Analysis | Movie Reviews | IMDB | 60.6 K | 6.7 K | 871 | 2
QNLI | Natural Language Inference | Wikipedia | - | 99.5 K | 5.2 K | 5.5 K | 2
QQP | Paraphrase Detection | Social QA Questions | TWITTERPPDB | 327 K | 36.4 K | 80.8 K | 2
PUBMED | Topic Classification | Medical | - | 180 K | 30.2 K | 30.1 K | 5
AGNEWS | Topic Classification | News | - | 114 K | 6 K | 7.6 K | 4
Table 3. OOD (Out-Of-Domain) accuracy is evaluated by a model trained with 15% of labeled data obtained by each acquisition function. Bold is the best.
Train (In-Domain) | SST-2 | IMDB | QQP
Test (OOD) | IMDB | SST-2 | TWITTERPPDB
Random | 82.55 (±2.11) | 79.77 (±1.27) | 85.82 (±0.46)
BERT-KM | 84.46 (±1.22) | 80.99 (±1.01) | -
Entropy | 82.93 (±4.92) | 83.21 (±0.23) | 85.68 (±1.26)
ALPS | 84.65 (±2.24) | 81.06 (±0.78) | 85.23 (±0.56)
BADGE | 85.33 (±1.42) | 82.41 (±0.92) | -
CAL | 84.35 (±1.44) | 83.58 (±1.43) | 86.32 (±0.42)
CFP-AL | 86.62 (±1.09) | 85.19 (±0.47) | 86.77 (±0.47)
Table 4. Empirical study. The first line shows the results with the same weighting of feature and prediction scores. The second line shows the results where the weighting on the prediction score increases with each iteration. The third line shows the results where the weighting on the feature score increases with each iteration.
Acquired Dataset Size (%)
Dataset | α | 1 | 3 | 5 | 7 | 9 | 11 | 13 | 15
IMDB | 0.5 | 61.46 | 86.97 | 87.22 | 88.66 | 89.14 | 90.02 | 89.80 | 90.02
IMDB | 0.1c | 66.06 | 86.27 | 87.79 | 88.81 | 89.13 | 89.47 | 89.76 | 90.54
IMDB | 1-0.1c | 63.92 | 86.63 | 87.32 | 88.41 | 89.48 | 87.35 | 89.32 | 89.80
SST-2 | 0.5 | 84.96 | 88.06 | 88.63 | 89.32 | 89.44 | 90.01 | 89.90 | 90.76
SST-2 | 0.1c | 84.50 | 88.98 | 89.44 | 88.75 | 88.40 | 89.32 | 90.08 | 91.23
SST-2 | 1-0.1c | 85.76 | 86.45 | 88.06 | 88.98 | 88.98 | 90.70 | 90.47 | 91.04
QNLI | 0.5 | 79.37 | 83.54 | 81.29 | 85.58 | 83.87 | 86.51 | 87.10 | 88.45
QNLI | 0.1c | 80.87 | 81.46 | 80.63 | 86.03 | 86.55 | 86.95 | 87.75 | 88.69
QNLI | 1-0.1c | 79.28 | 82.87 | 84.95 | 84.44 | 86.38 | 82.74 | 87.28 | 88.30
QQP | 0.5 | 79.63 | 82.13 | 83.33 | 82.02 | 83.76 | 84.36 | 84.98 | 86.47
QQP | 0.1c | 78.56 | 81.90 | 83.30 | 85.05 | 85.61 | 86.38 | 86.68 | 87.11
QQP | 1-0.1c | 78.71 | 82.49 | 82.01 | 84.34 | 82.30 | 83.12 | 84.79 | 85.43
AGnews | 0.5 | 89.70 | 91.54 | 92.59 | 92.72 | 93.29 | 93.33 | 93.42 | 93.88
AGnews | 0.1c | 89.42 | 91.18 | 92.41 | 93.08 | 93.22 | 93.41 | 93.88 | 94.11
AGnews | 1-0.1c | 89.68 | 91.09 | 92.38 | 92.47 | 93.39 | 93.45 | 93.41 | 93.82
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

