- Corpus ID: 270123204
@inproceedings{Kim2024LearningDD,
  title  = {Learning Discriminative Dynamics with Label Corruption for Noisy Label Detection},
  author = {Suyeon Kim and Dongha Lee and SeongKu Kang and Sukang Chae and Sanghwan Jang and Hwanjo Yu},
  year   = {2024},
  url    = {https://api.semanticscholar.org/CorpusID:270123204}
}
- Suyeon Kim, Dongha Lee, Hwanjo Yu
- Published 30 May 2024
- Computer Science
DynaCor first introduces a label corruption strategy that augments the original dataset with intentionally corrupted labels, enabling indirect simulation of the model's behavior on noisy labels. It then learns to identify clean and noisy instances by inducing two clearly distinguishable clusters from the latent representations of training dynamics.
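The corruption step described above can be illustrated in a few lines. This is a minimal sketch, assuming labels are integer class IDs; the function name and constants are ours, not the paper's implementation:

```python
import random

def corrupt_labels(labels, num_classes, rate, seed=0):
    """Flip a fraction `rate` of labels to a different random class.

    The indices of the flipped instances are known by construction, so
    their training dynamics can serve as reference examples of "noisy"
    behavior when clustering clean vs. noisy instances later on.
    """
    rng = random.Random(seed)
    corrupted = list(labels)
    flip_idx = rng.sample(range(len(labels)), int(rate * len(labels)))
    for i in flip_idx:
        # Pick any class other than the current one.
        corrupted[i] = rng.choice(
            [c for c in range(num_classes) if c != corrupted[i]]
        )
    return corrupted, set(flip_idx)

labels = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
noisy, flipped = corrupt_labels(labels, num_classes=3, rate=0.3)
assert len(flipped) == 3
assert all(noisy[i] != labels[i] for i in flipped)
```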
Figures and Tables from this paper: Figures 1–6 and Tables 1–7.
68 References
- Zhaowei Zhu, Zihao Dong, Yang Liu
- 2022
Computer Science
ICML
A training-free solution to detect corrupted labels by checking the noisy label consensuses of nearby features and a ranking-based approach that scores each instance and filters out a guaranteed number of instances that are likely to be corrupted.
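The neighbor-consensus idea above can be shown with a toy, dependency-free version. This is a simplified sketch (the paper's actual scoring and guaranteed-filtering machinery is more involved, and the function name is ours): flag an instance when its label disagrees with the majority label of its k nearest neighbors in feature space.

```python
import math
from collections import Counter

def knn_consensus_flags(features, labels, k=3):
    """Flag instances whose label disagrees with the majority label of
    their k nearest neighbors (Euclidean distance). Training-free: it
    only needs fixed feature vectors, no model fitting."""
    flags = []
    for i, (x, y) in enumerate(zip(features, labels)):
        dists = sorted(
            (math.dist(x, features[j]), j)
            for j in range(len(features)) if j != i
        )
        neighbor_labels = [labels[j] for _, j in dists[:k]]
        majority, _ = Counter(neighbor_labels).most_common(1)[0]
        flags.append(majority != y)
    return flags

# Two well-separated clusters; index 3 is mislabeled as class 1.
features = [(0, 0), (0, 1), (1, 0), (1, 1), (5, 5), (5, 6), (6, 5), (6, 6)]
labels = [0, 0, 0, 1, 1, 1, 1, 1]
assert knn_consensus_flags(features, labels) == [
    False, False, False, True, False, False, False, False
]
```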
- Tong Xiao, Tian Xia, Yi Yang, Chang Huang, Xiaogang Wang
- 2015
Computer Science
2015 IEEE Conference on Computer Vision and…
A general framework to train CNNs with only a limited number of clean labels and millions of easily obtained noisy labels is introduced; the relationships between images, class labels and label noise are modeled with a probabilistic graphical model, which is further integrated into an end-to-end deep learning system.
- 1,022
- Highly Influential
- Wenkai Chen, Chuang Zhu, Yi Chen
- 2023
Computer Science
ECML/PKDD
PGDF (Prior Guided Denoising Framework), a novel framework to learn a deep model to suppress noise by generating the samples' prior knowledge, which is integrated into both dividing samples step and semi-supervised step, is proposed and demonstrates substantial improvements over state-of-the-art methods.
- 6
- Highly Influential
- Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu
- 2022
Computer Science
ICLR
It is shown that the real-world noise patterns impose new and outstanding challenges as compared to synthetic label noise, and the availability of these two datasets would facilitate the development and evaluation of future learning with noisy label solutions.
- 152
- Highly Influential
- Taehyeon Kim, Jongwoo Ko, Sangwook Cho, J. Choi, Se-Young Yun
- 2021
Computer Science
NeurIPS
A novel detector for filtering label noise that focuses on each sample's latent representation dynamics and measures the alignment between the latent distribution and each representation using the eigendecomposition of the data gram matrix, yielding a robust, derivative-free detector with theoretical guarantees.
- 72
- Highly Influential
- Chiyuan Zhang, Samy Bengio, Moritz Hardt, B. Recht, O. Vinyals
- 2021
Computer Science
Commun. ACM
These experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data, and corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity.
- 1,459
- Highly Influential
- Hao Cheng, Zhaowei Zhu, Xingyu Li, Yifei Gong, Xing Sun, Yang Liu
- 2021
Computer Science
ICLR
This paper proposes CORES^2 (COnfidence REgularized Sample Sieve), which progressively sieves out corrupted samples and provides a generic machinery for anatomizing noisy datasets and a flexible interface for various robust training techniques to further improve the performance.
- 161
- Highly Influential
- Sheng Liu, Jonathan Niles-Weed, N. Razavian, C. Fernandez-Granda
- 2020
Computer Science
NeurIPS
It is proved that early learning and memorization are fundamental phenomena in high-dimensional classification tasks, even in simple linear models, and a new technique for noisy classification tasks is developed, which exploits the progress of the early learning phase.
- 407
- Highly Influential
- Junnan Li, R. Socher, S. Hoi
- 2020
Computer Science
ICLR
This work proposes DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques, which models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples.
- 782
- Highly Influential
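DivideMix's dividing step fits a two-component Gaussian mixture to per-sample losses and treats the low-loss component as likely clean. As a rough, dependency-free stand-in for that step, a 1-D two-means split captures the same intuition (the mixture model and its posterior probabilities are what the paper actually uses; this function is ours):

```python
def split_by_loss(losses, iters=100):
    """Split per-sample losses into a low-loss (likely clean) and a
    high-loss (likely noisy) group via simple 1-D two-means clustering.
    Returns a list of booleans, True = likely clean."""
    lo, hi = min(losses), max(losses)
    for _ in range(iters):
        # Assign each loss to the nearer of the two centers.
        clean = [x for x in losses if abs(x - lo) <= abs(x - hi)]
        noisy = [x for x in losses if abs(x - lo) > abs(x - hi)]
        new_lo = sum(clean) / len(clean)
        new_hi = sum(noisy) / len(noisy) if noisy else hi
        if (new_lo, new_hi) == (lo, hi):  # converged
            break
        lo, hi = new_lo, new_hi
    threshold = (lo + hi) / 2
    return [x <= threshold for x in losses]

# Low-loss samples are flagged clean, high-loss samples noisy.
flags = split_by_loss([0.1, 0.2, 0.15, 2.0, 1.8, 0.05])
assert flags == [True, True, True, False, False, True]
```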
- Geoff Pleiss, Tianyi Zhang, Ethan R. Elenberg, Kilian Q. Weinberger
- 2020
Computer Science
NeurIPS
A new method to identify overly ambiguous or outright mislabeled samples and mitigate their impact when training neural networks is introduced, at the heart of which is the Area Under the Margin (AUM) statistic.
- 200
- Highly Influential
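The AUM statistic itself is easy to state: for each sample, average over the recorded training epochs the margin between the logit of the assigned label and the largest other logit; mislabeled samples tend to accumulate low or negative margins. A minimal sketch (variable names ours):

```python
def area_under_margin(logit_history, label):
    """Average, over recorded epochs, of
    logits[label] - max(logits[c] for c != label)."""
    margins = []
    for logits in logit_history:
        other = max(v for c, v in enumerate(logits) if c != label)
        margins.append(logits[label] - other)
    return sum(margins) / len(margins)

# Correctly labeled sample: the assigned-class logit pulls ahead.
history = [[2.0, 0.5, 0.0], [3.0, 0.5, 0.2]]
assert area_under_margin(history, label=0) == 2.0
# Mislabeled sample: negative margin throughout training.
assert area_under_margin(history, label=1) == -2.0
```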
...