Learning Discriminative Dynamics with Label Corruption for Noisy Label Detection | Semantic Scholar (2024)

  • Corpus ID: 270123204
@inproceedings{Kim2024LearningDD, title={Learning Discriminative Dynamics with Label Corruption for Noisy Label Detection}, author={Suyeon Kim and Dongha Lee and SeongKu Kang and Sukang Chae and Sanghwan Jang and Hwanjo Yu}, year={2024}, url={https://api.semanticscholar.org/CorpusID:270123204}}
  • Suyeon Kim, Dongha Lee, SeongKu Kang, Sukang Chae, Sanghwan Jang, Hwanjo Yu
  • Published 30 May 2024
  • Computer Science

DynaCor first introduces a label corruption strategy that augments the original dataset with intentionally corrupted labels, enabling indirect simulation of the model's behavior on noisy labels. It then learns to identify clean and noisy instances by inducing two clearly distinguishable clusters from the latent representations of training dynamics.
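To make the pipeline concrete, here is a minimal, self-contained sketch (not the authors' implementation; the training dynamics are simulated rather than logged from a real model): a known subset of labels is deliberately corrupted, per-epoch dynamics are recorded for every instance, the dynamics are clustered into two groups, and the cluster that absorbs most of the deliberately corrupted instances is read off as the "noisy" one.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_samples, n_epochs = 1000, 20

# Ground truth (unknown to the detector): 20% of original labels are noisy.
is_noisy = rng.random(n_samples) < 0.2

# Step 1 -- label corruption: flip a known subset of labels on purpose,
# so their "noisy" status is known by construction.
is_corrupted = rng.random(n_samples) < 0.3

# Step 2 -- training dynamics: per-epoch confidence the model assigns to each
# instance's (possibly corrupted) label. Clean labels are fit early; noisy and
# deliberately corrupted ones are fit late, once memorization kicks in.
hard_to_fit = is_noisy | is_corrupted
slope = np.where(hard_to_fit[:, None], 0.02, 0.15)
dynamics = np.clip(0.1 + slope * np.arange(n_epochs)[None, :]
                   + 0.05 * rng.standard_normal((n_samples, n_epochs)), 0, 1)

# Step 3 -- cluster the dynamics representations into two groups.
cluster = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(dynamics)

# Step 4 -- the cluster holding most deliberately corrupted instances is "noisy".
noisy_id = np.argmax([is_corrupted[cluster == c].mean() for c in (0, 1)])
pred_noisy = cluster == noisy_id

print(f"detection accuracy: {(pred_noisy == hard_to_fit).mean():.2%}")
```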

Figures and Tables from this paper: figures 1–6 and tables 1–7.


68 References

Detecting Corrupted Labels Without Training a Model to Predict
    Zhaowei Zhu, Zihao Dong, Yang Liu
    ICML, 2022

Proposes a training-free solution that detects corrupted labels by checking the label consensus among nearby features, together with a ranking-based approach that scores each instance and filters out a guaranteed number of instances that are likely to be corrupted.
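A minimal sketch of the consensus check (assuming precomputed feature vectors; the toy data and the helper name knn_consensus_flags are illustrative, not from the paper):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_consensus_flags(features, labels, k=10):
    """Flag instances whose label disagrees with the majority label
    among their k nearest neighbors in feature space."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)           # idx[:, 0] is the point itself
    neighbor_labels = labels[idx[:, 1:]]       # shape (n, k)
    majority = np.array([np.bincount(row).argmax() for row in neighbor_labels])
    return majority != labels                  # True -> likely corrupted

# Toy check: two well-separated classes with 10% of labels flipped.
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (200, 2))])
y_true = np.repeat([0, 1], 200)
flipped = rng.random(400) < 0.1
y_observed = np.where(flipped, 1 - y_true, y_true)

flags = knn_consensus_flags(features, y_observed)
print(f"fraction of flags that are real flips: {flipped[flags].mean():.2f}")
```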

Learning from massive noisy labeled data for image classification
    Tong Xiao, Tian Xia, Yi Yang, Chang Huang, Xiaogang Wang
    2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015

Introduces a general framework for training CNNs with only a limited number of clean labels and millions of easily obtained noisy labels; the relationships between images, class labels, and label noise are modeled with a probabilistic graphical model that is integrated into an end-to-end deep learning system.

  • 1,022
  • Highly Influential
  • [PDF]
Sample Prior Guided Robust Model Learning to Suppress Noisy Labels
    Wenkai Chen, Chuang Zhu, Yi Chen
    ECML/PKDD, 2023

Proposes PGDF (Prior Guided Denoising Framework), which learns a deep model that suppresses noise by generating prior knowledge about the samples; this prior is integrated into both the sample-dividing step and the semi-supervised step, and the framework demonstrates substantial improvements over state-of-the-art methods.

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations
    Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu
    ICLR, 2022

Shows that real-world noise patterns impose new and outstanding challenges compared with synthetic label noise, and that the availability of the two introduced human-annotated datasets (CIFAR-10N and CIFAR-100N) will facilitate the development and evaluation of future learning-with-noisy-labels solutions.

  • 152
  • Highly Influential
  • [PDF]
FINE Samples for Learning with Noisy Labels
    Taehyeon Kim, Jongwoo Ko, Sangwook Cho, J. Choi, Se-Young Yun
    NeurIPS, 2021

A novel detector for filtering label noise that focuses on each instance's latent representation and measures the alignment between the latent distribution and each representation using the eigendecomposition of the data Gram matrix, yielding a robust, derivative-free detector with theoretical guarantees (see the sketch after this entry).

  • 72
  • Highly Influential
  • [PDF]
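A minimal sketch of the alignment idea, following the spirit rather than the letter of FINE (the toy features and the mixture-free 0-threshold-free scoring are simplifying assumptions): for each class, take the principal eigenvector of the Gram matrix of features carrying that label, and score each sample by its squared projection onto that direction.

```python
import numpy as np

def alignment_scores(features, labels, num_classes):
    """Score each sample by the squared projection of its (unit-normalized)
    feature onto the principal eigenvector of its labeled class's Gram matrix.
    Low scores suggest the label is corrupted."""
    scores = np.zeros(len(features))
    for c in range(num_classes):
        mask = labels == c
        f = features[mask]
        f = f / np.linalg.norm(f, axis=1, keepdims=True)
        gram = f.T @ f                          # class-conditional Gram matrix
        _, eigvecs = np.linalg.eigh(gram)       # eigenvalues in ascending order
        principal = eigvecs[:, -1]
        scores[mask] = (f @ principal) ** 2
    return scores

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal([4, 0], 1, (100, 2)), rng.normal([0, 4], 1, (100, 2))])
y_true = np.repeat([0, 1], 100)
flipped = rng.random(200) < 0.1
y_observed = np.where(flipped, 1 - y_true, y_true)

s = alignment_scores(feats, y_observed, num_classes=2)
print(f"mean score: clean={s[~flipped].mean():.3f}, flipped={s[flipped].mean():.3f}")
```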
Understanding deep learning (still) requires rethinking generalization
    Chiyuan Zhang, Samy Bengio, Moritz Hardt, B. Recht, O. Vinyals
    Commun. ACM, 2021

These experiments establish that state-of-the-art convolutional networks for image classification, trained with stochastic gradient methods, easily fit a random labeling of the training data; the findings are corroborated by a theoretical construction showing that simple depth-two neural networks already have perfect finite-sample expressivity (the memorization effect is reproduced in the sketch below).

  • 1,459
  • Highly Influential
  • [PDF]
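The memorization finding is easy to reproduce on toy data; this sketch (arbitrary sizes and hyperparameters, chosen only for illustration) trains a small MLP on completely random labels and still reaches near-perfect training accuracy.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, k = 512, 32, 10

# Random inputs paired with *random* labels: there is no signal to learn,
# so any accuracy above chance is pure memorization.
x = torch.randn(n, d)
y = torch.randint(0, k, (n,))

model = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, k))
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

for step in range(500):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

acc = (model(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy on random labels: {acc:.2%}")  # approaches 100%
```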
Learning with Instance-Dependent Label Noise: A Sample Sieve Approach
    Hao Cheng, Zhaowei Zhu, Xingyu Li, Yifei Gong, Xing Sun, Yang Liu
    ICLR, 2021

This paper proposes CORES² (COnfidence REgularized Sample Sieve), which progressively sieves out corrupted samples, provides generic machinery for anatomizing noisy datasets, and offers a flexible interface for various robust training techniques to further improve performance (see the sketch below).

  • 161
  • Highly Influential
  • [PDF]
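A minimal sketch of a confidence-regularized sieve in the spirit of CORES² (the uniform label prior, β = 1, and the zero threshold are simplifying assumptions, not the paper's exact formulation): a sample's cross-entropy loss is regularized by subtracting the expected loss over the label prior, and samples whose regularized loss stays negative are kept as clean.

```python
import numpy as np

def confidence_regularized_sieve(probs, labels, beta=1.0):
    """Keep a sample if its cross-entropy loss, minus beta times the expected
    loss under a uniform label prior, falls below zero. Confident-correct
    predictions pass; confused predictions (typical of corrupted labels) fail."""
    n = len(labels)
    ce = -np.log(probs[np.arange(n), labels] + 1e-12)
    expected_ce = -np.log(probs + 1e-12).mean(axis=1)   # E_Y[CE], uniform prior
    return ce - beta * expected_ce < 0                  # True -> kept as clean

# Toy softmax outputs: confident-correct for clean, confused for corrupted.
probs = np.vstack([np.tile([0.9, 0.05, 0.05], (80, 1)),
                   np.tile([0.2, 0.4, 0.4], (20, 1))])
labels = np.zeros(100, dtype=int)   # every sample is labeled class 0

keep = confidence_regularized_sieve(probs, labels)
print(f"kept {keep[:80].mean():.0%} of clean, {keep[80:].mean():.0%} of corrupted")
```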
Early-Learning Regularization Prevents Memorization of Noisy Labels
    Sheng Liu, Jonathan Niles-Weed, N. Razavian, C. Fernandez-Granda
    NeurIPS, 2020

It is proved that early learning followed by memorization is a fundamental phenomenon of high-dimensional classification tasks, even in simple linear models, and a new technique for noisy classification tasks is developed that exploits the progress of the early-learning phase (see the sketch below).

  • 407
  • Highly Influential
  • [PDF]
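The ELR regularizer anchors current predictions to an exponential moving average of past predictions via an added term λ·log(1 − ⟨p, t⟩). Below is a minimal PyTorch sketch; the momentum and λ values are illustrative, and the paper's target normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F

class ELRLoss(torch.nn.Module):
    """Cross-entropy plus a regularizer that anchors current predictions to an
    exponential moving average of past predictions, which tracks the (mostly
    correct) early-learning phase and resists later memorization."""
    def __init__(self, num_samples, num_classes, momentum=0.7, lam=3.0):
        super().__init__()
        self.register_buffer("targets", torch.zeros(num_samples, num_classes))
        self.momentum, self.lam = momentum, lam

    def forward(self, logits, labels, index):
        p = F.softmax(logits, dim=1).clamp(1e-4, 1 - 1e-4)
        with torch.no_grad():   # update the EMA targets for this mini-batch
            self.targets[index] = (self.momentum * self.targets[index]
                                   + (1 - self.momentum) * p)
        ce = F.cross_entropy(logits, labels)
        elr = torch.log(1 - (self.targets[index] * p).sum(dim=1)).mean()
        return ce + self.lam * elr

# Usage: loss_fn = ELRLoss(len(train_set), num_classes=10)
#        loss = loss_fn(model(x), y, sample_indices)
```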
DivideMix: Learning with Noisy Labels as Semi-supervised Learning
    Junnan Li, R. Socher, S. Hoi
    ICLR, 2020

This work proposes DivideMix, a novel framework for learning with noisy labels that leverages semi-supervised learning techniques: it models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set of clean samples and an unlabeled set of noisy samples (see the sketch below).

  • 782
  • Highly Influential
  • [PDF]
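The dividing step is simple to sketch: fit a two-component Gaussian mixture to per-sample losses and treat the posterior of the low-mean component as the probability of being clean. The losses below are simulated, and the 0.5 threshold is a common default rather than a requirement.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def divide_by_loss(losses, threshold=0.5):
    """Fit a 2-component GMM to per-sample losses; the component with the
    smaller mean models clean samples. Returns True for the labeled set."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(losses.reshape(-1, 1))
    clean_component = np.argmin(gmm.means_.ravel())
    p_clean = gmm.predict_proba(losses.reshape(-1, 1))[:, clean_component]
    return p_clean > threshold

# Simulated per-sample losses: clean samples fit well, noisy ones do not.
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.1, 800), rng.normal(2.0, 0.5, 200)])

is_clean = divide_by_loss(losses)
print(f"labeled set keeps {is_clean[:800].mean():.0%} of clean samples, "
      f"{is_clean[800:].mean():.0%} of noisy samples")
```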
Identifying Mislabeled Data using the Area Under the Margin Ranking
    Geoff Pleiss, Tianyi Zhang, Ethan R. Elenberg, Kilian Q. Weinberger
    NeurIPS, 2020

Introduces a new method to identify overly ambiguous or outright mislabeled samples and mitigate their impact when training neural networks, at the heart of which is the Area Under the Margin (AUM) statistic (see the sketch below).

  • 200
  • Highly Influential
  • [PDF]
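The AUM statistic itself is straightforward to compute from logged logits: at each epoch, take the margin between the assigned-label logit and the largest other logit, then average the margins over epochs. The logit tensor below is simulated rather than logged from real training.

```python
import numpy as np

def area_under_margin(logits_per_epoch, labels):
    """AUM: average over epochs of (assigned-label logit minus the largest
    other logit). Mislabeled samples tend to accumulate negative margins."""
    n = logits_per_epoch.shape[1]
    idx = np.arange(n)
    assigned = logits_per_epoch[:, idx, labels]     # shape (epochs, n)
    others = logits_per_epoch.copy()
    others[:, idx, labels] = -np.inf                # mask out the assigned logit
    return (assigned - others.max(axis=2)).mean(axis=0)

rng = np.random.default_rng(0)
epochs, n, k = 10, 6, 3
labels = rng.integers(0, k, n)
logits = rng.normal(0, 1, (epochs, n, k))
# Make the first three samples look "clean": boost their assigned-label logit.
for i in range(3):
    logits[:, i, labels[i]] += 3.0

aum = area_under_margin(logits, labels)
print("AUM per sample:", np.round(aum, 2))  # high -> clean-like, low -> suspect
```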

...

