Workshop: Generalization and Overfitting

A large part of the recent success of highly parameterized ML models is due to their apparent ability to generalize to unseen data. This ability is seemingly in tension with mathematical results from traditional statistics (e.g. the bias-variance trade-off) and statistical learning theory (e.g. PAC theorems), which rely heavily on either strong assumptions about the underlying probability distribution or restrictions on the hypothesis class. The predominant engineering epistemology declares ML theory a failure and holds that contemporary ML models generalize well even beyond the classical overfitting regime.
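
As a minimal illustration of the classical regime this tension turns on, the sketch below (a hypothetical example using synthetic sinusoidal data and NumPy's polyfit; it is not material from the workshop) fits polynomials of increasing degree to noisy samples. Training error falls as model capacity grows while test error eventually rises, which is the textbook overfitting behaviour that highly parameterized models seem to defy.

```python
# Minimal sketch of classical overfitting, assuming synthetic data
# drawn from a sine curve plus Gaussian noise (illustrative choices).
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 20)
x_test = np.linspace(0, 1, 200)

def f(x):
    return np.sin(2 * np.pi * x)  # ground-truth signal

y_train = f(x_train) + rng.normal(0, 0.2, x_train.shape)
y_test = f(x_test) + rng.normal(0, 0.2, x_test.shape)

# Increase model capacity (polynomial degree) and compare errors:
# training MSE keeps shrinking, test MSE eventually blows up.
for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```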

This workshop aims to shed light on the tension between generalization and overfitting and will address the following questions:

  • What measures of generalization and overfitting are used in theory and in practice?
  • Do ML models really generalize well?
  • Are ML models really overfit?
  • What is overfitting anyhow?
  • Which theoretical explanations exist for the phenomena of generalization and overfitting?
  • Which pragmatic explanations exist for the phenomena of generalization and overfitting?

WebEx Link:
https://unistuttgart.webex.com/unistuttgart/j.php?MTID=m909ba73ec2c98127421ac0424f22863d

Should you want to join in person, please write to nico.formanek(at)hlrs.de.

A schedule can be found at https://philo.hlrs.de/?p=415.

Participants (confirmed):

  * Tom Sterkenburg (ML Epistemology, LMU Munich)
  * Timo Freiesleben (ML Epistemology, Uni Tübingen)
  * Jan-Willem Romeijn (Philosophy of Statistics, U Groningen)
  * Petr Špelda (Philosophy of Induction, Charles University)
  * Vít Střítecký (Philosophy of AI, Charles University)


Location

70569 Stuttgart – Vaihingen, Nobelstrasse 19, HLRS, Room Berkeley/Shanghai.

Start date

May 28, 2024
09:30

End date

May 28, 2024
