Modal

MOdels for Data Analysis and Learning


Modal Seminar (2019-2020)

Usual day: Tuesday at 11.00.

Place: Inria Lille - Nord Europe.

How to get there: in French, in English.

Organizers: Hemant Tyagi and Pascal Germain

Calendar feed: iCalendar (hosted by the seminars platform of the University of Lille)

Most slides are available: check past sessions and archives.

Archives: 2018-2019, 2017-2018, 2016-2017, 2015-2016, 2014-2015, 2013-2014.

Upcoming Seminars

Signe Riemer-Sørensen

  • Date: October 8, 2019 (Tuesday) at 11.00 (Plenary Room)
  • Affiliation: SINTEF
  • Webpage: Link.
  • Title: Machine learning in the real world
  • Abstract: Machine learning algorithms are flexible and powerful, but their data requirements are high and rarely met by the available data. Real-world data are often medium-sized (relative to the problem size), noisy and full of missing values. At the same time, to be deployed in industrial settings, machine learning models must be robust, explainable and have quantified uncertainties. I will show practical examples of these challenges from our recent projects and some case-by-case solutions, but also highlight remaining issues. (A schematic toy illustration follows below.)
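
A schematic illustration (not taken from the talk) of two of the challenges mentioned above, on a hypothetical toy regression problem: missing values are handled by simple per-column mean imputation, and a quantified uncertainty is attached to a prediction via a bootstrap ensemble of least-squares fits. The data, model and names are assumptions made purely for illustration.

  import numpy as np

  rng = np.random.default_rng(0)

  # Hypothetical toy regression data with missing entries (NaNs),
  # mimicking messy real-world inputs.
  X = rng.standard_normal((300, 3))
  y = X @ np.array([1.5, -2.0, 0.5]) + 0.3 * rng.standard_normal(300)
  X[rng.random(X.shape) < 0.1] = np.nan   # knock out ~10% of the entries

  # Challenge 1: missing values -- here, simple per-column mean imputation.
  col_means = np.nanmean(X, axis=0)
  X_imp = np.where(np.isnan(X), col_means, X)

  # Challenge 2: quantified uncertainty -- a bootstrap ensemble of ordinary
  # least-squares fits gives a predictive mean and spread.
  x_new = np.array([0.2, -1.0, 0.5])
  preds = []
  for _ in range(200):
      idx = rng.integers(0, len(y), len(y))        # resample with replacement
      Xb = np.c_[np.ones(len(idx)), X_imp[idx]]    # add intercept column
      w, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
      preds.append(w @ np.r_[1.0, x_new])

  print(f"prediction: {np.mean(preds):.2f} +/- {np.std(preds):.2f}")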

Han Bao

  • Date: September 23, 2019 (Monday) at 11.00 (Room A00)
  • Affiliation: University of Tokyo
  • Webpage: Link.
  • Title: Unsupervised Domain Adaptation Based on Source-guided Discrepancy
  • Abstract: Unsupervised domain adaptation is the problem setting where the data-generating distributions in the source and target domains are different, and labels in the target domain are unavailable. One important question in unsupervised domain adaptation is how to measure the difference between the source and target domains. A previously proposed discrepancy that does not use the source domain labels requires high computational cost to estimate and may lead to a loose generalization error bound in the target domain. To mitigate these problems, we propose a novel discrepancy called source-guided discrepancy (S-disc), which exploits labels in the source domain. As a consequence, S-disc can be computed efficiently with a finite-sample convergence guarantee. In addition, we show that S-disc can provide a tighter generalization error bound than the one based on an existing discrepancy. Finally, we report experimental results that demonstrate the advantages of S-disc over the existing discrepancies. (A schematic toy sketch of the idea follows below.)
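
To make the construction above concrete, here is a schematic toy sketch of the general recipe behind a source-guided discrepancy under the 0-1 loss: fix a reference hypothesis trained on the labeled source sample, then take the largest gap, over a finite pool of threshold classifiers standing in for the hypothesis class, between its disagreement rates on source and target inputs. This only illustrates the idea of exploiting source labels; the paper's exact definition, estimator and guarantees differ, and all names below are hypothetical.

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy 1-D domains: labeled source sample, unlabeled target sample.
  Xs = rng.normal(0.0, 1.0, 200)
  ys = (Xs > 0).astype(int)
  Xt = rng.normal(0.5, 1.0, 200)   # shifted target domain, labels unused

  # Finite pool of threshold classifiers h_t(x) = 1[x > t].
  thresholds = np.linspace(-3.0, 3.0, 61)

  def predict(t, X):
      return (X > t).astype(int)

  # Step 1: reference hypothesis = empirical risk minimizer on the source.
  src_risks = [np.mean(predict(t, Xs) != ys) for t in thresholds]
  t_ref = thresholds[int(np.argmin(src_risks))]

  def disagreement(t, X):
      # 0-1 disagreement between h_t and the source-trained reference.
      return np.mean(predict(t, X) != predict(t_ref, X))

  # Step 2: worst-case gap between source and target disagreement rates.
  s_disc = max(abs(disagreement(t, Xs) - disagreement(t, Xt))
               for t in thresholds)
  print("schematic source-guided discrepancy:", round(float(s_disc), 3))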

Past Seminars

Michaël Fanuel

  • Date: September 11, 2019 (Wednesday) at 14.00 (Plenary Room)
  • Affiliation: KU Leuven
  • Webpage: Link.
  • Title: Landmark sampling, diversity and kernel methods
  • Abstract: In machine learning, there is renewed interest in kernel methods, e.g. for designing interpretable convolutional networks or in the context of Gaussian processes. More generally, in kernel-based learning, a central question concerns large-scale approximations of the kernel matrix. A popular method for finding a low-rank approximation of kernel matrices is the so-called Nyström method, which relies on the sampling of 'good' landmark points in a dataset. We will discuss an approach for selecting 'diverse' landmarks with some theoretical guarantees. Our work makes a connection between kernelized Christoffel functions, ridge leverage scores and determinantal point processes. (A minimal illustration of the Nyström approximation follows below.)
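
For readers unfamiliar with the Nyström method itself, here is a minimal numpy sketch of the basic approximation on a toy RBF kernel matrix: keep the kernel blocks induced by m landmark points and reconstruct K ≈ C W^+ C^T. The landmarks are sampled uniformly here purely for illustration; the diversity-promoting sampling the talk is about (e.g. determinantal point processes or ridge leverage scores) would replace the rng.choice line.

  import numpy as np

  def rbf_kernel(X, Y, gamma=0.5):
      # Gaussian (RBF) kernel matrix between the rows of X and Y.
      sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
      return np.exp(-gamma * sq)

  rng = np.random.default_rng(0)
  X = rng.standard_normal((500, 5))              # toy dataset, n = 500
  K = rbf_kernel(X, X)                           # full n x n kernel matrix

  m = 50                                         # number of landmarks
  S = rng.choice(len(X), size=m, replace=False)  # uniform landmark sampling
  C = K[:, S]                                    # n x m cross-kernel block
  W = K[np.ix_(S, S)]                            # m x m landmark block

  # Nystrom low-rank approximation: K ~= C @ pinv(W) @ C.T
  K_hat = C @ np.linalg.pinv(W) @ C.T
  err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
  print(f"relative Frobenius error with {m} uniform landmarks: {err:.3f}")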

