Grant-in-Aid for Scientific Research (KAKENHI): Research Results Report
Form C-19, F-19-1, Z-19 (common)
Date: June 26, 2017 (Heisei 29)
Institution number: 14301
Funding program: Grant-in-Aid for Young Scientists (A)
Research period: 2014–2016
Project number: 26700002
Project title (Japanese and English): Fast Optimal Transport and Applications to Inference and Simulation in Large Scale Statistical Machine Learning
Principal investigator: Cuturi Marco (CUTURI, Marco), Kyoto University, Graduate School of Informatics, Associate Professor
Researcher number: 80597344
Total funding awarded (direct costs): 19,300,000 yen
Summary of research results (Japanese): We showed over the course of this project that optimal transport theory could have an impact on real-world applications (machine learning, imaging, graphics) using a proper regularization of the well-known optimal transport problem.
Summary of research results (English): This funding was used to push forward the idea that optimal transport could be used numerically to solve real-life problems using a regularization approach. We have demonstrated over the course of this project that these ideas were feasible, and have shown their applicability to a very wide range of applications, ranging from graphics and medical imaging to machine learning. These ideas were presented in top conferences and journals.
Field of research: Statistics
Keywords: optimal transport theory, machine learning, optimization, graphics
Report version: 2
1. Background at the start of research

The optimal transport (OT) geometry is a very useful tool to compare probability distributions. That geometry provides mathematicians not only with a useful distance between probability measures (the so-called optimal transport distances, also known as Wasserstein or earth mover's distances), but also with a very rich family of geometric tools to interpolate and handle probability distributions in an intuitive way. As a first illustration, consider the two figures below, which illustrate how two mixtures of Gaussians can be interpolated. The first Gaussian, the higher curve on the left, can be gradually turned into the mixture on the right (the higher distribution on the right) by means of a standard (Euclidean) interpolation, which results essentially in purely "vertical" changes, as displayed below, where intermediary lines stand for interpolations.

The optimal transport interpolation (displayed in the second figure below) between these two measures has a more physical interpretation: the mass of each measure is transported "laterally", by moving the first measure on the left towards the measure on the right in a spatially coherent and meaningful way. That transformation is called a displacement interpolation.
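In one dimension the displacement interpolation just described can be computed in closed form: the optimal coupling for a convex ground cost matches points by rank, so each point simply travels along the segment to its match. The sketch below is a minimal illustration (not code from the project; it assumes two equal-size, uniformly weighted samples):

```python
import numpy as np

def displacement_interpolation_1d(x, y, t):
    """Displacement interpolant at time t between two 1D samples.

    In 1D the optimal transport plan (for any convex ground cost) is the
    monotone rearrangement: the i-th smallest point of x is matched with
    the i-th smallest point of y, and mass moves "laterally" along the
    segments joining matched pairs. Assumes equal sizes, uniform weights.
    """
    xs, ys = np.sort(x), np.sort(y)
    return (1.0 - t) * xs + t * ys

# Samples from two different Gaussian mixtures (fixed seed).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3.0, 0.5, 500), rng.normal(-1.0, 0.5, 500)])
y = np.concatenate([rng.normal(2.0, 0.5, 500), rng.normal(4.0, 0.5, 500)])

mid = displacement_interpolation_1d(x, y, 0.5)  # halfway interpolant
print(mid.min(), mid.max())  # the support itself has shifted laterally
```

Contrast this with the Euclidean interpolation of the two densities, which keeps both bumps in place and only rescales their heights "vertically"; here the support of the interpolant actually moves.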
The main observation behind this simple example is that optimal transport provides very different tools than the standard Euclidean geometry. Because of its origins in fluid mechanics and the movement of gases and masses, the optimal transport geometry has been widely studied in the mathematics literature, notably through partial differential equations. That field has received ample attention thanks to Cédric Villani's Fields medal in 2010.

OT has, however, an important drawback. As is often the case with tools that have an intuitive physical interpretation, the computational cost associated with recreating that physically plausible displacement is usually very high. Although the simple example above is easy to create, for higher-dimensional problems (both in terms of the dimension of the ambient space and in terms of the number of points that are tracked when considering empirical measures) such computations are usually intractable. It therefore comes as no surprise that OT was barely used at all in practice for data analysis until recent developments.

Within that context, I proposed in 2013 a novel numerical method that can be used to approximate optimal transport between probability measures very efficiently. The main tool of that approach is regularization. The main engine of that method is an existing iterative algorithm known as the Sinkhorn fixed-point iteration. After showing that this algorithm results in an approximation of optimal transport that has favorable mathematical and computational properties (quadratic complexity, differentiability, parallelism), I seized that opportunity to use the Sinkhorn algorithm for more ambitious tasks in machine learning and computational statistics. This opportunity was the main context behind my kakenhi Wakate-A project.
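The Sinkhorn fixed-point iteration mentioned above is simple to state: exponentiate the negated cost matrix into a Gibbs kernel, then alternately rescale its rows and columns until both marginals are matched. The sketch below is a minimal illustrative reimplementation (not the project's code; the histograms, grid and regularization strength are arbitrary choices):

```python
import numpy as np

def sinkhorn(a, b, C, reg, n_iters=500):
    """Approximate optimal transport between histograms a and b.

    C is the ground cost matrix and reg the entropic regularization
    strength (smaller values approximate unregularized OT more closely,
    at a higher numerical cost).
    """
    K = np.exp(-C / reg)             # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):         # alternate diagonal scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # approximate transport plan
    return P, float((P * C).sum())

# Toy example: move mass from the two leftmost to the two rightmost bins.
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2   # squared Euclidean ground cost
a = np.array([0.5, 0.5, 0.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 0.0, 0.5, 0.5])
P, cost = sinkhorn(a, b, C, reg=0.05)
print(P.sum(axis=1))                 # matches a: marginals are satisfied
```

In practice, log-domain implementations of the same scalings are preferred for small regularization values, to avoid numerical underflow in the kernel.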
2. Purpose of research

The goal of this project was to build upon my early proposal to regularize the optimal transport problem with an entropic term. In that paper my contribution was mostly limited to that of re-discovering a known algorithm. I believed very early in the project that there were several worthwhile research opportunities and low-hanging fruits that could be solved using the Sinkhorn algorithm. Therefore, I focused during this kakenhi project on three main contributions: algorithms, methodologies and applications. My goals were therefore to:

1. Explore further the computational aspects of that regularization. Study whether it was
possible to use other standard algorithms and theoretical tools from convex optimization (Fenchel duality, Bregman divergences, acceleration and quasi-Newton methods) to obtain better numerical performance.
2. Create new flexible methodologies that could incorporate a priori (geometric) information on the space of observations to compare probability distributions on such observations. These methodologies were to serve different usages: averaging (the problem of computing barycenters and centroids), dimensionality reduction, supervised learning with structured output spaces (when the output of a classifier is itself a probability distribution), interpolations in graphics or between texts seen as clouds of points in word-embedding spaces, etc.
3. Test these new algorithms and methodologies on real world data, such as that coming from natural language processing, neuroscience, imaging or graphics, and see whether they could result in substantially different or better results.
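The averaging problem in goal 2 (Wasserstein barycenters) admits a Sinkhorn-like scheme in the spirit of iterative Bregman projections: run one scaling per input histogram and tie them together through a geometric mean. The sketch below is an illustrative fixed-support version (not the project's code; the support, bump width and regularization are arbitrary choices):

```python
import numpy as np

def sinkhorn_barycenter(H, C, reg, weights, n_iters=300):
    """Entropic Wasserstein barycenter of the histograms in the columns of H.

    Each iteration performs one Sinkhorn scaling per input histogram,
    then averages the induced barycenter-side marginals geometrically
    using the given weights.
    """
    K = np.exp(-C / reg)
    v = np.ones_like(H)
    for _ in range(n_iters):
        u = H / (K @ v)                # scale toward each input histogram
        b = np.exp((np.log(K.T @ u) * weights).sum(axis=1))
        v = b[:, None] / (K.T @ u)     # scale toward the common barycenter
    return b

# Two narrow bumps centered at 0.2 and 0.8 on a shared 1D grid.
x = np.linspace(0.0, 1.0, 40)
C = (x[:, None] - x[None, :]) ** 2
bump = lambda m: np.exp(-((x - m) ** 2) / 0.005)
H = np.stack([bump(0.2) / bump(0.2).sum(), bump(0.8) / bump(0.8).sum()], axis=1)
bar = sinkhorn_barycenter(H, C, reg=0.05, weights=np.array([0.5, 0.5]))
print(x[np.argmax(bar)])  # the barycenter concentrates near the midpoint
```

Unlike the Euclidean average of the two histograms, which is bimodal, this barycenter places its mass between the two inputs.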
3. Research method

Our research was goal-driven, and we tried to fulfill all three goals outlined above in parallel. To realize the first goal, I worked extensively with a team from INRIA (France) specialized in optimization (G. Carlier, J.D. Benamou, G. Peyré, F. Bach). In terms of methodologies, I worked with two students at Kyoto University (V. Seguy, A. Rolet) as well as a postdoctoral researcher from U. Berkeley (A. Ramdas). To obtain convincing results in applicative fields, I teamed up with experts in graphics (J. Solomon, MIT; N. Bonneel, INRIA) and an expert in neuroscience (A. Gramfort, Telecom). From a computational perspective, I detected very early the potential benefits of using general-purpose graphics processing units (GPGPU). I therefore invested a substantial amount of money to equip myself with an efficient computational server with several NVIDIA cards, and taught myself as well as students working with me how to use such cards.

4. Research results
This Wakate-A project has directly funded a total of 13 publications, among which 2 ACM SIGGRAPH papers, 3 NIPS conference proceedings, 2 ICML conference proceedings, 1 JMLR article and 2 SIAM journal articles. All of these conferences and journals are top-tier, if not the most respected venues in their respective fields. Several of these papers now have more than 50 citations, which reflects their immediate impact in these fields.

From an algorithmic perspective, we proposed in paper [3] (see bibliographic section) novel convex optimization tools to compute barycenters. These tools were natural generalizations of the Sinkhorn algorithm. That paper has already been cited more than 90 times in 2 years. In [2] we proposed to solve more advanced and general variational problems involving Wasserstein distances using Fenchel duality. We believe that this paper holds several ideas for future applied work. In [5] we proposed the first stochastic optimization approach to approximate optimal transport. That paper is now being used by the deep learning community to train generative models for natural images.

In the field of computer graphics, our methods gave for the very first time the ability to compute automatically, and in very reasonable time, shape interpolations in 3 dimensions with very natural results [11]. That feat was made possible through the important observation that the Sinkhorn algorithm can be implemented in linear time when using shapes on regular grids with separable metrics. We propose such interpolations in the figure below, in which the three 3D shapes at the corners of the triangle are handled as 3D uniform volumetric distributions. Shortly after, in our second SIGGRAPH paper [8], we obtained an automatic differentiation algorithm to solve the inverse problem related to that interpolation goal.
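The separable-metric observation can be illustrated directly: with a squared Euclidean cost on a regular grid, the 2D Gibbs kernel used inside Sinkhorn iterations is the tensor product of two 1D kernels, so applying it to an n-by-n image costs two small matrix products instead of one multiplication by the dense (n² × n²) kernel. A minimal sketch (illustrative only; the grid size and regularization value are arbitrary):

```python
import numpy as np

# 1D Gibbs kernel for a squared Euclidean cost on a regular grid.
n, reg = 32, 0.01
x = np.linspace(0.0, 1.0, n)
K1 = np.exp(-((x[:, None] - x[None, :]) ** 2) / reg)

def apply_kernel(V):
    """Apply the 2D Gibbs kernel K1 ⊗ K1 to an n-by-n image, separably.

    Cost: two (n x n) matrix products, i.e. O(n^3), instead of the
    O(n^4) cost of multiplying by the dense (n^2 x n^2) kernel.
    """
    return K1 @ V @ K1.T

# Sanity check against the dense tensor-product kernel on a random image.
rng = np.random.default_rng(0)
V = rng.random((n, n))
dense = (np.kron(K1, K1) @ V.ravel()).reshape(n, n)
print(np.allclose(apply_kernel(V), dense))  # True
```

The same factorization extends to 3D volumetric grids (three 1D kernel applications per Sinkhorn step), which is what makes grid-based shape interpolation tractable.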
Some of the applications in [8] included the analysis of MRI data. We have also explored such applications in [12]. In machine learning, our goal was to analyze histograms of features, such as bags-of-words, and we were able to propose dictionary learning methods [9] as well as PCA methods [10] to carry out dimensionality reduction.

Finally, two of our recent papers provide several research opportunities. We have explored more advanced ideas that generalize the Wasserstein distance and give it additional, desirable invariance properties in [7]. We were also the first to provide an algorithm to estimate the density of a probability distribution under a Wasserstein loss between data and the probability model [6]. These ideas have now reached the ML community, where algorithms such as Wasserstein GAN have now received wide attention.

We can thus safely claim that this Wakate-A project was a success, since we were able to introduce these results to audiences interested in the mathematical, statistical, applied, hardware, and machine learning aspects of optimal transport.

5. Main publications (principal investigator, co-investigators and collaborators underlined)

[Journal and conference papers] (13 in total)
1. A. Ramdas, N. Garcia Trillos, M. Cuturi. On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests, Entropy, 19(2), 2017.
2. M. Cuturi, G. Peyré. A Smoothed Dual Formulation for Variational Wasserstein Problems, SIAM Journal on Imaging Sciences, 9(1), 320–343, 2016. 28 citations.
3. J.D. Benamou, G. Carlier, M. Cuturi, L. Nenna, G. Peyré. Iterative Bregman projections for regularized transportation problems, SIAM Journal on Scientific Computing, 37(2), A1111–A1138, 2015. 93 citations.
4. M. Cuturi and D. Avis. Ground metric learning. Journal of Machine Learning Research, 15, 533–564, 2014. 28 citations.
5. A. Genevay, M. Cuturi, G. Peyré, F. Bach. Stochastic Optimization for Large Scale Optimal Transport, In Advances in Neural Information Processing Systems (NIPS) 30, 2016. Acceptance: 568/2500=22%. 10 citations.
6. G. Montavon, K.-R. Muller and M. Cuturi. Wasserstein Training of Restricted Boltzmann Machines, In Advances in Neural Information Processing Systems (NIPS) 30, 2016. Acceptance: 568/2500=22%. 8 citations.
7. G. Peyré, M. Cuturi and J. Solomon. Gromov-Wasserstein averaging of kernel and distance matrices. Proceedings of the 33rd International Conference on Machine Learning (ICML), JMLR W&CP, volume 48, 2016. Acceptance: 322/1327=24%. 5 citations.
8. N. Bonneel, G. Peyré, M. Cuturi. Wasserstein Barycentric Coordinates: Histogram Regression using Optimal Transport, ACM SIGGRAPH 2016. Acceptance: 119/467=25%. 6 citations.
9. A. Rolet, M. Cuturi, G. Peyré. Fast Dictionary Learning with a Smoothed Wasserstein Loss, Proc. of AISTATS 2016. Acceptance: 165/537=30%. 8 citations.
10. V. Seguy and M. Cuturi. Principal Geodesic Analysis for Probability Measures under the Optimal Transport Metric, In Advances in Neural Information Processing Systems (NIPS) 29, 2015. Acceptance: 403/1838=22%. 11 citations.
11. J. Solomon, F. de Goes, G. Peyré, M. Cuturi, A. Butscher, A. Nguyen, T. Du, L. Guibas. Convolutional Wasserstein distances: efficient optimal transportation on geometric domains, ACM SIGGRAPH 2015. Acceptance: 118/650=18%. 49 citations.
12. A. Gramfort, G. Peyré, M. Cuturi. Fast optimal transport averaging of neuroimaging data, Information Processing in Medical Imaging (IPMI) 2015. 10 citations.
13. M. Cuturi and A. Doucet. Fast computation of Wasserstein barycenters. Proceedings of the 31st International Conference on Machine Learning (ICML), JMLR W&CP, volume 32, 2014. Acceptance:
310/1238=25%. 87 citations.

[Conference presentations] (19 in total)
1. 26.09.16 M.Cuturi, Regularized Optimal Transport, IPAM Workshop, ML meets many particle problems, UCLA
2. 19.09.16 M.Cuturi, Soft-DTW, ECML/PKDD Workshop, Adv. Analytics and Learning on Temporal Data, Riva del Garda
3. 18.07.16 M.Cuturi, Regularized Optimal Transport, Computational Optimal Transport, U. Montréal
4. 03.06.16 M.Cuturi, Regularized Optimal Transport, Korea/Japan ML Symposium, Seoul
5. 07.04.16 M.Cuturi, Regularized Optimal Transport, Statlearn 2016, Vannes
6. 16.03.16 M.Cuturi, Regularized Optimal Transport, EECS - BLISS Seminar, UC Berkeley
7. 19.02.16 M.Cuturi, Regularized Optimal Transport, Microsoft Research Cambridge talk
8. 04.12.2015 M.Cuturi, Regularized Optimal Transport, ML Tea seminar, MIT
9. 01.12.2015 M.Cuturi, Regularized Optimal Transport, CILVR Seminar, NYU
10. 25.11.2015 M.Cuturi, Regularized Optimal Transport, IBIS Workshop, Tsukuba
11. 28.10.2015 M.Cuturi, Regularized Optimal Transport, Germany-Japan Adaptive BCI Workshop, Kyoto
12. 06.10.2015 M.Cuturi, Regularized Optimal Transport, Optimization in Machine Learning, Vision and Image Processing, Toulouse
13. 18.09.2015 M.Cuturi, Regularized Optimal Transport, GPU Technology Conference, Tokyo
14. 06.03.2015 M.Cuturi, Regularized Optimal Transport, New trends in Optimal Transport, Bonn
15. 15.02.2015 M.Cuturi, Regularized Optimal Transport, Advances in Numerical Optimal Transport, Banff, Canada
16. 11.01.2015 M.Cuturi, Regularized Optimal Transport, Optimization and Statistical Learning Workshop, Les Houches, France
17. 18.12.2014 M.Cuturi, Regularized Optimal Transport, Learning Theory Workshop, FoCM, Montevideo, Uruguay
18. 23.09.2014 M.Cuturi, Regularized Optimal Transport, Gatsby Neuroscience Unit, UCL, London
19. 22.05.2014 M.Cuturi, Regularized Optimal Transport, Deep Learning Workshop, Shonan village, Japan
6. Research organization
(1) Principal investigator: Cuturi Marco (Cuturi Marco), Kyoto University, Graduate School of Informatics, Associate Professor. Researcher number: 80597344