This page reproduces the content of http://www.slideshare.net/jpatanooga/metronome-ml-confnov2013v20131113.
Online learning techniques, such as Stochastic Gradient Descent (SGD), are powerful when applied to risk minimization and convex games on large problems. However, their sequential design prevents them from taking advantage of newer distributed frameworks such as Hadoop/MapReduce. In this session, we look at how we parallelize parameter estimation for linear models on Iterative Reduce, a framework built on next-gen YARN, together with the parallel machine learning library Metronome. We also look at non-linear modeling with the introduction of parallel neural network training in Metronome.
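The core idea behind parallelizing an inherently sequential method like SGD is to let each worker run SGD over its own data shard and then combine the workers' parameter vectors in a reduce step each iteration. The sketch below is a minimal, single-process illustration of that parameter-averaging pattern using linear regression on synthetic data; the function names and the toy problem are illustrative assumptions, not the actual Iterative Reduce or Metronome API.

```python
import numpy as np

def sgd_worker(X, y, w0, lr=0.01):
    """One worker: run sequential SGD over its local data shard."""
    w = w0.copy()
    for i in range(len(y)):
        grad = (X[i] @ w - y[i]) * X[i]  # gradient of squared loss at sample i
        w -= lr * grad
    return w

def parallel_sgd_avg(X, y, n_workers=4, rounds=10):
    """Master loop: shard the data, run workers, average parameters each round.

    Averaging the workers' parameter vectors plays the role of the
    "reduce" step in an iterative map/reduce-style framework.
    """
    shards = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers)))
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        updates = [sgd_worker(Xs, ys, w) for Xs, ys in shards]
        w = np.mean(updates, axis=0)  # combine local models by averaging
    return w

# Toy problem (assumed for illustration): recover w_true = [2.0, -3.0]
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = X @ np.array([2.0, -3.0])
w = parallel_sgd_avg(X, y)
```

In a real deployment each `sgd_worker` call would run on a separate node against an HDFS block, and only the small parameter vectors would travel over the network for the averaging step.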