
Feb 2013 HUG: Large Scale Data Ingest Using Apache Flume

by Yahoo! Developer Network

Uploaded on 2013/02/22

Apache Flume is a highly scalable, distributed, fault-tolerant data collection framework for Apache Hadoop and Apache HBase. Flume is designed to transfer massive volumes of event data into HDFS or HBase in a highly scalable way. Flume is declarative and easy to configure, and can easily be deployed to a large number of machines using configuration management systems like Puppet or Cloudera Manager. In this talk, we will cover the basic components of Flume and how to configure and deploy it. We will also briefly discuss the metrics Flume exposes and the various ways in which they can be collected.

Apache Flume is a Top Level Project (TLP) at the Apache Software Foundation and has made several releases since entering incubation in June 2011. Flume graduated to become a TLP in July 2012. The current release of Flume is Flume 1.3.1.
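As a rough illustration of the declarative configuration model mentioned above (not taken from the talk itself), a minimal Flume agent definition in a properties file might look like the following; the agent, source, channel, and sink names are invented for the example, and the host, port, and HDFS path are placeholders:

    # Name the components of an agent called "agent1" (names are arbitrary)
    agent1.sources = src1
    agent1.channels = ch1
    agent1.sinks = sink1

    # Source: accept newline-terminated events on a TCP port
    agent1.sources.src1.type = netcat
    agent1.sources.src1.bind = 0.0.0.0
    agent1.sources.src1.port = 44444
    agent1.sources.src1.channels = ch1

    # Channel: buffer events in memory between source and sink
    agent1.channels.ch1.type = memory
    agent1.channels.ch1.capacity = 10000

    # Sink: write events to HDFS (path is a placeholder)
    agent1.sinks.sink1.type = hdfs
    agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/flume/events
    agent1.sinks.sink1.channel = ch1

A file like this is typically passed to the flume-ng launcher, e.g. flume-ng agent --conf conf --conf-file agent1.properties --name agent1. Adding JVM properties such as -Dflume.monitoring.type=http -Dflume.monitoring.port=34545 exposes the agent's metrics as JSON over HTTP, one of the collection options touched on in the abstract.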

Presenter: Hari Shreedharan, PMC Member and Committer, Apache Flume, Software Engineer, Cloudera

Source: http://www.slideshare.net/ydn/flume-hug