This page reproduces the content of http://www.slideshare.net/OReillyStrata/large-scale-etl-with-hadoop.
Hadoop is commonly used for processing large swaths of data in batch. While many of the necessary building blocks for data processing exist within the Hadoop ecosystem – HDFS, MapReduce, HBase, Hive, Pig, Oozie, and so on – it can be a challenge to assemble and operationalize them as a production ETL platform. This presentation covers one approach to data ingest, organization, format selection, process orchestration, and external system integration, based on collective experience acquired across many production Hadoop deployments.
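The data-organization step mentioned above is commonly handled with Hive-style date partitions, where each ingested file lands in a directory keyed by its event date. A minimal sketch of the path-building logic follows; the base path, dataset name, and partition layout are illustrative assumptions, not taken from the presentation:

```python
from datetime import datetime, timezone

def partition_path(base: str, dataset: str, ts: datetime) -> str:
    """Build a Hive-style date-partitioned HDFS path for an ingested record.

    Hive recognizes key=value directory names as partition columns,
    so data laid out this way can be queried with partition pruning.
    """
    return (f"{base}/{dataset}"
            f"/year={ts.year:04d}/month={ts.month:02d}/day={ts.day:02d}")

# Example: route an incoming file to its landing directory by event time.
ts = datetime(2011, 2, 1, tzinfo=timezone.utc)
print(partition_path("/data/raw", "events", ts))
# → /data/raw/events/year=2011/month=02/day=01
```

A layout like this lets downstream jobs (Hive queries, Oozie-coordinated MapReduce) process only the partitions for a given day rather than scanning the whole dataset.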