This course is designed for developers who need to create applications to analyse big data stored in Apache Hadoop using Pig and Hive. Topics include Hadoop, YARN, HDFS, MapReduce, data ingestion, workflow definition, data analytics with Pig and Hive, and an introduction to Spark Core and Spark SQL.
Who is the course for
Software developers who need to understand and develop applications for Hadoop.
Attendees should be familiar with programming principles and have experience in software development. SQL knowledge is also helpful. No prior Hadoop knowledge is required.
What you will learn
- Describe Hadoop, YARN and use cases for Hadoop
- Describe Hadoop ecosystem tools and frameworks
- Describe the HDFS architecture
- Use the Hadoop client to input data into HDFS
- Transfer data between Hadoop and a relational database
- Explain YARN and MapReduce architectures
- Run a MapReduce job on YARN
- Use Pig to explore and transform data in HDFS
- Use Hive to explore and analyse data sets
- Understand how Hive tables are defined and implemented
- Use the new Hive windowing functions
- Explain and use the various Hive file formats
- Create and populate a Hive table that uses ORC file formats
- Use Hive to run SQL-like queries to perform data analysis
- Use Hive to join datasets using a variety of techniques
- Write efficient Hive queries
- Create ngrams and context ngrams using Hive
- Perform data analytics using the DataFu Pig library
- Explain the uses and purpose of HCatalog
- Use HCatalog with Pig and Hive
- Define and schedule an Oozie workflow
- Present the Spark ecosystem and high-level architecture
- Perform data analysis with Spark’s Resilient Distributed Dataset API
- Explore Spark SQL and the DataFrame API
- Use HDFS commands to add/remove files and folders
- Use Sqoop to transfer data between HDFS and an RDBMS
- Run MapReduce and YARN application jobs
- Explore, transform, split and join datasets using Pig
- Use Pig to transform and export a dataset for use with Hive
- Use HCatLoader and HCatStorer
- Use Hive to discover useful information in a dataset
- Describe how Hive queries get executed as MapReduce jobs
- Perform a join of two datasets with Hive
- Use advanced Hive features: windowing, views, ORC files
- Use Hive analytics functions
- Write a custom reducer in Python
- Analyse clickstream data and compute quantiles with DataFu
- Use Hive to compute ngrams on Avro-formatted files
- Define an Oozie workflow
- Use Spark Core to read files and perform data analysis
- Create and join DataFrames with Spark SQL
Course format
- 50% Lecture/Discussion
- 50% Hands-on Labs
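The custom-reducer objective above refers to Hadoop streaming, where a reducer is any script that reads sorted `key<TAB>value` lines from stdin and emits aggregated results. A minimal sketch, assuming word-count-style integer counts (the function name and input format are illustrative):

```python
import sys
from itertools import groupby

def reduce_lines(lines):
    """Sum counts per key from sorted 'key<TAB>count' lines,
    following the Hadoop streaming reducer convention."""
    parsed = (line.rstrip("\n").split("\t", 1) for line in lines)
    for key, group in groupby(parsed, key=lambda kv: kv[0]):
        yield key, sum(int(count) for _, count in group)

if __name__ == "__main__":
    for key, total in reduce_lines(sys.stdin):
        print(f"{key}\t{total}")
```

Hadoop streaming guarantees the reducer's input is sorted by key, which is what lets `groupby` aggregate each key in a single pass.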
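The ngram objectives refer to Hive's built-in `ngrams()` function, which returns the most frequent n-grams in tokenised text. A pure-Python sketch of the same computation (function name is illustrative; Hive additionally operates over arrays of sentences):

```python
from collections import Counter

def top_ngrams(tokens, n, k):
    """Return the k most frequent n-grams in a token list,
    mirroring what Hive's ngrams() computes over text columns."""
    grams = zip(*(tokens[i:] for i in range(n)))  # sliding windows of length n
    return Counter(grams).most_common(k)
```

For example, `top_ngrams("the cat sat on the cat".split(), 2, 1)` picks out the most common bigram.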
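The windowing objectives correspond to HiveQL queries such as `SUM(amount) OVER (PARTITION BY key ORDER BY ts)`. The semantics can be sketched in plain Python, under the assumption (illustrative, not from the course) that each row is a `(partition_key, order_key, amount)` tuple:

```python
from itertools import accumulate, groupby
from operator import itemgetter

def running_sum(rows):
    """Per-partition running total over ordered rows, mimicking
    SUM(amount) OVER (PARTITION BY key ORDER BY ts) in HiveQL."""
    out = []
    ordered = sorted(rows, key=itemgetter(0, 1))  # partition key, then order key
    for key, group in groupby(ordered, key=itemgetter(0)):
        for total in accumulate(amount for _, _, amount in group):
            out.append((key, total))
    return out
```

The sort-then-group structure matches how the engine evaluates a window: rows are partitioned, ordered within each partition, and the aggregate is accumulated row by row.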
Related Training Courses
HDP Developer: Java Applications This 4-day course provides Java programmers with a deep dive into Hadoop 2.x application development.
HDP Operations: Hadoop Administration 1 This 4-day course is designed for Hortonworks Data Platform administrators, and covers installation, configuration, maintenance, security and performance topics.
HDP Operations: Hadoop Administration 2 This 3-day course is designed for experienced administrators who manage Hortonworks Data Platform (HDP) 2.3 clusters with Ambari.
HDP Administrator: Security This 3-day course is designed for experienced administrators who will be implementing secure Hadoop clusters using authentication, authorisation, auditing and data protection strategies and tools.
HDP Analyst: Data Science This 3-day course provides instruction on the processes and practice of data science, including machine learning and natural language processing.
HDP Operations: Hortonworks Data Flow This 3-day course is designed for ‘Data Stewards’ or ‘Data Flow Managers’ who want to automate the flow of data between systems.
HDP Analyst: Apache HBase Essentials This 2-day workshop introduces HBase basics, structure and operations in an intensely hands-on experience.
HDP Operations: Apache HBase Advanced Management This 4-day course is designed for administrators who will be installing, configuring and managing HBase clusters.
HDP Developer: Enterprise Spark 1 This 4-day course is designed as an entry point for developers who need to create applications to analyse Big Data stored in Apache Hadoop using Spark.