Big Data and Hadoop Developer
|Training Type ||Self-Paced Training|
|Audience and Prerequisites ||The program is best suited for developers, engineers, and architects who want to pursue their future in Hadoop and related tools to solve real-world data problems.|
The participant should have a programming background or the ability to program in Java, Scala, or Python, as some of the hands-on exercises use these languages. Familiarity with Linux commands and SQL is helpful for better understanding.
No prior knowledge or understanding of Hadoop or Big Data is required.
Get a basic understanding of the concepts of Big Data and Hadoop.
It is a 10-day course in which you will receive a new video link to watch daily. After watching each video, you can do the corresponding hands-on exercises.
1. Big Data Introduction:
- Understanding the Business Analytics lifecycle.
- Understanding Hadoop characteristics.
- Understanding the Hadoop ecosystem.
- Understanding Hadoop core components.
2. HDFS Internals:
- What is HDFS in Hadoop? | HDFS overview and Introduction.
- HDFS Architecture.
- HDFS Nodes.
- Function of the Name Node.
- Function of the Data Node.
- Function of the Secondary Name Node.
- Commands of HDFS.
- HDFS Hands-On Demo - How to create a directory in HDFS and store files in directory?
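The hands-on demo above boils down to a few standard `hdfs dfs` commands, run on the course cluster or Cloudera VM (the directory and file names here are only examples):

```shell
# Create a directory in HDFS (-p creates parent directories as needed)
hdfs dfs -mkdir -p /user/cloudera/demo

# Copy a local file into that HDFS directory
hdfs dfs -put sample.txt /user/cloudera/demo/

# List the directory contents to verify the upload
hdfs dfs -ls /user/cloudera/demo
```

These commands only work against a running Hadoop cluster, which is why the demo is done on the Cloudera VM or the training cluster.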
3. Introduction to Map-Reduce:
- MapReduce overview and introduction.
- MapReduce building blocks | MapReduce architecture.
- What is a Mapper in MapReduce?
- What is a Reducer in MapReduce?
- Word Count job explanation.
- MapReduce programming example - How to write a Word Count program.
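The Word Count job above can be sketched in Python in the Hadoop Streaming style. This is a minimal local simulation of the map and reduce phases (the input lines are made-up sample data), not a full Hadoop job:

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word in every line."""
    for line in lines:
        for word in line.strip().split():
            yield (word.lower(), 1)

def reducer(pairs):
    """Reduce phase: sum the counts for each word.
    Hadoop sorts mapper output by key before reducing; sorted() simulates that."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield (word, sum(count for _, count in group))

if __name__ == "__main__":
    # Sample input standing in for lines read from HDFS
    lines = ["the quick brown fox", "the lazy dog"]
    counts = dict(reducer(mapper(lines)))
    print(counts["the"])  # "the" appears twice across the two lines
```

In a real job, Hadoop runs many mapper and reducer instances in parallel across the cluster and handles the sort/shuffle between them; the per-record logic is the same.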
4. Pig and Advanced Pig:
- What is Pig in Hadoop? | An overview and Introduction to Apache Pig.
- Features of Apache Pig.
- Apache Pig vs. MapReduce | Difference between Apache Pig and MapReduce.
- Apache Pig Architecture.
- Explanation of Basic Apache Pig Commands.
- Apache Pig Data Types and Complex Data Types.
- Apache Pig Commands Example.
- Apache Pig Latin Script overview and Example.
- Apache Pig Hands-On Demo - How to write and execute a Pig Latin Script in Cloudera VM.
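A Pig Latin word-count script of the kind written in the demo might look like this (the input path and alias names are illustrative):

```pig
-- Load a text file from HDFS; each record is one line
lines = LOAD '/user/cloudera/input.txt' AS (line:chararray);

-- Split each line into individual words
words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;

-- Group identical words together and count each group
grouped = GROUP words BY word;
counts = FOREACH grouped GENERATE group, COUNT(words);

-- Print the results to the console
DUMP counts;
```

Saved as a `.pig` file, such a script is run on the Cloudera VM with the `pig` command, which compiles it into MapReduce jobs behind the scenes.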
5. Hive and Advanced Hive:
- Hive Overview.
- What is Hive?
- What Hive is not?
- Hive Architecture.
- Data Storage in Hive.
- Hive QL – Commands.
- Hive Hands-On Demo - A complete real-time demonstration of how Hive works.
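The Hive demo typically comes down to a few HiveQL statements like these (the table, column, and file names are only examples):

```sql
-- Create a table whose rows are comma-separated fields
CREATE TABLE IF NOT EXISTS employees (id INT, name STRING, salary DOUBLE)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- Load data from a local file on the VM into the table
LOAD DATA LOCAL INPATH '/home/cloudera/employees.csv' INTO TABLE employees;

-- Query it with familiar SQL syntax
SELECT name, salary FROM employees WHERE salary > 50000;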
6. HBase and Advanced HBase:
- What is HBase?
- HBase architecture.
- HBase components.
- HBase data model.
- HBase shell commands.
- Hands-On - How to create an HBase table and insert data into it on the Cloudera VM?
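The HBase hands-on can be sketched with a short shell session (the table, column-family, and row names are only examples):

```shell
# Start the HBase shell on the Cloudera VM
hbase shell

# Inside the shell: create a table 'users' with one column family 'info'
create 'users', 'info'

# Insert (put) one cell: table, row key, column, value
put 'users', 'row1', 'info:name', 'Alice'

# Read a single row back, then scan the whole table
get 'users', 'row1'
scan 'users'
```

Unlike Hive, HBase addresses data by row key and column family rather than with SQL queries, which is what the data-model section above explains.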
7. Oozie:
- Oozie overview.
- What is Oozie?
- Oozie workflows.
- Sample Oozie workflow.
- Oozie functional components.
- How to build a workflow?
- Oozie commands.
- Hands-On: Running an Oozie job.
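A minimal Oozie workflow definition has the shape below; this is a sketch with the action body mostly omitted, and the workflow and action names are illustrative:

```xml
<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.5">
    <start to="wordcount"/>
    <action name="wordcount">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <!-- job configuration properties would go here -->
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Word count job failed</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

The workflow is placed in HDFS and submitted with `oozie job -config job.properties -run`, where the properties file supplies values such as `jobTracker` and `nameNode`.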
8. Sqoop:
- Sqoop introduction.
- Sqoop workflow.
- Sqoop condition-based imports, incremental imports, and query-based imports.
- Sqoop commands.
- Sqoop exports.
- Sqoop jobs.
- Hands-On with Sqoop: How to import data from an RDBMS into Hadoop HDFS on the Cloudera VM.
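The Sqoop hands-on typically looks like the commands below (the JDBC URL, credentials, table, and column names are only examples, matching the sample databases shipped with the Cloudera VM):

```shell
# Import one table from MySQL into HDFS
sqoop import \
  --connect jdbc:mysql://localhost/retail_db \
  --username cloudera --password cloudera \
  --table orders \
  --target-dir /user/cloudera/orders

# Incremental import: append only rows whose order_id exceeds the last value seen
sqoop import \
  --connect jdbc:mysql://localhost/retail_db \
  --username cloudera --password cloudera \
  --table orders \
  --target-dir /user/cloudera/orders \
  --incremental append --check-column order_id --last-value 1000
```

Condition-based imports use `--where`, and query-based imports replace `--table` with `--query`; the incremental variant shown is what makes repeated, scheduled imports practical.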
9. Flume:
- What is Flume? | Introduction to Flume.
- An overview of Flume.
- Flume vs. Sqoop | Difference between Flume and Sqoop.
- Flume architecture.
- Building blocks of Flume | Flume components.
- Configuring Flume agents.
- Flume Hands-On Demo - Practical lab example.
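Configuring a Flume agent means wiring a source, a channel, and a sink together in a properties file, roughly as below (the agent name, log path, and HDFS path are illustrative):

```properties
# One agent 'a1' with one source, one channel, and one sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Source: run a command and treat its output lines as events
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app.log
a1.sources.r1.channels = c1

# Channel: an in-memory buffer between source and sink
a1.channels.c1.type = memory

# Sink: write the buffered events into HDFS
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /user/cloudera/flume/events
a1.sinks.k1.channel = c1
```

The agent is then started with `flume-ng agent --conf-file agent.conf --name a1`, and events flow continuously from the source, through the channel, to the sink.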
10. Hadoop YARN:
- Hadoop YARN overview and introduction.
- What is Hadoop YARN?
- MapReduce 1 (MR1) framework execution.
- YARN architecture.
- YARN components.
- What is MapReduce 2 (MR2)?
- MapReduce 2 (MR2) programming example - How to run a Word Count application on MR2?
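Running Word Count on YARN uses the example jar that ships with Hadoop; the jar path below is typical for a Cloudera install but varies by distribution, and the input/output paths are examples:

```shell
# Submit the bundled word-count example as a YARN (MR2) application
yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
  wordcount /user/cloudera/input /user/cloudera/output

# Inspect the reducer output once the job finishes
hdfs dfs -cat /user/cloudera/output/part-r-00000
```

Under YARN, the ResourceManager allocates containers for the job and an ApplicationMaster coordinates its map and reduce tasks, which is the architectural shift from MR1 covered above.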
11. Impala:
- Impala overview.
- Why Impala?
- Hive vs. Impala.
- Impala architecture.
- Impala shell commands.
- Cloudera Impala Hands-On Demo - Creating a table in Impala.
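The Impala demo is run inside `impala-shell` on the Cloudera VM and looks much like SQL (the table and column names are only examples):

```sql
-- Create a table and insert a sample row
CREATE TABLE web_logs (ip STRING, url STRING, hits INT);
INSERT INTO web_logs VALUES ('10.0.0.1', '/home', 3);

-- Query it interactively; Impala executes this without launching MapReduce jobs
SELECT url, SUM(hits) FROM web_logs GROUP BY url;
```

Because Impala's daemons execute queries directly on the cluster nodes rather than compiling them into batch jobs, interactive queries return far faster than the equivalent Hive queries, which is the core of the Hive-vs-Impala comparison above.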
|Class Room Location ||Online - At Your Desk.|
Get ready to develop Big Data applications on Hadoop and to take on Hadoop developer exams and jobs. Training and practice on real-time Hadoop clusters!
Training will be conducted with the help of an LMS (Learning Management System) and the GoToWebinar application, and all videos can be accessed by participants through the LMS after the training. Separate access to our labs for projects and assignments will be given.
Certification Program for Expertise in Developing Big Data Solution on Hadoop.
Why is Big Data technology training on real-time clusters important?
Real-world experience means that one gets to work on real machinery in a real production environment. Quite often the experience of working in a real environment is far different from that of a simulated one, and the real environment is valued more by the Big Data world than the simulated one. We make that possible by providing training on a cluster of networked computers that comes pre-installed with the necessary technology stack, including Apache Hadoop, Apache Spark, and other related technologies.
Compared to a simulated environment such as a Hadoop virtual machine (basically a virtual machine that needs to be downloaded and run on a single computer), cluster-based learning provides a far more realistic experience. A virtual machine runs on only one machine, while most Big Data technology components run across multiple computers.