Hadoop Overview

Available Upon Request
Book Your Seat Today!

Kindly send us your company details and our consultant will contact you soon!

Course Objectives

Become a Hadoop and Spark expert by learning core Big Data technologies. This course provides hands-on knowledge of Hadoop and Spark along with their ecosystem components, including HDFS, MapReduce, Sqoop, core Spark, Spark RDDs, Spark SQL, and Spark Streaming.

Target Audience

  • Software developers, project managers, and architects
  • BI, ETL, and data warehousing professionals
  • Mainframe and testing professionals
  • Business analysts and analytics professionals
  • DBAs and database professionals
  • Professionals willing to learn data science techniques
  • Any graduate looking to build a career in Big Data

Training Outline

The Case for Apache Hadoop
  • Why Hadoop?
  • Fundamental Concepts
  • Core Hadoop Components
Hadoop Cluster Installation
  • Rationale for a Cluster Management Solution
  • Cloudera Manager Features
  • Cloudera Manager Installation
  • Hadoop (CDH) Installation
The Hadoop Distributed File System (HDFS)
  • HDFS Features
  • Writing and Reading Files
  • NameNode Memory Considerations
  • Overview of HDFS Security
  • Web UIs for HDFS
  • Using the Hadoop File Shell
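
A few common Hadoop file shell commands give a feel for this module (directory and file names here are illustrative, and the commands require access to a running cluster):

```shell
# Create a directory in HDFS (paths are illustrative)
hadoop fs -mkdir -p /user/alice/data

# Copy a local file into HDFS
hadoop fs -put access.log /user/alice/data/

# List the directory and read the file back
hadoop fs -ls /user/alice/data
hadoop fs -cat /user/alice/data/access.log
```

The same operations are also visible through the HDFS web UIs covered above.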
MapReduce and Spark on YARN
  • The Role of Computational Frameworks
  • YARN: The Cluster Resource Manager
  • MapReduce Concepts
  • Apache Spark Concepts
  • Running Computational Frameworks on YARN
  • Exploring YARN Applications Through the Web UIs and the Shell
  • YARN Application Logs
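
The MapReduce concepts in this module can be sketched in plain Python as a toy, single-process word count (real jobs run distributed across a YARN cluster; the map, shuffle, and reduce phases here only mirror the programming model):

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (word, 1) pairs, like a Hadoop Mapper."""
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle/sort: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word, like a Hadoop Reducer."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data with hadoop", "spark and hadoop"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["hadoop"])  # 2
```

Spark expresses the same pipeline more concisely as RDD transformations, which is one reason the course covers both frameworks.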
Hadoop Configuration and Daemon Logs
  • Cloudera Manager Constructs for Managing Configurations
  • Locating Configurations and Applying Configuration Changes
  • Managing Role Instances and Adding Services
  • Configuring the HDFS Service
  • Configuring Hadoop Daemon Logs
  • Configuring the YARN Service
Getting Data Into HDFS
  • Ingesting Data from External Sources with Flume
  • Ingesting Data from Relational Databases with Sqoop
  • REST Interfaces
  • Introduction to Kafka & Use Cases
  • Best Practices for Importing Data
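
A typical Sqoop import from a relational database looks like the sketch below (the host, database, user, and table names are illustrative placeholders):

```shell
# Import one table from MySQL into HDFS; -P prompts for the password
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /user/etl/orders \
  --num-mappers 4
```

Sqoop runs the import as parallel map tasks (four here, set by --num-mappers), which is why it scales well for large tables.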
Planning Your Hadoop Cluster
  • General Planning Considerations
  • Choosing the Right Hardware
  • Virtualization Options
  • Network Considerations
  • Configuring Nodes
Hadoop Clients Including Hue
  • What Are Hadoop Clients?
  • Installing and Configuring Hadoop Clients
  • Installing and Configuring Hue
  • Hue Authentication and Authorization
Hadoop Security
  • Why Hadoop Security Is Important
  • Hadoop’s Security System Concepts
  • What Kerberos Is and How It Works
  • Securing a Hadoop Cluster with Kerberos
  • Other Security Concepts
Managing Resources
  • Configuring cgroups with Static Service Pools
  • The Fair Scheduler
  • Configuring Dynamic Resource Pools
  • YARN Memory and CPU Settings
  • Impala Query Scheduling
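
Fair Scheduler behavior is driven by an allocations file; a minimal sketch with two pools might look like this (queue names and limits are illustrative, not a recommended production layout):

```xml
<!-- fair-scheduler.xml: two pools, weighted 3:1 -->
<allocations>
  <queue name="production">
    <weight>3.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
  </queue>
  <queue name="adhoc">
    <weight>1.0</weight>
    <maxRunningApps>10</maxRunningApps>
  </queue>
</allocations>
```

In a Cloudera Manager deployment, the same pools are usually defined through the Dynamic Resource Pools UI rather than by editing this file directly.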


No prior knowledge of any technology is required to learn Big Data, Spark, and Hadoop.