Budde Tn • buddet1989@gmail.com
• Overall 4 years of IT experience in analysis, design, and development using Hadoop, Java, and J2EE.
• 3+ years of experience in Big Data technologies and Hadoop ecosystem projects such as MapReduce, YARN, HDFS, Apache Cassandra, Spark, NoSQL, HBase, Oozie, Hive, Tableau, Sqoop, Pig, Storm, Kafka, HCatalog, ZooKeeper, and Flume.
• Excellent understanding of Hadoop architecture and its components, including HDFS, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce programming paradigm.
• Knowledge of Data Analytics and Business Analytics processes.
• Hands-on experience with Spark Streaming, receiving real-time data through Kafka.
• Wrote Spark SQL queries to speed up data retrieval.
• Experience
• Good understanding of RDBMS through database design and writing queries against Oracle, SQL Server, DB2, and MySQL.
• Worked extensively with dimensional modeling, data migration, data cleansing, data profiling, and ETL processes for data warehouses.
• A team player and self-motivator with excellent analytical, communication, problem-solving, decision-making, and organizational skills.
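The MapReduce programming paradigm referenced above can be sketched with plain Scala collections. This is a simulation only, assuming a classic word-count task; real Hadoop jobs implement Mapper/Reducer classes, and the object and method names here are illustrative, not from any actual project:

```scala
object MapReduceSketch {
  // Map phase: emit a (word, 1) pair for every word in every input line.
  def mapPhase(lines: Seq[String]): Seq[(String, Int)] =
    lines.flatMap(_.split("\\s+")).filter(_.nonEmpty).map(w => (w.toLowerCase, 1))

  // Shuffle + reduce phase: group pairs by key and sum the counts per key.
  def reducePhase(pairs: Seq[(String, Int)]): Map[String, Int] =
    pairs.groupBy(_._1).map { case (word, ps) => (word, ps.map(_._2).sum) }

  // End-to-end word count: map, then shuffle/reduce.
  def wordCount(lines: Seq[String]): Map[String, Int] =
    reducePhase(mapPhase(lines))
}
```

For example, `MapReduceSketch.wordCount(Seq("to be or", "not to be"))` yields `Map("to" -> 2, "be" -> 2, "or" -> 1, "not" -> 1)`; in a real cluster the map and reduce phases run in parallel across nodes, with HDFS providing the input splits.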
WORK EXPERIENCE
Hadoop Developer
VINMATICS, ON - January 2017 to Present
Responsibilities:
• Created Hive tables, loaded them with data, and wrote Hive queries that invoke MapReduce jobs in the backend.
• Wrote MapReduce jobs to parse web logs stored in HDFS.
• Imported and exported data between HDFS and Hive using Sqoop.
• Worked with Impala for data retrieval.
• Partitioned big data according to business requirements using Hive indexing, partitioning, and bucketing.
• Designed and developed Spark SQL scripts based on functional specifications.
• Configured Spark Streaming according to the type of input source.
• Developed services to run MapReduce jobs as required.
• Loaded data from UNIX file systems into HDFS; installed and configured Hive and wrote Pig/Hive UDFs.
• Managed data coming from different sources.
• Developed business logic using Scala.
• Wrote MapReduce (Hadoop) jobs.
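The web-log parsing mentioned in the responsibilities above can be sketched as the mapper side of such a job, here in plain Scala. This is a hedged illustration assuming the Apache common log format; the `LogRecord` type, the regex, and all field names are hypothetical, not taken from the actual project:

```scala
object WebLogParser {
  // Illustrative record type for one parsed log entry.
  final case class LogRecord(ip: String, path: String, status: Int)

  // Regex for the Apache common log format, e.g.
  // 127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
  private val LogLine =
    """^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|PUT|DELETE|HEAD) (\S+) [^"]*" (\d{3}) \S+.*$""".r

  // Mapper logic: turn a raw line into a record, skipping malformed lines
  // (a real MapReduce job would emit these records as key/value pairs).
  def parse(line: String): Option[LogRecord] = line match {
    case LogLine(ip, path, status) => Some(LogRecord(ip, path, status.toInt))
    case _                         => None
  }
}
```

Returning `Option` rather than throwing keeps the job robust to the malformed lines that are common in production web logs; unparseable input is simply dropped from the output.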