Madhusudhan Guggila - Senior Data Engineer


Madhusudhan Guggila | +91-7993972284 | madhu.guggila@gmail.com
Professional Summary

• 11+ years of professional experience as a Big Data Engineer, with a strong emphasis on design, development, implementation, migration and deployment of data pipelines.
• 6+ years of solid experience with cloud platforms and with migrating applications to the cloud.
• Good experience in the healthcare and banking domains.
• Professional experience in Big Data development with the Hadoop ecosystem (HDFS, Spark, Hive) on the Cloudera, Hortonworks and MapR platforms.
• Experience working with AWS CodePipeline, AWS Data Pipeline, AWS Glue, RDS, EC2, Lambda, SQS, SNS, IAM, CloudWatch metrics, Redshift and S3.
• Experience in extraction, transformation and loading of data from multiple sources into target databases using Azure Databricks, SQL Server and Oracle.
• Good knowledge of data engineering on GCP, including creation of data pipelines and migration of data from on-premises systems to GCP.
• Experience in building end-to-end data pipelines using PySpark/Scala (Spark) to ingest data from OLTP systems into a Hadoop data lake, covering metadata capture, change data capture, email alerts, and job failure and recovery (a minimal sketch follows this list).
• Strong exposure to SQL, Python and Scala.
• Experience working with Agile (Scrum/sprint) and Waterfall methodologies.
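For context on the pipeline bullet above, the minimal PySpark sketch below shows an incremental OLTP-to-data-lake load of the kind described: a small metadata table supplies the last watermark, a JDBC subquery pulls only changed rows, and a failure triggers an email alert. The JDBC URL, table names, the updated_at column and the SMTP details are illustrative assumptions, not details of any particular project.

import smtplib
from email.message import EmailMessage

from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("oltp_to_datalake_ingest")
         .enableHiveSupport()
         .getOrCreate())


def last_watermark(metadata_table: str, source_table: str) -> str:
    """Return the last successfully loaded watermark for a source table."""
    row = (spark.table(metadata_table)
           .filter(F.col("source_table") == source_table)
           .agg(F.max("watermark").alias("wm"))
           .first())
    return row["wm"] or "1970-01-01 00:00:00"


def ingest(source_table: str, target_table: str, metadata_table: str) -> None:
    wm = last_watermark(metadata_table, source_table)
    # Change data capture: pull only rows modified since the last watermark.
    changed = (spark.read.format("jdbc")
               .option("url", "jdbc:oracle:thin:@//oltp-host:1521/ORCL")  # assumed source
               .option("dbtable",
                       f"(SELECT * FROM {source_table} "
                       f"WHERE updated_at > TIMESTAMP '{wm}') t")
               .option("user", "etl_user")
               .option("password", "***")
               .load())
    changed.write.mode("append").format("parquet").saveAsTable(target_table)

    # Metadata capture: record the new watermark for the next incremental run.
    new_wm = changed.agg(F.max("updated_at").alias("wm")).first()["wm"]
    if new_wm is not None:
        (spark.createDataFrame([(source_table, str(new_wm))],
                               ["source_table", "watermark"])
              .write.mode("append").saveAsTable(metadata_table))


def alert(subject: str, body: str) -> None:
    """Send a plain-text failure alert; SMTP host and addresses are placeholders."""
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "etl@example.com", "team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)


if __name__ == "__main__":
    try:
        ingest("SALES.ORDERS", "datalake.orders", "datalake.ingest_metadata")
    except Exception as exc:  # job failure: send an email alert, then let the scheduler retry
        alert("Ingest failure: SALES.ORDERS", str(exc))
        raise

Re-running a failed job is safe in this sketch because each run resumes from the last watermark recorded in the metadata table.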

Academic Profile

• Bachelor of Technology in Information Technology from Vignana Bharathi Institute of Technology (JNTU), 2008 – 2012.

Job Profile

• Worked as a Software Associate for Centaur Infotech Solutions from July 2012 to Oct 2014.
• Worked as an Associate for Cognizant Technology Solutions from Oct 2014 to May 2018.
• Worked as a Senior Consultant for Capgemini Technology Solutions from May 2018 to April 2020.
• Currently working as a Senior Data Engineer for Wells Fargo India, since May 2020.

Skills Profile

Database: MySQL, Oracle, SQL Server
Big Data Technologies: Hadoop, Hive, Sqoop, HBase, Spark, Hortonworks, Cloudera, MapR
Languages: Scala, Unix shell, Python, SQL, Java
IDEs: Eclipse, Visual Studio, IntelliJ, PyCharm, Jupyter Notebook
Version Control: TFS, Git
AWS: EMR, Athena, S3, RDS, Glue, EC2, Redshift, DynamoDB, Lambda
Azure: Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Azure SQL, Azure Storage Gen2
GCP: BigQuery, Cloud Storage, Bigtable, Dataproc, Cloud SQL, Data Fusion
Professional Experience

Duration: May 2020 – Present
Company: Wells Fargo India
Role: Senior Cloud Engineer / Lead


Description:
A multinational bank maintains day-to-day transaction data related to consumer lending, commercial data and credit card reports. A data lake is needed to store and analyze this data and to send it to downstream partners for generating insights and further reports.
Responsibilities:
• Building large-scale ETL batch and real-time data pipelines with data processing frameworks on Google Cloud Platform (a minimal sketch follows this list).
• Migrating data from the on-premises data center to Google Cloud Platform.
• Implemented data optimization techniques to improve efficiency and reduce data latency, complexity and redundancy.
• Implemented data strategies and developed physical and logical data models.
• Provided technical guidance to the team, including code reviews, mentoring and helping team members troubleshoot technical issues.
• Coordinated with data scientists and business analysts for data preparation and analysis.
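The sketch below illustrates the shape of one such batch pipeline, assuming a Dataproc cluster with the spark-bigquery connector available on the classpath; the bucket, dataset, table and column names are placeholders rather than actual project details.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("onprem_extract_to_bq").getOrCreate()

# Files landed in Cloud Storage by the on-premises extract process (assumed CSV layout).
raw = (spark.read
       .option("header", "true")
       .csv("gs://example-landing-bucket/consumer_lending/2024-01-01/*.csv"))

# Light conformance: typed columns, de-duplication, load timestamp.
conformed = (raw
             .withColumn("txn_amount", F.col("txn_amount").cast("decimal(18,2)"))
             .withColumn("txn_date", F.to_date("txn_date", "yyyy-MM-dd"))
             .dropDuplicates(["txn_id"])
             .withColumn("load_ts", F.current_timestamp()))

# Write to BigQuery through the spark-bigquery connector.
(conformed.write
 .format("bigquery")
 .option("table", "example_project.consumer_lending.transactions")
 .option("temporaryGcsBucket", "example-temp-bucket")
 .mode("append")
 .save())

Downstream partners can then query the BigQuery table directly, or curated extracts can be exported for further reporting.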

Professional Experience
Duration: May 2018 – April 2020
Client: Johnson & Johnson
Role: Cloud Big Data Developer / Senior Consultant
Description:
The client maintains policy and claims transactions of customers based on everyday transactions in legacy systems. DB2 dump files are the source. Data is transferred from the dump files into the corresponding tables in the Hadoop system without any change to its nature or structure.

Responsibilities:

• Responsible for developing and deploying ETL solutions on Hadoop and the AWS cloud platform.
• Created a data ingestion framework using Spark (a minimal sketch follows this list).
• Ensured data consistency and accuracy through data validation.
• Performed capacity planning and estimated requirements for lowering or increasing the capacity of the Hadoop cluster.
• Responsible for creating and maintaining comprehensive documentation of the Hadoop configuration, Python code, Kafka setup and data processing workflows.
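The PySpark sketch below shows the general shape of such an ingestion step with a basic validation gate before the data is published; the S3 paths, delimiter, column names and validation rules are assumed examples, not specifics of the actual framework.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("db2_dump_ingest").enableHiveSupport().getOrCreate()

# DB2 dump files landed on S3 as delimited text (layout assumed).
claims = (spark.read
          .option("header", "true")
          .option("delimiter", "|")
          .csv("s3://example-landing/db2/claims/"))

# Basic validation before publishing: non-empty load and no null business keys.
total = claims.count()
null_keys = claims.filter(F.col("claim_id").isNull()).count()
if total == 0 or null_keys > 0:
    raise ValueError(f"Validation failed: rows={total}, null claim_id rows={null_keys}")

# Publish 1:1 into the corresponding Hive table, preserving the source structure.
claims.write.mode("overwrite").format("parquet").saveAsTable("claims_db.claims_raw")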

Professional Experience

Duration: Oct 2014 – April 2018
Client: UnitedHealth Group Inc.
Role: Big Data Developer

Description:
The objective was to build an innovative technology ecosystem with the processing capability and capacity to ingest and process high volumes of data into a production-ready data store. Customers expect the definitive database of all commercially trading entities to be available in the market.

Responsibilities:

• Worked with the engagement team to understand the requirements and performed impact analysis for the same.
• Implemented Pig scripts according to business rules.
• Converted Pig scripts into Cascading programs for processing the data.
• Integrated HBase with Cascading for storing the structured data.
• Integrated Hive with HBase for reporting purposes.
• Automated and scheduled the entire process using Oozie.
• Implemented JUnit tests for the Cascading modules.
• Validated the source files from the client using Unix scripts.

Professional Experience

Duration: July 2012 – Oct 2014
Client: Canary
Role: Java Developer

Description:
Health Care Med is a web application developed in Java/J2EE. The web portal can be accessed globally by users and doctors. It provides the facility to book an appointment with a doctor, and all registered users can access a medical dictionary comprising Herbal and Ayurveda entries. The portal also provides the facility to view lab test results. All appointments are stored in the database, which doctors can access through the web portal.

Responsibilities:

• Gathered information and analyzed the requirements.
• Designed and developed the application as per client requirements.
• Wrote business logic for validations.
• Worked as one of the developers on the team.
• Worked on client-side and server-side validations using JSP and Servlets.
Personal Details:
Date of Birth: 23-05-1989

Languages Known: English, Hindi, Telugu

Nationality: Indian

(Guggila Madhusudhan)
