Madhusudhan Senior Data Engineer
Professional Summary
11+ years of professional experience as a Big Data Engineer, with a strong emphasis on the design, development,
implementation, migration, and deployment of data pipelines.
6+ years of solid experience with cloud platforms and migrating applications to the cloud
Good experience in the healthcare and banking domains
Professional experience in Big Data development with Hadoop ecosystem components such as HDFS,
Spark, and Hive on the Cloudera, Hortonworks, and MapR platforms
Experience working with AWS CodePipeline, AWS Data Pipeline, AWS Glue, RDS, EC2, Lambda,
SQS, SNS, IAM, CloudWatch metrics, Redshift, and S3
Experience in the extraction, transformation, and loading of data from multiple sources into target
databases using Azure Databricks, SQL Server, and Oracle
Good knowledge of GCP data engineering, including the creation of data pipelines and the migration of data
from on-premises systems to GCP
Experience building end-to-end data pipelines in PySpark/Scala (Spark) for ingesting data from OLTP
systems into a Hadoop data lake, covering metadata capture, change data capture, email alerts, and job
failure and recovery.
Strong exposure to the SQL, Python, and Scala languages.
Experience working in Agile (Scrum/Sprint) and Waterfall methodologies
Academic Profile
Bachelor of Technology in Information Technology from Vignana Bharathi Institute of Technology (JNTU),
2008 – 2012
Job Profile
Worked as Software Associate for Centaur Infotech Solutions from July 2012 to Oct 2014
Worked as Associate for Cognizant Technology Solutions from Oct 2014 to May 2018
Worked as Senior Consultant for Capgemini Technology Solutions from May 2018 to April 2020
Currently working as Senior Data Engineer for Wells Fargo India since May 2020
Skills Profile
Professional Experience
(Duration) May 2018 – April 2020
Responsibilities:
Responsible for developing and deploying ETL solutions on Hadoop and the AWS cloud platform
Created a data ingestion framework using Spark
Ensured data consistency and accuracy through data validation
Performed capacity planning and estimated requirements for lowering or increasing the capacity of the Hadoop
cluster
Responsible for creating and maintaining comprehensive documentation of Hadoop configuration, Python
code, Kafka setup, and data processing workflows
Professional Experience
Description:
The project builds an innovative technology ecosystem with the processing capability and capacity to ingest and
process high volumes of data into a production-ready data store. Customers expect a definitive database of all
commercially trading entities to be available in the market.
Responsibilities:
Worked with the engagement team to understand requirements and performed impact analysis for them.
Implemented Pig scripts according to business rules
Converted Pig scripts into Cascading programs for processing the data
Integrated HBase with Cascading for storing structured data
Integrated Hive with HBase for reporting purposes
Automated and scheduled the entire process using Oozie
Implemented JUnit tests for the Cascading modules
Validated source files from the client using Unix shell scripts
Professional Experience
(Client) Canary.
(Role) Java Developer
Description:
Health Care Med is a web application developed in Java/J2EE. The web portal can be accessed globally by
users and doctors. The portal offers the facility to book an appointment with a doctor, and all
registered users can access a medical dictionary comprising Herbal and Ayurveda entries. The portal also offers the
facility to view lab test results. All appointments are stored in the database, which can be accessed by the
doctor through the web portal.
Responsibilities:
Nationality: Indian
(Guggila Madhusudhan)