
Mohit Tamrakar

DATA ENGINEER | Data Migration | Regulatory Reporting | Power BI | PySpark | DBT | SQL | ETL | AWS
Phone: (+65)-84229652
mohittamrakar603@gmail.com
Current Role: Data Engineer

PROFESSIONAL SUMMARY

 11 years of experience in designing, developing, and maintaining large business applications, covering data migration, integration, conversion, and testing.
 Designed and developed Spark applications to implement complex data transformations and aggregations for batch processing jobs, leveraging Spark SQL and DataFrames (see the sketch after this summary).
 Extensive experience with AWS cloud services, including Amazon Redshift, Amazon S3, and AWS Glue.
 Implemented scalable and cost-effective data solutions on AWS, ensuring optimal performance for analytics workloads.
 Proficient in using DBT to build, document, and maintain data models, enabling efficient analysis and reporting.
 Developed and managed DBT projects, incorporating best practices for data transformation and modeling.
 4 years of work experience with Power BI.
 Proficient in developing and implementing Spark RDD-based data processing workflows using Scala, Java, or Python.
 Optimized Spark jobs and data processing workflows for scalability, performance, and cost efficiency using techniques such as partitioning, compression, and caching.
 3 years of work experience with SSIS, SSAS, DQS, SSRS, and Crystal Reports.
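
As an illustration of the Spark SQL and DataFrame work summarized above, the following is a minimal PySpark batch-aggregation sketch. The S3 paths and the sales/region/amount/order_date names are hypothetical placeholders, not details from any specific project.

# Minimal PySpark batch-aggregation sketch (illustrative only).
# The S3 paths and the sales/region/amount/order_date names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_sales_rollup").getOrCreate()

# Cache the source because it feeds two independent aggregations below.
sales = spark.read.parquet("s3://example-bucket/raw/sales/").cache()

# DataFrame API: total amount and distinct-order counts per region.
by_region = (
    sales.filter(F.col("amount") > 0)
         .groupBy("region")
         .agg(F.sum("amount").alias("total_amount"),
              F.countDistinct("order_id").alias("orders"))
)

# The same style of aggregation expressed in Spark SQL via a temp view.
sales.createOrReplaceTempView("sales")
by_day = spark.sql("""
    SELECT order_date, SUM(amount) AS total_amount
    FROM sales
    GROUP BY order_date
""")

# Partitioned, compressed output keeps downstream scans selective and small.
(by_day.write.mode("overwrite")
       .partitionBy("order_date")
       .option("compression", "snappy")
       .parquet("s3://example-bucket/curated/sales_by_day/"))

Caching the source DataFrame avoids re-reading it for the second aggregation, and the partitioned Parquet output is what makes later date-filtered queries cheap.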

TECHNICAL SKILLS

Data Ecosystem : DBT, Hadoop, Sqoop, Hive, Apache Spark, Airflow
Cloud Skills : AWS, Azure
Distribution : Cloudera 5.12
Databases : Oracle, PL/SQL, MS SQL Server, Redshift
Languages : Scala, Python
Operating Systems : Linux, CentOS, and Windows
Tools : SQL Server Integration Services (SSIS), IBM DataStage, Power BI,
DBT (data build tool), Azure Data Factory, Airflow
PROFESSIONAL EXPERIENCE

Project Name: Project Genie Data Migration    Aug 2023 – Present

Project Role: Data Engineer

Responsibilities
 Collaborate with data architects and analysts to design and implement data models that support efficient data retrieval and analysis.
 Proficient in using DBT to build, document, and maintain data models, enabling efficient analysis and reporting.
 Developed and managed DBT projects, incorporating best practices for data transformation and modeling (an illustrative orchestration sketch follows the technologies line below).
 Extensive experience with AWS cloud services, including Amazon Redshift, Amazon S3, and Amazon Athena.
 Implemented scalable and cost-effective data solutions on AWS, ensuring optimal performance for analytics workloads.
 Designed and implemented complex data models using DBT, addressing business requirements and ensuring data accuracy.
 Utilized DBT's transformation capabilities to reshape raw data into meaningful insights, facilitating data-driven decision-making.
 Designed and developed Spark applications to implement complex data transformations and aggregations for batch processing jobs, leveraging Spark SQL and DataFrames.
 Optimized Spark jobs and data processing workflows for scalability, performance, and cost efficiency using techniques such as partitioning, compression, and caching.
 Expertise in using Spark serialization and compression techniques, such as block-level compression, dictionary encoding, and off-heap storage, to reduce data storage and processing overhead.
 Ability to troubleshoot common issues with Spark SQL, such as data processing errors, performance bottlenecks, and scalability limitations.
 Proficient in handling Hive partitions and buckets with respect to the business requirements.

Technologies: AWS, DBT, Spark SQL, Airflow, PySpark, Python
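
The DBT and Airflow work above can be orchestrated along the lines of the following sketch: a minimal Airflow DAG that builds the dbt models and then runs the project's tests. The DAG id, schedule, project path, and "prod" target are illustrative assumptions, not the project's actual configuration.

# Hypothetical Airflow DAG orchestrating dbt; the dag_id, schedule,
# project path, and target name are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Build the models first, then run the project's schema and data tests.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )
    dbt_run >> dbt_test

Chaining dbt test after dbt run means a failed schema or data test stops the pipeline before downstream consumers read invalid models.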

Project Name: APAC Data Warehouse    Oct 2019 – July 2023


Project Role: Data Engineer

Responsibilities:

 Experienced in ETL (Extract, Transform, Load) testing methodologies and processes.
 Proficient in testing data extraction processes from various sources, including databases, files, and APIs.
 Skilled in validating and verifying data transformation rules and business logic applied during ETL processes.
 Strong understanding of data warehouse concepts and of testing data loads into data warehouse systems.
 Experienced in importing and exporting large datasets between Hadoop and relational databases using Sqoop.
 Proficient in writing Sqoop commands to transfer data between Hadoop and databases such as MySQL and SQL Server.
 Skilled in configuring Sqoop jobs for incremental data transfers using Sqoop's incremental import feature.
 Proficient in performing data validation and cleansing during data transfer using Sqoop's validation and cleansing options.
 Adept at scheduling and automating Sqoop jobs for incremental runs.
 Expertise in querying Hive tables using SQL-like syntax and performing data analysis using tools like Apache Spark.
 Skilled in integrating Hive tables with other big data technologies, such as Hadoop, HBase, and Impala.
 Familiarity with the Hive metastore and its role in managing table metadata and schema evolution.
 Knowledge of Hive table formats, including ORC, Parquet, and Avro, and their advantages and disadvantages for different use cases (see the partitioning sketch after this list).
 Proficient in developing and implementing Spark RDD-based data processing workflows using Scala or Python.
 Experienced in optimizing Spark RDD performance by tuning configuration settings such as memory allocation, caching, and serialization.
 Expertise in using Spark RDD transformations and actions to process large-scale structured and unstructured data sets, including filtering, mapping, reducing, grouping, and aggregating data (see the RDD sketch after the technologies line).
 Skilled in using Spark RDD persistence and caching mechanisms to reduce data processing overhead and improve query performance.
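
As referenced in the Hive bullets above, here is a hedged sketch of creating and loading a partitioned, bucketed Hive table through Spark SQL. The database, table, and column names and the bucket count are illustrative assumptions.

# Sketch of a partitioned, bucketed Hive table managed through Spark SQL.
# Database, table, and column names and the bucket count are assumptions.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive_layout_demo")
         .enableHiveSupport()
         .getOrCreate())

# Partitioning by ingest_date prunes whole directories at query time;
# bucketing by customer_id reduces shuffle for joins on that key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dw.transactions (
        txn_id      BIGINT,
        customer_id BIGINT,
        amount      DECIMAL(18, 2)
    )
    PARTITIONED BY (ingest_date DATE)
    CLUSTERED BY (customer_id) INTO 32 BUCKETS
    STORED AS ORC
""")

# Dynamic-partition insert: each ingest_date value lands in its own partition.
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
spark.sql("""
    INSERT INTO dw.transactions PARTITION (ingest_date)
    SELECT txn_id, customer_id, amount, ingest_date
    FROM staging.transactions
""")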

Technologies: Hadoop, HDFS, Hive, Sqoop, MapReduce, Spark SQL, Scala
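
And for the RDD bullets above, a minimal sketch of an RDD workflow with filtering, mapping, keyed aggregation, and explicit persistence. The HDFS path and record layout are assumptions.

# Minimal Spark RDD sketch: filter/map/reduceByKey plus explicit persistence.
# The HDFS path and CSV layout (user_id,event,bytes) are assumptions.
from pyspark import SparkContext, StorageLevel

sc = SparkContext(appName="rdd_workflow_demo")

lines = sc.textFile("hdfs:///data/events/*.csv")

# Parse records, drop malformed rows, and key each record by user.
pairs = (lines.map(lambda line: line.split(","))
              .filter(lambda p: len(p) == 3 and p[2].isdigit())
              .map(lambda p: (p[0], int(p[2]))))

# Persist because the keyed RDD feeds two separate actions below.
pairs.persist(StorageLevel.MEMORY_AND_DISK)

total_bytes = pairs.reduceByKey(lambda a, b: a + b)  # bytes per user
event_counts = pairs.mapValues(lambda _: 1).reduceByKey(lambda a, b: a + b)

print(total_bytes.take(5))
print(event_counts.take(5))
sc.stop()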

Project Name: One-SUMX – WKFS (ABN AMRO Bank)    Dec 2016 – Sept 2019
Project Role: ETL Developer

Responsibilities

 Regulatory Reporting Development:
o Develop, maintain, and optimize PL/SQL code and stored procedures for generating regulatory reports required by financial authorities and regulatory bodies.
 Data Extraction and Transformation:
o Extract relevant financial data from the One-SUMX system's database and other sources.
o Transform and aggregate data to meet the specific formatting and calculation requirements of regulatory reports.
 Report Generation:
o Design and implement PL/SQL processes for generating accurate and timely regulatory reports, ensuring compliance with regulatory guidelines and deadlines.
 Data Validation and Quality Assurance:
o Implement data validation checks and quality assurance processes to verify the accuracy and completeness of data in regulatory reports (a simplified sketch follows the technologies line below).
o Collaborate with business analysts and subject matter experts to ensure data correctness.
 Regulatory Compliance:
o Stay informed about relevant financial regulations, such as Basel III, Dodd-Frank, and IFRS, and ensure that regulatory reports adhere to these standards.
o Implement and maintain changes in reporting requirements as regulations evolve.
 Data Security and Confidentiality:
o Implement and maintain data security measures to protect sensitive financial information in regulatory reports.
o Ensure compliance with data privacy and confidentiality regulations.
 Performance Optimization:
o Optimize SQL queries and PL/SQL code to enhance the performance of data extraction, transformation, and reporting processes.
o Monitor and fine-tune the reporting system for efficiency.
 Audit Support:
o Assist with internal and external audits by providing necessary documentation and ensuring the accuracy and completeness of regulatory reports.
 Continuous Improvement:
o Identify opportunities to enhance the efficiency and accuracy of regulatory reporting processes.
o Implement best practices for regulatory reporting within the One-SUMX system.
 Training and Knowledge Sharing:
o Share knowledge and expertise with team members, including junior developers, to ensure the effective development and maintenance of regulatory reports.
 Adherence to Deadlines:
o Ensure that regulatory reports are generated and submitted within required timeframes to meet compliance deadlines.

Technologies: PL/SQL, SQL Server, SQL Server Integration Services, Power BI, SSRS, Crystal Reports
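
As a deliberately simplified illustration of the validation checks described above, the sketch below runs a completeness check from Python before a report is released. The connection details and the staging/mart table names are hypothetical, and the production checks were implemented in PL/SQL, so this is a stand-in showing the pattern, not the actual code.

# Hedged sketch of a completeness check gating a regulatory report.
# Connection details and the staging/mart table names are hypothetical;
# the production checks were implemented in PL/SQL, not Python.
import datetime
import oracledb

conn = oracledb.connect(user="rpt_user", password="***", dsn="dbhost/ONESUMX")
cur = conn.cursor()

def row_count(table: str, as_of: datetime.date) -> int:
    # Table names come from a fixed whitelist; the date is a bind variable.
    cur.execute(f"SELECT COUNT(*) FROM {table} WHERE report_date = :d",
                d=as_of)
    return cur.fetchone()[0]

as_of = datetime.date(2019, 6, 30)
src = row_count("stg_positions", as_of)
tgt = row_count("reg_positions", as_of)

# Any mismatch blocks submission and is escalated for investigation.
if src != tgt:
    raise ValueError(f"Completeness check failed: staging={src}, mart={tgt}")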

Project Name: Generali Insurance    Apr 2016 – Dec 2016

Project Role: Database Developer

Responsibilities

 Performance Tuning: Identify and resolve performance bottlenecks in SQL queries and PL/SQL code by analyzing execution plans and making necessary optimizations.
 Data Modeling: Create and maintain data models using tools like ER diagrams to ensure data accuracy and integrity.
 Data Migration: Develop scripts and procedures for data migration and transformation between databases or systems.
 Database Design: Collaborate with database architects and other stakeholders to design and plan database structures, including tables, views, indexes, and relationships.
 PL/SQL Coding: Write, test, and optimize PL/SQL code to implement business logic, stored procedures, functions, and triggers in Oracle databases.

Technologies: PL/SQL Developer, PuTTY, Toad, Eclipse, Core Java

Project Name: TMF Web-Apps (TATA MOTORS)    May 2013 – Mar 2016

Project Role: PL-SQL Developer and Team Lead

Responsibilities

 Database Design: Collaborate with database architects and other stakeholders to design and plan database structures, including tables, views, indexes, and relationships.
 PL/SQL Coding: Write, test, and optimize PL/SQL code to implement business logic, stored procedures, functions, and triggers in Oracle databases.
 Performance Tuning: Identify and resolve performance bottlenecks in SQL queries and PL/SQL code by analyzing execution plans and making necessary optimizations.
 Data Modeling: Create and maintain data models using tools like ER diagrams to ensure data accuracy and integrity.
 Security: Implement and maintain database security measures, including user access control, role management, and data encryption.
 Documentation: Create and maintain documentation for database schemas, code, and processes to ensure knowledge transfer and compliance with company standards.
 Collaboration: Collaborate with other developers, database administrators, and system administrators to troubleshoot issues and ensure seamless integration with other systems.
 Version Control: Use version control systems (e.g., Git) to manage and track changes to database objects and code.
 Testing: Develop and execute test plans and procedures to ensure the reliability and functionality of database applications.
 Performance Monitoring: Monitor database performance using tools like Oracle Enterprise Manager or custom scripts to proactively identify and address issues.
 Query Optimization: Optimize SQL queries and PL/SQL code for better performance and resource utilization (see the execution-plan sketch after the technologies line).

Technologies and Tools: PL/SQL Developer, PuTTY, Toad, Eclipse, Core Java
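
For the performance-tuning and query-optimization bullets above, here is a small hedged sketch of pulling an Oracle execution plan from Python via EXPLAIN PLAN and DBMS_XPLAN. The connection details and the sample query and table are illustrative only; tools named in the project (PL/SQL Developer, Toad) expose the same plan interactively.

# Hypothetical sketch: fetching an Oracle execution plan from Python.
# Connection details and the sample query/table are illustrative only.
import oracledb

conn = oracledb.connect(user="dev", password="***", dsn="dbhost/TMFDB")
cur = conn.cursor()

# Ask the optimizer to explain the statement without executing it.
cur.execute("EXPLAIN PLAN FOR SELECT * FROM loans WHERE status = 'OPEN'")

# DBMS_XPLAN renders the plan table as readable text, one row per line.
cur.execute("SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY())")
for (line,) in cur.fetchall():
    print(line)

conn.close()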
WORK EXPERIENCE

 TATA Consultancy Services: Mar 2013 – Present

EDUCATION

Institute/College                                                                      Duration    Percentage Obtained
Bachelor of Engineering, Gyan Ganga Institute of Technology & Sciences, Jabalpur (MP)  2008-2012   70%
Govt. Higher Secondary School, Barhi, Dist. Katni (MP)                                 2008        77.55%
Saraswati High School, Barhi, Dist. Katni (MP)                                         2006        83.6%
