Resume Mohit
DATA ENGINEER | Data Migration | Regulatory Reporting | Power BI | PySpark | DBT | SQL | ETL | AWS
Phone: (+65) 8422 9652
Email: mohittamrakar603@gmail.com
Current Role: Data Engineer
PROFESSIONAL SUMMARY
11 years of experience in designing, developing, and maintaining large business applications, covering data migration, integration, conversion, and testing.
Designed and developed Spark applications to implement complex data transformations and aggregations for batch processing jobs, leveraging Spark SQL and DataFrames.
Extensive experience with AWS cloud services, including Amazon Redshift, Amazon S3, and AWS Glue.
Implemented scalable and cost-effective data solutions on AWS, ensuring optimal performance for analytics
workloads.
Proficient in using DBT to build, document, and maintain data models, enabling efficient analysis and reporting.
Developed and managed DBT projects, incorporating best practices for data transformation and modeling.
4 years of work experience with Power BI.
Proficient in developing and implementing Spark RDD-based data processing workflows using Scala, Java, or
Python programming languages.
Optimized Spark jobs and data processing workflows for scalability, performance, and cost efficiency using techniques such as partitioning, compression, and caching.
3 years of work experience with SSIS, SSAS, DQS, SSRS, and Crystal Reports.
TECHNICAL SKILLS
Data Ecosystem : DBT, Hadoop, Sqoop, Hive, Apache Spark, Airflow
Cloud Skills : AWS, Azure
Distribution : Cloudera 5.12
Databases : Oracle, PL/SQL, MS SQL Server, Redshift
Languages : Scala, Python
Operating Systems : Linux, CentOS, and Windows
Tools : SQL Server Integration Services (SSIS), IBM DataStage, Power BI, DBT (data build tool), Azure Data Factory, Airflow
PROFESSIONAL EXPERIENCE
Responsibilities
Collaborate with data architects and analysts to design and implement data models that support efficient data retrieval and analysis.
Proficient in using DBT to build, document, and maintain data models, enabling efficient analysis and
reporting.
Developed and managed DBT projects, incorporating best practices for data transformation and modeling.
Extensive experience with AWS cloud services, including Amazon Redshift, Amazon S3, and Amazon
Athena.
Implemented scalable and cost-effective data solutions on AWS, ensuring optimal performance for analytics workloads.
Designed and implemented complex data models using DBT, addressing business requirements and ensuring data accuracy.
Utilized DBT's transformation capabilities to reshape raw data into meaningful insights, facilitating
data-driven decision-making.
Designed and developed Spark applications to implement complex data transformations and aggregations for batch processing jobs, leveraging Spark SQL and DataFrames.
Optimized Spark jobs and data processing workflows for scalability, performance, and cost efficiency
using techniques such as partitioning, compression, and caching.
Expertise in using Spark serialization and compression techniques, such as block-level compression,
dictionary encoding, and off-heap storage, to reduce data storage and processing overhead.
Ability to troubleshoot common issues with Spark SQL, such as data processing errors, performance
bottlenecks, and scalability limitations.
Proficient in handling Hive partitions and buckets according to business requirements.
Technologies: Hadoop, HDFS, Hive, Sqoop, MapReduce, Spark SQL, Scala
Project Name: OneSumX – WKFS (ABN AMRO Bank), Dec 2016 – Sept 2019
Project Role: ETL Developer
Technologies: PL/SQL, SQL Server, SQL Server Integration Services (SSIS), Power BI, SSRS, Crystal Reports
Responsibilities
Performance Tuning: Identify and resolve performance bottlenecks in SQL queries and PL/SQL code
by analyzing execution plans and making necessary optimizations.
Data Modeling: Create and maintain data models using tools like ER diagrams to ensure data accuracy and integrity.
Data Migration: Develop scripts and procedures for data migration and transformation between databases or systems.
Database Design: Collaborate with database architects and other stakeholders to design and plan database structures, including tables, views, indexes, and relationships.
PL/SQL Coding: Write, test, and optimize PL/SQL code to implement business logic, stored procedures,
functions, and triggers in Oracle databases.
Responsibilities
Database Design: Collaborate with database architects and other stakeholders to design and plan database structures, including tables, views, indexes, and relationships.
PL/SQL Coding: Write, test, and optimize PL/SQL code to implement business logic, stored procedures,
functions, and triggers in Oracle databases.
Performance Tuning: Identify and resolve performance bottlenecks in SQL queries and PL/SQL code
by analyzing execution plans and making necessary optimizations.
Data Modeling: Create and maintain data models using tools like ER diagrams to ensure data accuracy
and integrity.
Security: Implement and maintain database security measures, including user access control, role management, and data encryption.
Documentation: Create and maintain documentation for database schemas, code, and processes to ensure knowledge transfer and compliance with company standards.
Collaboration: Collaborate with other developers, database administrators, and system administrators
to troubleshoot issues and ensure seamless integration with other systems.
Version Control: Use version control systems (e.g., Git) to manage and track changes to database objects and code.
Testing: Develop and execute test plans and procedures to ensure the reliability and functionality of
database applications.
Performance Monitoring: Monitor database performance using tools like Oracle Enterprise Manager
or custom scripts to proactively identify and address issues.
Query Optimization: Optimize SQL queries and PL/SQL code for better performance and resource utilization.
Technologies and Tools: PL/SQL Developer, PuTTY, Toad, Eclipse, Core Java
WORK EXPERIENCE
EDUCATION