
Ajay Kadiyala

Bengaluru, 560029, India
📱 +91-9542380365 | 📧 kadiyalaajay8@gmail.com
LinkedIn: https://www.linkedin.com/in/ajay026/
GitHub: https://github.com/Ajay026

Profile Summary:
• 6+ years of overall IT experience in application development.
• Working experience with the Hadoop ecosystem (Gen-1 and Gen-2) and its components: HDFS,
JobTracker, TaskTracker, NameNode, DataNode, and the YARN ResourceManager.
• Experience with the Cloudera distribution, including MapReduce, Spark, Spark SQL, Hive, HBase,
Sqoop, and PySpark.
• Good skills with the HBase NoSQL database.
• Knowledge of data warehousing concepts, OLTP/OLAP system analysis, and designing database
schemas such as star and snowflake schemas for relational and dimensional modelling.
• Good hands-on experience creating custom UDFs in Hive.
• Loaded and transformed large sets of structured, semi-structured, and unstructured data between
relational database systems and HDFS using Sqoop.
• Good experience with the architecture and components of Spark; efficient with Spark Core,
DataFrames, Datasets, the RDD API, Spark SQL, and Spark Streaming; expertise in building PySpark
and Spark-Scala applications for interactive analysis, batch processing, and stream processing
(see the sketch after this list).
• Hands-on experience with Spark, Scala, Spark SQL, and HiveContext for data processing.
• Experience with the Azure cloud: ADF, ADLS, Blob Storage, Databricks, Synapse, CI/CD pipelines, etc.
• Extensive working experience with Agile development methodology, and working knowledge of Linux.
• Expertise in working with big data distributions like Cloudera and Hortonworks.
• Experience with Hive optimization techniques such as partitioning and bucketing.
• Experience tuning and debugging Spark applications and applying Spark optimization techniques.
• Expertise in developing batch data processing applications using Spark, Hive and Sqoop.
• Experience in working with CSV, JSON, XML, ORC, Avro and Parquet file formats.
• Limited experience in creating and designing data ingest pipelines using technologies such as Kafka.
• Good knowledge of ETL methods for data extraction, transformation, and loading in
corporate-wide ETL solutions and data warehouse tools for reporting and data analysis.
• Basic experience implementing the Snowflake data warehouse.
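A minimal PySpark sketch of the Spark batch processing, custom UDF, and partitioning points above. This is illustrative only, not code from any project on this resume; the input path, column names, and table name are hypothetical.

# Minimal sketch: custom UDF plus a partitioned Hive-style write.
# Paths, columns, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

spark = (SparkSession.builder
         .appName("batch-processing-example")
         .enableHiveSupport()
         .getOrCreate())

# Custom UDF: normalize free-text city names before aggregation.
@udf(returnType=StringType())
def normalize_city(name):
    return name.strip().title() if name else None

events = spark.read.parquet("/data/raw/events/")  # hypothetical input

cleaned = (events
           .withColumn("city", normalize_city(col("city")))
           .filter(col("event_date").isNotNull()))

# Partitioning by event_date mirrors the Hive partitioning technique
# mentioned above: date-bounded queries then read only the partitions
# they need. Assumes a database named "analytics" already exists.
(cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("analytics.events_cleaned"))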
TECHNICAL SKILLS:
• Big Data Technologies: Hadoop, Spark, Hive, Sqoop, Kafka, PySpark, HBase, Spark with Scala,
Snowflake (basic).
• Cloud Technologies: Azure (Azure Storage, Azure Synapse, ADF, Azure Databricks), GCP (basics).
• Languages: Scala, Python, SQL
• Databases and Tools: Oracle, MySQL, SQL, NoSQL.
• Platforms: Windows, Linux.
• IDEs/Platforms: Eclipse, Cloudera, Hortonworks.
• Scheduling: Airflow.
• Project Management Tools: Jira, GitHub.

Certifications:
• Microsoft Azure Fundamentals (AZ-900).
• Microsoft Azure Data Fundamentals (DP-900).
• Microsoft Azure Data Scientist Associate (DP-100).
• Microsoft Power BI Data Analyst Associate (PL-300).
• Microsoft Azure Data Engineer Associate (DP-203).
• Microsoft Azure Developer Associate (AZ-204).
• SnowPro Core Certification (COF-C02).

PROFESSIONAL EXPERIENCE:

PricewaterhouseCoopers Pvt Ltd: Oct 2021 – Present


Client: PepsiCo (Remote)
Role: Big Data Consultant

Responsibilities:
• Responsible for data discovery, data mapping and data engineering activities.
• Created data mapping sheets in a standard format, mapping the Power BI dashboard data
received from the client to the actual datasets available in Blob Storage.
• Coordinated with different teams, including the Transformation, Data Engineering, RPA, and
Data Cleansing teams, to gather requirements.
• Ingested data from various sources such as Blob Storage, SharePoint, and raw XLSX files.
• Created gold-layer table schemas in ADLS Gen2, i.e., the Synapse layer.
• Created and maintained transformation logic in SSMS where required; monitored the config
entries for each dashboard and changed them as per requirements.
• Ran the ADF master pipeline to ingest the data, and created external tables by running a
script already built in Databricks.
• Finally, saved the output data to the Synapse gold-layer tables for the downstream automation process (see the sketch below).
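A minimal sketch of this kind of gold-layer load, assuming a Databricks notebook with credentials already configured for ADLS Gen2; the storage account, container paths, and table name are hypothetical placeholders, not the project's actual ones.

# Minimal sketch: transform silver data and publish it to a gold layer.
# Storage account, paths, and table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gold-layer-load").getOrCreate()

# Read the cleansed source data from the silver container (hypothetical path).
raw = spark.read.parquet("abfss://silver@examplestore.dfs.core.windows.net/dashboard_data/")

# Example transformation step: standardize a column name and stamp the load date.
gold = (raw
        .withColumnRenamed("cust_nm", "customer_name")
        .withColumn("load_date", F.current_date()))

# Write to the gold layer in ADLS Gen2 and register an external table
# over that location so it is queryable downstream (assumes a "gold" database).
gold_path = "abfss://gold@examplestore.dfs.core.windows.net/dashboard_data/"
gold.write.mode("overwrite").parquet(gold_path)
spark.sql(
    f"CREATE TABLE IF NOT EXISTS gold.dashboard_data USING PARQUET LOCATION '{gold_path}'"
)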

Client: Trimble (Remote)
Project: Migration
Role: Data Engineer

Responsibilities:
• Worked with structured data ingested into Azure File Storage.
• Created ETL pipelines in the SnapLogic tool to bring the data into the Azure Databricks workspace.
• Applied transformation logic to the data, including Spark SQL and PySpark operations.
• Applied optimization techniques, e.g., partitioning and broadcast joins (see the sketch after this list).
• Created an ETL pipeline to deliver the Databricks-transformed data to the target directory
for Salesforce.
• Analyzed the resulting data with Databricks.

• Tech Stack: ETL, Databricks, Azure, SnapLogic, Salesforce.
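A minimal sketch of the broadcast-join and partitioning optimizations mentioned above, assuming Databricks-mounted paths; the table, column, and path names are hypothetical.

# Minimal sketch: broadcast join of a small dimension table into a large
# fact table, then a partitioned write. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("migration-transform").getOrCreate()

orders = spark.read.parquet("/mnt/migration/orders/")    # large fact table
regions = spark.read.parquet("/mnt/migration/regions/")  # small dimension table

# Broadcasting the small table ships it to every executor, avoiding a
# shuffle of the large side of the join.
joined = orders.join(broadcast(regions), on="region_id", how="left")

# Partitioning the output by a frequently filtered column lets later
# reads prune whole partitions instead of scanning everything.
(joined.write
       .mode("overwrite")
       .partitionBy("region_id")
       .parquet("/mnt/migration/orders_enriched/"))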

Accenture Solutions Pvt Ltd: Feb 2019 – Oct 2021


D-VOIS Communications Pvt Ltd: Nov 2017 – Feb 2019

Side Projects:

• Link to Repository: https://github.com/Ajay026

Education:
• B.Tech (Electronics & Communication), Siddhartha Institute of Engineering and Technology,
Puttur (A.P.), India, 2017, First Division (60%).
• Diploma in Electronics and Communication Engineering, Govt. Polytechnic College,
Chandragiri (A.P.), India, 2014 (70%).
