Synopsis
Harshwardhan Singh
Reg. No.: 21BCAN178
Internshala is a dot com business with the heart of a dot org. It is a technology company on a mission to equip students with relevant skills and practical exposure through internships and trainings.
Internshala's offerings include Placement Guarantee Courses, ensuring students acquire the vital skills demanded by the current competitive job market. These courses are complemented by short-term online trainings meticulously crafted to address specific skill gaps and industry requirements.
Furthermore, Internshala plays a pivotal role in guiding students through their first steps into the
professional realm by facilitating internships and fresher job opportunities. This hands-on
experience is invaluable, providing students with the real-world exposure necessary for a
successful career launch.
In August 2016, Telangana's not-for-profit organisation, Telangana Academy for Skill and
Knowledge (TASK) partnered with Internshala to help students with internship resources and
career services.
On Internshala we can complete courses and gain certificates on completion, which help us when applying for a job.
INDEX
1. Title
2. Index
3. Declaration
4. Acknowledgement
5. About Training
6. About Internshala
7. Objectives
8. Data Science
9. My Learnings
10. Reason for choosing Data Science
11. Learning Outcome
12. Scope in Data Science
13. Results
1. ABOUT TRAINING
• NAME OF TRAINING: DATA SCIENCE
• HOSTING INSTITUTION: INTERNSHALA
• DATES: From 1st July 2021 to 12th August 2021
2. ABOUT INTERNSHALA
Internshala is an internship and online training platform based in Gurgaon, India. It was founded in 2011 by Sarvesh Agrawal, an IIT Madras alumnus. The site offers internship search and posting, as well as other career services such as counselling, cover-letter writing, resume building, and training programs for students.
3. OBJECTIVES
To explore, sort, and analyse large volumes of data from various sources in order to take advantage of them and reach conclusions that optimise business processes and support decision making.
Examples include machine maintenance (predictive maintenance), and, in the fields of marketing and sales, sales forecasting based on the weather.
4. DATA SCIENCE
Data Science is a multi-disciplinary subject that uses mathematics, statistics, and computer science to study and evaluate data. The key objective of Data Science is to extract valuable information for use in strategic decision making, product development, trend analysis, and forecasting.
Data Science concepts and processes are mostly derived from data engineering, statistics,
programming, social engineering, data warehousing, machine learning, and natural language
processing. The key techniques in use are data mining, big data analysis, data extraction and
data retrieval.
Data science is the field of study that combines domain expertise, programming skills, and knowledge of mathematics and statistics to extract meaningful insights from data. Data science practitioners apply machine learning algorithms to numbers, text, images, video, audio, and more to produce artificial intelligence (AI) systems that perform tasks which ordinarily require human intelligence. In turn, these systems generate insights which analysts and business users can translate into tangible business value.
5. MY LEARNINGS
1) INTRODUCTION TO DATA SCIENCE
• Overview & Terminologies in Data Science
• Applications of Data Science
Anomaly detection (fraud, disease, etc.)
Automation and decision-making (credit worthiness, etc.)
Classifications (classifying emails as “important” or “junk”)
Forecasting (sales, revenue, etc.)
Pattern detection (weather patterns, financial market patterns, etc.)
Recognition (facial, voice, text, etc.)
Recommendations (based on learned preferences, recommendation engines can refer you to movies, restaurants, and books you may like)
Social Media
1. Recommendation engines
2. Ad placement
3. Sentiment analysis
Deciding the right credit limit for credit card customers
Suggesting the right products in e-commerce companies
1. Recommendation systems
2. Past data searched
3. Discount price optimization
How do Google and other search engines know which results are more relevant for our search query?
1. They apply ML and Data Science
2. Fraud detection
3. Ad placement
4. Personalized search results
Python Introduction
Python is an interpreted, high-level, general-purpose programming language. It has efficient high-level data
structures and a simple but effective approach to object-oriented programming. Python’s elegant syntax and
dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid
application development in many areas on most platforms.
Why Python?
LISTS: A list is an ordered data structure with elements separated by commas and enclosed within square brackets [].
DICTIONARY: A dictionary is an unordered data structure with elements separated by commas and stored as key: value pairs, enclosed within curly braces {}.
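A minimal sketch of both structures (the variable names and values here are just illustrative):
# A list: ordered elements in square brackets
marks = [78, 92, 85, 64]
print(marks[0])  # access by position -> 78
# A dictionary: key: value pairs in curly braces
student = {'name': 'Harsh', 'marks': 92}
print(student['marks'])  # access by key -> 92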
Statistics
Descriptive Statistic
Mode
It is the number which occurs most frequently in the data series.
It is robust, and is generally not affected much by the addition of a couple of new values.
Code
import pandas as pd
data = pd.read_csv("Mode.csv")      # read data from csv file
data.head()                         # show the first five rows
mode_data = data['Subject'].mode()  # mode of the Subject column
print(mode_data)
Mean
It is the sum of all values divided by the number of values.
import pandas as pd
data = pd.read_csv("mean.csv")           # read data from csv file
data.head()                              # show the first five rows
mean_data = data['Overallmarks'].mean()  # mean of the Overallmarks column
print(mean_data)
Median
It is the middle value of the sorted data set.
import pandas as pd
data = pd.read_csv("data.csv")               # read data from csv file
data.head()                                  # show the first five rows
median_data = data['Overallmarks'].median()  # median of the Overallmarks column
print(median_data)
Types of variables
Continuous – takes continuous numeric values. E.g. marks
Categorical – has discrete values. E.g. gender
Ordinal – an ordered categorical variable. E.g. teacher feedback
Nominal – an unordered categorical variable. E.g. gender
Outliers
Any value which falls outside the range of the data is termed an outlier. E.g. 9700 instead of 97.
Reasons for Outliers
Typos – during collection. E.g. adding an extra zero by mistake.
Measurement error – outliers in the data due to a faulty measuring instrument.
Intentional error – errors which are induced intentionally. E.g. claiming a smaller amount of alcohol consumed than the actual amount.
Legit outliers – values which are not actually errors but are in the data for legitimate reasons. E.g. a CEO's salary might genuinely be high compared to other employees.
Interquartile Range (IQR)
It is the difference between the third quartile and the first quartile. It is robust to outliers.
Histograms
Histograms depict the underlying frequency of a set of discrete or continuous data that are measured on an
interval scale.
import pandas as pd
histogram = pd.read_csv("histogram.csv")  # read data from csv file
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(x='Overall Marks', data=histogram)
plt.show()
Inferential Statistics
Inferential statistics allows us to make inferences about the population from sample data.
Hypothesis Testing
Hypothesis testing is a kind of statistical inference that involves asking a question, collecting data, and then examining what the data tell us about how to proceed. The hypothesis to be tested is called the null hypothesis and is given the symbol H0. We test the null hypothesis against an alternative hypothesis, which is given the symbol Ha.
T-Tests
Used when we have only a sample, not the population statistics.
We use the sample standard deviation to estimate the population standard deviation.
A t-test is more prone to errors, because we only have samples.
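As a sketch, a one-sample t-test can be run with scipy; the file mean.csv and the Overallmarks column are reused from the mean example above, and the hypothesised population mean of 75 is an assumption for illustration:
import pandas as pd
from scipy import stats
data = pd.read_csv("mean.csv")  # sample data, as in the mean example
# H0: the population mean of Overallmarks is 75
result = stats.ttest_1samp(data['Overallmarks'], 75)
print(result.statistic, result.pvalue)  # reject H0 if the p-value is below 0.05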
Z-Score
The distance of an observed value from the mean, in terms of the number of standard deviations, is the standard score or z-score.
+Z – the value is above the mean.
-Z – the value is below the mean.
A distribution converted to z-scores always has the same shape as the original distribution.
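A small sketch of computing z-scores with pandas, using the same assumed file and column as above:
import pandas as pd
data = pd.read_csv("mean.csv")
marks = data['Overallmarks']
z_scores = (marks - marks.mean()) / marks.std()  # z = (value - mean) / standard deviation
print(z_scores.head())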
Predictive Modelling
Making use of past data and its attributes, we predict the future.
E.g. from the horror movies a user has watched in the past, we can predict which unwatched horror movies they may like.
Problem Definition
Identify the right problem statement; ideally, formulate the problem mathematically.
Hypothesis Generation
List down all possible variables which might influence the problem objective. These variables should be free from personal bias and preference.
The quality of the model is directly proportional to the quality of the hypotheses.
Data Extraction/Collection
Collect data from different sources and combine it for exploration and model building.
While looking at the data, we might come across new hypotheses.
Data Exploration and Transformation
Data extraction is a process that involves retrieval of data from various sources for further data processing or
data storage.
Steps of Data Extraction
1. Reading the data (e.g. from a csv file)
2. Variable identification
3. Univariate analysis
4. Bivariate analysis
5. Missing value treatment
6. Outlier treatment
7. Variable transformation
Variable Treatment
It is the process of identifying whether a variable is
1. an independent or dependent variable
2. a continuous or categorical variable
Why do we perform variable identification?
1. Techniques like supervised learning require identification of the dependent variable.
2. Different data processing techniques apply to categorical and continuous data.
Categorical variables are stored as object.
Continuous variables are stored as int or float.
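For example, the dtypes attribute in pandas reveals each variable's type (the file name here is illustrative):
import pandas as pd
data = pd.read_csv("data.csv")
print(data.dtypes)  # object -> categorical; int64/float64 -> continuous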
Univariate Analysis
1. Explore one variable at a time.
2. Summarize the variable.
3. Make sense out of that summary to discover insights, anomalies, etc.
Bivariate Analysis
When two variables are studied together for their empirical relationship.
When you want to see whether the two variables are associated with each other.
It helps in prediction and in detecting anomalies.
Missing Value Treatment
Reasons for missing values
1. Non-response – e.g. when you collect data on people's income and many choose not to answer.
2. Error in data collection – e.g. faulty data.
3. Error in data reading.
Types
1. MCAR (Missing completely at random): the missing values have no relation to the variable in which they occur or to the other variables in the dataset.
2. MAR (Missing at random): the missing values have no relation to the variable in which they occur, but are related to other variables in the dataset.
3. MNAR (Missing not at random): the missing values are related to the variable in which they occur.
Identifying
Syntax:
1. describe()
2. isnull()
The output of isnull() is True or False.
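A short sketch of both checks in pandas (the file name is assumed):
import pandas as pd
data = pd.read_csv("data.csv")
print(data.describe())      # summary statistics; a low count signals missing values
print(data.isnull().sum())  # number of missing (True) values per column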
Different methods to deal with missing values
1. Imputation
Continuous – impute with the help of the mean, the median, or a regression model.
Categorical – impute with the mode or a classification model.
2. Deletion
Row-wise or column-wise deletion. But it leads to loss of data.
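A minimal sketch of both approaches in pandas, assuming the Overallmarks (continuous) and Subject (categorical) columns from the earlier examples:
import pandas as pd
data = pd.read_csv("data.csv")
# Imputation
data['Overallmarks'] = data['Overallmarks'].fillna(data['Overallmarks'].mean())  # continuous: mean
data['Subject'] = data['Subject'].fillna(data['Subject'].mode()[0])              # categorical: mode
# Deletion (leads to loss of data)
data = data.dropna()  # drops any remaining rows with missing values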
Outlier Treatment
Reasons for Outliers
1. Data entry errors
2. Measurement errors
3. Processing errors
4. Change in the underlying population
Types of Outlier
Univariate
Analysing only one variable for outliers.
E.g. in a box plot of weight, weight alone is analysed for outliers.
Bivariate
Analysing two variables together for outliers.
E.g. in a scatter plot of height and weight, both are analysed.
Identifying Outlier
Graphical Method
Box Plot
Scatter Plot
Formula Method
Using the box plot rule, a value is an outlier if
value < Q1 - 1.5 * IQR or value > Q3 + 1.5 * IQR
where IQR = Q3 - Q1, Q3 = value of the 3rd quartile, and Q1 = value of the 1st quartile.
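The same rule expressed in pandas, as a sketch using the assumed Overallmarks column:
import pandas as pd
data = pd.read_csv("data.csv")
q1 = data['Overallmarks'].quantile(0.25)  # 1st quartile
q3 = data['Overallmarks'].quantile(0.75)  # 3rd quartile
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data['Overallmarks'] < lower) | (data['Overallmarks'] > upper)]
print(outliers)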
Treating Outliers
1. Deleting observations
2. Transforming and binning values
3. Imputing outliers like missing values
4. Treating them separately
Variable Transformation
It is the process by which
1. we replace a variable with some function of that variable, e.g. replacing a variable x with its log;
2. we change the distribution or relationship of a variable with others.
It is used to
1. change the scale of a variable,
2. transform non-linear relationships into linear ones,
3. create a symmetric distribution from a skewed one.
Common methods of variable transformation: logarithm, square root, cube root, binning, etc.
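A brief sketch of these transformations in pandas and numpy (the column name is assumed):
import numpy as np
import pandas as pd
data = pd.read_csv("data.csv")
data['log_marks'] = np.log(data['Overallmarks'])          # logarithm
data['sqrt_marks'] = np.sqrt(data['Overallmarks'])        # square root
data['marks_bin'] = pd.cut(data['Overallmarks'], bins=4)  # binning into 4 equal-width intervals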
Model Building
It is the process of creating a mathematical model for estimating/predicting the future based on past data.
E.g.:
A retailer wants to know the default behaviour of its credit card customers. They want to predict the probability of default for each customer in the next three months.
The probability of default will lie between 0 and 1.
Assume every customer has a 10% default rate; then the initial probability of default for each customer in the next 3 months is 0.1.
The model moves this probability towards one of the extremes based on attributes from past information:
A customer with volatile income is more likely to default (probability closer to 1).
A customer with a healthy credit history over the last years has low chances of default (probability closer to 0).
Algorithm Selection
Example:
Is there a labelled dependent variable? Yes → Supervised Learning; No → Unsupervised Learning.
If supervised: is the dependent variable continuous? Yes → Regression; No → Classification.
E.g. predicting whether a customer will buy a product or not is a classification problem.
Algorithms
Logistic Regression
Decision Tree
Random Forest
Training Model
It is the process of learning the relationship/correlation between the independent and dependent variables.
We use the dependent variable of the train data set to train the model for prediction/estimation.
Dataset
Train – past data (known dependent variable), used to train the model.
Test – future data (unknown dependent variable), used to score.
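A typical way to create such a split with scikit-learn; the column name default is an assumption matching the credit card example above:
import pandas as pd
from sklearn.model_selection import train_test_split
data = pd.read_csv("data.csv")
X = data.drop('default', axis=1)  # independent variables
y = data['default']               # dependent variable
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)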
Prediction / Scoring
It is the process of estimating/predicting the dependent variable of the test data set by applying the model's rules.
We apply what was learned during training to the test data set for prediction/estimation.
Logistic Regression
Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary
dependent variable, although many more complex extensions exist.
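As a sketch, scikit-learn's LogisticRegression can model the credit card default example; the toy data below is entirely made up for illustration:
import numpy as np
from sklearn.linear_model import LogisticRegression
# toy past data: [income volatility, years of healthy credit history]
X = np.array([[0.9, 1], [0.8, 2], [0.2, 8], [0.1, 9], [0.7, 3], [0.3, 7]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = defaulted, 0 = did not
model = LogisticRegression()
model.fit(X, y)
print(model.predict_proba([[0.85, 2]])[:, 1])  # probability of default for a new customer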
K-Means Clustering
K-means clustering is a type of unsupervised learning, which is used when you have unlabelled data
(i.e., data without defined categories or groups). The goal of this algorithm is to find groups in the data,
with the number of groups represented by the variable K. The algorithm works iteratively to assign each
data point to one of K groups based on the features that are provided. Data points are clustered based on
feature similarity.
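A minimal sketch with scikit-learn's KMeans on made-up unlabelled points:
import numpy as np
from sklearn.cluster import KMeans
X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])  # unlabelled 2-D points
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(X)
print(kmeans.labels_)           # group assigned to each point
print(kmeans.cluster_centers_)  # centre of each group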
8. RESULTS
In this 6-week training I successfully learnt about DATA SCIENCE, and I am now able to perform data analysis using Python. I also attempted the various quizzes and assignments provided for periodic evaluation during the 6 weeks, and completed the training with a 100% score in the Final Test.