Data Wrangling with R

Bradley C. Boehmke

Preface
Welcome to Data Wrangling with R! In this book, I will help you learn the essentials of preprocessing data with the R programming language so you can easily and quickly turn noisy data into usable pieces of information. Data wrangling, which is also commonly referred to as data munging, transformation, manipulation, janitor work, etc., can be a painstakingly laborious process. In fact, it has been stated that up to 80% of data analysis is spent on the process of cleaning and preparing data (cf. Wickham 2014; Dasu and Johnson 2003). Because it is a prerequisite to the rest of the data analysis workflow (visualization, modeling, reporting), it's essential that you become fluent and efficient in data wrangling techniques.
This book will guide you through the data wrangling process along with giving you a solid foundation of the basics of working with data in R. My goal is to teach you how to easily wrangle your data, so you can spend more time focused on understanding the content of your data via visualization, modeling, and reporting your results. By the time you finish reading this book, you will have learned:

- How to work with the different types of data such as numerics, characters, regular expressions, factors, and dates.
- The difference between the various data structures, and how to create each one, add additional components to it, and subset it.
- How to acquire and parse data from locations you may not have been able to access before, such as web scraping or leveraging APIs.
- How to develop your own functions and use loop control structures to reduce code redundancy.
- How to use pipe operators to simplify your code and make it more readable.
- How to reshape the layout of your data, and manipulate, summarize, and join data sets.
Not only will you learn many base R functions, you'll also learn how to use some of the latest data wrangling packages such as tidyr, dplyr, httr, stringr, lubridate, readr, rvest, magrittr, xlsx, readxl, and others. In essence, you will have the data wrangling toolbox required for modern-day data analysis.
This book is meant to establish the baseline R vocabulary and knowledge for the primary data wrangling processes. This captures a wide range of programming activities, covering the full spectrum from understanding basic data objects in R to writing your own functions, applying loops, and web scraping. As a result, this book can be beneficial to all levels of R programmers. Beginner R programmers will gain a basic understanding of the functionality of R along with learning how to work with data using R. Intermediate and advanced R programmers will likely find the early chapters reiterating established knowledge; however, these programmers will benefit from the middle and latter chapters by learning newer and more efficient data wrangling techniques.
Obviously, to gain and retain knowledge from this book, it is highly recommended that you follow along and practice the code examples yourself. Furthermore, this book assumes that you will actually be performing data wrangling in R; therefore, it is assumed that you have or plan to have R installed on your computer. You will find the latest version of R for Linux, Mac OS, and Windows at https://cran.r-project.org. It is also recommended that you use an integrated development environment (IDE), as it will greatly simplify and organize your coding environment. There are several to choose from; however, I highly recommend the RStudio IDE, which you can download at https://www.rstudio.com.
Reader Feedback
Reader comments are greatly appreciated. Please send any feedback regarding typos, mistakes, confusing statements, or opportunities for improvement to wranglingdata@gmail.com.
Bibliography
Dasu, T., & Johnson, T. (2003). Exploratory Data Mining and Data Cleaning (Vol. 479). John
Wiley & Sons.
Wickham, H. (2014). Tidy data. Journal of Statistical Software, 59 (i10).
Contents

Part I Introduction

1 The Role of Data Wrangling
2 Introduction to R
  2.1 Open Source
  2.2 Flexibility
  2.3 Community
3 The Basics
  3.1 Installing R and RStudio
  3.2 Understanding the Console
    3.2.1 Script Editor
    3.2.2 Workspace Environment
    3.2.3 Console
    3.2.4 Misc. Displays
    3.2.5 Workspace Options and Shortcuts
  3.3 Getting Help
    3.3.1 General Help
    3.3.2 Getting Help on Functions
    3.3.3 Getting Help from the Web
  3.4 Working with Packages
    3.4.1 Installing Packages
    3.4.2 Loading Packages
    3.4.3 Getting Help on Packages
    3.4.4 Useful Packages
  3.5 Assignment and Evaluation
  3.6 R as a Calculator
    3.6.1 Vectorization

10 Managing Vectors
  10.1 Creating Vectors
  10.2 Adding On To Vectors
  10.3 Adding Attributes to Vectors
  10.4 Subsetting Vectors
    10.4.1 Subsetting with Positive Integers
    10.4.2 Subsetting with Negative Integers
    10.4.3 Subsetting with Logical Values
    10.4.4 Subsetting with Names
    10.4.5 Simplifying vs. Preserving
11 Managing Lists
  11.1 Creating Lists
  11.2 Adding On To Lists
  11.3 Adding Attributes to Lists
  11.4 Subsetting Lists
    11.4.1 Subset List and Preserve Output as a List
    11.4.2 Subset List and Simplify Output
    11.4.3 Subset List to Get Elements Out of a List
    11.4.4 Subset List with a Nested List
12 Managing Matrices
  12.1 Creating Matrices
  12.2 Adding On To Matrices
  12.3 Adding Attributes to Matrices
  12.4 Subsetting Matrices
13 Managing Data Frames
  13.1 Creating Data Frames
  13.2 Adding On To Data Frames
  13.3 Adding Attributes to Data Frames
  13.4 Subsetting Data Frames
14 Dealing with Missing Values
  14.1 Testing for Missing Values
  14.2 Recoding Missing Values
  14.3 Excluding Missing Values
Part I
Introduction

Data. Our world has become increasingly reliant upon, and awash in, this resource. Businesses are increasingly seeking to capitalize on data analysis as a means for gaining competitive advantages. Government agencies are using more types of data to improve operations and efficiencies. Sports entities are increasing the range of data applications, from how teams use data and analytics to how data are shaping the experience for the fan base. Journalism is increasing the role that numerical data play in the production and distribution of information, as evidenced by the emerging field of data journalism. In fact, the need to work with data has become so prevalent that the U.S. alone is expected to have a shortage of 140,000 to 190,000 data analysts by 2018.¹ Consequently, it is safe to say there is a need for becoming fluent with the data analysis process. And I'm assuming that's why you are reading this book.
Fluency in data analysis captures a wide range of activities. At its most basic structure, data analysis fluency includes the ability to get, clean, transform, visualize, and model data, along with communicating your results, as depicted in the following illustration.
(Figure: a modified version of Hadley Wickham's analytic process, with the visualize and model stages feeding knowledge generation and extraction.)
From project to project, no analytic process will be the same. Each specific
instance of data analysis includes unique, different, and often multiple requirements
regarding the specific processes required for each stage. For instance, getting data
may include simply accessing an Excel file, scraping data from an HTML table, or using an application programming interface (API) to access a database. Cleaning data may include reshaping data from a wide to a long format and parsing or manipulating variables into different formats. Transforming data may include filtering, summarizing, and applying common or uncommon functions to data, along with joining multiple datasets. Visualizing data may range from common static exploratory data analysis plots to dynamic, interactive data visualizations in web browsers. And modeling data can be even more diverse, covering the range of descriptive, predictive, and prescriptive analytic techniques.

¹ Manyika et al. (2011).
Consequently, the road to becoming an expert in data analysis can be daunting. And, in fact, obtaining expertise in the wide range of data analysis processes utilized in your own respective field is a career-long process. However, the goal of this book is to help you take a step closer to fluency in the early stages of the analytic process. Why? Because before using statistical literate programming to report your results, before developing an optimization or predictive model, before performing exploratory data analysis, and before visualizing your data, you need to be able to manage your data. You need to be able to import your data. You need to be able to work with the different data types. You need to be able to subset and parse your data. You need to be able to manipulate and transform your data. You need to be able to wrangle your data!
Chapter 1
The Role of Data Wrangling
Much like Samuel Taylor Coleridge's lament in The Rime of the Ancient Mariner, the degree to which data are useful is largely determined by an analyst's ability to wrangle data. In spite of advances in technologies for working with data, analysts still spend an inordinate amount of time obtaining data, diagnosing data quality issues, and preprocessing data into a usable form. Research has illustrated that this portion of the data analysis process is the most tedious and time-consuming component, often consuming 50–80% of an analyst's time (cf. Wickham 2014; Dasu and Johnson 2003). Despite the challenges, data wrangling remains a fundamental building block that enables visualization and statistical modeling. Only through data wrangling can we make data useful. Consequently, one's ability to perform data wrangling tasks effectively and efficiently is fundamental to becoming an expert data analyst in one's respective domain.
So what exactly is this thing called data wrangling? It's the ability to take a messy, unrefined source of data and wrangle it into something useful. It's the art of using computer programming to extract raw data and create clear and actionable bits of information for your analysis. Data wrangling is the entire front end of the analytic process and requires numerous tasks that can be categorized within the get, clean, and transform components (Fig. 1.1).
However, learning how to wrangle your data does not necessarily follow a linear progression as suggested by Fig. 1.1. In fact, you need to start from scratch to understand how to work with data in R. Consequently, this book takes a meandering route through the data wrangling process to help build a solid data wrangling foundation.
First, modern-day data wrangling requires being comfortable writing code. If you are new to writing code, R, or RStudio, you need to understand some of the basics of working in the command line environment. The next two chapters in this part will introduce you to R, discuss the benefits it provides, and then start to get you comfortable at the command line by walking you through the process of assigning and evaluating expressions, using vectorization, getting help, managing your workspace, and working with packages. Lastly, I offer some basic styling guidelines to help you write code that is easier for others to digest.
Second, data wrangling requires the ability to work with different forms of data. Analysts and organizations are finding new and unique ways to leverage all forms of data, so it's important to be able to work not only with numbers but also with character strings, categorical variables, logical variables, regular expressions, and dates. Part II explains how to work with these different classes of data so that when you start to learn how to manage the different data structures, which combine these data classes into multiple dimensions, you will have a strong knowledge base.
Third, modern-day datasets often contain variables of different lengths and classes. Furthermore, many statistical and mathematical calculations operate on different types of data structures. Consequently, data wrangling requires a strong knowledge of the different structures available to hold your datasets. Part III covers the different types of data structures available in R, how they differ by dimensionality, and how to create, add to, and subset the various data structures. Lastly, I cover how to deal with missing values in data structures. Consequently, this part provides a robust understanding of managing various forms of datasets.
Fourth, data are arriving from multiple sources at an alarming rate, and analysts and organizations are seeking ways to leverage these new sources of information. Consequently, analysts need to understand how to get data from these sources. Furthermore, since analysis is often a collaborative effort, analysts also need to know how to share their data. Part IV covers the basics of importing tabular and spreadsheet data, scraping data stored online, and exporting data for sharing purposes.
Fifth, minimizing duplication and writing simple and readable code is important
to becoming an effective and efficient data analyst. Moreover, clarity should always
be a goal throughout the data analysis process. Part V introduces the art of writing
functions and using loop control statements to reduce redundancy in code. I also
discuss how to simplify your code using pipe operators to make your code more
readable. Consequently, this part will help you to perform data wrangling tasks
more effectively, efficiently, and with more clarity.
Last, data wrangling is all about getting your data into the right form to feed into the visualization and modeling stages. This typically requires a large amount of reshaping and transforming of your data. Part VI introduces some of the fundamental functions for tidying your data and for manipulating, sorting, summarizing, and joining your data. These tasks will help to significantly reduce the time you spend on the data wrangling process.

Individually, each part will provide you important tools for performing individual data wrangling tasks. Combined, these tools will help to make you more effective and efficient in the front end of the data analysis process so that you can spend more of your time visualizing and modeling your data and communicating your results!
Bibliography
Bibliography
Dasu, T., & Johnson, T. (2003). Exploratory Data Mining and Data Cleaning (Vol. 479). John
Wiley & Sons.
Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., et al. (2011). Big data: The
next frontier for innovation, competition, and productivity. McKinsey.
Wickham, H. (2014). Tidy data. Journal of Statistical Software , 59 (i10).
Chapter 2
Introduction to R
"A language for data analysis and graphics." This definition of R was used by Ross Ihaka and Robert Gentleman in the title of their 1996 paper (Ihaka and Gentleman 1996) outlining their experience of designing and implementing the R software. It's safe to say this remains the essence of what R is; however, it's tough to encapsulate such a diverse programming language in a single phrase.
During the last decade, the R programming language has become one of the most widely used tools for statistics and data science. Its application runs the gamut from data preprocessing, cleaning, web scraping, and visualization to a wide range of analytic tasks such as computational statistics, econometrics, optimization, and natural language processing. In 2012 R had over two million users, and it continues to grow by double-digit percentage points every year. R has become essential analytic software throughout industry, used by organizations such as Google, Facebook, the New York Times, Twitter, Etsy, the Department of Defense, and even presidential political campaigns. So what makes R such a popular tool?
2.1 Open Source

R is open source software created over 20 years ago by Ihaka and Gentleman at the University of Auckland, New Zealand. However, its history is even longer, as its lineage goes back to the S programming language, created by John Chambers at Bell Labs in the 1970s.¹ R is actually a combination of S with lexical scoping semantics inspired by Scheme (Morandat et al. 2012). Whereas the resulting language is very similar in appearance to S, the underlying implementation and semantics are derived from Scheme. Unbeknownst to many, the S language has been a popular vehicle for research in statistical methodology, and R provides an open source route to participate in that activity.

¹ Consequently, R is named partly after its authors (Ross and Robert) and partly as a play on the name of S.
2.2 Flexibility
Another benefit of open source is that anybody can access the source code, modify it, and improve it. As a result, many excellent programmers contribute to improving existing R code and developing new capabilities. Researchers from all walks of life (academic institutions, industry, and groups such as RStudio⁵ and rOpenSci⁶) are contributing to advancements of R's capabilities and best practices. This has resulted in some powerful tools that advance both statistical and non-statistical modeling capabilities and that are taking data analysis to new levels.
² See Roger Peng's R Programming for Data Science for further, yet concise, details on S and R's history.
³ This was recently argued by Pollack, Klimberg, and Boklage (2015), which was appropriately rebutted by Boehmke and Jackson (2016).
⁴ Open source is far from new, as it has been around for decades (e.g., A-2 in the 1950s, IBM's ACP in the 1960s, Tiny BASIC in the 1970s), but it has gained prominence since the late 1990s.
⁵ https://www.rstudio.com
⁶ https://ropensci.org/packages
2.3 Community
The R community is fantastically diverse and engaged. On a daily basis, the R community generates opportunities and resources for learning about R. These cover the full spectrum of training: books, online courses, R user groups, workshops, conferences, and more. And with over two million users and developers, finding help and technical expertise is only a simple click away. Support is available through R mailing lists, Q&A websites, social media networks, and numerous blogs.

So now that you know how awesome R is, it's time to learn how to use it.
Bibliography
Boehmke, B. C., & Jackson, R. A. (2016). Unpacking the true cost of free statistical software. OR/MS Today, 43(1), 26–27.
Ihaka, R., & Gentleman, R. (1996). R: A language for data analysis and graphics. Journal of Computational and Graphical Statistics, 5(3), 299–314.
Morandat, F., Hill, B., Osvald, L., & Vitek, J. (2012). Evaluating the design of the R language. In European Conference on Object-Oriented Programming (pp. 104–131). Springer Berlin Heidelberg.
Pollack, R. D., Klimberg, R. K., & Boklage, S. H. (2015). The true cost of free statistical software. OR/MS Today, 42(5), 34–35.
⁷ See The Journal of Statistical Software and The R Journal.
⁸ https://cran.r-project.org/web/views/
Chapter 3
The Basics
3.1 Installing R and RStudio

First, you need to download and install R, a free software environment for statistical computing and graphics, from CRAN, the Comprehensive R Archive Network. It is highly recommended to install a precompiled binary distribution for your operating system; follow these instructions:

1. Go to https://cran.r-project.org/
2. Click "Download R for Mac/Windows"
3. Download the appropriate file:
   (a) Windows users: click "base" and download the installer for the latest R version
   (b) Mac users: select the file R-3.X.X.pkg that aligns with your OS version, and you should get a window that looks like the following (Fig. 3.2)

You are now ready to start programming!
3.2 Understanding the Console

The RStudio console is where all the action happens. There are four fundamental windows in the console, each with its own purpose. I discuss each briefly below, but I highly suggest Oscar Torres-Reyna's Introduction to RStudio¹ for a thorough understanding of the console (Fig. 3.3).
3.2.1 Script Editor

The top left window is where your script files will display. There are multiple forms of script files, but the basic one to start with is the .R file. To create a new file you use the File > New File menu. To open an existing file you use either the File > Open File menu or the Recent Files menu to select from recently opened files. RStudio's script editor includes a variety of productivity-enhancing features including syntax highlighting, code completion, multiple-file editing, and find/replace. A good introduction to the script editor was written by RStudio's Josh Paulson.²
3.2.2 Workspace Environment

The top right window is the workspace environment, which captures much of your current R working environment and includes any user-defined objects (vectors, matrices, data frames, lists, functions). When saving your R working session, these are the components, along with your script files, that will be saved in your working directory, which is the default location for all file inputs and outputs. To get or set your working directory so you can direct where your files are saved, use getwd and setwd in the console. (Note that you can type comments in your code by preceding the comment with the hashtag (#) symbol; any values, symbols, and text following # will not be evaluated.)

¹ You can access this tutorial at http://dss.princeton.edu/training/RStudio101.pdf
² You can access the script editor tutorial at https://support.rstudio.com/hc/en-us/articles/200484448-Editing-and-Executing-Code
# returns path for the current working directory
getwd()
You can also view previous commands in the workspace environment by clicking
the History tab, by simply pressing the up arrow on your keyboard, or by typing into
the console:
# default shows 25 most recent commands
history()
You can also save and load your workspaces. Saving your workspace will save all objects within your workspace to a .RData file in your working directory, and loading your workspace will load any .RData file in your working directory.

# save all items in workspace to a .RData file
save.image()

Note that saving the workspace without specifying the working directory will default to saving in the current directory. You can further specify where to save the .RData by including the path: save(object1, object2, file = "/users/name/folder/myfile.RData"). More information regarding saving and loading R objects such as .RData files will be discussed in Part IV of this book.
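As a brief sketch of the save/load round trip (the object names and file name here are illustrative, not from the original text):

# save specific objects to a specific file (illustrative names)
x <- 1:10
y <- letters
save(x, y, file = "my-objects.RData")

# reload those objects in a later session
load("my-objects.RData")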
3.2.3 Console
The bottom left window contains the console. You can code directly in this window, but it will not save your code. It is best to use this window when you are simply performing calculator-type functions. This is also where your outputs will be presented when you run code in your script.
3.2.4 Misc. Displays

The bottom right window contains multiple tabs. The Files tab allows you to see which files are available in your working directory. The Plots tab will display any visualizations that are produced by your code. The Packages tab will list all packages downloaded to your computer and also the ones that are loaded (more on this concept of packages shortly). And the Help tab allows you to search for topics you need help on and will also display any help responses (more on this later as well).
3.2.5 Workspace Options and Shortcuts

There are multiple options available for you to set and customize your console. You can view and set options for the current R session:

# learn about available options
help(options)
As with most computer programs, there are numerous keyboard shortcuts for working with the console. To access a menu displaying all the shortcuts in RStudio you can use Option+Shift+K (Alt+Shift+K on Windows). Within RStudio you can also access them in the Help menu > Keyboard Shortcuts. You can also find the RStudio console cheatsheet by going to Help menu > Cheatsheets.
3.3 Getting Help

Learning any new language requires lots of help. Luckily, the help documentation and support in R is comprehensive and easily accessible from the command line. To leverage general help resources you can use the following:

# provides general help resources
help.start()

For more direct help on functions that are installed on your computer:

# provides details for a specific function
help(functionname)
Note that the help() and ? functions only work for functions within loaded packages. If you want to see details on a function in a package that is installed on your computer but not loaded in the active R session you can use help(functionname, package = "packagename"). Another alternative is to use the :: operator, as in help(packagename::functionname).
Typically, a problem you may be encountering is not new; others have faced, solved, and documented the same issue online. The following resources can be used to search for online help (although I typically just Google the problem and find answers relatively quickly):
- RSiteSearch("key phrase"): searches for the key phrase in help manuals and archived mailing lists on the R Project website at http://search.r-project.org/.
- Stack Overflow: a searchable Q&A site oriented toward programming issues. 75% of my answers typically come from Stack Overflow questions tagged for R at http://stackoverflow.com/questions/tagged/r.
- Cross Validated: a searchable Q&A site oriented toward statistical analysis. Many questions regarding specific statistical functions in R are tagged for R at http://stats.stackexchange.com/questions/tagged/r.
- RSeek: a Google custom search that is focused on R-specific websites. Located at http://rseek.org/
- R-bloggers: a central hub of content collected from over 500 bloggers who provide news and tutorials about R. Located at http://www.r-bloggers.com/
3.4 Working with Packages

Your primary source for obtaining packages will likely be CRAN. To install packages from CRAN:

# install packages from CRAN
install.packages("packagename")

# e.g., install the devtools package
install.packages("devtools")
Once the package is downloaded to your computer you can access the functions and resources provided by the package in two different ways:

# load the package to use in the current R session
library(packagename)

# use a particular function within a package without loading the package
packagename::functionname

For instance, if you want to have full access to the tidyr package you would use library(tidyr); however, if you just wanted to use the gather() function without loading the tidyr package you can use tidyr::gather(function arguments).
Note that some packages have multiple vignettes. For instance, vignette(package = "grid") will list the 13 vignettes available for the grid package. To access one of the specific vignettes you simply use vignette("vignettename").
There are thousands of helpful R packages for you to use, but navigating them all can be a challenge. To help you out, RStudio compiled a guide³ to some of the best packages for loading, manipulating, visualizing, analyzing, and reporting data. In addition, their list captures packages that specialize in spatial data, time series and financial data, increasing speed and performance, and developing your own R packages.
3.5 Assignment and Evaluation

The first operator you'll run into is the assignment operator. The assignment operator is used to assign a value. For instance, we can assign the value 3 to the variable x using the <- assignment operator. We can then evaluate the variable by simply typing x at the command line, which will return the value of x. Note that prior to the value returned you'll see ## [1] in the command line. This simply indicates that the output returned is the first output.

³ https://support.rstudio.com/hc/en-us/articles/201057987-Quick-list-of-useful-R-packages
# assignment
x <- 3
# evaluation
x
## [1] 3
# rightward assignment (value is a placeholder for any R object)
value -> x
value ->> x
The original assignment operator in R was <-, and it has continued to be preferred among R users. The = assignment operator was added in 2001,⁴ primarily because it is the accepted assignment operator in many other languages and beginners to R coming from other languages were prone to use it. However, R uses = to associate function arguments with values (i.e., f(x = 3) explicitly means to call function f and set the argument x to 3). Consequently, most R programmers prefer to keep = reserved for argument association and use <- for assignment.

The operator <<- is normally only used in functions, which we will not get into the details of here. And the rightward assignment operators perform the same as their leftward counterparts; they just assign the value in the opposite direction.
Overwhelmed yet? Don't be. This is just meant to show you that there are options and you will likely come across them sooner or later. My suggestion is to stick with the tried and true <- operator. This is the most conventional assignment operator and is what you will find in all the base R source code, which means it should be good enough for you.

Lastly, note that R is a case-sensitive programming language, meaning all variables, functions, and objects must be called by their exact spelling:
x <- 1
y <- 3
z <- 4
x * y * z
## [1] 12
x * Y * z
## Error in eval(expr, envir, enclos): object 'Y' not found
⁴ See http://developer.r-project.org/equalAssign.html for more details.
3.6 R as a Calculator
At its most basic level R can be used as a calculator. When applying basic arithmetic, the PEMDAS order of operations applies: parentheses first, followed by exponentiation, multiplication and division, and finally addition and subtraction.
8 + 9 / 5 ^ 2
## [1] 8.36
8 + 9 / (5 ^ 2)
## [1] 8.36
8 + (9 / 5) ^ 2
## [1] 11.24
(8 + 9) / 5 ^ 2
## [1] 0.68
By default R will display seven digits but this can be changed using options()
as previously outlined.
1 / 7
## [1] 0.1428571
options(digits = 3)
1 / 7
## [1] 0.143
Also, large numbers will be expressed in scientific notation, which can also be adjusted using options().
888888 * 888888
## [1] 7.9e+11
options(digits = 10)
888888 * 888888
## [1] 790121876544
Note that the largest number of digits that can be displayed is 22. Requesting any larger number of digits will result in an error message:
pi
## [1] 3.141592654
options(digits = 22)
pi
## [1] 3.141592653589793115998
options(digits = 23)
## Error in options(digits = 23): invalid 'digits' parameter, allowed 0...22
When performing undefined calculations R will produce Inf and NaN outputs:
1 / 0 # infinity
## [1] Inf
-1 / 0 # negative infinity
## [1] -Inf
0 / 0 # not a number
## [1] NaN
The last two operators to mention are integer division (%/%) and modulo (%%). Integer division gives the integer part of a fraction, while modulo provides the remainder.
42 / 4 # regular division
## [1] 10.5
42 %% 4 # modulo (remainder)
## [1] 2
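For completeness, integer division on the same values:

42 %/% 4 # integer division
## [1] 10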
3.6.1 Vectorization
A key difference between R and many other languages is a topic known as vector-
ization. What does this mean? It means that many functions that are to be applied
individually to each element in a vector of numbers require a loop assessment to
evaluate; however, in R many of these functions have been coded in C to perform
much faster than a for loop would perform. For example, lets say you want to add
the elements of two separate vectors of numbers (x and y).
x <- c(1, 3, 4)
y <- c(1, 2, 4)
x
## [1] 1 3 4

y
## [1] 1 2 4
In other languages you might have to run a loop to add two vectors together. In the following for loop, each iteration is printed to show that the loop calculates the sum for the first elements in each vector, then performs the sum for the second elements, and so on.
# empty vector
z <- as.vector(NULL)
x * y
## [1] 1 6 16
x > y
## [1] FALSE TRUE FALSE
When the vectors have different lengths, R recycles the shorter one. The definitions of long and short below are assumed, consistent with the outputs shown:

long <- 1:10
short <- 1:5

long
##  [1]  1  2  3  4  5  6  7  8  9 10

short
## [1] 1 2 3 4 5

long + short
##  [1]  2  4  6  8 10  7  9 11 13 15
The elements of long and short are added together starting from the first element of both vectors. When R reaches the end of the short vector, it starts again at the first element of short and continues until it reaches the last element of the long vector. This functionality is very useful when you want to perform the same operation on every element of a vector. For example, say we want to multiply every element of our long vector by 3:
long * 3
##  [1]  3  6  9 12 15 18 21 24 27 30
If the vector lengths are not a multiple of each other, R still recycles but issues a warning. The definitions below are assumed, consistent with the output:

even_length <- 1:10
odd_length <- 1:3

even_length + odd_length
## Warning in even_length + odd_length: longer object length is not a
## multiple of shorter object length
##  [1]  2  4  6  5  7  9  8 10 12 11
3.7 Styling Guide

"Good coding style is like using correct punctuation. You can manage without it, but it sure makes things easier to read." – Hadley Wickham

3.7.1 Notation and Naming

File names should be meaningful and, by convention, end in .R (the "Good" examples below are assumed, following Wickham's style guide):

# Good
weather-analysis.R
emerson-text-analysis.R

# Bad
basic-stuff.r
detail.r
⁵ Google's style guide can be found at https://google.github.io/styleguide/Rguide.xml and Hadley Wickham's can be found at http://adv-r.had.co.nz/Style.html
Historically, there has been no clearly preferred approach, with multiple naming styles sometimes used within a single package. Bottom line: your naming convention will be driven by your preference, but the ultimate goal should be consistency. My personal preference is to use all lowercase with an underscore (_) to separate words within a name. This follows Hadley Wickham's suggestions in his style guide. Furthermore, variable names should be nouns and function names should be verbs to help distinguish their purpose. Also, refrain from using existing names of functions and reserved words (e.g., mean, sum, TRUE).
3.7.2 Organization
Organization of your code is also important. There's nothing like trying to decipher 2,000 lines of code that has no organization. The easiest way to achieve organization is to comment your code. The general commenting scheme I use is the following.
I break up principal sections of my code that have a common purpose with:
#################
# Download Data #
#################
lines of code here
###################
# Preprocess Data #
###################
########################
# Exploratory Analysis #
########################
lines of code here
3.7.3 Syntax
Proper spacing within your code also helps with readability. The following pulls straight from Hadley Wickham's suggestions.⁷ Place spaces around all infix operators (=, +, -, <-, etc.). The same rule applies when using = in function calls. Always put a space after a comma, and never before.
# Good
average <- mean(feet / 12 + inches, na.rm = TRUE)
# Bad
average<-mean(feet/12+inches,na.rm=TRUE)
There's a small exception to this rule: :, ::, and ::: don't need spaces around them.

⁶ Go to RStudio on the menu bar, then Preferences > Code > Display, and you can select the "show margin" option and set the margin to 80.
⁷ http://adv-r.had.co.nz/Style.html
# Good
x <- 1:10
base::get
# Bad
x <- 1 : 10
base :: get
Chapter 4
Dealing with Numbers

In this chapter you will learn the basics of working with numbers in R. This includes understanding how to manage the numeric type (integer vs. double), the different ways of generating non-random and random numbers, how to set seed values for reproducible random number generation, and the different ways to compare and round numeric values.
4.1 Integer vs. Double

The two most common numeric classes used in R are integer and double (for double-precision floating point numbers). R automatically converts between these two classes when needed for mathematical purposes. As a result, it's feasible to use R and perform analyses for years without specifying these differences.

To check whether a pre-existing vector is made up of integer or double values you can use typeof(x), which will tell you if the vector is a double, integer, logical, or character type.
By default, when you create a numeric vector using the c() function it will produce a vector of double-precision numeric values. To create a vector of integers using c() you must be explicit by placing an L directly after each number.
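A quick sketch of both cases:

dbl_var <- c(1, 2.5, 4.5)
typeof(dbl_var)
## [1] "double"

int_var <- c(1L, 6L, 10L)
typeof(int_var)
## [1] "integer"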
By default, if you read in data that has no decimal points or you create numeric values using the x <- 1:10 method, the numeric values will be coded as integer. If you want to change a double to an integer or vice versa you can use as.integer() or as.double(), as in the following:
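(A minimal sketch; the specific values are illustrative.)

# convert a double to an integer (decimals are truncated)
as.integer(3.14)
## [1] 3

# convert integers to doubles
x <- 1:5
as.double(x)
## [1] 1 2 3 4 5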
4.2 Generating Sequence of Non-random Numbers

There are a few R operators and functions that are especially useful for creating vectors of non-random numbers. These functions provide multiple ways of generating sequences of numbers.

To explicitly specify numbers in a sequence you can use the colon operator : to specify all integers between two specified numbers, or the combine function c() to explicitly specify all numbers in the sequence.
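For example (a short sketch; seq() is the more general tool):

1:5
## [1] 1 2 3 4 5

c(1, 3.5, 7)
## [1] 1.0 3.5 7.0

seq(from = 1, to = 9, by = 2)
## [1] 1 3 5 7 9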
4.3 Generating Sequence of Random Numbers
R comes with a set of pseudo-random number generators that allow you to simulate the most common probability distributions, such as the uniform, normal, binomial, Poisson, exponential, and gamma.

To generate random numbers from a uniform distribution you can use the runif() function. Alternatively, you can use sample() to take a random sample with or without replacement.
# generate n random numbers between the default values of 0 and 1
runif(n)
# generate n random numbers between 0 and 25
runif(n, min = 0, max = 25)
# generate n random numbers between 0 and 25 (with replacement)
sample(0:25, n, replace = TRUE)
# generate n random numbers between 0 and 25 (without replacement)
sample(0:25, n, replace = FALSE)
For example, to generate 25 random numbers between the values 0 and 10:
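(One possible call; the output will differ from run to run unless you set a seed.)

runif(25, min = 0, max = 10)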
For each non-uniform probability distribution there are four primary functions available: random number generation, the density (aka probability mass) function, the cumulative distribution function, and the quantile function. The prefixes for these functions are:

- r: random number generation
- d: density or probability mass function
- p: cumulative distribution
- q: quantiles
The normal (or Gaussian) distribution is the most common and well-known distribution. Within R, the normal distribution functions are written with the norm suffix: rnorm(), dnorm(), pnorm(), and qnorm().
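A brief sketch of all four (the standard normal is assumed here for illustration):

# generate 10 random draws from N(0, 1); results vary run to run
rnorm(10)

# density of the standard normal at 0
dnorm(0)
## [1] 0.3989423

# P(X <= 1.96)
pnorm(1.96)
## [1] 0.9750021

# 97.5th percentile
qnorm(0.975)
## [1] 1.959964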
You can also pass a vector of values. For instance, say you want to know the
CDF probabilities for each value in the vector x created above:
pnorm(x, mean = 100, sd = 15)
## [1] 0.43203732 0.47306731 0.40607337 0.04021628 0.51364538 0.58213815
## [7] 0.77573919 0.55548261 0.53102479 0.83390182 0.48086992 0.43430567
## [13] 0.75959941 0.20898424 0.18721209 0.72478191 0.22079836 0.66249503
## [19] 0.82847339 0.09313407 0.75588023 0.41738339 0.79402667 0.75906822
## [25] 0.32620260
The gamma probability distribution is related to the beta distribution and arises naturally in processes for which the waiting times between Poisson-distributed events are relevant.
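The same four-function pattern applies; for example (the shape and rate values here are illustrative):

# 10 random draws from a gamma distribution
rgamma(10, shape = 2, rate = 1)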
4.4 Setting the Seed for Reproducible Random Numbers

If you want to generate a sequence of random numbers and then be able to reproduce that same sequence of random numbers later, you can set the random number seed generator with set.seed(). This is a critical aspect of reproducible research.
For example, we can reproduce a random generation of 10 values from a normal
distribution:
set.seed(197)
rnorm(n = 10, mean = 0, sd = 1)
## [1] 0.6091700 -1.4391423 2.0703326 0.7089004 0.6455311 0.7290563
## [7] -0.4658103 0.5971364 -0.5135480 -0.1866703
set.seed(197)
rnorm(n = 10, mean = 0, sd = 1)
## [1] 0.6091700 -1.4391423 2.0703326 0.7089004 0.6455311 0.7290563
## [7] -0.4658103 0.5971364 -0.5135480 -0.1866703
4.5 Comparing Numeric Values

There are multiple ways to compare numeric values and vectors. These include logical operators along with testing for exact equality and also near equality.
The normal binary operators allow you to compare numeric values and provide the
answer in logical form:
x < y # is x less than y
x > y # is x greater than y
x <= y # is x less than or equal to y
x >= y # is x greater than or equal to y
x == y # is x equal to y
x != y # is x not equal to y
x <- 9
y <- 10
x == y
## [1] FALSE
Note that the logical values TRUE and FALSE equate to 1 and 0, respectively. So if you want to count the number of equal values in two vectors, you can wrap the comparison in the sum() function:
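The vectors below are assumed (chosen to be consistent with the which() output that follows):

x <- c(1, 4, 9, 12)
y <- c(4, 4, 9, 13)

# How many pairwise equal values are in vectors x and y?
sum(x == y)
## [1] 2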
If you need to identify the location of pairwise equalities in two vectors you can
wrap the operation in the which() function:
# Where are the pairwise equal values located in vectors x and y
which(x == y)
## [1] 2 3
Sometimes you wish to test for near equality. The all.equal() function allows you to test for equality with a difference tolerance of 1.5e-8:
x <- c(4.00000005, 4.00000008)
y <- c(4.00000002, 4.00000006)
all.equal(x, y)
## [1] TRUE
If the difference is greater than the tolerance level, the function will return the mean relative difference:
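For example (illustrative values):

x <- c(4.005, 4.0008)
y <- c(4.002, 4.0006)

all.equal(x, y)
## [1] "Mean relative difference: 0.0003997102"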
4.6 Rounding Numbers

There are many ways of rounding: to the nearest integer, up, down, or toward a specified decimal place. The following illustrates the common ways to round.
x <- c(1, 1.35, 1.7, 2.05, 2.4, 2.75, 3.1, 3.45, 3.8, 4.15,
4.5, 4.85, 5.2, 5.55, 5.9)
# Round to the nearest integer
round(x)
## [1] 1 1 2 2 2 3 3 3 4 4 4 5 5 6 6
# Round up
ceiling(x)
## [1] 1 2 2 3 3 3 4 4 4 5 5 5 6 6 6
# Round down
floor(x)
## [1] 1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
# Round to a specified decimal
round(x, digits = 1)
## [1] 1.0 1.4 1.7 2.0 2.4 2.8 3.1 3.5 3.8 4.2 4.5 4.8 5.2 5.5 5.9
Chapter 5
Dealing with Character Strings
Dealing with character strings is often under-emphasized in data analysis training. The focus typically remains on numeric values; however, the growth in data collection is also resulting in greater bits of information embedded in character strings. Consequently, handling, cleaning, and processing character strings is becoming a prerequisite in daily data analysis. This chapter is meant to give you the foundation of working with characters by covering some basics, followed by learning how to manipulate strings using base R functions along with the simplified stringr package.
5.1 Character String Basics

In this section you'll learn the basics of creating, converting, and printing character strings, followed by how to assess the number of elements and characters in a string.
The most basic way to create strings is to use quotation marks and assign a string to
an object similar to creating number sequences.
a <- "learning to create" # create string a
b <- "character strings" # create string b
The paste() function provides a versatile means for creating and building
strings. It takes one or more R objects, converts them to character, and then it
concatenates (pastes) them to form one or several character strings.
# paste together string a & b
paste(a, b)
## [1] "learning to create character strings"

# the calls below are reconstructed to match the outputs shown
# paste with a separator
paste("I", "love", "R", sep = "-")
## [1] "I-love-R"

# paste with no separator
paste0("I", "love", "R")
## [1] "IloveR"

# paste is vectorized and recycles shorter arguments
paste0("R v1.", 1:5)
## [1] "R v1.1" "R v1.2" "R v1.3" "R v1.4" "R v1.5"
Test if an object is a character with is.character() and convert objects to character with as.character() or with toString().
a <- "The life of"
b <- pi
is.character(a)
## [1] TRUE
is.character(b)
## [1] FALSE
c <- as.character(b)
is.character(c)
## [1] TRUE
toString(c("Aug", 24, 1980))
x <- letters # assumed input, consistent with the outputs below

# basic printing
print(x)

# cat() prints without quotes or index
cat(x)
## a b c d e f g h i j k l m n o p q r s t u v w x y z

cat(x, sep = "-")
## a-b-c-d-e-f-g-h-i-j-k-l-m-n-o-p-q-r-s-t-u-v-w-x-y-z

cat(x, sep = "")
## abcdefghijklmnopqrstuvwxyz
You can also format the line width for printing long strings using the fill argument:
x <- "Today I am learning how to print strings."
y <- "Tomorrow I plan to learn about textual analysis."
z <- "The day after I will take a break and drink a beer."
cat(x, y, z, fill = 0)
cat(x, y, z, fill = 5)
version <- 3
# substitute integer
sprintf("This is R version:%d", version)
For floating-point numbers, use %f for standard notation, and %e or %E for exponential notation:
sprintf("%f", pi) # '%f' indicates 'fixed point' decimal notation
## [1] "3.141593"
## [1] "3.142"
## [1] "3"
sprintf("%05.1f", pi) # same as above but fill empty digits with zeros
## [1] "003.1"
## [1] "+3.141593"
## [1] "3.141593e+00"
## [1] "3.141593E+00"
To assess the number of elements in a string use length(), and to count the number of characters use nchar() (the example strings here are inferred from the outputs):

length("How many elements are in this string?")
## [1] 1

length(c("How", "many", "elements", "are", "in", "this", "string?"))
## [1] 7

nchar("How many characters are in this string?")
## [1] 39

nchar(c("How", "many", "characters", "are", "in", "this", "string?"))
## [1] 3 4 10 3 2 4 7
5.2 String Manipulation with Base R

Basic string manipulation typically includes case conversion, simple character and substring replacement, adding/removing whitespace, and performing set operations to compare similarities and differences between two character vectors. These operations can all be performed with base R functions; however, some operations (or at least their syntax) are simplified with the stringr package, which we will discuss in the next section. This section illustrates the base R string manipulation capabilities.
To convert case, use tolower() and toupper(). The x below is assumed, consistent with the output:

x <- "Learning To Manipulate Strings In R"

tolower(x)
## [1] "learning to manipulate strings in r"
To replace a character (or multiple characters) in a string you can use chartr(). For example (an illustrative call):

# replace 'A' with 'a'
chartr(old = "A", new = "a", x = "Alabama")
## [1] "alabama"
Note that chartr() replaces every identified letter, so the only time I use it is when I am certain that I want to change every possible occurrence of a letter.

To abbreviate strings you can use abbreviate() (the streets vector here is hypothetical, since the original was lost):

streets <- c("Main Street", "Elm Drive", "Cedar Lane")

# default abbreviations
abbreviate(streets)
Note that if you are working with U.S. states, R already has a pre-built vector
with state names (state.name). Also, there is a pre-built vector of abbreviated
state names (state.abb).
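For instance, a quick check of those built-in vectors:

state.name[1:4]
## [1] "Alabama"  "Alaska"   "Arizona"  "Arkansas"

state.abb[1:4]
## [1] "AL" "AK" "AZ" "AR"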
To extract or replace substrings in a character vector there are three primary base R functions to use: substr(), substring(), and strsplit(). The purpose of substr() is to extract and replace substrings with specified starting and stopping positions:
The alphabet object below is assumed, consistent with the outputs:

alphabet <- paste(LETTERS, collapse = "")

# extract the 18th character
substr(alphabet, start = 18, stop = 18)
## [1] "R"

# extract characters 18 through 24
substr(alphabet, start = 18, stop = 24)
## [1] "RSTUVWX"

# replace characters 19 through 24
substr(alphabet, start = 19, stop = 24) <- "RRRRRR"
alphabet
## [1] "ABCDEFGHIJKLMNOPQRRRRRRRYZ"

# substring() extracts from a start position to the end by default
alphabet <- paste(LETTERS, collapse = "") # reset
substring(alphabet, first = 18)
## [1] "RSTUVWXYZ"
strsplit() splits the elements of a character vector; for example, splitting the z string defined earlier on spaces:

strsplit(z, split = " ")
## [[1]]
## [1] "The" "day" "after" "I" "will" "take" "a" "break"
## [9] "and" "drink" "a" "beer."
a <- "Alabama-Alaska-Arizona-Arkansas-California"
strsplit(a, split = "-")
## [[1]]
## [1] "Alabama" "Alaska" "Arizona" "Arkansas" "California"
Note that the output of strsplit() is a list. To convert the output to a simple atomic vector, simply wrap it in unlist():

unlist(strsplit(a, split = "-"))
## [1] "Alabama"    "Alaska"     "Arizona"    "Arkansas"   "California"
5.3 String Manipulation with stringr

The stringr package was developed by Hadley Wickham to provide simple wrappers that make R's string functions more consistent and easier to use. To replicate the functions in this section you will need to install and load the stringr package:
# install stringr package
install.packages("stringr")
# load package
library(stringr)
There are three stringr functions that are closely related to their base R equivalents, but with a few enhancements:

- Concatenate with str_c()
- Number of characters with str_length()
- Substring with str_sub()
str_c() is equivalent to the paste() functions:
# same as paste0()
str_c("Learning", "to", "use", "the", "stringr", "package")
## [1] "Learningtousethestringrpackage"
# same as paste()
str_c("Learning", "to", "use", "the", "stringr", "package", sep = " ")
## [1] "Learning to use the stringr package"
# allows recycling
str_c(letters, " is for", "")
## [1] "a is for" "b is for" "c is for" "d is for" "e is for"
## [6] "f is for" "g is for" "h is for" "i is for" "j is for"
## [11] "k is for" "l is for" "m is for" "n is for" "o is for"
## [16] "p is for" "q is for" "r is for" "s is for" "t is for"
## [21] "u is for" "v is for" "w is for" "x is for" "y is for"
## [26] "z is for"
str_sub() is equivalent to substr() but also accepts negative positions, which index from the end of the string. The x below is assumed, consistent with the outputs:

x <- "Learning to use the stringr package"

# alternative indexing
str_sub(x, start = 1, end = 15)
## [1] "Learning to use"

# negative positions count back from the end
str_sub(x, start = -1)
## [1] "e"

# Replacement
str_sub(x, end = 15) <- "I know how to use"
x
## [1] "I know how to use the stringr package"
stringr also provides functionality for which base R has no specific function, such as character duplication:
str_dup("beer", times = 3)
## [1] "beerbeerbeer"
A common task of string processing is parsing text into individual words. Often, this results in words having blank spaces (whitespace) on either end of the word. str_trim() can be used to remove these spaces:
text <- c("Text ", " with", " whitespace ", " on", "both ", " sides ")
To add whitespace, or to pad a string, use str_pad(). You can also use str_pad() to pad a string with specified characters.

str_pad("beer", width = 10, side = "left")
## [1] "      beer"

str_pad("beer", width = 10, side = "right", pad = "!")
## [1] "beer!!!!!!"
5.4 Set Operations for Character Strings

There are also base R functions that allow for assessing the set union, intersection, difference, equality, and membership of two vectors.

To obtain the elements of the union of two character vectors, use union():
set_1 <- c("lagunitas", "bells", "dogfish", "summit", "odell")
set_2 <- c("sierra", "bells", "harpoon", "lagunitas", "founders")
union(set_1, set_2)
## [1] "lagunitas" "bells"     "dogfish"   "summit"    "odell"     "sierra"
## [7] "harpoon"   "founders"

To obtain the common elements of two character vectors, use intersect():

intersect(set_1, set_2)
## [1] "lagunitas" "bells"
To obtain the non-common elements, or the difference, of two character vectors use
setdiff():
# returns elements in set_1 not in set_2
setdiff(set_1, set_2)
## [1] "dogfish" "summit"  "odell"
To test if two vectors contain the same elements regardless of order use
setequal():
set_3 <- c("woody", "buzz", "rex")
set_4 <- c("woody", "andy", "buzz")
set_5 <- c("andy", "buzz", "woody")
setequal(set_3, set_4)
## [1] FALSE
setequal(set_4, set_5)
## [1] TRUE
To test if two character vectors are equal in content and order use
identical():
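The definitions below are assumed (chosen to be consistent with the outputs that follow):

set_6 <- c("woody", "andy", "buzz")
set_7 <- c("andy", "buzz", "woody")
set_8 <- c("woody", "andy", "buzz")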
identical(set_6, set_7)
## [1] FALSE
identical(set_6, set_8)
## [1] TRUE
To test whether an element is contained within a vector, use is.element() or %in% (the good and bad objects here are assumed, consistent with the outputs):

good <- "andy"
bad <- "sid"

is.element(good, set_8)
## [1] TRUE

good %in% set_8
## [1] TRUE

bad %in% set_8
## [1] FALSE
Chapter 6
Dealing with Regular Expressions

At first glance (and second, third, ...) the regex syntax can appear quite confusing. This section will provide you with the basic foundation of regex syntax; however, realize that there is a plethora of resources available that will give you far more detailed, and advanced, knowledge of regex syntax. To read more about the specifications and technicalities of regex in R you can find help at help(regex) or help(regexp).

6.1 Regex Syntax
6.1.1 Metacharacters
Fig. 6.1 Escape syntax for common metacharacters*

Metacharacter   Literal meaning     Escape syntax
.               period or dot       \\.
$               dollar sign         \\$
*               asterisk            \\*
+               plus sign           \\+
?               question mark       \\?
|               vertical bar        \\|
\\              double backslash    \\\\
^               caret               \\^
[               square bracket      \\[
{               curly brace         \\{
(               parenthesis         \\(

*adapted from Handling and Processing Strings in R (Sanchez, 2013)
The following examples show how to use the escape syntax to find and replace metacharacters. For information on the sub() and gsub() functions used in these examples, see the regex functions in Sect. 6.2.
# substitute $ with !
sub(pattern = "\\$", "\\!", "I love R$")
## [1] "I love R!"
6.1.2 Sequences
The sequence syntax provides shortcuts for matching common types of characters, such as digits or whitespace. The following example shows how to use the sequence syntax to find and replace sequences. For information on the gsub() function used in this example, see Sect. 6.2.
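For reference, the standard sequence shortcuts in R's regex flavor are:

\\d  match a digit              \\D  match a non-digit
\\w  match a word character     \\W  match a non-word character
\\s  match a whitespace         \\S  match a non-whitespace
\\b  match a word boundary      \\B  match a non-word boundary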
# substitute any digit with an underscore
gsub(pattern = "\\d", "_", "I'm working in RStudio v.0.99.484")
## [1] "I'm working in RStudio v._.__.___"
To match one of several characters in a specified set we can enclose the characters of concern within square brackets [ ]. In addition, to match any character not in a specified character set we can include the caret ^ at the beginning of the set within the brackets. The following displays the general syntax for common character classes, but these can be altered easily, as shown in the examples that follow (Fig. 6.3):
Character class   Description
[aeiou]           match any specified lowercase vowel
[AEIOU]           match any specified uppercase vowel
[0123456789]      match any specified numeric value
[0-9]             match any range of specified numeric values
[a-z]             match any range of lowercase letters
[A-Z]             match any range of uppercase letters
[a-zA-Z0-9]       match any of the above
[^aeiou]          match anything other than a lowercase vowel
[^0-9]            match anything other than the specified numeric values

*adapted from Handling and Processing Strings in R (Sanchez, 2013)
The following example shows how to use the character class syntax to match character classes. For information on the grep() function used in this example, see Sect. 6.2.
x <- c("RStudio", "v.0.99.484", "2015", "09-22-2015", "grep vs. grepl")
Closely related to regex character classes are POSIX character classes, which are expressed in double brackets [[ ]] (Fig. 6.4).

The following example shows how to use the POSIX character class syntax. For information on the functions used in this example, see Sect. 6.2.
x <- "I like beer! #beer, @wheres_my_beer, I like R (v3.2.2) #rrrrrrr2015"
POSIX class    Description
[[:lower:]]    lowercase letters
[[:upper:]]    uppercase letters
[[:alpha:]]    alphabetic characters: [[:lower:]] + [[:upper:]]
[[:digit:]]    numeric values
[[:alnum:]]    alphanumeric characters: [[:alpha:]] + [[:digit:]]
[[:blank:]]    blank characters (space & tab)
[[:cntrl:]]    control characters
[[:punct:]]    punctuation characters: ! " # % & ' ( ) * + , - . / : ; etc.
[[:space:]]    space characters: tab, newline, vertical tab, space, etc.
[[:xdigit:]]   hexadecimal digits: 0-9 A B C D E F a b c d e f
[[:print:]]    printable characters: [[:alpha:]] + [[:punct:]] + space
[[:graph:]]    graphical characters: [[:alpha:]] + [[:punct:]]

*adapted from Handling and Processing Strings in R (Sanchez, 2013)
6.1.5 Quantifiers

When we want to match a certain number of characters that meet certain criteria we
can apply quantifiers to our pattern searches. The quantifiers we can use are (Fig. 6.5):

Fig. 6.5 Quantifiers*

Quantifier   Description
?            the preceding item is optional and will be matched at most once
*            the preceding item will be matched zero or more times
+            the preceding item will be matched one or more times
{n}          the preceding item is matched exactly n times
{n,}         the preceding item is matched n or more times
{n,m}        the preceding item is matched at least n times, but not more than m times

*adapted from Handling and Processing Strings in R (Sanchez, 2013)
The following provides examples to show how to use the quantifier syntax
to match a certain number of character patterns. For information on the grep
function used in this example visit the main regex functions page. Note that
state.name is a built-in data set within R that contains all the U.S. state names.
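A minimal sketch (the book's exact examples are not shown in this excerpt):
# find states containing two consecutive s's
grep(pattern = "s{2}", state.name, value = TRUE)
## [1] "Massachusetts" "Mississippi"   "Missouri"      "Tennessee"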
6.2 Regex Functions

Now that I've illustrated how R handles some of the most common regular expression
elements, it's time to present the functions you can use for working with regular
expressions. R contains a set of functions in the base package that we can use to
find pattern matches. Alternatively, the R package stringr also provides several
functions for regex operations. We will cover both of these alternatives.
The base R regex functions serve three primary purposes: pattern matching,
pattern replacement, and character splitting.
There are five functions that provide pattern matching capabilities. The three functions
that I provide examples for (grep(), grepl(), and regexpr()) are the
most commonly used. The primary difference between these three functions is the
output they provide. The two other functions, which I do not illustrate, are
gregexpr() and regexec(). These two functions provide similar capabilities as
regexpr() but with the output in list form.
To find a pattern in a character vector and to have the element values or indices
as the output use grep():
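A brief sketch with an assumed vector, since the original examples are not shown in this excerpt:
y <- c("I use R", "R is a language", "Python")
# element indices of matches
grep("R", y)
## [1] 1 2
# element values of matches
grep("R", y, value = TRUE)
## [1] "I use R"         "R is a language"
# grepl() returns a logical vector instead
grepl("R", y)
## [1]  TRUE  TRUE FALSE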
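The vector x that regexpr() operates on below is not shown in this excerpt; a definition like the following (an assumption reconstructed from the output) reproduces it:
x <- c("v.111", "0v.11", "00v.1", "000v.", "00000")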
regexpr("v.", x)
## [1] 1 2 3 4 -1
## attr(,"match.length")
## [1] 2 2 2 2 -1
## attr(,"useBytes")
## [1] TRUE
The output of regexpr() can be interpreted as follows. The first element provides
the starting position of the match in each element; a value of -1 means there
is no match. The second element (attribute match.length) provides the length
of the match. The third element (attribute useBytes) has a value TRUE, meaning
matching was done byte-by-byte rather than character-by-character.
There will be times when you want to split the elements of a character string into
separate elements. To divide the characters in a vector into individual components
use strsplit():
x <- paste(state.name[1:10], collapse = " ")
Similar to basic string manipulation, the stringr package also offers regex functionality.
In some cases stringr performs the same functions as certain base
R functions but with more consistent syntax. In other cases stringr offers additional
functionality that is not available in the base R functions.
# install stringr package
install.packages("stringr")
# load package
library(stringr)
To detect whether a pattern is present (or absent) in a string vector use
str_detect(). This function is a wrapper for grepl().
# use the built in data set 'state.name'
head(state.name)
## [1] "Alabama" "Alaska" "Arizona" "Arkansas" "California"
## [6] "Colorado"
To locate the occurrences of patterns stringr offers two options: (a) locate the
first matching occurrence or (b) locate all occurrences. To locate the position of the
first occurrence of a pattern in a string vector use str_locate(). The output provides
the starting and ending position of the first match found within each element.
x <- c("abcd", "a22bc1d", "ab3453cd46", "a1bc44d")
To locate the positions of all pattern match occurrences in a character vector use
str_locate_all(). The output provides a list the same length as the number
of elements in the vector. Each list item will provide the starting and ending positions
for each pattern match occurrence in its respective element.
# locate all sequences of 1 or more consecutive numbers
str_locate_all(x, "[0-9]+")
## [[1]]
## start end
##
## [[2]]
## start end
## [1,] 2 3
## [2,] 6 6
##
## [[3]]
## start end
## [1,] 3 6
## [2,] 9 10
##
## [[4]]
## start end
## [1,] 2 2
## [2,] 5 6
For extracting a string containing a pattern, stringr offers two primary options:
(a) extract the first matching occurrence or (b) extract all occurrences. To extract the
first occurrence of a pattern in a character vector use str_extract(). The output
will be the same length as the input vector and if no match is found the output will be
NA for that element.
y <- c("I use R #useR2014", "I use R and love R #useR2015", "Beer")
For replacing a string containing a pattern, stringr offers two options: (a)
replace the first matching occurrence or (b) replace all occurrences. To replace the
first occurrence of a pattern in a character vector use str_replace(). This function
is a wrapper for sub().
cities <- c("New York", "new new York", "New New New York")
cities
## [1] "New York" "new new York" "New New New York"
# case sensitive
str_replace(cities, pattern = "New", replacement = "Old")
## [1] "Old York" "new new York" "Old New New York"
# to deal with case sensitivities use Regex syntax in the 'pattern' argument
str_replace(cities, pattern = "[N]*[n]*ew", replacement = "Old")
## [1] "Old York" "Old new York" "Old New New York"
a <- "Alabama-Alaska-Arizona-Arkansas-California"
str_split(a, pattern = "-")
## [[1]]
## [1] "Alabama" "Alaska" "Arizona" "Arkansas" "California"
Note that the output of str_split() is a list. To convert the output to a simple
atomic vector simply wrap it in unlist():
unlist(str_split(a, pattern = "-"))
## [1] "Alabama" "Alaska" "Arizona" "Arkansas" "California"
Character strings are often considered semi-structured data. Text can be structured
in a specified field; however, the quality and consistency of the text input can be far
from structured. Consequently, managing and manipulating character strings can be
extremely tedious and unique to each data wrangling process. As a result, taking the
time to learn the nuances of dealing with character strings and regex functions can
provide a great return on investment; however, the functions and techniques required
will likely be greater than what I could offer here. So here are additional resources
that are worth reading and learning from:
Handling and Processing Strings in R (http://gastonsanchez.com/Handling_and_Processing_Strings_in_R.pdf)
stringr Package Vignette (https://cran.r-project.org/web/packages/stringr/vignettes/stringr.html)
Regular Expressions (http://www.regular-expressions.info/)
Chapter 7
Dealing with Factors
Factors are variables in R which take on a limited number of different values; such
variables are often referred to as categorical variables. One of the most important
uses of factors is in statistical modeling; since categorical variables enter into statistical
models such as lm and glm differently than continuous variables, storing data
as factors ensures that the modeling functions will treat such data correctly.
One can think of a factor as an integer vector where each integer has a label.1 In
fact, factors are built on top of integer vectors using two attributes: the class()
factor, which makes them behave differently from regular integer vectors, and the
levels(), which defines the set of allowed values.2
In this chapter I will cover the basics of dealing with factors, which includes
creating, converting, and inspecting factors; ordering levels; revaluing levels; and
dropping levels.
1 https://leanpub.com/rprogramming
2 http://adv-r.had.co.nz/Data-structures.html
When creating a factor we can control the ordering of the levels by using the
levels argument:
# when not specified the default puts order as alphabetical
gender <- factor(c("male", "female", "female", "male", "female"))
gender
## [1] male female female male female
## Levels: female male
# specifying order
gender <- factor(c("male", "female", "female", "male", "female"),
levels = c("male", "female"))
gender
## [1] male female female male female
## Levels: male female
We can also create ordinal factors in which a specific order is desired by using
the ordered = TRUE argument. This will be reflected in the output of the levels
as shown below in which low < middle < high:
ses <- c("low", "middle", "low", "low", "low", "low", "middle", "low", "middle",
"middle", "middle", "middle", "middle", "high", "high", "low", "middle",
"middle", "low", "high")
To recode factor levels I usually use the revalue() function from the plyr
package.
plyr::revalue(ses, c("low" = "small", "middle" = "medium", "high" = "large"))
## [1] small medium small small small small medium small medium medium
## [11] medium medium medium large large small medium medium small large
## Levels: small < medium < large
Note that using the :: notation allows you to access the revalue() function
without having to fully load the plyr package.
Chapter 8
Dealing with Dates

Real world data are often associated with dates and time; however, dealing with
dates accurately can appear to be a complicated task due to the variety in formats
and the need to account for time-zone differences and leap years. R has a range of
functions that allow you to work with dates and times. Furthermore, packages such
as lubridate make it easier to work with dates and times.
In this chapter I will introduce you to the basics of dealing with dates. This
includes printing the current date and time stamp, converting strings to dates,
extracting and manipulating parts of dates, creating date sequences, performing cal-
culations with dates, and dealing with time zone and daylight savings differences.
I end by offering additional resources for learning about and dealing with date and time data.
Sys.Date()
## [1] "2015-09-24"
Sys.time()
## [1] "2015-09-24 15:08:57 EDT"
now()
## [1] "2015-09-24 15:08:57 EDT"
When date and time data are imported into R they will often default to a character
string. This requires us to convert strings to dates. We may also have multiple strings
that we want to merge to create a date variable.
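The character vector converted below is assumed, reconstructed from the output:
x <- c("2015-07-01", "2015-08-01", "2015-09-01")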
as.Date(x)
## [1] "2015-07-01" "2015-08-01" "2015-09-01"
Note that the default date format is YYYY-MM-DD; therefore, if your string is
of a different format you must incorporate the format argument. There are multiple
formats that dates can be in; for a complete list of formatting code options in R type
?strftime in your console.
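# lubridate's parsing functions are named for the order of the date
# components; mdy() below parses month/day/year strings and requires
# the lubridate package to be loaded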
y <- c("07/01/2015", "07/01/2015", "07/01/2015")
mdy(y)
## [1] "2015-07-01 UTC" "2015-07-01 UTC" "2015-07-01 UTC"
Sometimes your date data are collected in separate elements. To convert these sepa-
rate data into one date object incorporate the ISOdate() function:
yr <- c("2012", "2013", "2014", "2015")
mo <- c("1", "5", "7", "2")
day <- c("02", "22", "15", "28")
Note that ISOdate() also has arguments to accept data for hours, minutes,
seconds, and time zone if you need to merge all these separate components.
To extract and manipulate individual elements of a date I typically use the
lubridate package due to its simplistic function syntax. The functions provided by
lubridate to perform extraction and manipulation of dates include (Fig. 8.2):
To extract an individual element of the date variable you simply use the accessor
function desired. Note that the accessor variables have additional arguments that
can be used to show the name of the date element in full or abbreviated form.
library(lubridate)
year(x)
## [1] 2015 2015 2015
To manipulate or change the values of date elements we simply use the accessor
function to extract the element of choice and then use the assignment function to
assign a new value.
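For example, to change the day of the month for each date in x (a sketch using the dates created above):
mday(x)
## [1] 1 1 1
mday(x) <- c(3, 10, 22)
x
## [1] "2015-07-03" "2015-08-10" "2015-09-22"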
To create a sequence of dates we can leverage the seq() function. As with numeric
vectors, you have to specify at least three of the four arguments (from, to, by, and
length.out).
seq(as.Date("2010-1-1"), as.Date("2015-1-1"), by = "years")
## [1] "2010-01-01" "2011-01-01" "2012-01-01" "2013-01-01" "2014-01-01"
## [6] "2015-01-01"
Using the lubridate package is very similar. The only difference is lubridate
changes the way you specify the first two arguments in the seq() function.
library(lubridate)
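A sketch of the lubridate equivalent:
seq(ymd("2010-1-1"), ymd("2015-1-1"), by = "years")
## [1] "2010-01-01" "2011-01-01" "2012-01-01" "2013-01-01" "2014-01-01"
## [6] "2015-01-01"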
Creating sequences with time is very similar; however, we need to make sure our
date object is POSIXct rather than just a Date object (as produced by as.Date):
seq(as.POSIXct("2015-1-1 0:00"), as.POSIXct("2015-1-1 12:00"), by = "hour")
## [1] "2015-01-01 00:00:00 EST" "2015-01-01 01:00:00 EST"
## [3] "2015-01-01 02:00:00 EST" "2015-01-01 03:00:00 EST"
## [5] "2015-01-01 04:00:00 EST" "2015-01-01 05:00:00 EST"
## [7] "2015-01-01 06:00:00 EST" "2015-01-01 07:00:00 EST"
## [9] "2015-01-01 08:00:00 EST" "2015-01-01 09:00:00 EST"
## [11] "2015-01-01 10:00:00 EST" "2015-01-01 11:00:00 EST"
## [13] "2015-01-01 12:00:00 EST"
# with lubridate
seq(ymd_hm("2015-1-1 0:00"), ymd_hm("2015-1-1 12:00"), by = "hour")
## [1] "2015-01-01 00:00:00 UTC" "2015-01-01 01:00:00 UTC"
## [3] "2015-01-01 02:00:00 UTC" "2015-01-01 03:00:00 UTC"
## [5] "2015-01-01 04:00:00 UTC" "2015-01-01 05:00:00 UTC"
## [7] "2015-01-01 06:00:00 UTC" "2015-01-01 07:00:00 UTC"
## [9] "2015-01-01 08:00:00 UTC" "2015-01-01 09:00:00 UTC"
## [11] "2015-01-01 10:00:00 UTC" "2015-01-01 11:00:00 UTC"
## [13] "2015-01-01 12:00:00 UTC"
Since R stores date and time objects as numbers, this allows you to perform various
calculations such as logical comparisons, addition, subtraction, and working with
durations.
x <- Sys.Date()
x
## [1] "2015-09-26"
y <- as.Date("2015-09-11")
x > y
## [1] TRUE
x - y
## Time difference of 15 days
The nice thing about the date/time classes is that they keep track of leap years,
leap seconds, daylight savings, and time zones. Use OlsonNames() for a full list
of acceptable time zone specifications.
# last leap year
x <- as.Date("2012-03-1")
y <- as.Date("2012-02-28")
x - y
## Time difference of 2 days
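The date/time classes also track time zones: the same clock time in two different zones is not the same instant. The setup for the comparison that follows is not shown in this excerpt; definitions like these (an assumption reconstructed from the 3-hour difference below) reproduce it:
x <- as.POSIXct("2015-09-22 01:00:00", tz = "US/Eastern")
y <- as.POSIXct("2015-09-22 01:00:00", tz = "US/Pacific")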
y == x
## [1] FALSE
y - x
## Time difference of 3 hours
Similarly, the same functionality exists with the lubridate package with the
only difference being the accessor function(s) used.
library(lubridate)
x <- now()
x
## [1] "2015-09-26 10:08:18 EDT"
y <- ymd("2015-09-11")
x > y
## [1] TRUE
x - y
## Time difference of 15.5891 days
y + days(4)
## [1] "2015-09-15 UTC"
x - hours(4)
## [1] "2015-09-26 06:08:18 EDT"
We can also deal with time spans by using the duration functions in lubridate.
Durations simply measure the time span between start and end dates. Using base R
date functions for duration calculations is tedious and often results in wrong
measurements. lubridate provides simplistic syntax to calculate durations with
the desired measurement (seconds, minutes, hours, etc.).
# create new duration (represented in seconds)
new_duration(60)
## [1] "60s"
dhours(1)
## [1] "3600 s (~1 hours)"
dyears(1)
## [1] "31536000 s (~365 days)"
x + dhours(10)
## [1] "2015-09-22 22:00:00 UTC"
8.6 Dealing with Time Zones and Daylight Savings

To change the time zone for a date/time we can use the with_tz() function, which
will also update the clock time to align with the updated time zone:
library(lubridate)
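A sketch, with the time object assumed from the printed value that follows:
time <- ymd_hms("2015-09-26 10:30:32", tz = "US/Eastern")
# convert the EDT time to the Pacific time zone
with_tz(time, tzone = "US/Pacific")
## [1] "2015-09-26 07:30:32 PDT"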
If the time zone is incorrect or for some reason you need to change the time zone
without changing the clock time you can force it with force_tz():
time
## [1] "2015-09-26 10:30:32 EDT"
We can also easily work with daylight savings times to eliminate impacts on
date/time calculations:
# most recent daylight savings time
ds <- ymd_hms("2015-03-08 01:59:59", tz = "US/Eastern")
# adding a duration of 2 hours reflects the actual daylight savings clock
# time that occurred 2 hours after 01:59:59 on 2015-03-08
ds + dhours(2)
## [1] "2015-03-08 04:59:59 EDT"
# adding a period of two hours reflects the clock time that normally occurs
# after 01:59:59 and is not influenced by daylight savings time
ds + hours(2)
## [1] "2015-03-08 03:59:59 EDT"
For additional resources on learning and dealing with dates I recommend the
following:
Dates and times made easy with lubridate (http://www.jstatsoft.org/article/view/v040i03)
Date and time classes in R (https://www.r-project.org/doc/Rnews/Rnews_2004-1.pdf)
Part III
Managing Data Structures in R
"Smart data structures and dumb code works a lot better than the other way around."
Eric S. Raymond
In the previous section I illustrated how to work with different types of data; however,
we primarily focused on data in a one-dimensional structure. In typical data analyses
you often need more than one dimension. Many data sets can contain variables of
different length and/or types of values (i.e. numeric vs. character). Furthermore, many
statistical and mathematical calculations are based on matrices. R provides multiple
types of data structures to deal with these different needs.
The basic data structures in R can be organized by their dimensionality (1D,
2D, ..., nD) and their likeness (homogeneous vs. heterogeneous). This results in
five data structure types most often used in data analysis, and almost all other
objects in R are built from these foundational types:

Dimension   Homogeneous     Heterogeneous
1D          Atomic vector   List
2D          Matrix          Data frame
nD          Array
In this section I will cover the basics of these data structures. I have not had the need
to use multi-dimensional arrays; therefore, the topics I will go into detail on
include vectors, lists, matrices, and data frames. These types represent the most
commonly used data structures for day-to-day analyses. For each data structure I
will illustrate how to create the structure, add additional elements to a pre-existing
structure, add attributes to structures, and how to subset the various data structures.
Lastly, I will cover how to deal with missing values in data structures. Consequently,
this section will provide a robust understanding of managing various forms of data
sets depending on dimensionality needs.
Chapter 9
Data Structure Basics
Prior to jumping into the data structures, it's beneficial to understand two components
of data structures: the structure and the attributes.
Given an object, the best way to understand what data structure it represents is to
use the structure function str(). str() stands for structure and provides a
compact display of the internal structure of an R object.
# different data structures
vector <- 1:10
list <- list(item1 = 1:10, item2 = LETTERS[1:18])
matrix <- matrix(1:12, nrow = 4)
df <- data.frame(item1 = 1:18, item2 = LETTERS[1:18])
str(list)
## List of 2
## $ item1: int [1:10] 1 2 3 4 5 6 7 8 9 10
## $ item2: chr [1:18] "A" "B" "C" "D"
str(matrix)
## int [1:4, 1:3] 1 2 3 4 5 6 7 8 9 10
str(df)
## 'data.frame': 18 obs. of 2 variables:
## $ item1: int 1 2 3 4 5 6 7 8 9 10
## $ item2: Factor w/ 18 levels "A","B","C","D",..: 1 2 3 4 5 6 7 8 9 10
9.2 Attributes
R objects can have attributes, which are like metadata for the object. These meta-
data can be very useful in that they help to describe the object. For example, column
names on a data frame help to tell us what data are contained in each of the columns.
Some examples of R object attributes are:
names, dimnames
dimensions (e.g. matrices, arrays)
class (e.g. integer, numeric)
length
other user-dened attributes/metadata
Attributes of an object (if any) can be accessed using the attributes() func-
tion. Not all R objects contain attributes, in which case the attributes() func-
tion returns NULL.
# assess attributes of an object
attributes(df)
## $names
## [1] "item1" "item2"
##
## $row.names
## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
##
## $class
## [1] "data.frame"
attributes(matrix)
## $dim
## [1] 4 3
length(df)
## [1] 2
This chapter only shows you functions to assess these attributes. In the chapters
that follow more details are provided on how to view and create attributes for each
type of data structure.
Chapter 10
Managing Vectors
The basic structure in R is the vector. A vector is a sequence of data elements of the
same basic type: integer, double, logical, or character.1 The one-dimensional exam-
ples illustrated in the previous section are considered vectors. In this chapter I will
illustrate how to create vectors, add additional elements to pre-existing vectors, add
attributes to vectors, and subset vectors.
The colon : operator can be used to create a vector of integers between two speci-
fied numbers or the c() function can be used to create vectors of objects by concat-
enating elements together:
# integer vector
w <- 8:17
w
## [1] 8 9 10 11 12 13 14 15 16 17
# double vector
x <- c(0.5, 0.6, 0.2)
x
## [1] 0.5 0.6 0.2
# logical vector
y1 <- c(TRUE, FALSE, FALSE)
y1
## [1] TRUE FALSE FALSE
1 There are two additional vector types which I will not discuss: complex and raw.
# Character vector
z <- c("a", "b", "c")
z
## [1] "a" "b" "c"
You can also use the as.vector() function to initialize vectors or change the
vector type:
v <- as.vector(8:17)
v
## [1] 8 9 10 11 12 13 14 15 16 17
All elements of a vector must be the same type, so when you attempt to combine
different types of elements they will be coerced to the most flexible type possible:
# numerics are turned to characters
str(c("a", "b", "c", 1, 2, 3))
## chr [1:6] "a" "b" "c" "1" "2" "3"
# logicals are also turned to characters
str(c("A", "B", "C", TRUE, FALSE))
## chr [1:5] "A" "B" "C" "TRUE" "FALSE"
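To add elements to a pre-existing vector we can use the c() function; the vector v1 used below is assumed (reconstructed from the output) to be:
v1 <- 8:17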
c(v1, 18:22)
## [1] 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
# same as
c(v1, c(18, c(19, c(20, c(21:22)))))
## [1] 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
10.3 Adding Attributes to Vectors
The attributes that you can add to vectors include names and comments. If we
continue with our vector v1 we can see that the vector currently has no attributes:
attributes(v1)
## NULL
We can add names to vectors using two approaches. The first uses names() to
assign names to each element of the vector. The second approach is to assign names
when creating the vector.
# assigning names to a pre-existing vector
names(v1) <- letters[1:length(v1)]
v1
## a b c d e f g h i j
## 8 9 10 11 12 13 14 15 16 17
attributes(v1)
## $names
## [1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j"
We can also add comments to vectors to act as a note to the user. This does not change
how the vector behaves; rather, it simply acts as a form of metadata for the vector.
comment(v1) <- "This is a comment on a vector"
v1
## a b c d e f g h i j
## 8 9 10 11 12 13 14 15 16 17
attributes(v1)
## $names
## [1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j"
##
## $comment
## [1] "This is a comment on a vector"
10.4 Subsetting Vectors

The four main ways to subset a vector include combining square brackets [ ] with:
Positive integers
Negative integers
Logical values
Names
You can also subset with double brackets [[ ]] for simplifying subsets.
Subsetting with positive integers returns the elements at the specified positions:
v1
## a b c d e f g h i j
## 8 9 10 11 12 13 14 15 16 17
v1[2]
## b
## 9
v1[2:4]
## b c d
## 9 10 11
v1[c(2, 4, 6, 8)]
## b d f h
## 9 11 13 15
Subsetting with negative integers will omit the elements at the specified positions:
v1[-1]
## b c d e f g h i j
## 9 10 11 12 13 14 15 16 17
v1[-c(2, 4, 6, 8)]
## a c e g i j
## 8 10 12 14 16 17
Subsetting with logical values will select the elements where the corresponding
logical value is TRUE:
v1[c(TRUE, FALSE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE, TRUE)]
## a c e f g j
## 8 10 12 13 14 17
Subsetting with names will return the elements with the matching names specified:
v1["b"]
## b
## 9
It's also important to understand the difference between simplifying and preserving
when subsetting. Simplifying subsets returns the simplest possible data structure
that can represent the output. Preserving subsets keeps the structure of the output
the same as the input.
For vectors, subsetting with single brackets [ ] preserves while subsetting with
double brackets [[ ]] simplifies. The change you will notice when simplifying
vectors is the removal of names.
v1[1]
## a
## 8
v1[[1]]
## [1] 8
Chapter 11
Managing Lists
A list is an R structure that allows you to combine elements of different types and
lengths. This can include a list embedded within a list. Many statistical outputs are
provided as a list as well; therefore, it's critical to understand how to work with lists.
In this chapter I will illustrate how to create lists, add additional elements to
pre-existing lists, add attributes to lists, and subset lists.
To create a list we can use the list() function. Note how each of the four list items
below are of different classes (integer, character, logical, and numeric) and different
lengths.
l <- list(1:3, "a", c(TRUE, FALSE, TRUE), c(2.5, 4.2))
str(l)
## List of 4
## $ : int [1:3] 1 2 3
## $ : chr "a"
## $ : logi [1:3] TRUE FALSE TRUE
## $ : num [1:2] 2.5 4.2
To add additional list components to a list we can leverage the list() and
append() functions. We can illustrate with the following list.
l1 <- list(1:3, "a", c(TRUE, FALSE, TRUE))
str(l1)
## List of 3
## $ : int [1:3] 1 2 3
## $ : chr "a"
## $ : logi [1:3] TRUE FALSE TRUE
If we add the new elements with list() it will create a list of two components,
component 1 will be a nested list of the original list and component 2 will be the
new elements added:
l2 <- list(l1, c(2.5, 4.2))
str(l2)
## List of 2
## $ :List of 3
## ..$ : int [1:3] 1 2 3
## ..$ : chr "a"
## ..$ : logi [1:3] TRUE FALSE TRUE
## $ : num [1:2] 2.5 4.2
To simply add a fourth list component without creating nested lists we use the
append() function:
l3 <- append(l1, list(c(2.5, 4.2)))
str(l3)
## List of 4
## $ : int [1:3] 1 2 3
## $ : chr "a"
## $ : logi [1:3] TRUE FALSE TRUE
## $ : num [1:2] 2.5 4.2
Alternatively, we can also add a new list component by utilizing the $ sign and
naming the new item:
l3$item4 <- "new list item"
str(l3)
## List of 5
## $ : int [1:3] 1 2 3
## $ : chr "a"
## $ : logi [1:3] TRUE FALSE TRUE
## $ : num [1:2] 2.5 4.2
## $ item4: chr "new list item"
To add additional values to a list item you need to subset for that specific list item
and then you can use the c() function to add the additional elements to that list
item:
l1[[1]] <- c(l1[[1]], 4:6)
str(l1)
## List of 3
## $ : int [1:6] 1 2 3 4 5 6
## $ : chr "a"
## $ : logi [1:3] TRUE FALSE TRUE
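The character item can be extended the same way; this step, assumed from the names output further below, grows item 2:
l1[[2]] <- c(l1[[2]], "dding", "to a", "list")
str(l1)
## List of 3
## $ : int [1:6] 1 2 3 4 5 6
## $ : chr [1:4] "a" "dding" "to a" "list"
## $ : logi [1:3] TRUE FALSE TRUE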
11.3 Adding Attributes to Lists

The attributes that you can add to lists include names, general comments, and
specific list item comments. Currently, our l1 list has no attributes:
attributes(l1)
## NULL
We can add names to lists in two ways. First, we can use names() to assign
names to list items in a pre-existing list. Second, we can add names to a list when
we are creating a list.
# adding names to a pre-existing list
names(l1) <- c("item1", "item2", "item3")
str(l1)
## List of 3
## $ item1: int [1:6] 1 2 3 4 5 6
## $ item2: chr [1:4] "a" "dding" "to a" "list"
## $ item3: logi [1:3] TRUE FALSE TRUE
attributes(l1)
## $names
## [1] "item1" "item2" "item3"
11.4 Subsetting Lists

"If list x is a train carrying objects, then x[[5]] is the object in car 5; x[4:6] is a train
of cars 4-6." (@RLangTip)
To subset lists we can utilize the single bracket [ ], double brackets [[ ]], and
dollar sign $ operators. Each approach provides a specific purpose and can be combined
in different ways to achieve the following subsetting objectives:
Subset list and preserve output as a list
Subset list and simplify output
Subset list to get elements out of a list
Subset list with a nested list
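The list l2 used in the examples that follow is not defined in this excerpt; a definition like this one (reconstructed from the outputs) is assumed:
l2 <- list(item1 = 1:3,
           item2 = letters[1:5],
           item3 = c(TRUE, FALSE, TRUE, TRUE),
           item4 = c(2.5, 4.2))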
To extract one or more list items while preserving1 the output in list format use the
[ ] operator:
# extract first list item
l2[1]
## $item1
## [1] 1 2 3
1 It's important to understand the difference between simplifying and preserving subsetting.
Simplifying subsets returns the simplest possible data structure that can represent the output.
Preserving subsets keeps the structure of the output the same as the input. See Hadley Wickham's
section on Simplifying vs. Preserving Subsetting to learn more.
To extract one or more list items while simplifying the output use the [[ ]] or $
operator:
# extract first list item and simplify to a vector
l2[[1]]
## [1] 1 2 3
One thing that differentiates the [[ operator from the $ is that the [[ operator can
be used with computed indices. The $ operator can only be used with literal names.
To extract individual elements out of a specific list item combine the [[ (or $)
operator with the [ operator:
# extract third element from the second list item
l2[[2]][3]
## [1] "c"
If you have nested lists you can expand the ideas above to extract items and elements.
We'll use the following list l3, which has a nested list in item 2.
l3 <- list(item1 = 1:3,
item2 = list(item2a = letters[1:5],
item3b = c(T, F, T, T)))
str(l3)
## List of 2
## $ item1: int [1:3] 1 2 3
## $ item2:List of 2
## ..$ item2a: chr [1:5] "a" "b" "c" "d" ...
## ..$ item3b: logi [1:4] TRUE FALSE TRUE TRUE
If the goal is to subset l3 to extract the nested list item item2a from item2,
we can perform this multiple ways.
# preserve the output as a list
l3[[2]][1]
## $item2a
## [1] "a" "b" "c" "d" "e"
The underlying structure of this matrix is simply an integer vector with an added
2 × 3 dimension attribute.
str(m1)
## int [1:2, 1:3] 1 2 3 4 5 6
attributes(m1)
## $dim
## [1] 2 3
Matrices can also contain character values. Whether a matrix contains data that
are of numeric or character type, all the elements must be of the same class.
# a character matrix
m2 <- matrix(letters[1:6], nrow = 2, ncol = 3)
m2
## [,1] [,2] [,3]
## [1,] "a" "c" "e"
## [2,] "b" "d" "f"
Matrices can also be created using the column-bind cbind() and row-bind
rbind() functions. However, keep in mind that the vectors being bound together
must be of equal length and mode.
v1 <- 1:4
v2 <- 5:8
cbind(v1, v2)
## v1 v2
## [1,] 1 5
## [2,] 2 6
## [3,] 3 7
## [4,] 4 8
rbind(v1, v2)
## [,1] [,2] [,3] [,4]
## v1 1 2 3 4
## v2 5 6 7 8
We can leverage the cbind() and rbind() functions for adding onto matrices
as well. Again, it's important to keep in mind that the vectors being bound
must be of equal length and mode to the pre-existing matrix.
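A small sketch using the v1 and v2 vectors from above:
m <- cbind(v1, v2)
# add another column
cbind(m, v3 = 9:12)
##      v1 v2 v3
## [1,]  1  5  9
## [2,]  2  6 10
## [3,]  3  7 11
## [4,]  4  8 12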
12.3 Adding Attributes to Matrices
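The matrix m2 used in this section is assumed (reconstructed from the outputs) to have been redefined as:
m2 <- matrix(1:12, nrow = 4, ncol = 3)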
# the dimension attribute shows this matrix has 4 rows and 3 columns
attributes(m2)
## $dim
## [1] 4 3
However, matrices can also have additional attributes such as row names, column
names, and comments. Adding names can be done individually, meaning we can
add row names or column names separately.
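Row names can be added with rownames(); this step is assumed from the attributes output below:
rownames(m2) <- c("row1", "row2", "row3", "row4")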
# attributes displayed will now show the dimension, list the row names
# and will show the column names as NULL
attributes(m2)
## $dim
## [1] 4 3
##
## $dimnames
## $dimnames[[1]]
## [1] "row1" "row2" "row3" "row4"
##
## $dimnames[[2]]
## NULL
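Column names can be added in the same fashion; this step, assumed from the printed matrix below, names the columns:
colnames(m2) <- c("col1", "col2", "col3")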
Another option is to use the dimnames() function. To add row names you
assign the names to dimnames(m2)[[1]] and to add column names you assign
the names to dimnames(m2)[[2]].
dimnames(m2)[[1]] <- c("row_1", "row_2", "row_3", "row_4")
m2
## col1 col2 col3
## row_1 1 5 9
## row_2 2 6 10
## row_3 3 7 11
## row_4 4 8 12
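The same function renames columns by assigning to dimnames(m2)[[2]] (a step assumed from the attributes output further below):
dimnames(m2)[[2]] <- c("col_1", "col_2", "col_3")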
Lastly, similar to lists and vectors, you can add a comment attribute to a matrix.
comment(m2) <- "adding a comment to a matrix"
attributes(m2)
## $dim
## [1] 4 3
##
## $dimnames
## $dimnames[[1]]
## [1] "row_1" "row_2" "row_3" "row_4"
##
## $dimnames[[2]]
## [1] "col_1" "col_2" "col_3"
##
##
## $comment
## [1] "adding a comment to a matrix"
12.4 Subsetting Matrices

To subset matrices we use the [ operator; however, since matrices have two dimensions
we need to incorporate subsetting arguments for both the row and column dimensions.
A generic form of matrix subsetting looks like: matrix[rows, columns].
We can illustrate with matrix m2:
m2
## col_1 col_2 col_3
## row_1 1 5 9
## row_2 2 6 10
## row_3 3 7 11
## row_4 4 8 12
Note that subsetting matrices with the [ operator will simplify the results to the
lowest possible dimension. To avoid this you can introduce the drop = FALSE
argument:
# simplifying results in a named vector
m2[, 2]
## row_1 row_2 row_3 row_4
## 5 6 7 8
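A sketch of the preserving alternative:
# preserving the matrix structure
m2[, 2, drop = FALSE]
##       col_2
## row_1     5
## row_2     6
## row_3     7
## row_4     8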
Chapter 13
Managing Data Frames

A data frame is the most common way of storing data in R and, generally, is the data
structure most often used for data analyses. Under the hood, a data frame is a list of
equal-length vectors. Each element of the list can be thought of as a column and the
length of each element of the list is the number of rows. As a result, data frames can
store different classes of objects in each column (i.e. numeric, character, factor).
In essence, the easiest way to think of a data frame is as an Excel worksheet that
contains columns of different types of data but are all of equal length rows. In this
chapter I will illustrate how to create data frames, add additional elements to
pre-existing data frames, add attributes to data frames, and subset data frames.
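The data frame df used throughout this chapter is not defined in this excerpt; a definition like this one (reconstructed from the outputs) is assumed:
df <- data.frame(col1 = 1:3,
                 col2 = c("this", "is", "text"),
                 col3 = c(TRUE, FALSE, TRUE),
                 col4 = c(2.5, 4.2, pi),
                 stringsAsFactors = FALSE)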
# number of rows
nrow(df)
## [1] 3
# number of columns
ncol(df)
## [1] 4
We can also convert pre-existing structures to a data frame. The following illus-
trates how we can turn multiple vectors, a list, or a matrix into a data frame:
v1 <- 1:3
v2 <-c("this", "is", "text")
v3 <- c(TRUE, FALSE, TRUE)
as.data.frame(l)
## item1 item2 item3
## 1 1 this 2.5
## 2 2 is 4.2
## 3 3 text 5.1
as.data.frame(m1)
## V1 V2 V3
## 1 1 5 9
## 2 2 6 10
## 3 3 7 11
## 4 4 8 12
We can leverage the cbind() function for adding columns to a data frame. Note
that one of the objects being combined must already be a data frame otherwise
cbind() could produce a matrix.
df
## col1 col2 col3 col4
## 1 1 this TRUE 2.500000
## 2 2 is FALSE 4.200000
## 3 3 text TRUE 3.141593
We can also use the rbind() function to add data frame rows together.
However, severe caution should be taken because this can cause changes in the
classes of the columns. For instance, our data frame df currently consists of an
integer, character, logical, and numeric variables.
df
## col1 col2 col3 col4
## 1 1 this TRUE 2.500000
## 2 2 is FALSE 4.200000
If we attempt to add a row using rbind() and c() it converts all columns to a
character class. This is because all elements in the vector created by c() must be of
the same class so they are all coerced to the character class which then coerces all
the variables in the data frame to the character class.
df2 <- rbind(df, c(4, "R", F, 1.1))
df2
## col1 col2 col3 col4
## 1 1 this TRUE 2.5
## 2 2 is FALSE 4.2
## 3 3 text TRUE 3.14159265358979
## 4 4 R FALSE 1.1
str(df2)
## 'data.frame': 4 obs. of 4 variables:
## $ col1: chr "1" "2" "3" "4"
## $ col2: chr "this" "is" "text" "R"
## $ col3: chr "TRUE" "FALSE" "TRUE" "FALSE"
## $ col4: chr "2.5" "4.2" "3.14159265358979" "1.1"
To add rows appropriately, we need to convert the items being added to a data
frame and make sure the columns are the same class as the original data frame.
adding_df <- data.frame(col1 = 4,
col2 = "R",
col3 = FALSE,
col4 = 1.1,
stringsAsFactors = FALSE)
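Binding the converted row now preserves the column classes (a sketch):
df3 <- rbind(df, adding_df)
str(df3)
## 'data.frame': 4 obs. of 4 variables:
## $ col1: num 1 2 3 4
## $ col2: chr "this" "is" "text" "R"
## $ col3: logi TRUE FALSE TRUE FALSE
## $ col4: num 2.5 4.2 3.14 1.1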
There are better ways to join data frames together than to use cbind() and
rbind(). These are covered later on in the transforming your data with dplyr
chapter.
13.3 Adding Attributes to Data Frames
Similar to matrices, data frames will have a dimension attribute. In addition, data
frames can also have additional attributes such as row names, column names, and
comments. We can illustrate with data frame df.
# basic data frame
df
## col1 col2 col3 col4
## 1 1 this TRUE 2.500000
## 2 2 is FALSE 4.200000
## 3 3 text TRUE 3.141593
dim(df)
## [1] 3 4
attributes(df)
## $names
## [1] "col1" "col2" "col3" "col4"
##
## $row.names
## [1] 1 2 3
##
## $class
## [1] "data.frame"
Currently df does not have row names but we can add them with
rownames():
# add row names
rownames(df) <- c("row1", "row2", "row3")
df
## col1 col2 col3 col4
## row1 1 this TRUE 2.500000
## row2 2 is FALSE 4.200000
## row3 3 text TRUE 3.141593
attributes(df)
## $names
## [1] "col1" "col2" "col3" "col4"
##
## $row.names
## [1] "row1" "row2" "row3"
##
## $class
## [1] "data.frame"
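The column names themselves can be changed by assigning to the names attribute; a renaming step like this one (assumed from the attributes output that follows) accounts for the dotted column names below:
# change column names
names(df) <- c("col.1", "col.2", "col.3", "col.4")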
Lastly, just like vectors, lists, and matrices, we can add a comment to a data
frame without affecting how it operates.
# adding a comment attribute
comment(df) <- "adding a comment to a data frame"
attributes(df)
## $names
## [1] "col.1" "col.2" "col.3" "col.4"
##
## $row.names
## [1] "row1" "row2" "row3"
##
## $class
## [1] "data.frame"
##
## $comment
## [1] "adding a comment to a data frame"
13.4 Subsetting Data Frames
Data frames possess the characteristics of both lists and matrices: if you subset with
a single vector, they behave like lists and will return the selected columns with all
rows; if you subset with two vectors, they behave like matrices and can be subset by
row and column:
df
## col.1 col.2 col.3 col.4
## row1 1 this TRUE 2.500000
## row2 2 is FALSE 4.200000
## row3 3 text TRUE 3.141593
Note that subsetting data frames with the [ operator will simplify the results to
the lowest possible dimension. To avoid this you can introduce the drop = FALSE
argument:
# simplifying results in a named vector
df[, 2]
## [1] "this" "is" "text"
Chapter 14
Dealing with Missing Values

A common task in data analysis is dealing with missing values. In R, missing values
are often represented by NA or some other value that represents missing values (i.e.
99). We can easily work with missing values, and in this chapter I illustrate how to
test for, recode, and exclude missing values in your data.
To identify missing values use is.na() which returns a logical vector with TRUE
in the element locations that contain missing values represented by NA. is.na()
will work on vectors, lists, matrices, and data frames.
# vector with missing data
x <- c(1:4, NA, 6:7, NA)
x
## [1] 1 2 3 4 NA 6 7 NA
is.na(x)
## [1] FALSE FALSE FALSE FALSE TRUE FALSE FALSE TRUE
To identify the location or the number of NAs we can leverage the which() and
sum() functions:
# identify location of NAs in vector
which(is.na(x))
## [1] 5 8
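# count the total number of NAs
sum(is.na(x))
## [1] 2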
To recode missing values, or to recode specific indicators that represent missing values,
we can use normal subsetting and assignment operations. For example, we can recode
missing values in vector x with the mean of the values in x by first subsetting the vector to
identify NAs and then assigning these elements a value. Similarly, if missing values are
represented by another value (i.e. 99) we can simply subset the data for the elements
that contain that value and then assign a desired value to those elements.
# recode missing values with the mean
x[is.na(x)] <- mean(x, na.rm = TRUE)
round(x, 2)
## [1] 1.00 2.00 3.00 4.00 3.83 6.00 7.00 3.83
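For the 99-coded example below, the data frame is assumed (reconstructed from the output) to be:
df <- data.frame(col1 = c(1:3, 99), col2 = c(2.5, 4.2, 99, 3.2))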
# change 99s to NAs
df[df == 99] <- NA
df
## col1 col2
## 1 1 2.5
## 2 2 4.2
## 3 3 NA
## 4 NA 3.2
We may also desire to subset our data to obtain complete observations: those
observations (rows) in our data that contain no missing data. We can do this a few
different ways, as the sketch below shows.
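Two common approaches:
# subset to rows with no missing values using complete.cases()
df[complete.cases(df), ]
##   col1 col2
## 1    1  2.5
## 2    2  4.2
# na.omit() performs the same operation
na.omit(df)
##   col1 col2
## 1    1  2.5
## 2    2  4.2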
Part IV
Importing, Scraping, and Exporting Data with R

Data are being generated by everything around us at all times. Every digital process
and social media exchange produces it. Systems, sensors, and mobile devices transmit
it. Countless databases collect it. Data are arriving from multiple sources at an
alarming rate and analysts and organizations are seeking ways to leverage these new
sources of information. Consequently, analysts need to understand how to get data
from these data sources. Furthermore, since analysis is often a collaborative effort,
analysts also need to know how to share their data.
This section covers the process of importing, scraping, and exporting data. First,
I cover the basics of importing tabular and spreadsheet data. Second, since modern
day data wrangling often includes scraping data from the flood of web-based data
becoming available to organizations and analysts, I cover the fundamentals of web
scraping with R. This includes importing spreadsheet data files stored online, scraping
HTML text and data tables, and leveraging APIs. Third, although getting data
into R is essential, I also cover the equally important process of getting data out of
R. Consequently, this section will give you a strong foundation for the different
ways to get your data into and out of R.
Chapter 15
Importing Data
The first step to any data analysis process is to get the data. Data can come from
many sources, but two of the most common include text and Excel files. This chapter
covers how to import data into R by reading data from common text files and Excel
spreadsheets. In addition, I cover how to load data from saved R object files for
holding or transferring data that has been processed in R. In addition to the commonly
used base R functions to perform data importing, I will also cover functions
from the popular readr, xlsx, and readxl packages.
Text files are a popular way to hold and exchange tabular data as almost any data
application supports exporting data to the CSV (or other text file) formats. Text file
formats use delimiters to separate the different elements in a line, and each line of
data is in its own line in the text file. Therefore, importing different kinds of text
files can follow a fairly consistent process once you've identified the delimiter.
There are two main groups of functions that we can use to read in text files:
Base R functions
readr package functions
To illustrate these functions let's work with a CSV file that is saved in our working
directory which looks like:
variable 1,variable 2,variable 3
10,beer,TRUE
25,wine,TRUE
8,cheese,FALSE
To read in the CSV file we can use read.csv(). Note that when we assess the
structure of the data set that we read in, variable.2 is automatically coerced to
a factor variable and variable.3 is automatically coerced to a logical variable.
Furthermore, any whitespace in the column names is replaced with a ".".
mydata = read.csv("mydata.csv")
mydata
## variable.1 variable.2 variable.3
## 1 10 beer TRUE
## 2 25 wine TRUE
## 3 8 cheese FALSE
str(mydata)
## 'data.frame': 3 obs. of 3 variables:
## $ variable.1: int 10 25 8
## $ variable.2: Factor w/ 3 levels "beer","cheese",..: 1 3 2
## $ variable.3: logi TRUE TRUE FALSE
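The second read, assumed from the structure output that follows, keeps character data as characters:
# setting stringsAsFactors = FALSE avoids the factor coercion
mydata_2 = read.csv("mydata.csv", stringsAsFactors = FALSE)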
str(mydata_2)
## 'data.frame': 3 obs. of 3 variables:
## $ variable.1: int 10 25 8
## $ variable.2: chr "beer" "wine" "cheese"
## $ variable.3: logi TRUE TRUE FALSE
There are multiple other arguments we can use for certain situations which we
illustrate below:
# provides same results as read.csv above
read.table("mydata.csv", sep=",", header = TRUE, stringsAsFactors = FALSE)
## variable.1 variable.2 variable.3
## 1 10 beer TRUE
## 2 25 wine TRUE
## 3 8 cheese FALSE
In addition to CSV files, there are other text files that read.table works with.
The primary difference is what separates the elements. For example, tab delimited
text files typically end with the .txt extension. You can also use the
read.delim() function as, similar to read.csv(), read.delim() is a wrapper of
read.table() with defaults set specifically for tab delimited files.
# reading in tab delimited text files
read.delim("mydata.txt")
## variable.1 variable.2 variable.3
## 1 10 beer TRUE
## 2 25 wine TRUE
## 3 8 cheese FALSE
Compared to the equivalent base functions, readr functions are around 10× faster.
They bring consistency to importing functions, they produce data frames in the
tbl_df format which are easier to view for large data sets, the default settings
remove the hassles of stringsAsFactors, and they have a more flexible
column specification.
To illustrate, we can use read_csv(), which is equivalent to base R's
read.csv() function. However, note that read_csv() maintains the full variable
name (whereas read.csv eliminates any spaces in variable names and fills them
with "."). Also, read_csv() automatically sets stringsAsFactors = FALSE,
which can be a controversial topic.1
library(readr)
mydata_3 = read_csv("mydata.csv")
mydata_3
## variable 1 variable 2 variable 3
## 1 10 beer TRUE
## 2 25 wine TRUE
## 3 8 cheese FALSE
str(mydata_3)
## Classes 'tbl_df', 'tbl' and 'data.frame': 3 obs. of 3 variables:
## $ variable 1: int 10 25 8
## $ variable 2: chr "beer" "wine" "cheese"
## $ variable 3: logi TRUE TRUE FALSE
1 An interesting biography of the stringsAsFactors argument can be found at http://simplystatistics.org/2015/07/24/stringsasfactors-an-unauthorized-biography/
Similar to base R, readr also offers functions to import .txt files
(read_delim()), fixed-width files (read_fwf()), general text files (read_table()),
and more.
These examples provide the basics for reading in text files. However, sometimes
even text files can offer unanticipated difficulties with their formatting. Both the
base R and readr functions offer many arguments to deal with different formatting
issues, and I suggest you take time to look at the help files for these functions to learn
more (i.e. ?read.table). Also, you will find more resources at the end of this
chapter for importing files.
15.2 Reading Data from Excel Files

With Excel still being the spreadsheet software of choice it's important to be able to
efficiently import and export data from these files. Often, R users will simply resort
to exporting the Excel file as a CSV file and then import it into R using read.csv;
however, this is far from efficient. This section will teach you how to eliminate the
CSV step and to import data directly from Excel using two different packages:
xlsx package
readxl package
Note that there are several packages available to connect R with Excel (i.e.
gdata, RODBC, XLConnect, RExcel, etc.); however, I am only going to cover
the two main packages that I use, which provide all the fundamental requirements
I've needed for dealing with Excel.
The xlsx package provides tools necessary to interact with Excel 2007 (and older)
files from R. Many of the benefits of the xlsx package come from being able to export
and format Excel files from R. Some of these capabilities will be covered in the
Exporting Data chapter; however, in this section we will simply cover importing
data from Excel with the xlsx package.
To illustrate, we'll use similar data from the previous section; however, saved as
an .xlsx file in our working directory. To import the Excel data we simply use the
read.xlsx() function:
library(xlsx)
read.xlsx("mydata.xlsx", sheetIndex = 1)
## variable.1 variable.2 variable.3
## 1 10 beer TRUE
## 2 25 wine TRUE
## 3 8 cheese FALSE
Since Excel is such a flexible spreadsheet software, people often make notes,
comments, headers, etc. at the beginning or end of files, which we may not want
to include. If we want to read in data that starts further down in the Excel worksheet
we can include the startRow argument. If we have a specific range of rows (or
columns) to include we can use the rowIndex (or colIndex) argument.
# a worksheet with comments in the first two lines
read.xlsx("mydata.xlsx", sheetName = "Sheet3")
## HEADER..COMPANY.A NA.
## 1 What if we want to disregard header text in Excel file? <NA>
## 2 variable 6 variable 7
## 3 200 Male
## 4 225 Female
## 5 400 Female
## 6 310 Male
We can also change the class type of the columns when we read them in:
# read in data without changing class type
mydata_sheet1.1 <- read.xlsx("mydata.xlsx", sheetName = "Sheet1")
str(mydata_sheet1.1)
## 'data.frame': 3 obs. of 3 variables:
## $ variable.1: num 10 25 8
## $ variable.2: Factor w/ 3 levels "beer","cheese",..: 1 3 2
## $ variable.3: logi TRUE TRUE FALSE
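The second read, with the column classes changed, is assumed to look like this sketch:
# change class types on import with the colClasses argument
mydata_sheet1.2 <- read.xlsx("mydata.xlsx", sheetName = "Sheet1",
                             colClasses = c("double", "character", "logical"))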
str(mydata_sheet1.2)
## 'data.frame': 3 obs. of 3 variables:
## $ variable.1: num 10 25 8
## $ variable.2: chr "beer" "wine" "cheese"
## $ variable.3: logi TRUE TRUE FALSE
Another useful argument is keepFormulas, which allows you to see the text
of any formulas in the Excel spreadsheet:
# by default keepFormulas is set to FALSE so only
# the formula output will be read in
read.xlsx("mydata.xlsx", sheetName = "Sheet4")
## Future.Value Rate Periods Present.Value
## 1 500 0.065 10 266.3630
## 2 600 0.085 6 367.7671
## 3 750 0.080 11 321.6621
## 4 1000 0.070 16 338.7346
readxl is one of the newest packages for accessing Excel data with R and was developed
by Hadley Wickham and the RStudio team, who also developed the readr
package. This package works with both legacy .xls formats and the modern xml-based
.xlsx format. Similar to readr, the readxl functions are based on a C++
library so they are extremely fast. Unlike most other packages that deal with Excel,
readxl has no external dependencies, so you can use it to read Excel data on just
about any platform. Additional benefits readxl provides include the ability to
load dates and times as POSIXct formatted dates, automatically dropping blank
columns, and returning outputs in the tbl_df format, which provides easier viewing
for large data sets.
To read in Excel data with readxl you use the read_excel() function,
which has very similar operations and arguments as xlsx. A few important differences
you will see below include: readxl will automatically convert date and
date-time variables to POSIXct formatted variables, character variables will not be
coerced to factors, and logical variables will be read in as integers.
library(readxl)
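The read step is assumed from the structure output that follows (the sheet name is an assumption):
mydata <- read_excel("mydata.xlsx", sheet = "Sheet5")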
str(mydata)
## Classes 'tbl_df', 'tbl' and 'data.frame': 3 obs. of 5 variables:
## $ variable 1: num 10 25 8
## $ variable 2: chr "beer" "wine" NA
## $ variable 3: num 1 1 0
## $ variable 4: POSIXct, format: "2015-11-20" NA
## $ variable 5: POSIXct, format: "2015-11-20 13:30:00" "2015-11-21 16:30:00"
The available arguments allow you to change the data as you import it. Some
examples are provided:
# change variable names by skipping the first row
# and using col_names to set the new names
read_excel("mydata.xlsx", sheet = "Sheet5", skip = 1,
col_names = paste("Var", 1:5))
## Var 1 Var 2 Var 3 Var 4 Var 5
## 1 10 beer 1 42328 2015-11-20 13:30:00
## 2 25 wine 1 NA 2015-11-21 16:30:00
## 3 8 <NA> 0 42330 2015-11-22 14:45:00
One unique difference between readxl and xlsx is how they deal with column
types. Whereas read.xlsx() allows you to change the column types to integer,
double, numeric, character, or logical, read_excel() restricts you to changing
column types to blank, numeric, date, or text. The blank option allows you to skip
columns; however, changing variable 3 to a logical TRUE/FALSE variable requires
a second step, shown after the next example.
mydata_ex <- read_excel("mydata.xlsx", sheet = "Sheet5",
col_types = c("numeric", "blank", "numeric",
"date", "blank"))
mydata_ex
## variable 1 variable 3 variable 4
## 1 10 1 2015-11-20
## 2 25 1 <NA>
## 3 8 0 2015-11-22
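The second step mentioned above would look like this sketch:
# convert variable 3 to a logical TRUE/FALSE variable
mydata_ex$`variable 3` <- as.logical(mydata_ex$`variable 3`)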
Sometimes you may need to save data or other R objects outside of your workspace.
You may want to share R data/objects with co-workers, transfer them between projects
or computers, or simply archive them. There are three primary ways that people tend
to save R data/objects: as .RData, .rda, or .rds files. The differences behind when
you use each will be covered in the Saving data as an R object file section. This section
simply shows how to load these data/object forms.
load("mydata.RData")
load(file = "mydata.rda")
In addition to text and Excel files, there are multiple other ways that data are stored
and exchanged. Commercial statistical software such as SPSS, SAS, Stata, and
Minitab often have the option to store data in a specific format for that software.
In addition, analysts commonly use databases to store large quantities of data. R has
good support to work with these additional options, which we did not cover here.
The following provides a list of additional resources to learn about data importing
for these specific cases:
R data import/export manual: https://cran.r-project.org/doc/manuals/R-data.html
Working with databases
MySQL: https://cran.r-project.org/web/packages/RMySQL/index.html
Oracle: https://cran.r-project.org/web/packages/ROracle/index.html
PostgreSQL: https://cran.r-project.org/web/packages/RPostgreSQL/index.html
SQLite: https://cran.r-project.org/web/packages/RSQLite/index.html
Open Database Connectivity databases: https://cran.rstudio.com/web/packages/
RODBC/
Importing data from commercial software2
The foreign package provides functions that help you load data les from
other programs such as SPSS, SAS, Stata, and others into R.
2 https://cran.r-project.org/doc/manuals/R-data.html#Importing-from-other-statistical-systems
Chapter 16
Scraping Data
Rapid growth of the World Wide Web has significantly changed the way we share,
collect, and publish data. Vast amounts of information are being stored online, both in
structured and unstructured forms. Regarding certain questions or research topics,
this has resulted in a new problem: no longer is the concern data scarcity and
inaccessibility but, rather, overcoming the tangled masses of online data.
Collecting data from the web is not an easy process as there are many technologies
used to distribute web content (i.e. HTML, XML, JSON). Therefore, dealing
with more advanced web scraping requires familiarity with accessing data stored in
these technologies via R. Through this chapter I will provide an introduction to
some of the fundamental tools required to perform basic web scraping. This includes
importing spreadsheet data files stored online, scraping HTML text, scraping HTML
table data, and leveraging APIs to scrape data.
My purpose in the following sections is to discuss these topics at a level meant to
get you started in web scraping; however, this area is vast and complex and this
chapter will be far from providing you expert-level insight. To advance your knowledge
I highly recommend getting copies of XML and Web Technologies for Data
Sciences with R (Nolan and Lang, 2014) and Automated Data Collection with R
(Munzert et al., 2014).
The most basic form of getting data from online is to import tabular (i.e. .txt, .csv) or
Excel files that are being hosted online. This is often not considered web scraping1;
however, I think it's a good place to start introducing the user to interacting with the
web for obtaining data. Importing tabular data is especially common for the many
types of government data available online.
1 In Automated Data Collection with R, Munzert et al. state that "[t]he first way to get data from the web is almost too banal to be considered here and actually not a case of web scraping in the narrower sense."
library(gdata)
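The download step is not shown in this excerpt; the rents data frame is produced by reading an Excel file hosted online with gdata's read.xls(), along the lines of this sketch (the URL is a hypothetical placeholder):
# read an Excel file directly from a URL (hypothetical address)
url <- "http://www.example.com/fair_market_rents.xls"
rents <- read.xls(url)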
rents[1:6, 1:10]
## fips2000 fips2010 fmr2 fmr0 fmr1 fmr3 fmr4 county State CouSub
## 1 100199999 100199999 788 628 663 1084 1288 1 1 99999
## 2 100399999 100399999 762 494 643 1123 1318 3 1 99999
## 3 100599999 100599999 670 492 495 834 895 5 1 99999
## 4 100799999 100799999 773 545 652 1015 1142 7 1 99999
## 5 100999999 100999999 773 545 652 1015 1142 9 1 99999
## 6 101199999 101199999 599 481 505 791 1061 11 1 99999
Note that many of the arguments covered in the Importing Data chapter (i.e.
specifying sheets to read from, skipping lines) also apply to read.xls(). In addition,
gdata provides some useful functions (sheetCount() and sheetNames())
for identifying if multiple sheets exist prior to downloading.
Another common form of file storage is using zip files. For instance, the Bureau
of Labor Statistics (BLS) stores their public-use microdata for the Consumer
Expenditure Survey in .zip files.2 We can use download.file() to download the
file to your working directory and then work with this data as desired.
2 http://www.bls.gov/cex/pumd_data.htm#csv
The .zip archive file format is meant to compress files and is typically used on
files of significant size. For instance, the Consumer Expenditure Survey data we
downloaded in the previous example is over 10 MB. Obviously there may be times
in which we want to get specific data in the .zip file to analyze but not permanently
store the entire .zip file contents. In these instances we can use the following
process, proposed by Dirk Eddelbuettel, to temporarily download the .zip file,
extract the desired data, and then discard the .zip file.
# Create a temp. file name
temp <- tempfile()
zip_data2[1:5, 1:10]
## NEWID ALLOC COST GIFT PUB_FLAG UCC EXPNSQDY EXPN_QDY EXPNWKDY EXPN_KDY
## 1 2825371 0 6.26 2 2 190112 1 D 3 D
## 2 2825371 0 1.20 2 2 190322 1 D 3 D
## 3 2825381 0 0.98 2 2 20510 3 D 2 D
## 4 2825381 0 0.98 2 2 20510 3 D 2 D
## 5 2825381 0 2.50 2 2 20510 3 D 2 D
One last common scenario I'll cover when importing spreadsheet data from
online is when we identify multiple data sets that we'd like to download but that are
not centrally stored in a .zip format or the like. As a simple example let's look at the
average consumer price data from the BLS.3 The BLS holds multiple data sets for
different types of commodities within one URL; however, there are separate links for
each individual data set.4 More complicated cases of this will have the links to tabular
data sets scattered throughout a webpage.5 The XML package provides the useful
getHTMLLinks() function to identify these links.
library(XML)

# url for the BLS data directory referenced in footnote 4
url <- "http://download.bls.gov/pub/time.series/ap/"
links <- getHTMLLinks(url)
links
## [1] "/pub/time.series/"
## [2] "/pub/time.series/ap/ap.area"
## [3] "/pub/time.series/ap/ap.contacts"
## [4] "/pub/time.series/ap/ap.data.0.Current"
## [5] "/pub/time.series/ap/ap.data.1.HouseholdFuels"
## [6] "/pub/time.series/ap/ap.data.2.Gasoline"
## [7] "/pub/time.series/ap/ap.data.3.Food"
## [8] "/pub/time.series/ap/ap.footnote"
## [9] "/pub/time.series/ap/ap.item"
## [10] "/pub/time.series/ap/ap.period"
## [11] "/pub/time.series/ap/ap.series"
## [12] "/pub/time.series/ap/ap.txt"
This allows us to assess which files exist that may be of interest. In this case the
links that we are primarily interested in are the ones that contain "data" in their
name (links 4–7 listed above). We can use the stringr package to extract these
desired links which we will use to download the data.
library(stringr)

# extract the links that contain "data" in their name
links_data <- links[str_detect(links, "data")]

# paste url to data links to have full url for data sets; use str_sub
# and regexpr to paste links at the appropriate starting point
filenames <- paste0(url, str_sub(links_data,
                                 start = regexpr("ap.data", links_data)))
3. http://www.bls.gov/data/#prices
4. http://download.bls.gov/pub/time.series/ap/
5. An example is provided in Automated Data Collection with R in which they use a similar
approach to extract desired CSV files scattered throughout the Maryland State Board of Elections
website.
filenames
## [1] "http://download.bls.gov/pub/time.series/ap/ap.data.0.Current"
## [2] "http://download.bls.gov/pub/time.series/ap/ap.data.1.HouseholdFuels"
## [3] "http://download.bls.gov/pub/time.series/ap/ap.data.2.Gasoline"
## [4] "http://download.bls.gov/pub/time.series/ap/ap.data.3.Food"
We can now develop a simple for loop (which you will learn about in the loop
control statements chapter) to download each data set. We store the results in a list
which contains 4 items, one item for each data set. Each list item contains the url
from which the data were extracted and the data frame containing the downloaded
data. We're now ready to analyze these data sets as necessary.
# create an empty list to fill
data_ls <- list()

for(i in 1:length(filenames)){
  url <- filenames[i]
  data <- read.delim(url)
  data_ls[[length(data_ls) + 1]] <- list(url = filenames[i], data = data)
}
str(data_ls)
## List of 4
## $ :List of 2
## ..$ url : chr "http://download.bls.gov/pub/time.series/ap/ap.data.0.Current"
## ..$ data:'data.frame': 144712 obs. of 5 variables:
## .. ..$ series_id : Factor w/ 878 levels "APU0000701111 ",..: 1 1
## .. ..$ year : int [1:144712] 1995 1995 1995 1995 1995 1995
## .. ..$ period : Factor w/ 12 levels "M01","M02","M03",..: 1 2 3 4
## .. ..$ value : num [1:144712] 0.238 0.242 0.242 0.236 0.244
## .. ..$ footnote_codes: logi [1:144712] NA NA NA NA NA NA
## $ :List of 2
## ..$ url : chr "http://download.bls.gov/pub/time.series/ap/ap.data.1.Hou"
## ..$ data:'data.frame': 90339 obs. of 5 variables:
## .. ..$ series_id : Factor w/ 343 levels "APU000072511 ",..: 1 1
## .. ..$ year : int [1:90339] 1978 1978 1979 1979 1979 1979 1979
## .. ..$ period : Factor w/ 12 levels "M01","M02","M03",..: 11 12
## .. ..$ value : num [1:90339] 0.533 0.545 0.555 0.577 0.605 0.627
## .. ..$ footnote_codes: logi [1:90339] NA NA NA NA NA NA
## $ :List of 2
## ..$ url : chr "http://download.bls.gov/pub/time.series/ap/ap.data.2.Gas"
## ..$ data:'data.frame': 69357 obs. of 5 variables:
## .. ..$ series_id : Factor w/ 341 levels "APU000074712 ",..: 1 1
## .. ..$ year : int [1:69357] 1973 1973 1973 1974 1974 1974 1974
## .. ..$ period : Factor w/ 12 levels "M01","M02","M03",..: 10 11
## .. ..$ value : num [1:69357] 0.402 0.418 0.437 0.465 0.491 0.528
## .. ..$ footnote_codes: logi [1:69357] NA NA NA NA NA NA
## $ :List of 2
## ..$ url : chr "http://download.bls.gov/pub/time.series/ap/ap.data.3.Food"
## ..$ data:'data.frame': 122302 obs. of 5 variables:
## .. ..$ series_id : Factor w/ 648 levels "APU0000701111 ",..: 1 1
## .. ..$ year : int [1:122302] 1980 1980 1980 1980 1980 1980 1980
## .. ..$ period : Factor w/ 12 levels "M01","M02","M03",..: 1 2 3 4
## .. ..$ value : num [1:122302] 0.203 0.205 0.211 0.206 0.207 0.21
## .. ..$ footnote_codes: logi [1:122302] NA NA NA NA NA NA
These examples provide the basics required for downloading most tabular and
Excel files from online. However, this is just the beginning of importing/scraping
data from the web. Next, we'll start exploring the more conventional forms of
scraping text and data stored in HTML webpages.
16.2 Scraping HTML Text
A vast amount of information exists across the web's countless pages. Much of
this information is unstructured text that may be useful in our analyses. This
section covers the basics of scraping these texts from online sources. Throughout
this section I will illustrate how to extract different text components of webpages by
dissecting the Wikipedia page on web scraping. However, it's important to first cover
one of the basic components of HTML elements as we will leverage this information
to pull desired information. I offer only enough insight required to begin scraping;
I highly recommend XML and Web Technologies for Data Sciences with R and
Automated Data Collection with R to learn more about HTML and XML element
structures.
HTML elements are written with a start tag, an end tag, and with the content in
between: <tagname>content</tagname>. The tags which typically contain
the textual content we wish to scrape, and the tags we will leverage in the next two
sections, include:
<h1>, <h2>, …, <h6>: Largest heading, second largest heading, etc.
<p>: Paragraph elements
<ul>: Unordered bulleted list
<ol>: Ordered list
<li>: Individual list item
<div>: Division or section
<table>: Table
For example, text in paragraph form that you see online are wrapped with the
HTML paragraph tag <p> as in:
<p>
This paragraph represents
a typical text paragraph
in HTML form
</p>
It is through these tags that we can start to extract textual components (also
referred to as nodes) of HTML webpages.
To scrape online text we'll make use of the relatively new rvest package. rvest
was created by the RStudio team, inspired by libraries such as Beautiful Soup, and
it has greatly simplified web scraping. rvest provides multiple functionalities;
however, in this section we will focus only on extracting HTML text with rvest. It's
important to note that rvest makes use of the pipe operator (%>%) developed
through the magrittr package. If you are not familiar with the functionality of %>%
I recommend you jump to the chapter on Simplifying Your Code with %>% so that
you have a better understanding of what's going on with the code.
To extract text from a webpage of interest, we specify what HTML elements we
want to select by using html_nodes(). For instance, if we want to scrape the
primary heading for the Web Scraping Wikipedia webpage we simply identify the
<h1> node as the node we want to select. html_nodes() will identify all <h1>
nodes on the webpage and return the HTML element. In our example we see there
is only one <h1> node on this webpage.
library(rvest)

# read the Web Scraping Wikipedia page
scraping_wiki <- read_html("https://en.wikipedia.org/wiki/Web_scraping")

scraping_wiki %>%
  html_nodes("h1")
## {xml_nodeset (1)}
## [1] <h1 id="firstHeading" class="firstHeading" lang="en">Web scraping</h1>
To extract only the heading text for this <h1> node, and not include all the
HTML syntax, we use html_text(), which returns the heading text we see at the
top of the Web Scraping Wikipedia page.
scraping_wiki %>%
html_nodes("h1") %>%
html_text()
## [1] "Web scraping"
If we want to identify all the second level headings on the webpage we follow the
same process but instead select the <h2> nodes. In this example we see there are
ten second level headings on the Web Scraping Wikipedia page.
scraping_wiki %>%
html_nodes("h2") %>%
html_text()
## [1] "Contents"
## [2] "Techniques[edit]"
## [3] "Legal issues[edit]"
## [4] "Notable tools[edit]"
## [5] "See also[edit]"
## [6] "Technical measures to stop bots[edit]"
## [7] "Articles[edit]"
## [8] "References[edit]"
## [9] "See also[edit]"
## [10] "Navigation menu"
Next, we can move on to extracting much of the text on this webpage, which is in
paragraph form. We can follow the same process illustrated above but instead we'll
select all <p> nodes. This selects the 17 paragraph elements from the web page,
which we can examine by subsetting the list p_nodes to see the first line of each
paragraph along with the HTML syntax. Just as before, to extract the text from these
nodes and coerce them to a character string we simply apply html_text().
p_nodes <- scraping_wiki %>%
html_nodes("p")
length(p_nodes)
## [1] 17
p_nodes[1:6]
## {xml_nodeset (6)}
## [1] <p>Web scraping (web harvesting or web data extract
## [2] <p>Web scraping is closely related to <a href="/wiki/Web_indexing" t
## [3] <p/>
## [4] <p/>
## [5] <p>Web scraping is the process of automatically collecting informati
## [6] <p>Web scraping may be against the <a href="/wiki/Terms_of_use" titl
# coerce the paragraph nodes to character strings
p_text <- p_nodes %>% html_text()

p_text[1]
## [1] "Web scraping (web harvesting or web data extraction) is a
computer software technique of extracting information from web-
sites. Usually, such software programs simulate human exploration
of the World Wide Web by either implementing low-level Hypertext
Transfer Protocol (HTTP), or embedding a fully-fledged web browser,
such as Mozilla Firefox."
Not too bad; however, we may not have captured all the text that we were hoping
for. Since we extracted text for all <p> nodes, we collected all identified paragraph
text; however, this does not capture the text in the bulleted lists. For example, when
you look at the Web Scraping Wikipedia page you will notice a significant amount
of text in bulleted list format following the third paragraph under the Techniques
heading. If we look at our data we'll see that the text in this list format is not
captured between the two paragraphs:
p_text[5]
## [1] "Web scraping is the process of automatically collecting
information from the World Wide Web. It is a field with active
developments sharing a common goal with the semantic web vision,
an ambitious initiative that still requires breakthroughs in text
processing, semantic understanding, artificial intelligence and
human-computer interactions. Current web scraping solutions range
from the ad-hoc, requiring human effort, to fully automated sys-
tems that are able to convert entire web sites into structured
information, with limitations."
p_text[6]
## [1] "Web scraping may be against the terms of use of some websites.
The enforceability of these terms is unclear.[4] While outright duplica-
tion of original expression will in many cases be illegal, in the United
States the courts ruled in Feist Publications v. Rural Telephone Service
that duplication of facts is allowable. U.S. courts have acknowledged
that users of \"scrapers\" or \"robots\" may be held liable for commit-
ting trespass to chattels,[5][6] which involves a computer system itself
being considered personal property upon which the user of a scraper is
trespassing. The best known of these cases, eBay v. Bidder's Edge,
resulted in an injunction ordering Bidder's Edge to stop accessing, col-
lecting, and indexing auctions from the eBay web site. This case involved
automatic placing of bids, known as auction sniping. However, in order
to succeed on a claim of trespass to chattels, the plaintiff must dem-
onstrate that the defendant intentionally and without authorization
interfered with the plaintiff's possessory interest in the computer sys-
tem and that the defendant's unauthorized use caused damage to the plain-
tiff. Not all cases of web spidering brought before the courts have been
considered trespass to chattels.[7]"
This is because the text in this list format is contained in <ul> nodes. To capture
the text in lists, we can use the same steps as above but select specific nodes which
represent HTML list components. We can approach extracting list text two ways.
First, we can pull all list elements (<ul>). When scraping all <ul> text, the
resulting data structure will be a character string vector with each element
representing a single list consisting of all list items in that list. In our running example
there are 21 list elements as shown in the example that follows. You can see the first
list scraped is the table of contents and the second list scraped is the list in the
Techniques section.
ul_text <- scraping_wiki %>%
html_nodes("ul") %>%
html_text()
length(ul_text)
## [1] 21
ul_text[1]
## [1] "\n1 Techniques\n2 Legal issues\n3 Notable tools\n4 See
also\n5 Technical measures to stop bots\n6 Articles\n7 References\
n8 See also\n"
An alternative approach is to pull all <li> nodes. This will pull the text contained
in each list item for all the lists. In our running example there are 147 list items
that we can extract from this Wikipedia page. The first eight list items are the list of
contents we see towards the top of the page. List items 9–17 are the list elements
contained in the Techniques section, list items 18–44 are the items listed under the
Notable Tools section, and so on.
li_text <- scraping_wiki %>%
html_nodes("li") %>%
html_text()
length(li_text)
## [1] 147
li_text[1:8]
## [1] "1 Techniques" "2 Legal issues"
## [3] "3 Notable tools" "4 See also"
## [5] "5 Technical measures to stop bots" "6 Articles"
## [7] "7 References" "8 See also"
At this point we may believe we have all the text desired and proceed with joining
the paragraph (p_text) and list (ul_text or li_text) character strings
and then performing the desired textual analysis. However, we may now have captured
more text than we were hoping for. For example, by scraping all lists we are also
capturing the listed links in the left margin of the webpage. If we look at list items
104–136 that we scraped, we'll see that these texts correspond to the left margin text.
li_text[104:136]
## [1] "Main page" "Contents" "Featured content"
## [4] "Current events" "Random article" "Donate to Wikipedia"
## [7] "Wikipedia store" "Help" "About Wikipedia"
## [10] "Community portal" "Recent changes" "Contact page"
## [13] "What links here" "Related changes" "Upload file"
## [16] "Special pages" "Permanent link" "Page information"
## [19] "Wikidata item" "Cite this page" "Create a book"
## [22] "Download as PDF" "Printable version" "Català"
## [25] "Deutsch" "Español" "Français"
## [28] "Íslenska" "Italiano" "Latviešu"
## [31] "Nederlands" "" "Српски / srpski"
If we desire to scrape every piece of text on the webpage then this won't be of
concern. In fact, if we want to scrape all the text regardless of the content it
represents there is an easier approach. We can capture all the content, including text in
paragraphs (<p>), lists (<ul>, <ol>, and <li>), and even data in tables (<table>),
by using <div>. This is because these other elements are usually a subsidiary of an
HTML division or section, so pulling all <div> nodes will extract all text contained
in that division or section regardless of whether it is also contained in a paragraph or list.
16.2.2 Scraping Specific HTML Nodes
However, if we are concerned only with specific content on the webpage then we
need to make our HTML node selection process a little more focused. To do this
we can use our browser's developer tools to examine the webpage we are scraping
and get more details on the specific nodes of interest. If you are using Chrome or Firefox
you can open the developer tools by clicking F12 (Cmd + Opt + I for Mac); for
Safari you would use Command-Option-I. An additional option, recommended
by Hadley Wickham, is to use selectorgadget.com, a Chrome extension, to
help identify the web page elements you need.6
Once the developer tools are opened your primary concern is with the element
selector. This is located in the top left-hand corner of the developer tools window.
Developer Tools: Element Selector
Once you've selected the element selector you can scroll over the elements
of the webpage, which will cause each element you scroll over to be highlighted.
Once you've identified the element you want to focus on, select it. This will cause
the element to be identified in the developer tools window. For example, if I am only
interested in the main body of the Web Scraping content on the Wikipedia page then
I would select the element that highlights the entire center component of the webpage.
This highlights the corresponding element <div id="bodyContent"
class="mw-body-content"> in the developer tools window.
6. You can learn more about selectors at flukeout.github.io
I can now use this information to select and scrape all the text from this specific
<div> node by calling the ID name (#mw-content-text) in html_nodes().7
As you can see below, the text that is scraped begins with the first line in the main
body of the Web Scraping content and ends with the text in the See Also section,
which is the last bit of text directly pertaining to Web Scraping on the webpage.
Explicitly, we have pulled the specific text associated with the web content we desire.
body_text <- scraping_wiki %>%
html_nodes("#mw-content-text") %>%
html_text()
7. You can simply assess the name of the ID in the highlighted element, or you can right click the
highlighted element in the developer tools window and select Copy selector. You can then paste
directly into html_nodes() as it will paste the exact ID name that you need for that element.
16.2.3 Cleaning Up
With any web scraping activity, especially involving text, there is likely to be some
clean up involved. For example, in the previous example we saw that we can
specifically pull the list of Notable Tools; however, you can see that in between each
list item, rather than a space, there are one or more \n, which is used in HTML
to specify a new line. We can clean this up quickly with a little character string
manipulation.
library(magrittr)
scraping_wiki %>%
html_nodes("#mw-content-text > div:nth-child(22)") %>%
html_text()
## [1] "\n\nApache Camel\nArchive.is\nAutomation Anywhere\nConvertigo\ncURL\nData
Toolbar\nDiffbot\nFirebug\nGreasemonkey\nHeritrix\nHtmlUnit\nHTTrack\niMacros\nImport.
io\nJaxer\nNode.js\nnokogiri\nPhantomJS\nScraperWiki\nScrapy\nSelenium\nSimpleTest\n
watir\nWget\nWireshark\nWSO2 Mashup Server\nYahoo! Query Language (YQL)\n\n"
scraping_wiki %>%
html_nodes("#mw-content-text > div:nth-child(22)") %>%
html_text() %>%
strsplit(split = "\n") %>%
unlist() %>%
.[. != ""]
## [1] "Apache Camel" "Archive.is"
## [3] "Automation Anywhere" "Convertigo"
## [5] "cURL" "Data Toolbar"
## [7] "Diffbot" "Firebug"
## [9] "Greasemonkey" "Heritrix"
## [11] "HtmlUnit" "HTTrack"
## [13] "iMacros" "Import.io"
## [15] "Jaxer" "Node.js"
## [17] "nokogiri" "PhantomJS"
## [19] "ScraperWiki" "Scrapy"
## [21] "Selenium" "SimpleTest"
## [23] "watir" "Wget"
## [25] "Wireshark" "WSO2 Mashup Server"
## [27] "Yahoo! Query Language (YQL)"
Similarly, as we saw in our example above with scraping the main body content
(body_text), there are extra characters (i.e. \n, \, ^) in the text that we may not
want. Using a little regex we can clean this up so that our character string consists
of only the text that we see on the screen and no additional HTML code embedded
throughout the text.
library(stringr)

# clean up text
body_text %>%
  str_replace_all(pattern = "\n", replacement = " ") %>%
  str_replace_all(pattern = "[\\^]", replacement = " ") %>%
  str_replace_all(pattern = "\"", replacement = " ") %>%
  str_replace_all(pattern = "\\s+", replacement = " ") %>%
  str_trim(side = "both")
So there we have it, text scraping in a nutshell. Although not all-encompassing,
this section covered the basics of scraping text from HTML documents. Whether
you want to scrape text from all common text-containing nodes such as <div>,
<p>, <ul> and the like, or you want to scrape from a specific node using its specific
ID, this section provides you the basic fundamentals of using rvest to scrape the
text you need. In the next section we move on to scraping data from HTML tables.
16.3 Scraping HTML Table Data
Another common structure of information storage on the Web is in the form of HTML
tables. This section reiterates some of the information from the previous section;
however, we focus solely on scraping data from HTML tables. The simplest approach
to scraping HTML table data directly into R is by using either the rvest package or the
XML package. To illustrate, I will focus on the BLS employment statistics webpage
which contains multiple HTML tables from which we can scrape data.
Recall that HTML elements are written with a start tag, an end tag, and with the
content in between: <tagname>content</tagname>. HTML tables are contained
within <table> tags; therefore, to extract the tables from the BLS employment
statistics webpage we first use the html_nodes() function to select the
<table> nodes. In this case we are interested in all table nodes that exist on the
webpage. In this example, html_nodes() captures 15 HTML tables. This includes
data from the 10 data tables seen on the webpage but also data from a few
additional tables used to format parts of the page (i.e. table of contents, table of
figures, advertisements).
library(rvest)

# read the BLS employment statistics webpage; the URL below points at the
# CES benchmark article used throughout this example (illustrative)
webpage <- read_html("http://www.bls.gov/web/empsit/cesbmart.htm")
tbls <- html_nodes(webpage, "table")

head(tbls)
## {xml_nodeset (6)}
## [1] <table id="main-content-table"> \n\t<tr> \n\t\t<td id="secon
## [2] <table id="Table1" class="regular" cellspacing="0" cellpadding="0" x
## [3] <table id="Table2" class="regular" cellspacing="0" cellpadding="0" x
## [4] <table id="Table3" class="regular" cellspacing="0" cellpadding="0" x
## [5] <table id="Table4" class="regular" cellspacing="0" cellpadding="0" x
## [6] <table id="Exhibit1" class="regular" cellspacing="0" cellpadding="0"
Remember that html_nodes() does not parse the data; rather, it acts as a CSS
selector. To parse the HTML table data we use html_table(), which would create
a list containing 15 data frames. However, rarely do we need to scrape every
HTML table from a page, especially since some HTML tables don't catch any
information we are likely interested in (i.e. table of contents, table of figures, footers).
More often than not we want to parse specific tables. Let's assume we want to
parse the second and third tables on the webpage:
Table 2. Nonfarm employment benchmarks by industry, March 2014 (in
thousands), and
Table 3. Net birth/death estimates by industry supersector, April–December 2014
(in thousands)
This can be accomplished two ways. First, we can assess the previous tbls list
and try to identify the table(s) of interest. In this example it appears that tbls list
items 3 and 4 correspond with Table 2 and Table 3, respectively. We can then subset
the list of table nodes prior to parsing the data with html_table(). This results
in a list of two data frames containing the data of interest.
# subset list of table nodes for items 3 & 4
tbls_ls <- webpage %>%
html_nodes("table") %>%
.[3:4] %>%
html_table(fill = TRUE)
str(tbls_ls)
## List of 2
## $ :'data.frame': 147 obs. of 6 variables:
## ..$ CES Industry Code : chr [1:147] "Amount" "00-000000" "05-000000"
## ..$ CES Industry Title: chr [1:147] "Percent" "Total nonfarm"
## ..$ Benchmark : chr [1:147] NA "137,214" "114,989" "18,675"
## ..$ Estimate : chr [1:147] NA "137,147" "114,884" "18,558"
## ..$ Differences : num [1:147] NA 67 105 117 -50 -12 -16 -2.8
## ..$ NA : chr [1:147] NA "(1)" "0.1" "0.6"
## $ :'data.frame': 11 obs. of 12 variables:
## ..$ CES Industry Code : chr [1:11] "10-000000" "20-000000" "30-000000"
## ..$ CES Industry Title: chr [1:11] "Mining and logging" "Construction"
## ..$ Apr : int [1:11] 2 35 0 21 0 8 81 22 82 12
## ..$ May : int [1:11] 2 37 6 24 5 8 22 13 81 6
## ..$ Jun : int [1:11] 2 24 4 12 0 4 5 -14 86 6
## ..$ Jul : int [1:11] 2 12 -3 7 -1 3 35 7 62 -2
## ..$ Aug : int [1:11] 1 12 4 14 3 4 19 21 23 3
## ..$ Sep : int [1:11] 1 7 1 9 -1 -1 -12 12 -33 -2
## ..$ Oct : int [1:11] 1 12 3 28 6 16 76 35 -17 4
## ..$ Nov : int [1:11] 1 -10 2 10 3 3 14 14 -22 1
## ..$ Dec : int [1:11] 0 -21 0 4 0 10 -10 -3 4 1
## ..$ CumulativeTotal : int [1:11] 12 108 17 129 15 55 230 107 266 29
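The second way is to select the tables by their HTML IDs, which are visible in the tbls output above (id="Table1", id="Table2", …). A minimal sketch of this approach, assuming the webpage IDs "Table2" and "Table3" correspond to the data tables we labeled Table 2 and Table 3, and storing each parsed table in a named list:
# empty list to add table data to
tbls2_ls <- list()

# scrape each table by its HTML ID and parse to a data frame
tbls2_ls$Table1 <- webpage %>%
  html_nodes("#Table2") %>%
  html_table(fill = TRUE) %>%
  .[[1]]

tbls2_ls$Table2 <- webpage %>%
  html_nodes("#Table3") %>%
  html_table() %>%
  .[[1]]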
str(tbls2_ls)
## List of 2
## $ Table1:'data.frame': 147 obs. of 6 variables:
## ..$ CES Industry Code : chr [1:147] "Amount" "00-000000" "05-000000"
## ..$ CES Industry Title: chr [1:147] "Percent" "Total nonfarm"
## ..$ Benchmark : chr [1:147] NA "137,214" "114,989" "18,675"
## ..$ Estimate : chr [1:147] NA "137,147" "114,884" "18,558"
## ..$ Differences : num [1:147] NA 67 105 117 -50 -12 -16 -2.8
## ..$ NA : chr [1:147] NA "(1)" "0.1" "0.6"
## $ Table2:'data.frame': 11 obs. of 12 variables:
## ..$ CES Industry Code : chr [1:11] "10-000000" "20-000000" "30-000000"
## ..$ CES Industry Title: chr [1:11] "Mining and logging" "Construction"
## ..$ Apr : int [1:11] 2 35 0 21 0 8 81 22 82 12
## ..$ May : int [1:11] 2 37 6 24 5 8 22 13 81 6
## ..$ Jun : int [1:11] 2 24 4 12 0 4 5 -14 86 6
## ..$ Jul : int [1:11] 2 12 -3 7 -1 3 35 7 62 -2
## ..$ Aug : int [1:11] 1 12 4 14 3 4 19 21 23 3
## ..$ Sep : int [1:11] 1 7 1 9 -1 -1 -12 12 -33 -2
## ..$ Oct : int [1:11] 1 12 3 28 6 16 76 35 -17 4
## ..$ Nov : int [1:11] 1 -10 2 10 3 3 14 14 -22 1
## ..$ Dec : int [1:11] 0 -21 0 4 0 10 -10 -3 4 1
## ..$ CumulativeTotal : int [1:11] 12 108 17 129 15 55 230 107 266 29
One issue to note is when using rvest's html_table() to read a table with
split column headings, as in Table 2. Nonfarm employment. html_table() will
cause split headings to be included and can cause the first row to include parts of the
headings. We can see this with Table 2. This requires a little clean up.
head(tbls2_ls[[1]], 4)
## CES Industry Code CES Industry Title Benchmark Estimate Differences NA
## 1 Amount Percent <NA> <NA> NA <NA>
## 2 00-000000 Total nonfarm 137,214 137,147 67 (1)
## 3 05-000000 Total private 114,989 114,884 105 0.1
## 4 06-000000 Goods-producing 18,675 18,558 117 0.6
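One way to do that clean up, matching the output below: drop the first row (the spilled heading parts) and assign readable column names.
# remove row 1 that includes part of the headings
tbls2_ls[[1]] <- tbls2_ls[[1]][-1, ]

# rename table headings
colnames(tbls2_ls[[1]]) <- c("CES_Code", "Ind_Title", "Benchmark",
                             "Estimate", "Amt_Diff", "Pct_Diff")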
head(tbls2_ls[[1]], 4)
## CES_Code Ind_Title Benchmark Estimate Amt_Diff Pct_Diff
## 2 00-000000 Total nonfarm 137,214 137,147 67 (1)
## 3 05-000000 Total private 114,989 114,884 105 0.1
## 4 06-000000 Goods-producing 18,675 18,558 117 0.6
## 5 07-000000 Service-providing 118,539 118,589 -50 (1)
An alternative to rvest for table scraping is to use the XML package. The XML
package provides a convenient readHTMLTable() function to extract data from
HTML tables in HTML documents. By passing the URL to readHTMLTable(),
the data in each table is read and stored as a data frame. In a situation like our
running example where multiple tables exist, the data frames will be stored in a list
similar to rvest's html_table().
library(XML)

# read and parse all HTML tables on the same BLS webpage
url <- "http://www.bls.gov/web/empsit/cesbmart.htm"
tbls_xml <- readHTMLTable(url)

typeof(tbls_xml)
## [1] "list"

length(tbls_xml)
## [1] 15
You can see that tbls_xml captures the same 15 <table> nodes that
html_nodes() captured. To capture the same tables of interest we previously discussed
(Table 2. Nonfarm employment and Table 3. Net birth/death) we can use a couple
of approaches. First, we can assess str(tbls_xml) to identify the tables of
interest and perform normal list subsetting. In our example, list items 3 and 4
correspond with our tables of interest.
head(tbls_xml[[3]])
## V1 V2 V3 V4 V5 V6
## 1 00-000000 Total nonfarm 137,214 137,147 67 (1)
## 2 05-000000 Total private 114,989 114,884 105 0.1
## 3 06-000000 Goods-producing 18,675 18,558 117 0.6
## 4 07-000000 Service-providing 118,539 118,589 -50 (1)
## 5 08-000000 Private service-providing 96,314 96,326 -12 (1)
## 6 10-000000 Mining and logging 868 884 -16 -1.8
head(tbls_xml[[4]], 3)
## CES Industry Code CES Industry Title Apr May Jun Jul Aug Sep Oct Nov Dec
## 1 10-000000 Mining and logging 2 2 2 2 1 1 1 1 0
## 2 20-000000 Construction 35 37 24 12 12 7 12 -10 -21
## 3 30-000000 Manufacturing 0 6 4 -3 4 1 3 2 0
## CumulativeTotal
## 1 12
## 2 108
## 3 17
A second option is to parse only the tables of interest up front; one way to do this is with readHTMLTable()'s which argument (here, items 3 and 4). Note that the list item names come from the tables' HTML IDs.
# parse only the 3rd and 4th tables
emp_ls <- readHTMLTable(url, which = c(3, 4))

str(emp_ls)
## List of 2
## $ Table2:'data.frame': 145 obs. of 6 variables:
## ..$ V1: Factor w/ 145 levels "00-000000","05-000000",..: 1 2 3 4 5 6 7 8
## ..$ V2: Factor w/ 143 levels "Accommodation",..: 130 131 52 116 102 74
## ..$ V3: Factor w/ 145 levels "1,010.3","1,048.3",..: 40 35 48 37 145 140
## ..$ V4: Factor w/ 145 levels "1,008.4","1,052.3",..: 41 34 48 36 144 142
## ..$ V5: Factor w/ 123 levels "-0.3","-0.4",..: 113 68 71 48 9 19 29 11
## ..$ V6: Factor w/ 56 levels "-0.1","-0.2",..: 30 31 36 30 30 16 28 14 29
## $ Table3:'data.frame': 11 obs. of 12 variables:
## ..$ CES Industry Code : Factor w/ 11 levels "10-000000","20-000000",..:1
## ..$ CES Industry Title: Factor w/ 11 levels "263","Construction",..: 8 2
## ..$ Apr : Factor w/ 10 levels "0","12","2","204",..: 3 7 1
## ..$ May : Factor w/ 10 levels "129","13","2",..: 3 6 8 5 7
## ..$ Jun : Factor w/ 10 levels "-14","0","12",..: 5 6 7 3 2
## ..$ Jul : Factor w/ 10 levels "-1","-2","-3",..: 6 5 3 10
## ..$ Aug : Factor w/ 9 levels "-19","1","12",..: 2 3 9 4 8
## ..$ Sep : Factor w/ 9 levels "-1","-12","-2",..: 5 8 5 9 1
## ..$ Oct : Factor w/ 10 levels "-17","1","12",..: 2 3 6 5 9
## ..$ Nov : Factor w/ 8 levels "-10","-15","-22",..: 4 1 7 5
## ..$ Dec : Factor w/ 8 levels "-10","-21","-3",..: 4 2 4 7
## ..$ CumulativeTotal : Factor w/ 10 levels "107","108","12",..: 3 2 6 4
The third option involves explicitly naming the tables to parse. This process uses
the element selector process described in the previous section to call the table by
name.8 We use getNodeSet() to select the specified tables of interest. However,
a key difference here is that rather than copying the table ID names you want to copy
the XPath. You can do this as follows: after you've highlighted the table element
of interest with the element selector, right click the highlighted element in the
developer tools window and select Copy XPath. From here we just use
readHTMLTable() to convert to data frames and we have our desired tables.
library(RCurl)

# parse url
url_parsed <- htmlParse(getURL(url), asText = TRUE)

# select the two tables of interest via the XPaths copied from the
# developer tools (combined here with the XPath union operator)
tableNodes <- getNodeSet(url_parsed, '//*[@id="Table2"] | //*[@id="Table3"]')
bls_table2 <- readHTMLTable(tableNodes[[1]])
bls_table3 <- readHTMLTable(tableNodes[[2]])

head(bls_table2)
## V1 V2 V3 V4 V5 V6
## 1 00-000000 Total nonfarm 137,214 137,147 67 (1)
## 2 05-000000 Total private 114,989 114,884 105 0.1
## 3 06-000000 Goods-producing 18,675 18,558 117 0.6
## 4 07-000000 Service-providing 118,539 118,589 -50 (1)
## 5 08-000000 Private service-providing 96,314 96,326 -12 (1)
## 6 10-000000 Mining and logging 868 884 -16 -1.8
head(bls_table3, 3)
## CES Industry Code CES Industry Title Apr May Jun Jul Aug Sep Oct Nov Dec
## 1 10-000000 Mining and logging 2 2 2 2 1 1 1 1 0
## 2 20-000000 Construction 35 37 24 12 12 7 12 -10 -21
## 3 30-000000 Manufacturing 0 6 4 -3 4 1 3 2 0
## CumulativeTotal
## 1 12
## 2 108
## 3 17
8. See Sect. 16.2.2 Scraping Specific HTML Nodes for details regarding the element selector
process.
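The XPath approach also returns the unnamed V1–V6 columns for Table 2; one way to assign readable headings, matching the output below:
# rename bls_table2 headings
colnames(bls_table2) <- c("CES_Code", "Ind_Title", "Benchmark",
                          "Estimate", "Amt_Diff", "Pct_Diff")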
head(bls_table2)
## CES_Code Ind_Title Benchmark Estimate Amt_Diff Pct_Diff
## 1 00-000000 Total nonfarm 137,214 137,147 67 (1)
## 2 05-000000 Total private 114,989 114,884 105 0.1
## 3 06-000000 Goods-producing 18,675 18,558 117 0.6
## 4 07-000000 Service-providing 118,539 118,589 -50 (1)
## 5 08-000000 Private service-providing 96,314 96,326 -12 (1)
## 6 10-000000 Mining and logging 868 884 -16 -1.8
Also, for bls_table3 note that the net birth/death values parsed have been
converted to factor levels. We can use the colClasses argument to correct this.
str(bls_table3)
## 'data.frame': 11 obs. of 12 variables:
## $ CES Industry Code : Factor w/ 11 levels "10-000000","20-000000",..: 1 2
## $ CES Industry Title: Factor w/ 11 levels "263","Construction",..: 8 2 7
## $ Apr : Factor w/ 10 levels "0","12","2","204",..: 3 7 1 5
## $ May : Factor w/ 10 levels "129","13","2",..: 3 6 8 5 7 9
## $ Jun : Factor w/ 10 levels "-14","0","12",..: 5 6 7 3 2 7
## $ Jul : Factor w/ 10 levels "-1","-2","-3",..: 6 5 3 10 1 7
## $ Aug : Factor w/ 9 levels "-19","1","12",..: 2 3 9 4 8 9 5
## $ Sep : Factor w/ 9 levels "-1","-12","-2",..: 5 8 5 9 1 1
## $ Oct : Factor w/ 10 levels "-17","1","12",..: 2 3 6 5 9 4
## $ Nov : Factor w/ 8 levels "-10","-15","-22",..: 4 1 7 5 8
## $ Dec : Factor w/ 8 levels "-10","-21","-3",..: 4 2 4 7 4 6
## $ CumulativeTotal : Factor w/ 10 levels "107","108","12",..: 3 2 6 4 5
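Re-parsing the table with colClasses specified converts the monthly columns to integers; a sketch reusing the tableNodes object from above (the exact colClasses specification may need adjusting for your XML version).
# re-parse, leaving the first two columns at the default conversion and
# reading the 10 remaining columns as integers
bls_table3 <- readHTMLTable(tableNodes[[2]],
                            colClasses = c(NA, NA, rep("integer", 10)))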
str(bls_table3)
## 'data.frame': 11 obs. of 12 variables:
## $ CES Industry Code : Factor w/ 11 levels "10-000000","20-000000",..: 1 2
## $ CES Industry Title: Factor w/ 11 levels "263","Construction",..: 8 2 7
## $ Apr : int 2 35 0 21 0 8 81 22 82 12
## $ May : int 2 37 6 24 5 8 22 13 81 6
## $ Jun : int 2 24 4 12 0 4 5 -14 86 6
## $ Jul : int 2 12 -3 7 -1 3 35 7 62 -2
## $ Aug : int 1 12 4 14 3 4 19 21 23 3
## $ Sep : int 1 7 1 9 -1 -1 -12 12 -33 -2
## $ Oct : int 1 12 3 28 6 16 76 35 -17 4
## $ Nov : int 1 -10 2 10 3 3 14 14 -22 1
## $ Dec : int 0 -21 0 4 0 10 -10 -3 4 1
## $ CumulativeTotal : int 12 108 17 129 15 55 230 107 266 29
Between rvest and XML, scraping HTML tables is relatively easy once you get
fluent with the syntax and the available options. This section covers just the basics of
both these packages to get you moving forward with scraping tables. In the next section
we move on to working with application program interfaces (APIs) to get data from the web.
16.4 Working with APIs
16.4.1 Prerequisites?
Each API is unique; however, there are a few fundamental pieces of information
you'll need to work with an API. First, the reason you're using an API is to request
specific types of data from a specific data set from a specific organization. You at
least need to know a little something about each one of these:
1. The URL for the organization and data you are pulling. Most pre-built API
packages already have this connection established, but when using httr you'll need
to specify it.
2. The data set you are trying to pull from. Most organizations have numerous data
sets to peruse, so you need to make yourself familiar with the names of the
available data sets.
3. The data content. You'll need to specify the specific data variables you want the
API to retrieve, so you'll need to be familiar with, or have access to, the data library.
In addition to these key components you will also, typically, need to provide a
form of identification and/or authorization. This is done via:
1. API key (aka token). A key is used to identify the user along with track and
control how the API is being used (guard against malicious use). A key is often
obtained by supplying basic information (i.e. name, email) to the organization
and in return they give you a multi-digit key.
2. OAuth.9 OAuth is an authorization framework that provides credentials as proof
for access; rather than a single key, it typically uses a pair of credentials (a
consumer key and secret), as in the Twitter example later in this chapter.
16.4.2 Existing API Packages
Like everything else you do in R, when looking to work with an API your first
question should be "Is there a package for that?" R has an extensive list of packages
in which API data feeds have been hooked into R. You can find a slew of them
scattered throughout the CRAN Task View: Web Technologies and Services web page,10
on the rOpenSci web page,11 and elsewhere.12
To give you a taste for how these packages typically work, I'll quickly cover
three packages:
blsAPI for pulling U.S. Bureau of Labor Statistics data
rnoaa for pulling NOAA climate data
rtimes for pulling data from multiple APIs offered by the New York Times
16.4.2.1 blsAPI
The blsAPI package allows users to request data for one or multiple series through
the U.S. Bureau of Labor Statistics API. To use the blsAPI app you only need knowledge
of the data; no key or OAuth is required. I'll illustrate by pulling Mass Layoff Statistics
data, but you will find all the available data sets and their series code information at
http://www.bls.gov/help/hlpforma.htm.
The key information you will be concerned about is contained in the series
identifier. For the Mass Layoff data the series ID code is MLUMS00NN0001003. Each
component of this series code has meaning and can be adjusted to get specific Mass
Layoff data. The BLS provides this breakdown of what each component means
along with the available list of codes for this data set.13 For instance, the S00
(MLUMS00NN0001003) component represents the division/state. S00 will pull for
all states, but I could change it to D30 to pull data for the Midwest or S39 to pull for
Ohio. The N0001 (MLUMS00NN0001003) component represents the industry/
demographics. N0001 pulls data for all industries, but I could change it to N0008 to
pull data for the food industry or C00A2 for all persons ages 30–44.
9. Read more about OAuth at https://oauth.net/
10. https://cran.r-project.org/web/views/WebTechnologies.html
11. https://ropensci.org/packages/
12. http://stats.stackexchange.com/questions/12670/data-apis-feeds-available-as-packages-in-r
13. http://www.bls.gov/help/hlpforma.htm#ML
I simply call the series identifier in the blsAPI() function, which pulls the
JSON data object. We can then use the fromJSON() function from the rjson
package to convert to an R data object (a list in this case). You can see that the raw
data pull provides a list of 4 items. The first three provide some metadata info
(status, response time, and message if applicable). The data we are concerned about is
in the 4th (Results$series$data) list item, which contains 31 observations.
library(rjson)
library(blsAPI)

# supply the series ID to pull the JSON data, then convert to a list
layoffs_json <- blsAPI("MLUMS00NN0001003")
layoffs <- fromJSON(layoffs_json)

str(layoffs)
List of 4
$ status : chr "REQUEST_SUCCEEDED"
$ responseTime: num 38
$ message : list()
$ Results :List of 1
..$ series:List of 1
.. ..$ :List of 2
.. .. ..$ seriesID: chr "MLUMS00NN0001003"
.. .. ..$ data :List of 31
.. .. .. ..$ :List of 5
.. .. .. .. ..$ year : chr "2013"
.. .. .. .. ..$ period : chr "M05"
.. .. .. .. ..$ periodName: chr "May"
.. .. .. .. ..$ value : chr "1383"
One of the inconveniences of an API is that we do not get to specify how the data we
receive is formatted. This is a minor price to pay considering all the other benefits
APIs provide. Once we understand the received data format we can typically
re-format it using a little list subsetting, which we previously covered, and looping,
which we'll cover in a future chapter.
# create empty data frame to fill
layoff_df <- data.frame(NULL)
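Filling that empty data frame is then a matter of looping over the list items and extracting the first four elements of each; a sketch based on the list structure shown above.
# extract year, period, periodName, and value from each list item and
# bind them to the data frame
for (i in seq_along(layoffs$Results$series[[1]]$data)) {
  obs <- layoffs$Results$series[[1]]$data[[i]][1:4]
  layoff_df <- rbind(layoff_df, data.frame(obs, stringsAsFactors = FALSE))
}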
head(layoff_df)
## year period periodName value
## 1 2013 M05 May 1383
## 2 2013 M04 April 1174
## 3 2013 M03 March 1132
## 4 2013 M02 February 960
## 5 2013 M01 January 1528
## 6 2012 M13 Annual 17080
blsAPI also allows you to pull multiple data series and has optional arguments
(i.e. start year, end year, etc.). You can see other options at help(package =
"blsAPI").
16.4.2.2 rnoaa
The rnoaa package allows users to request climate data from multiple data sets
through the National Climatic Data Center API.14 Unlike blsAPI, the rnoaa app
requires you to have an API key. To request a key go to http://www.ncdc.noaa.gov/
cdo-web/token and provide your email; a key will immediately be emailed to you.
With the key in hand, we can begin pulling data. The NOAA provides a
comprehensive metadata library to familiarize yourself with the data available. Let's start
by pulling all the available NOAA climate stations near my residence. I live in
Montgomery County, Ohio, so we can find all the stations in this county by inserting
the FIPS code. Furthermore, I'm interested in stations that provide data for the
GHCND data set, which contains records on numerous daily variables such as
maximum and minimum temperature, total daily precipitation, snowfall, and snow
depth; however, about two thirds of the stations report precipitation only. See
?ncdc_stations for other data sets available via rnoaa.
library(rnoaa)

# find stations in Montgomery County, OH (FIPS 39113) reporting to the
# GHCND data set; noaa_key holds the API key emailed to you
stations <- ncdc_stations(datasetid = "GHCND",
                          locationid = "FIPS:39113",
                          token = noaa_key)

stations$data
## Source: local data frame [23 x 9]
##
## elevation mindate maxdate latitude
## (dbl) (chr) (chr) (dbl)
## 1 294.1 2009-02-09 2014-06-25 39.6314
## 2 251.8 2009-03-01 2016-01-16 39.6807
## 3 295.7 2009-03-25 2012-09-08 39.6252
## 4 298.1 2009-08-24 2012-07-20 39.8070
## 5 304.5 2010-04-02 2016-01-12 39.6949
## 6 283.5 2012-07-01 2016-01-16 39.7373
## 7 301.4 2012-07-29 2016-01-16 39.8795
## 8 317.3 2012-09-08 2016-01-12 39.8329
## 9 298.1 2012-09-07 2016-01-15 39.6247
## 10 250.5 2012-09-11 2016-01-08 39.7180
## ..
## Variables not shown: name (chr), datacoverage (dbl), id (chr),
## elevationUnit (chr), longitude (dbl)
14. http://www.ncdc.noaa.gov/cdo-web/webservices/v2
So we see that several stations are available from which to pull data. To actually
pull data from one of these stations we need the station ID. The station I want to pull
data from is the Dayton International Airport station. We can see that this station
provides data from 1948 to present, and I can get the station ID as illustrated. Note that
I use some dplyr for data manipulation here; we will cover dplyr in a later chapter,
but this just illustrates the fact that we received the data via the API.
library(dplyr)
stations$data %>%
filter(name == "DAYTON INTERNATIONAL AIRPORT, OH US") %>%
select(mindate, maxdate, id)
## Source: local data frame [1 x 3]
##
## mindate maxdate id
## (chr) (chr) (chr)
## 1 1948-01-01 2016-01-15 GHCND:USW00093815
To pull all available GHCND data from this station we'll use ncdc(). We simply
supply the data set to pull, the start and end dates (ncdc restricts you to a one-year
limit), the station ID, and your key. We can see that this station provides a full range of
data types.
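A sketch of that request, with the argument values as just described (noaa_key again holds the API key):
# pull one year of GHCND data for the Dayton International Airport station
climate <- ncdc(datasetid = "GHCND",
                stationid = "GHCND:USW00093815",
                startdate = "2015-01-01",
                enddate = "2015-12-31",
                token = noaa_key)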
climate$data
## Source: local data frame [25 x 8]
##
## date datatype station value fl_m fl_q
## (chr) (chr) (chr) (int) (chr) (chr)
## 1 2015-01-01T00:00:00 AWND GHCND:USW00093815 72
## 2 2015-01-01T00:00:00 PRCP GHCND:USW00093815 0
## 3 2015-01-01T00:00:00 SNOW GHCND:USW00093815 0
## 4 2015-01-01T00:00:00 SNWD GHCND:USW00093815 0
## 5 2015-01-01T00:00:00 TAVG GHCND:USW00093815 -38 H
## 6 2015-01-01T00:00:00 TMAX GHCND:USW00093815 28
## 7 2015-01-01T00:00:00 TMIN GHCND:USW00093815 -71
## 8 2015-01-01T00:00:00 WDF2 GHCND:USW00093815 240
## 9 2015-01-01T00:00:00 WDF5 GHCND:USW00093815 240
## 10 2015-01-01T00:00:00 WSF2 GHCND:USW00093815 130
## ..
## Variables not shown: fl_so (chr), fl_t (chr)
Since we recently had some snow here, let's pull data on snowfall for 2015. We
adjust the limit argument (by default ncdc limits results to 25) and identify the data
type we want. By sorting we see what days experienced the greatest snowfall (don't
worry, the results are reported in mm!).
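A sketch of the snowfall request, selecting the SNOW data type and raising the limit to cover the full year:
# pull 2015 daily snowfall for the same station
snow <- ncdc(datasetid = "GHCND",
             stationid = "GHCND:USW00093815",
             datatypeid = "SNOW",
             startdate = "2015-01-01",
             enddate = "2015-12-31",
             limit = 365,
             token = noaa_key)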
snow$data %>%
arrange(desc(value))
## Source: local data frame [365 x 8]
##
## date datatype station value fl_m fl_q
## (chr) (chr) (chr) (int) (chr) (chr)
## 1 2015-03-01T00:00:00 SNOW GHCND:USW00093815 114
## 2 2015-02-21T00:00:00 SNOW GHCND:USW00093815 109
## 3 2015-01-25T00:00:00 SNOW GHCND:USW00093815 71
## 4 2015-01-06T00:00:00 SNOW GHCND:USW00093815 66
## 5 2015-02-16T00:00:00 SNOW GHCND:USW00093815 30
## 6 2015-02-18T00:00:00 SNOW GHCND:USW00093815 25
## 7 2015-02-14T00:00:00 SNOW GHCND:USW00093815 23
## 8 2015-01-26T00:00:00 SNOW GHCND:USW00093815 20
## 9 2015-02-04T00:00:00 SNOW GHCND:USW00093815 20
## 10 2015-02-12T00:00:00 SNOW GHCND:USW00093815 20
## ..
## Variables not shown: fl_so (chr), fl_t (chr)
This is just an intro to rnoaa, as the package offers a slew of data sets to pull
from and functions to apply. It even offers built-in plotting functions. Use
help(package = "rnoaa") to see all that rnoaa has to offer.
16.4.2.3 rtimes
Let's start by searching NY Times articles. With the presidential elections upon
us, we can illustrate by searching the least controversial candidate: Donald Trump.
We can see that there are 4,565 article hits for the term "Trump". We can get more
information on a particular article by subsetting.
library(rtimes)

# search article metadata for "Trump" (the date range is illustrative);
# the article search API key is stored in article_key
articles <- as_search(q = "Trump",
                      begin_date = "20150101",
                      end_date = "20160101",
                      key = article_key)

# summary
articles$meta
## hits time offset
## 1 4565 28 0
We can use the campaign finance API and functions to gain some insight into
Trump's campaign income and expenditures. The only special data you need is the
FEC ID for the candidate of interest.
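A sketch of such a request; the function name and arguments are assumed from rtimes's campaign finance wrappers, and P80001571 is Trump's FEC ID.
# pull candidate details by FEC ID (cf_key holds the campaign finance
# API key; function and arguments assumed)
trump <- cf_candidate_details(campaign_cycle = 2016,
                              fec_id = "P80001571",
                              key = cf_key)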
rtimes also allows us to gain some insight into what our locally elected officials
are up to with the Congress API. First, I can get some information on my
Senator and then use that information to see if he's supporting my interests. For
instance, I can pull the most recent bills that he is co-sponsoring (see the sketch
after the output below).
# pull info on OH senator
senator <- cg_memberbystatedistrict(chamber = "senate",
state = "OH",
key = congress_key)
senator$meta
## id name role gender party
## 1 B000944 Sherrod Brown Senator, 1st Class M D
## times_topics_url twitter_id youtube_id seniority
## 1 SenSherrodBrown SherrodBrownOhio 9
## next_election
## 1 2018
## api_url
## 1 http://api.nytimes.com/svc/politics/v3/us/legislative/congress/members/B000944.json
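The member ID in the output above (B000944) feeds the co-sponsorship lookup; a sketch assuming rtimes's cg_billscosponsor() function.
# pull bills the senator is co-sponsoring (function name assumed)
bills <- cg_billscosponsor(memberid = "B000944",
                           type = "cosponsored",
                           key = congress_key)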
16.4.3 httr for All Things Else
Although numerous R API packages are available, and cover a wide range of data,
you may eventually run into a situation where you want to leverage an organization's
API but an R package does not exist. Enter httr. httr was developed by
Hadley Wickham to easily work with web APIs. It offers multiple functions (i.e.
HEAD(), POST(), PATCH(), PUT() and DELETE()); however, the function we
are most concerned with today is GET(). We use the GET() function to access an
API, provide it some request parameters, and receive an output.
To give you a taste for how the httr package works, I'll quickly cover how to
use it for a basic key-only API and an OAuth-required API:
Key-only API is illustrated by pulling U.S. Department of Education data
available on data.gov
OAuth-required API is illustrated by pulling tweets from my personal Twitter feed
To demonstrate how to use the httr package for accessing a key-only API, I'll
illustrate with the College Scorecard API15 provided by the Department of Education.
First, you'll need to request your API key, which can be done at https://api.data.gov/
signup/.
# truncated key
edu_key <- "fd783wmS3Z"
We can now proceed to use httr to request data from the API with the GET()
function. I went to North Dakota State University (NDSU) for my undergrad, so I'm
interested in pulling some data for this school. I can use the provided data library
and query explanation to determine the parameters required. In this example, the
URL includes the primary path (https://api.data.gov/ed/collegescorecard/), the
API version (v1), and the endpoint (schools). The question mark (?) at the
end of the URL is included to begin the list of query parameters, which only includes
my API key and the school of interest.
library(httr)

# request NDSU data; the query field name (school.name) follows the
# API's data library conventions
ndsu_req <- GET("https://api.data.gov/ed/collegescorecard/v1/schools?",
                query = list(api_key = edu_key,
                             school.name = "North Dakota State University"))

# convert the response to an R list
ndsu_data <- content(ndsu_req)
15. https://api.data.gov/docs/ed/
The returned request is converted with content() to an R object (a list in this
case). The data is segmented into two main components: metadata and results. I'm
primarily interested in the results.
The results branch of this list provides information on lat-long location, school
identifier codes, some basic info on the school (city, number of branches, school
website, accreditor, etc.), and then student data for the years 1997–2013.
names(ndsu_data)
## [1] "metadata" "results"
names(ndsu_data$results[[1]])
## [1] "2008" "2009" "2006" "ope6_id" "2007" "2004"
## [7] "2013" "2005" "location" "2002" "2003" "id"
## [13] "1996" "1997" "school" "1998" "2012" "2011"
## [19] "2010" "ope8_id" "1999" "2001" "2000"
To see what kind of student data categories are offered we can assess a single
year. You can see that available data includes earnings, academics, student info/
demographics, admissions, costs, etc. With such a large data set, which includes
many embedded lists, sometimes the easiest way to learn the data structure is to
peruse names at different levels.
# student data categories available by year
names(ndsu_data$results[[1]]$`2013`)
## [1] "earnings" "academics" "student" "admissions" "repayment"
## [6] "aid" "cost" "completion"
So if I'm interested in comparing the rise in cost versus the rise in student debt I
can simply subset for this data once I've identified its location and naming structure.
Note that for this subsetting we use the magrittr package and the sapply() function;
both are covered in later chapters, but this is just meant to illustrate the types of data
available through this API.
library(magrittr)
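A sketch of that subsetting; the element paths used here (cost$tuition$in_state and aid$median_debt$completers$overall) are hypothetical stand-ins that should be confirmed against the data library.
# extract an annual cost and debt figure across a span of years (element
# paths are illustrative, not confirmed field names)
years <- as.character(2004:2013)
cost <- ndsu_data$results[[1]][years] %>%
  sapply(function(x) x$cost$tuition$in_state)
debt <- ndsu_data$results[[1]][years] %>%
  sapply(function(x) x$aid$median_debt$completers$overall)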
Quite simple, isn't it? At least once you've learned how the query requests are
formatted for a particular API.
We can then bundle the consumer key and secret into one object with oauth_app().
The first argument, appname, is simply used as a local identifier; it does not need to
match the name you gave the Twitter app you developed at https://apps.twitter.com/.
We are now ready to ask for access credentials. Since Twitter uses OAuth 1.0 we
use the oauth1.0_token() function and incorporate the endpoints identified and
the oauth_app object we previously named twitter_app.
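A minimal sketch of those two steps; the key and secret values are placeholders for the credentials from your Twitter app.
# bundle the consumer key and secret (placeholder values)
twitter_app <- oauth_app("twitter",
                         key = "consumer_key_here",
                         secret = "consumer_secret_here")

# request OAuth 1.0 credentials from Twitter's standard endpoints
twitter_token <- oauth1.0_token(oauth_endpoints("twitter"), twitter_app)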
Once authentication is complete we can use the API. I can pull all the tweets
that show up on my personal timeline using the GET() function and the access
credentials I stored in twitter_token. I then use content() to convert to a
list and I can start to analyze the data.
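A sketch of the request; the home timeline endpoint below is Twitter's standard REST API 1.1 URL.
# pull tweets from the authenticated user's home timeline
req <- GET("https://api.twitter.com/1.1/statuses/home_timeline.json",
           config(token = twitter_token))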
In this case each tweet is saved as an individual list item and a full range of data
is provided for each tweet (i.e. id, text, user, geo location, favorite count, etc.). For
instance, we can see that the first tweet was by FiveThirtyEight concerning American
politics and, at the time of this analysis, had been favorited by 3 people.
# convert to R object
tweets <- content(req)
tweets[[1]]$text
[1] "\U0001f3a7 A History Of Data In American Politics (Part 1): William Jennings
Bryan to Barack Obama https://t.co/oCKzrXuRHf https://t.co/6CvKKToxoH"
tweets[[1]]$favorite_count
[1] 3
Bibliography
Munzert, S., Rubba, C., Meißner, P., & Nyhuis, D. (2014). Automated data collection with R: A
practical guide to web scraping and text mining. John Wiley & Sons.
Nolan, D., & Lang, D. T. (2014). XML and Web Technologies for Data Sciences with R. Springer.
16. https://cran.r-project.org/web/packages/httr/vignettes/quickstart.html
Chapter 17
Exporting Data
Although getting data into R is essential, getting data out of R can be just as important.
Whether you need to export data or analytic results simply to store, share, or feed into
another system, it is generally a straightforward process. This section will cover how to
export data to text files, Excel files (along with some additional formatting capabilities),
and save to R data objects. In addition to the commonly used base R functions to perform
data importing, I will also cover functions from the popular readr and xlsx packages
along with a lesser known but useful r2excel package for Excel formatting.
17.1 Writing Data to Text Files
As mentioned in the importing data section, text files are a popular way to hold and
exchange tabular data as almost any data application supports exporting data to the
CSV (or other text file) formats. Consequently, exporting data to a text file is a
pretty standard operation. Plus, since you've already learned how to import text files
you pretty much have the basics required to write to text files; we just use a slightly
different naming convention.
Similar to the examples provided in the importing text files section, the two main
groups of functions that I will demonstrate to write to text files include base R
functions and readr package functions.
# data frame to export (reconstructed to match the output below)
df <- data.frame(var1 = c(10, 25, 8),
                 var2 = c("beer", "wine", "cheese"),
                 var3 = c(TRUE, TRUE, FALSE),
                 row.names = c("billy", "bob", "thornton"))

df
## var1 var2 var3
## billy 10 beer TRUE
## bob 25 wine TRUE
## thornton 8 cheese FALSE
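Writing this data frame to a CSV file is then one line (the file name is illustrative):
# write to a csv file
write.csv(df, file = "export_csv")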
In addition to CSV files, we can also write to other delimited text files using
write.table().
# write to a tab-delimited text file
write.table(df, file = "export_txt", sep = "\t")
The readr package uses write functions similar to base R. However, readr write
functions are about twice as fast and they do not write row names. One thing to note:
where base R write functions use the file = argument, readr write functions use
path =.
library(readr)
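The readr equivalents of the two exports above (file names illustrative):
# write to csv and tab-delimited files; note path = rather than file =
write_csv(df, path = "export_csv2")
write_delim(df, path = "export_txt2", delim = "\t")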
17.2 Writing Data to Excel Files
As previously mentioned, many organizations still rely on Excel to hold and share
data, so exporting to Excel is a useful bit of knowledge. And rather than saving to a
.csv file to send to a co-worker who wants to work in Excel, it's more efficient to just
save R outputs directly to an Excel workbook. Since I covered importing data with
the xlsx package, I'll also cover exporting data with this package. However, the
readxl package which I demonstrated in the importing data section does not have
a function to export to Excel. But there is a lesser known package called r2excel
that provides exporting and formatting functions for Excel which I will cover.
In some cases you may wish to create a .xlsx file that contains multiple data
frames. In this case you can just create an empty workbook and save the data frames on
separate worksheets within the same workbook:
library(xlsx)

# create empty workbook
multiple_df <- createWorkbook()
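Filling and saving the workbook might then look like this sketch, using two built-in data sets and an illustrative file name:
# create worksheets within the workbook
car_df <- createSheet(wb = multiple_df, sheetName = "Cars")
iris_df <- createSheet(wb = multiple_df, sheetName = "Iris")

# add data frames to the worksheets
addDataFrame(x = mtcars, sheet = car_df)
addDataFrame(x = iris, sheet = iris_df)

# save the workbook
saveWorkbook(multiple_df, file = "output_example_2.xlsx")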
By default this saves the row and column names, but this can be adjusted by
adding col.names = FALSE and/or row.names = FALSE to the
addDataFrame() function. There is also the ability to do some formatting with the
xlsx package. The following provides several examples of how you can edit titles,
subtitles, borders, column width, etc.1 Although at first glance this can appear
tedious for simple Excel editing, the real benefits present themselves when you
integrate this editing into automated analyses.
# create new workbook
wb <- createWorkbook()
#--------------------
# DEFINE CELL STYLES
#--------------------
# title and subtitle styles
title_style <- CellStyle(wb) +
Font(wb, heightInPoints = 16,
color = "blue",
isBold = TRUE,
underline = 1)
#-------------------------
# CREATE & EDIT WORKSHEET
#-------------------------
# create worksheet
Cars <- createSheet(wb, sheetName = "Cars")
1. This example was derived from http://www.sthda.com/english/. Additional options, such as
adding plot outputs, can be found at STHDA and also in the XML and Web Technologies for Data
Sciences with R book.
# save workbook
saveWorkbook(wb, file = "output_example_3.xlsx")
Although formatting Excel files using the xlsx package is possible, the last section
illustrated that it is a bit cumbersome. For this reason, A. Kassambara2 created
the r2excel package, which depends on the xlsx package but provides easy-to-use
functions for Excel formatting. The following provides a simple example, but
you can find many additional formatting functions at http://www.sthda.com/.
# install.packages("devtools")
devtools::install_github("kassambara/r2excel")
library(r2excel)
2. https://github.com/kassambara
# create new workbook and a worksheet within it
wb <- createWorkbook()
Casualties <- createSheet(wb, sheetName = "Casualties")
# add title
xlsx.addHeader(wb, sheet = Casualties,
value = "Road Casualties",
level = 1,
color = "red",
underline = 1)
# add subtitle
xlsx.addHeader(wb, sheet = Casualties,
value = "Great Britain 1969-84",
level = 2,
color = "black")
# add hyperlink
xlsx.addHyperlink(wb, sheet = Casualties,
address = "http://bradleyboehmke.github.io/",
friendlyName = "Vist my website", fontSize = 12)
xlsx.addLineBreak(sheet = Casualties, 1)
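Completing the example: the titles above describe the built-in Seatbelts data, so a sketch that adds the data and saves the workbook (the xlsx.addTable() call and file name are assumptions) is:
# add the data below the header elements (function signature assumed)
xlsx.addTable(wb, sheet = Casualties, data = as.data.frame(Seatbelts))

# save the workbook
saveWorkbook(wb, file = "output_example_4.xlsx")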
Sometimes you may need to save data or other R objects outside of your workspace.
You may want to share R data/objects with co-workers, transfer them between projects
or computers, or simply archive them. There are three primary ways that people tend to
save R data/objects: as .RData, .rda, or .rds files. .rda is just short for .RData; therefore,
these file extensions represent the same underlying object type. You use the .rda
or .RData file types when you want to save several, or all, objects and functions that
exist in your global environment. On the other hand, if you only want to save a single
R object such as a data frame, function, or statistical model results, it's best to use the
.rds file type. You can use .rda or .RData to save a single object, but the benefit of .rds
is that it only saves a representation of the object and not the name, whereas .rda and
.RData save both the object and its name. As a result, with .rds the saved object can
be loaded into a named object within R that is different from the name it had when
originally saved. The following illustrates how you save R objects with each type.
# save() can be used to save multiple objects in your global environment,
# in this case I save two objects to a .RData file
x <- stats::runif(20)
y <- list(a = 1, b = TRUE, c = "oops")
save(x, y, file = "xy.RData")
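For a single object, the .rds route uses saveRDS() and readRDS(); note the object can come back under a different name.
# save a single object to an .rds file and read it back into a
# differently named object
saveRDS(x, "x.rds")
x2 <- readRDS("x.rds")
identical(x, x2)
## [1] TRUE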
Part V
Creating Efficient and Readable Code in R
This part also covers loop control statements, which allow you to perform repetitive
code processes with different intentions and allow these automated expressions to
naturally respond to features of your data. Lastly, I demonstrate how you can simplify
your code to make it more readable and clear. Combined, these tools will move you
forward in writing efficient, simple, and readable code.
1. According to Dave Thomas, DRY says that "every piece of system knowledge should have one
authoritative, unambiguous representation. Every piece of knowledge in the development of
something should have a single representation. A system's knowledge is far broader than just its code.
It refers to database schemas, test plans, the build system, even documentation."
Bibliography
Hunt, A., & Thomas, D. (2000). The pragmatic programmer: from journeyman to master. Addison-
Wesley Professional.
Chapter 18
Functions
18.1 Function Components
With the exception of primitive functions, all R functions have three parts:
body(): the code inside the function
formals(): the list of arguments used to call the function
environment(): the mapping of the location(s) of the function's variables
For example, let's build a function that calculates the present value (PV) of a single future sum. The equation for a single-sum PV is:
PV = FV / (1 + r)^n
where FV is the future value, r is the interest rate, and n is the number of periods. In the function that follows, the body of the function includes the equation FV / (1 + r)^n followed by rounding the output to two decimals. The formals (or arguments) required for the function include FV, r, and n. And the environment shows that the function operates in the global environment.
PV <- function(FV, r, n) {
PV <- FV / (1 + r)^n
round(PV, 2)
}
body(PV)
## {
## PV <- FV / (1 + r)^n
## round(PV, 2)
## }
formals(PV)
## $FV
##
##
## $r
##
##
## $n
environment(PV)
## <environment: R_GlobalEnv>
18.2 Arguments
To perform the PV() function we can call the arguments in different ways.
# using argument names
PV(FV = 1000, r = .08, n = 5)
## [1] 680.58
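Arguments can also be matched by position; a quick sketch (the output is deterministic given the function above):
# using argument positions
PV(1000, .08, 5)
## [1] 680.58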
Note that when building a function you can also set default values for arguments. In our original PV() we did not provide any default values, so if we do not supply all of the argument parameters an error will be returned:
# missing the n argument
PV(1000, .08)
## Error in PV(1000, 0.08): argument "n" is missing, with no default
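However, if we set a default value, the function will use it whenever the parameter is missing. A sketch with a default for n:
# set a default value for n
PV <- function(FV, r, n = 5) {
  PV <- FV / (1 + r)^n
  round(PV, 2)
}

# n now defaults to 5 when not supplied
PV(1000, .08)
## [1] 680.58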
18.3 Scoping Rules
Scoping refers to the set of rules a programming language uses to look up the values of variables and/or symbols. The following illustrates the basic concept behind the lexical scoping rules that R follows.
A function will first look inside itself to identify all the variables being called. If all variables exist inside the function, there is no additional search required to identify them.
PV1 <- function() {
FV <- 1000
r <- .08
n <- 5
FV / (1 + r)^n
}
PV1()
## [1] 680.5832
However, if a variable does not exist within the function, R will look one level
up to see if the variable exists.
# the FV variable is outside the function environment
FV <- 1000
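# one possible PV2() definition, consistent with the output below:
# r and n are defined inside the function, FV is found one level up
PV2 <- function() {
  r <- .08
  n <- 5
  FV / (1 + r)^n
}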
PV2()
## [1] 680.5832
This same concept applies if you have functions embedded within functions:
FV <- 1000
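# one possible PV3() definition with an embedded function: the inner
# function finds r and n in PV3's environment and FV one level further up
PV3 <- function() {
  r <- .08
  n <- 5
  denom <- function() (1 + r)^n
  FV / denom()
}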
PV3()
## [1] 680.5832
This also applies for functions in which some arguments are called but not all
variables used in the body are identified as arguments:
# n is specified within the function
PV4 <- function(FV, r) {
n <- 5
FV / (1 + r)^n
}
PV4(1000, .08)
## [1] 680.5832
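The PV5() call below implies a function in which only FV must be supplied; a hypothetical definition consistent with its output:
# PV5 provides defaults for r and n, so only FV is required
PV5 <- function(FV, r = .08, n = 5) {
  FV / (1 + r)^n
}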
PV5(1000)
## [1] 680.5832
18.5 Returning Multiple Outputs from a Function
If a function performs multiple tasks and therefore has multiple results to report, we have to include the c() function inside the function to display all the results; otherwise the function will return only the last expression:
bad <- function(x, y) {
2 * x + y
x + 2 * y
2 * x + 2 * y
x / y
}
bad(1, 2)
## [1] 0.5
Furthermore, when we have a function that performs multiple computations, it is often useful to save the results in a list.
good_list <- function(x, y) {
output1 <- 2 * x + y
output2 <- x + 2 * y
output3 <- 2 * x + 2 * y
output4 <- x / y
c(list(Output1 = output1, Output2 = output2,
Output3 = output3, Output4 = output4))
}
good_list(1, 2)
## $Output1
## [1] 4
##
## $Output2
## [1] 5
##
## $Output3
## [1] 6
##
## $Output4
## [1] 0.5
For functions that will be used again, and especially for those used by someone other than the creator, it is good practice to check the validity of arguments within the function. One way to do this is to use the stop() function. The following uses an if() statement to check whether the class of each argument is numeric. If one or more arguments are not numeric, the stop() function will be triggered to provide a meaningful message to the user.
PV <- function(FV, r, n) {
if(!is.numeric(FV) | !is.numeric(r) | !is.numeric(n)){
stop('This function only works for numeric inputs!\n',
'You have provided objects of the following classes:\n',
'FV: ', class(FV), '\n',
'r: ', class(r), '\n',
'n: ', class(n))
}
PV <- FV / (1 + r)^n
round(PV, 2)
}
Another concern is dealing with missing or NA values. Let's say you wanted to perform the PV() function on a vector of potential future values. The function as is will output NA in place of any missing values in the FV input vector. If you want to remove the missing values, you can incorporate an na.rm parameter in the function arguments along with an if statement that removes missing values when na.rm = TRUE.
# vector of future value inputs
fv <- c(800, 900, NA, 1100, NA)
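A sketch of the modified function described above:
# add an na.rm argument that strips missing values before computing
PV <- function(FV, r, n, na.rm = FALSE) {
  if (na.rm == TRUE) {
    FV <- FV[!is.na(FV)]
  }
  round(FV / (1 + r)^n, 2)
}

PV(fv, r = .08, n = 5, na.rm = TRUE)
## [1] 544.47 612.52 748.64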
18.7 Saving and Sourcing Functions
If you want to save a function to be used at other times and within other scripts, there are two main ways to do this. One way is to build a package, which I do not cover in this book but which is discussed in more detail in Hadley Wickham's R Packages book, openly available at http://r-pkgs.had.co.nz/. Another option, and the one discussed here, is to save the function in a script. For example, we can save a script that contains the PV() function and save this script as PV.R.
Now, if we are working in a fresh script you'll see that we have no objects or functions in our working environment. If we want to use the PV() function in this new script, we can simply read in the function by sourcing the script using source("PV.R"). Now you'll notice that we have the PV() function in our global environment and can use it as normal. Note that if you are working in a different directory than where the PV.R file is located, you'll need to include the proper path to access the relevant directory.
Chapter 19
Loop Control Statements
Looping is similar to creating functions in that both are merely a means to automate a certain multi-step process by organizing sequences of R expressions. R consists of several loop control statements which allow you to perform repetitive code processes with different intentions and allow these automated expressions to naturally respond to features of your data. Consequently, learning these loop control statements will go a long way in reducing code redundancy and becoming a more efficient data wrangler.
This chapter starts by covering the basic control statements in R, which include if and else, along with the for, while, and repeat loop control structures. In addition, I cover break and next, which allow you to further control flow within the aforementioned control statements. Next I cover a set of vectorized functions known as the apply family of functions, which minimize your need to explicitly create loops. I then provide some additional loop-like functions that are helpful in everyday data analysis, followed by a list of additional resources to learn more about control structures in R.
19.1 Basic Control Statements (i.e. if, for, while, etc.)
19.1.1 if Statement
The following is an example that tests if any values in a vector are negative. Notice there are two ways to write this if statement; since the body of the statement is only one line, you can write it with or without curly braces. I recommend getting in the habit of using curly braces; that way, if you build onto if statements with additional functions in the body or add an else statement later, you will not run into issues with unexpected code procedures.
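A sketch of the two equivalent forms:
x <- c(8, 3, -2, 5)

# without curly braces
if(any(x < 0)) print("x contains negative numbers")
## [1] "x contains negative numbers"

# with curly braces
if(any(x < 0)) {
  print("x contains negative numbers")
}
## [1] "x contains negative numbers"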
19.1.2 if...else Statement
The following extends the previous if example: the if statement tests whether any values in a vector are negative. If TRUE it produces one output, and if FALSE it produces the else output.
# this test results in statement 1 being executed
x <- c(8, 3, -2, 5)
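# one possible if...else body consistent with the description above;
# with a negative value present, statement 1 executes
if(any(x < 0)) {
  print("x contains negative numbers")
} else {
  print("x contains all positive numbers")
}
## [1] "x contains negative numbers"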
# this test results in statement 2 (or the else statement) being executed
y <- c(8, 3, 2, 5)
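# with all positive values, statement 2 (the else statement) executes
if(any(y < 0)) {
  print("y contains negative numbers")
} else {
  print("y contains all positive numbers")
}
## [1] "y contains all positive numbers"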
Simple if...else statements, as above, in which only one line of code is executed in each statement can be written in a simplified alternative manner. These alternatives are only recommended for very short if...else code because they become difficult to read as the character length increases.
x <- c(8, 3, 2, 5)
# alternative 1
if(any(x < 0)) print("x contains negative numbers") else print("x contains all positive numbers")
## [1] "x contains all positive numbers"
We can also nest as many if...else statements as required (or desired). For example:
# this test results in statement 1 being executed
x <- 7
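# one possible nested chain; with x <- 7 the first statement executes
if(x >= 5) {
  print("statement 1: x is greater than or equal to 5")
} else if(x >= 0) {
  print("statement 2: x is between 0 and 5")
} else {
  print("statement 3: x is negative")
}
## [1] "statement 1: x is greater than or equal to 5"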
19.1.3 for Loop
The for loop is used to execute repetitive code statements for a particular number of times. The general syntax is provided below, where i is the counter; as i assumes each sequential value defined (1 through 100 in this example), the code in the body will be performed for that ith value.
# syntax of for loop
for(i in 1:100) {
<do stuff here with i>
}
For example, the following for loop iterates through each value (2010, 2011, ..., 2016) and performs the paste and print functions inside the curly brackets.
for(i in 2010:2016) {
output <- paste("The year is", i)
print(output)
}
## [1] "The year is 2010"
## [1] "The year is 2011"
## [1] "The year is 2012"
## [1] "The year is 2013"
## [1] "The year is 2014"
## [1] "The year is 2015"
## [1] "The year is 2016"
If you want to perform the for loop but have the outputs combined into a vector or other data structure, then you can initiate the output data structure prior to the for loop. For instance, if we want the previous outputs combined into a single vector x, we can initiate x first and then append each for loop output to x.
x <- NULL
for(i in 2010:2016) {
output <- paste("The year is", i)
x <- append(x, output)
}
x
## [1] "The year is 2010" "The year is 2011" "The year is 2012"
"The year is 2013"
## [5] "The year is 2014" "The year is 2015" "The year is 2016"
A more efficient practice is to initiate the output data structure to the appropriate size and then fill in each element within the for loop. Although this inefficiency is not noticeable in this small example, when you perform larger repetitions it will become noticeable, so you might as well get in the habit of filling rather than growing.
x <- vector(mode = "character", length = 7)
counter <- 1
for(i in 2010:2016) {
output <- paste("The year is", i)
x[counter] <- output
counter <- counter + 1
}
x
## [1] "The year is 2010" "The year is 2011" "The year is 2012" "The year is 2013"
## [5] "The year is 2014" "The year is 2015" "The year is 2016"
Consider another example in which we create an empty matrix with 5 rows and 5 columns. The for loop then iterates over each column (note how i takes on the values 1 through the number of columns in the my.mat matrix) and takes a random draw of 5 values from a Poisson distribution with mean i in column i:
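# create the empty 5 x 5 matrix referred to above (one possible construction)
my.mat <- matrix(NA, nrow = 5, ncol = 5)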
for(i in 1:ncol(my.mat)){
my.mat[, i] <- rpois(5, lambda = i)
}
my.mat
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0 2 1 7 1
## [2,] 1 2 2 3 9
## [3,] 2 1 5 6 6
## [4,] 2 1 5 2 10
## [5,] 0 2 2 2 4
19.1.4 while Loop
While loops begin by testing a condition. If it is true, they execute the statement. Once the statement is executed, the condition is tested again, and so forth, until the condition is false, after which the loop exits. It's considered a best practice to include a counter object to keep track of total iterations:
# syntax of while loop
counter <- 1
while(test_expression) {
statement
counter <- counter + 1
}
while loops can potentially result in infinite loops if not written properly; therefore, you must use them with care. To provide a simple example of how similar for and while loops are, both of the following print the numbers 1 through 3:
# print 1 through 3 with a while loop
counter <- 1
while(counter <= 3) {
  print(counter)
  counter <- counter + 1
}

# the equivalent for loop
for(i in 1:3) {
  print(i)
}
The primary difference between a for loop and a while loop is that a for loop is used when the number of iterations is known, whereas a while loop is used when the number of iterations is not known. For instance, the following takes the value x and randomly adds or subtracts 1 until x exceeds the value in the test expression. The output illustrates that the code runs 14 times before x exceeds the threshold with the value 9.
counter <- 1
x <- 5
set.seed(3)
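# a sketch of the loop described above: add or subtract 1 at random
# until x exceeds the threshold of 8
while(x <= 8) {
  x <- x + sample(c(-1, 1), 1)
  counter <- counter + 1
}
counter
x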
19.1.5 repeat Loop
A repeat loop is used to iterate over a block of code multiple times. There is no test expression in a repeat loop to end or exit the loop. Rather, we must put a condition statement explicitly inside the body of the loop and use the break function to exit it. Failing to do so will result in an infinite loop.
# syntax of repeat loop
counter <- 1
repeat {
statement
if(test_expression){
break
}
counter <- counter + 1
}
For example, say we want to randomly draw values from a uniform distribution
between 1 and 25. Furthermore, we want to continue to draw values randomly until
our sample contains at least each integer value between 1 and 25; however, we do
not care if we've drawn a particular value multiple times. The following code repeats
the random draws of values between 1 and 25 (which we round). We then include
an if statement to check if all values between 1 and 25 are present in our sample.
If so, we use the break statement to exit the loop. If not, we add to our counter and
let the loop repeat until the conditional if statement is found to be true. We can
then check the counter object to assess how many iterations were required to
reach our conditional requirement.
counter <- 1
x <- NULL
repeat {
x <- c(x, round(runif(1, min = 1, max = 25)))
if(all(1:25 %in% x)){
break
}
counter <- counter + 1
}
counter
## [1] 75
19.1.6 break Function to Exit a Loop
The break function is used to exit a loop immediately, regardless of what iteration the loop may be on. break functions are typically embedded in an if statement in which a condition is assessed: if TRUE, break out of the loop; if FALSE, continue on with the loop. In a nested looping situation, where there is a loop inside another loop, this statement exits from the innermost loop being evaluated.
x <- 1:5
for (i in x) {
if (i == 3){
break
}
print(i)
}
## [1] 1
## [1] 2
19.1.7 next Function to Skip an Iteration
The next statement is useful when we want to skip the current iteration of a loop without terminating it. On encountering next, the R parser skips further evaluation and starts the next iteration of the loop.
x <- 1:5
for (i in x) {
if (i == 3){
next
}
print(i)
}
## [1] 1
## [1] 2
## [1] 4
## [1] 5
19.2 Apply Family
The apply family consists of vectorized functions which minimize your need to explicitly create loops. These functions will apply a specified function to a data object; their primary differences are in the class of object to which the function is applied (list vs. matrix, etc.) and the class of object that is returned. The following presents the most common forms of apply functions that I use for data analysis, but realize that additional functions exist (mapply, rapply, and vapply) which are not covered here.
The apply() function is most often used to apply a function to the rows or col-
umns (margins) of matrices or data frames. However, it can be used with general
arrays, for example, to take the average of an array of matrices. Using apply() is
not faster than using a loop function, but it is highly compact and can be written in
one line.
The syntax for apply() is as follows:
# syntax of the apply function
apply(x, MARGIN, FUN, ...)
where:
x is the matrix, data frame, or array
MARGIN is a vector giving the subscripts over which the function will be applied (e.g., for a matrix, 1 indicates rows, 2 indicates columns, and c(1, 2) indicates rows and columns)
FUN is the function to be applied
... is for any other arguments to be passed to the function
# get the sum of each row (not really relevant for this data
# but it illustrates the capability)
apply(mtcars, 1, sum)
## Mazda RX4 Mazda RX4 Wag Datsun 710
## 328.980 329.795 259.580
## Hornet 4 Drive Hornet Sportabout Valiant
## 426.135 590.310 385.540
## Duster 360 Merc 240D Merc 230
## 656.920 270.980 299.570
## Merc 280 Merc 280C Merc 450SE
## 350.460 349.660 510.740
## Merc 450SL Merc 450SLC Cadillac Fleetwood
## 511.500 509.850 728.560
## Lincoln Continental Chrysler Imperial Fiat 128
## $item3
## [1] 1.193884
##
## $item4
## [1] 5.013019
The above provides a simple example where each list item is simply a vector of numeric values. However, consider the case where you have a list that contains data frames and you would like to loop through each list item and apply a function to the data frame. In this case we can embed an apply function within an lapply function. For example, the following creates a list of R's built-in beaver data sets. The lapply function loops through each of the two list items and uses apply to calculate the mean of the columns in both list items. Note that I wrap the apply function with round to provide an easier-to-read output.
# list of R's built in beaver data
beaver_data <- list(beaver1 = beaver1, beaver2 = beaver2)
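# loop through the list and compute rounded column means for each item;
# this call mirrors the sapply() comparison shown shortly, and the values
# below match that comparison's output
lapply(beaver_data, function(x) round(apply(x, 2, mean), 2))
## $beaver1
##     day    time    temp   activ
##  346.20 1312.02   36.86    0.05
##
## $beaver2
##     day    time    temp   activ
##  307.13 1446.20   37.60    0.62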
The sapply() function behaves similarly to lapply(); the only real difference
is in the return value. sapply() will try to simplify the result of lapply() if
possible. Essentially, sapply() calls lapply() on its input and then applies the
following algorithm:
If the result is a list where every element is length 1, then a vector is returned
If the result is a list where every element is a vector of the same length (> 1), a
matrix is returned.
If neither of the above simplifications can be performed, then a list is returned.
To illustrate the differences we can use the previous example using a list with the
beaver data and compare the sapply and lapply outputs:
# list of R's built in beaver data
beaver_data <- list(beaver1 = beaver1, beaver2 = beaver2)
# get the mean of each list item and simplify the output
sapply(beaver_data, function(x) round(apply(x, 2, mean), 2))
## beaver1 beaver2
## day 346.20 307.13
## time 1312.02 1446.20
## temp 36.86 37.60
## activ 0.05 0.62
tapply() applies a function to subsets of a vector, where the subsets are defined by another vector (typically a factor). To provide an example we'll use the built-in mtcars dataset and calculate the mean of the mpg variable grouped by the cyl variable.
# show first few rows of mtcars
head(mtcars)
## mpg cyl disp hp drat wt qsec vs am gear carb
## Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
## Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
## Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
## Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
## Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
## Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
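A sketch of the grouped-mean call described above; the group means match the mpg column of the tapply-within-apply output that follows:
# get the mean of the mpg column grouped by cylinders
tapply(mtcars$mpg, mtcars$cyl, mean)
##        4        6        8
## 26.66364 19.74286 15.10000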
Now let's say you want to calculate the mean for each column in the mtcars dataset grouped by the cyl categorical variable. To do this you can embed the tapply function within the apply function.
# get the mean of all columns grouped by cylinders
apply(mtcars, 2, function(x) tapply(x, mtcars$cyl, mean))
## mpg cyl disp hp drat wt qsec vs
## 4 26.66364 4 105.1364 82.63636 4.070909 2.285727 19.13727 0.9090909
## 6 19.74286 6 183.3143 122.28571 3.585714 3.117143 17.97714 0.5714286
## 8 15.10000 8 353.1000 209.21429 3.229286 3.999214 16.77214 0.0000000
## am gear carb
## 4 0.7272727 4.090909 1.545455
## 6 0.4285714 3.857143 3.428571
## 8 0.1428571 3.285714 3.500000
Note that this type of summarization can also be done using the dplyr package
with clearer syntax. This is covered in the Transforming Your Data with dplyr
section.
19.3 Other Useful Loop-Like Functions
In addition to the apply family, which provides vectorized functions that minimize your need to explicitly create loops, there are also a few commonly applied apply functions that have been further simplified. These include the calculation of column and row sums, means, medians, standard deviations, variances, and summary quantiles across an entire data set.
The most common apply functions include calculating the sums and means of
columns and rows. For instance, to calculate the sum of columns across a data frame
or matrix you could do the following:
apply(mtcars, 2, sum)
## mpg cyl disp hp drat wt qsec vs
## 642.900 198.000 7383.100 4694.000 115.090 102.952 571.160 14.000
## am gear carb
## 13.000 118.000 90.000
However, you can perform the same function with the shorter colSums()
function and it performs faster:
colSums(mtcars)
## mpg cyl disp hp drat wt qsec vs
## 642.900 198.000 7383.100 4694.000 115.090 102.952 571.160 14.000
## am gear carb
## 13.000 118.000 90.000
To illustrate the speed difference, we can compare the performance of the apply() function versus the colSums() function on a matrix with 100 million values (10,000 x 10,000). You can see that colSums() is significantly faster.
# develop a 10,000 x 10,000 matrix
mat <- matrix(sample(1:10, size = 100000000, replace = TRUE), nrow = 10000)
system.time(apply(mat, 2, sum))
## user system elapsed
## 1.544 0.329 1.879
system.time(colSums(mat))
## user system elapsed
## 0.126 0.000 0.127
Chapter 20
Simplify Your Code with %>%
A nested function call and its piped equivalent complete the same task, and the benefit of using %>% may not be immediately evident; however, when you desire to perform multiple functions its advantage becomes obvious. For instance, if we want to filter some data, group it by categories, summarize it, and then order the summarized results, we could write it out three different ways. Don't worry, you'll learn how to operate these specific functions in the next section.
library(magrittr)
library(dplyr)
arrange(
summarize(
group_by(
filter(mtcars, carb > 1),
cyl
),
Avg_mpg = mean(mpg)
),
desc(Avg_mpg)
)
## Source: local data frame [3 x 2]
##
## cyl Avg_mpg
## (dbl) (dbl)
## 1 4 25.90
## 2 6 19.74
## 3 8 15.10
This first option is considered a nested option, such that functions are nested within one another. Historically, this has been the traditional way of integrating code; however, it becomes extremely difficult to read what exactly the code is doing, and it also becomes easier to make mistakes when making updates to your code. Although not in violation of the DRY principle, it definitely violates the basic principle of readability and clarity, which makes communication of your analysis more difficult. To make things more readable, people often move to the following approach of saving each intermediate step as its own object (sketched below).
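A sketch of that stepwise option (the intermediate object names a, b, and c are hypothetical):
a <- filter(mtcars, carb > 1)
b <- group_by(a, cyl)
c <- summarise(b, Avg_mpg = mean(mpg))
arrange(c, desc(Avg_mpg))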
##
## cyl Avg_mpg
## (dbl) (dbl)
## 1 4 25.90
## 2 6 19.74
## 3 8 15.10
This second option helps in making the data wrangling steps more explicit and obvious but definitely violates the DRY principle. By sequencing multiple functions in this way you are likely saving multiple outputs that are not very informative to you or others; rather, the only reason you save them is to insert them into the next function to eventually get the final output you desire. This inevitably creates unnecessary copies and wreaks havoc on properly managing your objects; basically it results in a global environment charlie foxtrot! To provide the same readability (or even better), we can use %>% to string these arguments together without unnecessary object creation:
mtcars %>%
filter(carb > 1) %>%
group_by(cyl) %>%
summarise(Avg_mpg = mean(mpg)) %>%
arrange(desc(Avg_mpg))
## Source: local data frame [3 x 2]
##
## cyl Avg_mpg
## (dbl) (dbl)
## 1 4 25.90
## 2 6 19.74
## 3 8 15.10
This final option, which integrates the %>% operator, makes for more efficient and legible code. It's efficient in that it doesn't save unnecessary objects (as in option 2) and performs as effectively (as both options 1 and 2), but it makes your code more readable in the process. It's legible in that you can read this as you would read normal prose (we read %>% as "and then"): take mtcars and then filter and then group by and then summarize and then arrange.
And since R is a functional programming language, meaning that everything you do is basically built on functions, you can use the pipe operator to feed into just about any argument call. For example, we can pipe into a linear regression function and then get the summary of the regression parameters. Note in this case I insert data = . into the lm() function. When using the %>% operator, the default is that the argument you are forwarding will go in as the first argument of the function that follows the %>%. However, in some functions the argument you are forwarding does not go into the default first position. In these cases, you place . to signal which argument you want the forwarded expression to go to.
mtcars %>%
filter(carb > 1) %>%
lm(mpg ~ cyl + hp, data = .) %>%
summary()
##
## Call:
## lm(formula = mpg ~ cyl + hp, data = .)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4.6163 -1.4162 -0.1506 1.6181 5.2021
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 35.67647 2.28382 15.621 2.16e-13 ***
## cyl -2.22014 0.52619 -4.219 0.000353 ***
## hp -0.01414 0.01323 -1.069 0.296633
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.689 on 22 degrees of freedom
## Multiple R-squared: 0.7601, Adjusted R-squared: 0.7383
## F-statistic: 34.85 on 2 and 22 DF, p-value: 1.516e-07
We can also pipe into plotting functions; here qplot() from the ggplot2 package (library(ggplot2) is added since qplot() comes from that package):
library(ggplot2)

mtcars %>%
  filter(carb > 1) %>%
  qplot(x = wt, y = mpg, data = .)
[Figure: Piping into a Plot (scatterplot of mpg against wt)]
You will also nd that the %>% operator is now being built into packages to make
programming much easier. For instance, in the section that follows where I illustrate
how to reshape and transform your data with the dplyr and tidyr packages, you
will see that the %>% operator is already built into these packages. It is also built into the
ggvis and dygraphs packages (visualization packages), the httr package (which
we covered in the data scraping chapter), and a growing number of newer packages.
20.2 Additional Functions
magrittr also offers some alternative pipe operators. Some functions, such as plotting functions, will cause the string of piped arguments to terminate. The tee operator (%T>%) allows you to continue piping to functions that would normally cause termination.
# normal piping terminates with the plot() function resulting in
# NULL results for the summary() function
mtcars %>%
filter(carb > 1) %>%
extract(, 1:4) %>%
plot() %>%
summary()
[Figure: scatterplot matrix of the mpg, cyl, disp, and hp variables]
# inserting %T>% allows you to plot and perform the functions that
# follow the plotting function
mtcars %>%
filter(carb > 1) %>%
extract(, 1:4) %T>%
plot() %>%
summary()
[Figure: the same scatterplot matrix of mpg, cyl, disp, and hp, produced while the pipe continues on to summary()]
The compound assignment operator %<>% is used to update a value by first piping it into one or more expressions and then assigning the result. For instance, let's say you want to transform the mpg variable in the mtcars data frame to a square root measurement. Using %<>% will perform the functions to the right of %<>% and save the resulting changes to the variable or data frame called on the left of %<>%.
# we can square root mpg and save this change using %<>%
mtcars$mpg %<>% sqrt
head(mtcars)
## mpg cyl disp hp drat wt qsec vs am gear carb
## Mazda RX4 4.582576 6 160 110 3.90 2.620 16.46 0 1 4 4
## Mazda RX4 Wag 4.582576 6 160 110 3.90 2.875 17.02 0 1 4 4
## Datsun 710 4.774935 4 108 93 3.85 2.320 18.61 1 1 4 1
## Hornet 4 Drive 4.626013 6 258 110 3.08 3.215 19.44 1 0 3 1
## Hornet Sportabout 4.324350 8 360 175 3.15 3.440 17.02 0 0 3 2
## Valiant 4.254409 6 225 105 2.76 3.460 20.22 1 0 3 1
Some functions (e.g. lm, aggregate, cor) have a data argument, which allows the direct use of names inside the data as part of the call. The exposition operator (%$%) is useful when you want to pipe a data frame, which may contain many columns, into a function that is applied to only some of the columns. For example, the correlation function cor() only requires an x and a y argument, so if you pipe the mtcars data into cor() using %>% you will get an error because cor() doesn't know how to handle mtcars. However, using %$% allows you to say "take this data frame and then perform cor() on these specified columns within mtcars."
# regular piping results in an error
mtcars %>%
subset(vs == 0) %>%
cor(mpg, wt)
## Error in pmatch(use, c("all.obs", "complete.obs", "pairwise.complete.obs", :
## object 'wt' not found
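Using the exposition operator instead succeeds; a sketch:
# %$% exposes mpg and wt so cor() can find them
mtcars %>%
  subset(vs == 0) %$%
  cor(mpg, wt)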
The magrittr package and its pipe operators are a great tool for making your code simple, efficient, and readable. There are limitations, or at least suggestions, on when and how you should use the operators. Garrett Grolemund and Hadley Wickham offer some advice on the proper use of pipe operators in their R for Data Science book. Nevertheless, %>% has greatly transformed our ability to write simplified code in R. As the pipe gains in popularity, you will likely find it in more future packages, and being familiar with it will likely result in better communication of your code.
Some additional resources regarding magrittr and the pipe operators you may find useful:
The magrittr vignette (vignette("magrittr") in your console) provides additional examples of using pipe operators and functions provided by magrittr.
A blog post by Stefan Milton Bache regarding the past, present and future of magrittr1
magrittr questions on Stack Overflow
The ensurer package, also written by Stefan Milton Bache, provides a useful way of verifying and validating data outputs in a sequence of pipe operators.
1
https://www.r-bloggers.com/simpler-r-coding-with-pipes-the-present-and-future-of-the-magrittr-package/
Part VI
Shaping and Transforming
Your Data with R
Up to 80 % of data analysis is spent on the process of cleaning and preparing data.
cf. Wickham (2014) and Dasu and Johnson (2003)
Chapter 21
Reshaping Your Data with tidyr
Jenny Bryan stated that "classroom data are like teddy bears and real data are like a grizzly bear with salmon blood dripping out its mouth." In essence, she was getting at the point that often when we learn how to perform a modeling approach in the classroom, the data used is provided in a format that appropriately feeds into the modeling tool of choice. In reality, datasets are messy, and "every messy dataset is messy in its own way."1 The concept of "tidy data" was established by Hadley Wickham and represents a standardized way to link the structure of a dataset (its physical layout) with its semantics (its meaning).2 The objective should always be to get a dataset into a tidy form, which consists of:
to get a dataset into a tidy form which consists of:
1. Each variable forms a column
2. Each observation forms a row
3. Each type of observational unit forms a table
To create tidy data you need to be able to reshape your data, preferably via efficient and simple code. To help with this process Hadley created the tidyr package. This chapter covers the basics of tidyr to help you reshape your data as necessary. I demonstrate how to turn wide data to long, long data to wide, how to split and combine variables, and finally I cover some lesser-known functions in tidyr that are useful. Note that throughout I use the %>% operator we covered in the last chapter. Although not required, the tidyr package has the %>% operator baked into its functionality, which allows you to sequence multiple tidy functions together.
1
Wickham, H. (2014). Tidy data. Journal of Statistical Software, 59(10).
2
Ibid.
There are times when our data is considered wide, or unstacked, and a common attribute or variable of concern is spread out across columns. To reformat the data such that these common attributes are gathered together as a single variable, the gather() function will take multiple columns and collapse them into key-value pairs, duplicating all other columns as needed.
For example, let's say we have the following data frame.
library(dplyr) # I'm using dplyr just to create the data frame with tbl_df()
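# one possible construction of the wide data frame, with values taken
# from the spread() output shown later in this chapter
wide <- tbl_df(read.table(header = TRUE, text = "
 Group Year Qtr.1 Qtr.2 Qtr.3 Qtr.4
     1 2006    15    16    19    17
     1 2007    12    13    27    23
     1 2008    22    22    24    20
     1 2009    10    14    20    16
     2 2006    12    13    25    18
     2 2007    16    14    21    19
     2 2008    13    11    29    15
     2 2009    23    20    26    20
     3 2006    11    12    22    16
     3 2007    13    11    27    21
     3 2008    17    12    23    19
     3 2009    14     9    31    24
"))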
This data is considered wide since the time variable (represented as quarters) is
structured such that each quarter represents a variable. To re-structure the time compo-
nent as an individual variable, we can gather each quarter within one column variable
and also gather the values associated with each quarter in a second column variable.
library(tidyr)
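# gather the four quarter columns into Quarter/Revenue key-value pairs
# (one possible call; it matches the alternatives listed below)
long <- wide %>% gather(Quarter, Revenue, Qtr.1:Qtr.4)
long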
## 10 3 2007 Qtr.1 13
## 11 3 2008 Qtr.1 17
## 12 3 2009 Qtr.1 14
## 13 1 2006 Qtr.2 16
## 14 1 2007 Qtr.2 13
## 15 1 2008 Qtr.2 22
It's important to note that there is flexibility in how you specify the columns you would like to gather. These all produce the same results:
wide %>% gather(Quarter, Revenue, Qtr.1:Qtr.4)
wide %>% gather(Quarter, Revenue, -Group, -Year)
wide %>% gather(Quarter, Revenue, 3:6)
wide %>% gather(Quarter, Revenue, Qtr.1, Qtr.2, Qtr.3, Qtr.4)
There are also times when we are required to turn long-formatted data into wide-formatted data. As a complement to gather(), the spread() function spreads a key-value pair across multiple columns. So now let's take our long data frame from above and turn the Quarter variable into column headings, spreading the Revenue values across the quarters they relate to.
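A sketch of that call:
back2wide <- long %>% spread(Quarter, Revenue)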
back2wide
## Source: local data frame [12 x 6]
##
## Group Year Qtr.1 Qtr.2 Qtr.3 Qtr.4
## (int) (int) (int) (int) (int) (int)
## 1 1 2006 15 16 19 17
## 2 1 2007 12 13 27 23
## 3 1 2008 22 22 24 20
## 4 1 2009 10 14 20 16
## 5 2 2006 12 13 25 18
## 6 2 2007 16 14 21 19
## 7 2 2008 13 11 29 15
## 8 2 2009 23 20 26 20
## 9 3 2006 11 12 22 16
## 10 3 2007 13 11 27 21
## 11 3 2008 17 12 23 19
## 12 3 2009 14 9 31 24
Many times a single column variable will capture multiple variables, or even parts of a variable you just don't care about. This is exemplified in the following messy_df data frame. Here, the Grp_Ind variable combines an individual variable (a, b, c) with the group variable (1, 2, 3), the Yr_Mo variable combines a year variable with a month variable, etc. In each case there may be a purpose for separating parts of these columns into separate variables.
messy_df
## Grp_Ind Yr_Mo City_State Extra_variable
## 1 1.a 2006_Jan Dayton (OH) XX01person_1
## 2 1.b 2006_Feb Grand Forks (ND) XX02person_2
## 3 1.c 2006_Mar Fargo (ND) XX03person_3
## 4 2.a 2007_Jan Rochester (MN) XX04person_4
This can be accomplished using the separate() function, which turns a single character column into multiple columns. Additional arguments provide some flexibility when separating columns.
# separate Grp_Ind column into two variables named "Grp" & "Ind"
messy_df %>% separate(col = Grp_Ind, into = c("Grp", "Ind"))
## Grp Ind Yr_Mo City_State Extra_variable
## 1 1 a 2006_Jan Dayton (OH) XX01person_1
## 2 1 b 2006_Feb Grand Forks (ND) XX02person_2
## 3 1 c 2006_Mar Fargo (ND) XX03person_3
## 4 2 a 2007_Jan Rochester (MN) XX04person_4
# you can keep the original column that you are separating
messy_df %>% separate(col = Grp_Ind, into = c("Grp", "Ind"), remove = FALSE)
## Grp_Ind Grp Ind Yr_Mo City_State Extra_variable
## 1 1.a 1 a 2006_Jan Dayton (OH) XX01person_1
## 2 1.b 1 b 2006_Feb Grand Forks (ND) XX02person_2
## 3 1.c 1 c 2006_Mar Fargo (ND) XX03person_3
## 4 2.a 2 a 2007_Jan Rochester (MN) XX04person_4
Similarly, there are times when we would like to combine the values of two variables. As a complement to separate(), the unite() function is a convenient way to paste together multiple variable values into one. Consider a data frame that has separate date variables; to perform time series analysis or for visualizations we may desire to have a single date column, as in the sketch below.
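A minimal unite() sketch on hypothetical data:
# a data frame with the date split across three columns
dates <- data.frame(year = 2015, month = c(1, 2), day = c(10, 25))

# paste the components into a single Date column separated by "-"
dates %>% unite(Date, year, month, day, sep = "-")
##        Date
## 1 2015-1-10
## 2 2015-2-25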
21.5 Additional tidyr Functions
The previous four functions (gather, spread, separate and unite) are the primary functions you will find yourself using on a continuous basis; however, the tidyr package also offers some handy functions that are lesser known.
expenses <- tbl_df(read.table(header = TRUE, text = "
Dept Year Month Day Cost
A 2015 01 01 $500.00
NA NA 02 05 $90.00
NA NA 02 22 $1,250.45
NA NA 03 NA $325.10
B NA 01 02 $260.00
NA NA 02 05 $90.00
", stringsAsFactors = FALSE))
Often Excel reports will not repeat certain variables. When we read these reports in, the empty cells are typically filled in with NA, such as in the Dept and Year columns of our expenses data frame. We can fill these values in with the previous entry using fill().
expenses %>% fill(Dept, Year)
## Source: local data frame [6 x 5]
##
## Dept Year Month Day Cost
## (chr) (int) (int) (int) (chr)
## 1 A 2015 1 1 $500.00
## 2 A 2015 2 5 $90.00
## 3 A 2015 2 22 $1,250.45
## 4 A 2015 3 NA $325.10
## 5 B 2015 1 2 $260.00
## 6 B 2015 2 5 $90.00
The Cost column was read in as a character variable because of the dollar signs and commas; tidyr's extract_numeric() function strips out the non-numeric characters:
library(magrittr)

# you can use this to convert and save the Cost column to a
# numeric variable
expenses$Cost <- expenses %$% extract_numeric(Cost)
expenses
## Source: local data frame [6 x 5]
##
## Dept Year Month Day Cost
## (chr) (int) (int) (int) (dbl)
## 1 A 2015 1 1 500.00
## 2 NA NA 2 5 90.00
## 3 NA NA 2 22 1250.45
## 4 NA NA 3 NA 325.10
## 5 B NA 1 2 260.00
## 6 NA NA 2 5 90.00
You can also easily replace missing (or NA) values with a specified value using tidyr's replace_na() function:
library(magrittr)
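A sketch of replace_na() on our expenses data (the replacement value here is illustrative):
# replace NAs in the Dept column with a specified value
expenses %>% replace_na(replace = list(Dept = "unknown"))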
Since the %>% operator is embedded in tidyr, we can string multiple operations together to efficiently tidy data and make the process easy to read and follow. To illustrate, let's use the following data, which has multiple messy attributes.
a_mess <- tbl_df(read.table(header = TRUE, text = "
Dep_Unt Year Q1 Q2 Q3 Q4
A.1 2006 15 NA 19 17
B.1 NA 12 13 27 23
A.2 NA 22 22 24 20
B.2 NA 12 13 25 18
A.1 2007 16 14 21 19
B.2 NA 13 11 16 15
A.2 NA 23 20 26 20
B.2 NA 11 12 22 16
"))
In this case, a tidy dataset should result in columns of Dept, Unit, Year, Quarter, and Cost. Furthermore, we want to fill in the Year column where NAs currently exist. And we'll assume that we know the missing value in the Q2 column and would like to update it.
a_mess %>%
fill(Year) %>%
gather(Quarter, Cost, Q1:Q4) %>%
separate(Dep_Unt, into = c("Dept", "Unit")) %>%
replace_na(replace = list(Cost = 17))
## Source: local data frame [32 x 5]
##
## Dept Unit Year Quarter Cost
## (chr) (chr) (int) (fctr) (dbl)
## 1 A 1 2006 Q1 15
## 2 B 1 2006 Q1 12
## 3 A 2 2006 Q1 22
## 4 B 2 2006 Q1 12
## 5 A 1 2007 Q1 16
## 6 B 2 2007 Q1 13
## 7 A 2 2007 Q1 23
## 8 B 2 2007 Q1 11
## 9 A 1 2006 Q2 17
## 10 B 1 2006 Q2 13
## ..
This chapter covers most, but not all, of what tidyr provides. There are several other resources you can check out to learn more.
Data wrangling presentation I gave at Miami University3
Hadley Wickham's tidy data paper (Wickham, 2014)
The tidyr reference manual4
RStudio's Data Wrangling with R and RStudio webinar5
RStudio's data wrangling cheat sheet6
Bibliography
Wickham, H. (2014). Tidy data. Journal of Statistical Software, 59(10), 1-23.
3
http://rpubs.com/bradleyboehmke/data_processing
4
https://cran.r-project.org/web/packages/tidyr/tidyr.pdf
5
https://www.rstudio.com/resources/webinars/
6
You can get the RStudio cheatsheets at https://www.rstudio.com/resources/cheatsheets/ or within
a working RStudio session by going to Help > Cheatsheets
Chapter 22
Transforming Your Data with dplyr
Transforming your data is a basic part of data wrangling. This can include filtering,
summarizing, and ordering your data by different means. This also includes com-
bining disparate data sets, creating new variables, and many other manipulation
tasks. Although many fundamental data transformation and manipulation functions
exist in R, historically they have been a bit convoluted and lacked a consistent and
cohesive code structure. Consequently, Hadley Wickham developed the very popu-
lar dplyr package to make these data processing tasks more efficient along with a
syntax that is consistent and easier to remember and read.
dplyr's roots originate in the popular plyr package, also produced by Hadley Wickham. plyr covers data transformation and manipulation for a range of data structures (data frames, lists, arrays), whereas dplyr is focused on transformation and manipulation of data frames. And since the bulk of data analysis leverages data frames, I am going to focus on dplyr. Even so, dplyr offers far more functionality than I can cover in one chapter. Consequently, I'm going to cover the seven primary functions dplyr provides for data transformation and manipulation. Throughout, I also mention additional, useful functions that can be integrated with these functions. The full list of capabilities can be found in the dplyr reference manual; I highly recommend going through it as there are many great functions provided by dplyr that I will not cover here. Also, similar to tidyr, dplyr has the %>% operator baked into its functionality.
For most of these examples we'll use the following census data, which includes the K-12 public school expenditures by state. This data frame currently is 50 x 16 and includes expenditure data for 14 unique years across all 50 states (with data through year 2011). Here I only show you a subset of the data.
## Division State X1980 X1990 X2000 X2001 X2002 X2003
## 1 6 Alabama 1146713 2275233 4176082 4354794 4444390 4657643
## 2 9 Alaska 377947 828051 1183499 1229036 1284854 1326226
## 3 8 Arizona 949753 2258660 4288739 4846105 5395814 5892227
## 4 7 Arkansas 666949 1404545 2380331 2505179 2822877 2923401
22.1 Selecting Variables
When working with a sizable data frame, often we desire to only assess specific variables. The select() function allows you to select and/or rename variables. Let's say our goal is to only assess the five most recent years' worth of expenditure data. Applying the select() function, we can select only the variables of concern.
sub_exp <- expenditures %>% select(Division, State, X2007:X2011)
We can also apply some of the special functions within select(). For instance
we can select all variables that start with X (?select to see the available
functions):
expenditures %>%
select(starts_with("X")) %>%
head()
## X1980 X1990 X2000 X2001 X2002 X2003 X2004 X2005
## 1 1146713 2275233 4176082 4354794 4444390 4657643 4812479 5164406
## 2 377947 828051 1183499 1229036 1284854 1326226 1354846 1442269
## 3 949753 2258660 4288739 4846105 5395814 5892227 6071785 6579957
## 4 666949 1404545 2380331 2505179 2822877 2923401 3109644 3546999
## 5 9172158 21485782 38129479 42908787 46265544 47983402 49215866 50918654
## 6 1243049 2451833 4401010 4758173 5151003 5551506 5666191 5994440
## X2006 X2007 X2008 X2009 X2010 X2011
## 1 5699076 6245031 6832439 6683843 6670517 6592925
## 2 1529645 1634316 1918375 2007319 2084019 2201270
## 3 7130341 7815720 8403221 8726755 8482552 8340211
## 4 3808011 3997701 4156368 4240839 4459910 4578136
## 5 53436103 57352599 61570555 60080929 58248662 57526835
## 6 6368289 6579053 7338766 7187267 7429302 7409462
You can also de-select variables by using - prior to the name or function. The following produces the inverse of the functions above:
expenditures %>% select(-X1980:-X2006)
expenditures %>% select(-starts_with("X"))
And for convenience, you can rename selected variables with two options:
# select and rename a single column
expenditures %>% select(Yr_1980 = X1980)
22.2 Filtering Rows
We can apply multiple logic rules in the filter() function, such as:

<     less than                   !=      not equal to
>     greater than                %in%    group membership
==    equal to                    is.na   is NA
<=    less than or equal to       !is.na  is not NA
>=    greater than or equal to    &, |, ! Boolean operators
For instance, we can filter for Division 3 and expenditures in 2011 that were greater than $10B. This results in Indiana being excluded, since it falls within Division 3 but its expenditures were less than $10B (FYI: the raw census data are reported in units of $1,000).
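A sketch of that filter call:
# Division 3 states with 2011 expenditures above $10B
# (the data are in units of $1,000, so $10B = 10,000,000)
sub_exp %>% filter(Division == 3, X2011 > 10000000)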
There are additional filtering and subsetting functions that are quite useful:
# remove duplicate rows
sub_exp %>% distinct()
# select top n entries - in this case ranks variable X2011 and selects
# the rows with the top 5 values
sub_exp %>% top_n(n = 5, wt = X2011)
22.3 Grouping Data by Categorical Variables
Often, observations are nested within groups or categories, and our goal is to perform statistical analysis both at the observation level and at the group level. The group_by() function allows us to create these categorical groupings.
The group_by() function is a silent function: no observable manipulation of the data is performed as a result of applying it. Rather, the only change you'll notice is that, when you print the data frame, an indicator of the grouping variable appears underneath the Source information and prior to the actual data. In the example that follows you'll notice that we grouped by Division and that there are nine categories for this variable. The real magic of group_by() comes when we perform summary statistics, which we will cover shortly.
group.exp <- sub_exp %>% group_by(Division)
group.exp
## Source: local data frame [50 x 7]
## Groups: Division [9]
##
## Division State X2007 X2008 X2009 X2010 X2011
## (int) (chr) (int) (int) (int) (int) (int)
22.4 Performing Summary Statistics on Variables
Obviously the goal of all this data wrangling is to be able to perform statistical analysis on our data. The summarise() function allows us to perform the majority of summary statistics when performing exploratory data analysis.
Let's get the mean expenditure value across all states in 2011:
sub_exp %>% summarise(Mean_2011 = mean(X2011))
## Mean_2011
## 1 10513678
This information is useful, but being able to compare summary statistics at multiple levels is when you really start to gather insights. This is where the group_by() function comes in. First, let's group by Division and see how the different regions compare across years 2010 and 2011.
sub_exp %>%
group_by(Division)%>%
summarise(Mean_2010 = mean(X2010, na.rm = TRUE),
Mean_2011 = mean(X2011, na.rm = TRUE))
## Source: local data frame [9 x 3]
##
## Division Mean_2010 Mean_2011
## (int) (dbl) (dbl)
## 1 1 5121003 5222277
## 2 2 32415457 32877923
## 3 3 16322489 16270159
## 4 4 4672332 4672687
## 5 5 10975194 11023526
## 6 6 6161967 6267490
## 7 7 14916843 15000139
## 8 8 3894003 3882159
## 9 9 15540681 15468173
Now we're starting to see some differences pop out. How about we compare states within a division? We can start to apply multiple functions we've learned so far to get the five-year average for each state within Division 3.
library(tidyr)
sub_exp %>%
gather(Year, Expenditure, X2007:X2011) %>% # turn wide data to long
filter(Division == 3) %>% # only assess Division 3
group_by(State) %>% # summarize data by state
summarise(Mean = mean(Expenditure), # calculate mean & SD
SD = sd(Expenditure))
## Source: local data frame [5 x 3]
##
## State Mean SD
## (chr) (dbl) (dbl)
## 1 Illinois 22989317 1867527.7
## 2 Indiana 9613775 238971.6
## 3 Michigan 17059665 180245.0
## 4 Ohio 19264329 705930.2
## 5 Wisconsin 9678256 507461.2
There are several built-in summary functions in dplyr, a sample of which is sketched below; you can also build in your own functions as well.
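A sample of these built-in helpers, drawn from the dplyr documentation rather than the chapter's original display:
# summary functions commonly used inside summarise():
# min(), max(), mean(), median(), sum(), sd(), var(), IQR(),
# n(), n_distinct(), first(), last(), nth()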
22.6 Joining Data Sets
Often we have separate data frames that contain common and differing variables for similar observations, and we wish to join these data frames together. dplyr offers multiple joining functions (xxx_join()) that provide alternative ways to join data frames:
inner_join()
left_join()
right_join()
full_join()
semi_join()
anti_join()
Our public education expenditure data represents then-year dollars. To make any
accurate assessments of longitudinal trends and comparisons we need to adjust for
inflation. I have the following data frame which provides inflation adjustment fac-
tors for base-year 2012 dollars.
## Year Annual Inflation
## 28 2007 207.342 0.9030811
## 29 2008 215.303 0.9377553
## 30 2009 214.537 0.9344190
## 31 2010 218.056 0.9497461
## 32 2011 224.939 0.9797251
## 33 2012 229.594 1.0000000
# convert the wide expenditure data to a long format; the leading steps
# of this pipeline are reconstructed to match the head(long_exp) output below
long_exp <- sub_exp %>%
  gather(Year, Expenditure, X2007:X2011) %>%
  separate(Year, into = c("x", "Year"), sep = "X") %>%
  select(-x) %>%
  mutate(Year = as.numeric(Year))
head(long_exp)
## Division State Year Expenditure
## 1 6 Alabama 2007 6245031
## 2 9 Alaska 2007 1634316
## 3 8 Arizona 2007 7815720
## 4 7 Arkansas 2007 3997701
## 5 9 California 2007 57352599
## 6 8 Colorado 2007 6579053
I can now apply the left_join() function to join the inflation data to the
expenditure data. This aligns the data in both data frames by the Year variable and
then joins the remaining inflation data to the expenditure data frame as new
variables.
join_exp <- long_exp %>% left_join(inflation)
head(join_exp)
## Division State Year Expenditure Annual Inflation
## 1 6 Alabama 2007 6245031 207.342 0.9030811
## 2 9 Alaska 2007 1634316 207.342 0.9030811
## 3 8 Arizona 2007 7815720 207.342 0.9030811
## 4 7 Arkansas 2007 3997701 207.342 0.9030811
## 5 9 California 2007 57352599 207.342 0.9030811
## 6 8 Colorado 2007 6579053 207.342 0.9030811
To illustrate the other joining methods we can use the a and b data frames from
the EDAWR package1:
library(EDAWR)
a
## x1 x2
## 1 A 1
## 2 B 2
## 3 C 3
b
## x1 x2
## 1 A TRUE
## 2 B FALSE
## 3 D TRUE
1
The EDAWR package contains multiple data sets and can be downloaded by executing devtools::install_github("rstudio/EDAWR").
There are additional dplyr functions for merging data sets worth exploring:
intersect(y, z) # Rows that appear in both y and z
union(y, z) # Rows that appear in either or both y and z
setdiff(y, z) # Rows that appear in y but not z
bind_rows(y, z) # Append z to y as new rows
bind_cols(y, z) # Append z to y as new columns
22.7 Creating New Variables
Often we want to create a new variable that is a function of the current variables in our data frame, or we may just want to add a new variable that is external to our existing variables. The mutate() function allows us to add new variables while preserving the existing ones. Recall that our previous join_exp data frame joined inflation rates to the non-inflation-adjusted expenditures for public schools.
If we wanted to adjust our annual expenditures for inflation we can use mutate()
to create a new inflation adjusted cost variable which well name Adj_Exp:
inflation_adj <- join_exp %>% mutate(Adj_Exp = Expenditure / Inflation)
head(inflation_adj)
## Division State Year Expenditure Annual Inflation Adj_Exp
## 1 6 Alabama 2007 6245031 207.342 0.9030811 6915249
## 2 9 Alaska 2007 1634316 207.342 0.9030811 1809711
## 3 8 Arizona 2007 7815720 207.342 0.9030811 8654505
## 4 7 Arkansas 2007 3997701 207.342 0.9030811 4426735
## 5 9 California 2007 57352599 207.342 0.9030811 63507696
## 6 8 Colorado 2007 6579053 207.342 0.9030811 7285119
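The head(rank_exp) output below implies a ranking step; one possible construction:
# rank states by their 2010 inflation-adjusted expenditures
rank_exp <- inflation_adj %>%
  filter(Year == 2010) %>%
  arrange(desc(Adj_Exp)) %>%
  mutate(Rank = 1:length(Adj_Exp))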
head(rank_exp)
## Division State Year Expenditure Annual Inflation Adj_Exp Rank
## 1 9 California 2010 58248662 218.056 0.9497461 61330774 1
## 2 2 New York 2010 50251461 218.056 0.9497461 52910417 2
## 3 7 Texas 2010 42621886 218.056 0.9497461 44877138 3
## 4 3 Illinois 2010 24695773 218.056 0.9497461 26002501 4
## 5 2 New Jersey 2010 24261392 218.056 0.9497461 25545135 5
## 6 5 Florida 2010 23349314 218.056 0.9497461 24584797 6
If you wanted to assess the percent change in cost for a particular state you can
use the lag() function within the mutate() function:
inflation_adj %>%
filter(State == "Ohio") %>%
mutate(Perc_Chg = (Adj_Exp - lag(Adj_Exp)) / lag(Adj_Exp))
## Division State Year Expenditure Annual Inflation Adj_Exp Perc_Chg
## 1 3 Ohio 2007 18251361 207.342 0.9030811 20210102 NA
## 2 3 Ohio 2008 18892374 215.303 0.9377553 20146378 -0.003153057
## 3 3 Ohio 2009 19387318 214.537 0.9344190 20747992 0.029862103
## 4 3 Ohio 2010 19801670 218.056 0.9497461 20849436 0.004889357
## 5 3 Ohio 2011 19988921 224.939 0.9797251 20402582 -0.021432441
You could also look at what percent of all U.S. expenditures each state made up in 2011. In this case we use mutate() to take each state's inflation-adjusted expenditure and divide by the sum of the entire inflation-adjusted expenditure column. We also apply a second function within mutate() that provides the cumulative percent in rank order. This shows that, in 2011, the top eight states with the highest expenditures represented over 50 % of total U.S. expenditures in K-12 public schools. (I remove the non-inflation-adjusted Expenditure, Annual, and Inflation columns so that the columns don't wrap on the screen view.)
cum_pct <- inflation_adj %>%
filter(Year == 2011) %>%
arrange(desc(Adj_Exp)) %>%
mutate(Pct_of_Total = Adj_Exp/sum(Adj_Exp),
Cum_Perc = cumsum(Pct_of_Total)) %>%
select(-Expenditure, -Annual, -Inflation)
head(cum_pct, 8)
## Division State Year Adj_Exp Pct_of_Total Cum_Perc
## 1 9 California 2011 58717324 0.10943237 0.1094324
## 2 2 New York 2011 52575244 0.09798528 0.2074177
## 3 7 Texas 2011 43751346 0.08154005 0.2889577
## 4 3 Illinois 2011 25062609 0.04670957 0.3356673
## 5 5 Florida 2011 24364070 0.04540769 0.3810750
## 6 2 New Jersey 2011 24128484 0.04496862 0.4260436
## 7 2 Pennsylvania 2011 23971218 0.04467552 0.4707191
## 8 3 Ohio 2011 20402582 0.03802460 0.5087437
Lastly, you can apply the summarise and mutate functions to multiple columns by using summarise_each() and mutate_each(), respectively.
Similar to the summary functions, dplyr allows you to build in your own functions to be applied within mutate_each(), and it also provides a number of built-in window functions, such as lag(), lead(), cumsum(), and dense_rank(), that can be applied. A quick summarise_each() sketch follows.
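A minimal sketch, assuming dplyr's funs() helper from this era of the package:
# mean of each expenditure column in sub_exp
sub_exp %>% summarise_each(funs(mean), X2007:X2011)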
This chapter introduced you to dplyr's basic set of tools and demonstrated how to use them on data frames. Additional resources are available that go into more detail or provide additional examples of how to use dplyr. In addition, there are other resources that illustrate how dplyr can perform tasks not mentioned in this chapter, such as connecting to remote databases and translating your R code into SQL code for data pulls.
Data wrangling presentation I gave at Miami University2
dplyr reference manual3
RStudio's Data Wrangling with R and RStudio webinar4
RStudio's data wrangling cheat sheet5
Hadley Wickham's dplyr tutorial at useR! 2014, Part 16
Hadley Wickham's dplyr tutorial at useR! 2014, Part 27
2
http://rpubs.com/bradleyboehmke/data_processing
3
https://cran.r-project.org/web/packages/dplyr/dplyr.pdf
4
https://www.rstudio.com/resources/webinars/
5
You can get the RStudio cheatsheets at https://www.rstudio.com/resources/cheatsheets/ or within a working RStudio session by going to Help > Cheatsheets
6
https://www.youtube.com/watch?v=8SGif63VW6E
7
https://www.youtube.com/watch?v=Ue08LVuk790