This is the code repository for Apache Spark Quick Start Guide, published by Packt.
Quickly learn the art of writing efficient big data applications with Apache Spark
Apache Spark is a flexible framework that allows processing of batch and real-time data. Its unified engine has made it quite popular for big data use cases. This book will help you to get started with Apache Spark 2.0 and write big data applications for a variety of use cases.
This book covers the following exciting features:
- Learn core concepts such as RDDs, DataFrames, transformations, and more
- Set up a Spark development environment
- Choose the right APIs for your applications
- Understand Spark’s architecture and the execution flow of a Spark application
- Explore built-in modules for SQL, streaming, ML, and graph analysis
- Optimize your Spark job for better performance
If you feel this book is for you, get your copy today!
All of the code is organized into folders. For example, Chapter02.
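The code will look similar to the following (a minimal Scala sketch of a DataFrame transformation; the file path and column names are illustrative, not taken from the book):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Create a SparkSession, the entry point for the DataFrame and SQL APIs.
val spark = SparkSession.builder()
  .appName("QuickStartExample")
  .master("local[*]")
  .getOrCreate()

// Load a CSV file into a DataFrame (the path is hypothetical).
val sales = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("data/sales.csv")

// A simple transformation and action: total amount per country.
sales.groupBy("country")
  .agg(sum("amount").alias("total_amount"))
  .show()
```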
Following is what you need for this book: This book is aimed at business analysts, data analysts, and data scientists who wish to make a hands-on start in order to take advantage of modern big data technologies combined with advanced analytics.
With the following software and hardware list, you can run all code files present in the book (Chapters 1-8).
| Chapter | Software required | OS required |
| ------- | ----------------- | ----------- |
| 1-8 | Scala 2.12.6 | Ubuntu Linux |
| 1-8 | Python 3.6 | Ubuntu Linux |
| 1-8 | Java 8 | Ubuntu Linux |
| 1-8 | Apache Spark 2.3.1 | Ubuntu Linux |
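Once these are installed, a quick sanity check from `spark-shell` confirms the versions (a minimal sketch; the expected values are simply those listed in the table above):

```scala
// Run inside spark-shell, which provides the `spark` SparkSession out of the box.
println(s"Spark version: ${spark.version}")                      // expected: 2.3.1
println(s"Scala version: ${util.Properties.versionString}")      // expected: version 2.12.6
println(s"Java version:  ${System.getProperty("java.version")}") // expected: 1.8.x
```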
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. Click here to download it.
- Big Data Analytics Using Apache Spark [Video] [Packt]
- Machine Learning with Apache Spark Quick Start Guide [Packt] [Amazon]
Shrey Mehrotra has over 8 years of IT experience and, for the past 6 years, has been designing the architecture of cloud and big-data solutions for the finance, media, and governance sectors. Having worked on research and development with big-data labs and been part of Risk Technologies, he has gained insights into Hadoop, with a focus on Spark, HBase, and Hive. His technical strengths also include Elasticsearch, Kafka, Java, YARN, Sqoop, and Flume. He likes spending time performing research and development on different big-data technologies. He is the coauthor of the books Learning YARN and Hive Cookbook, a certified Hadoop developer, and he has also written various technical papers.
Akash Grade is a data engineer living in New Delhi, India. Akash graduated with a BSc in computer science from the University of Delhi in 2011, and later earned an MSc in software engineering from BITS Pilani. He spends most of his time designing highly scalable data pipelines using big data solutions such as Apache Spark, Hive, and Kafka. Akash is also a Databricks-certified Spark developer. He has been working on Apache Spark for the last five years, and enjoys writing applications in Python, Go, and SQL.
Click here if you have any feedback or suggestions.
If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.