
PySpark Cookbook
by: Tomasz Drabas (Author), Denny Lee (Author)
Publisher: Packt Publishing
Publication Date: 2018/6/29
Language: English
Print Length: 330 pages
ISBN-10: 1788835360
ISBN-13: 9781788835367
Book Description
Combine the power of Apache Spark and Python to build effective big data applications.

Key Features
- Perform effective data processing, machine learning, and analytics using PySpark
- Overcome challenges in developing and deploying Spark solutions using Python
- Explore recipes for efficiently combining Python and Apache Spark to process data

Apache Spark is an open source framework for efficient cluster computing with a strong interface for data parallelism and fault tolerance. The PySpark Cookbook presents effective and time-saving recipes for leveraging the power of Python and putting it to use in the Spark ecosystem.

You'll start by learning the Apache Spark architecture and how to set up a Python environment for Spark. You'll then get familiar with the modules available in PySpark and start using them effortlessly. In addition, you'll discover how to abstract data with RDDs and DataFrames, and understand the streaming capabilities of PySpark. You'll then move on to using MLlib and the ML module to solve machine learning problems, and use GraphFrames to solve graph-processing problems. Finally, you will explore how to deploy your applications to the cloud using the spark-submit command.

By the end of this book, you will be able to use the Python API for Apache Spark to solve problems associated with building data-intensive applications.

What You Will Learn
- Configure a local instance of PySpark in a virtual environment
- Install and configure Jupyter in local and multi-node environments
- Create DataFrames from JSON and a dictionary using pyspark.sql
- Explore regression and clustering models available in the ML module
- Use DataFrames to transform data used for modeling
- Connect to PubNub and perform aggregations on streams

Who This Book Is For
The PySpark Cookbook is for you if you are a Python developer looking for hands-on recipes for using the Apache Spark 2.x ecosystem in the best possible way. A thorough understanding of Python (and some familiarity with Spark) will help you get the best out of the book.

Table of Contents
1. Spark Installation and Configuration
2. Abstracting Data with RDDs
3. Abstracting Data with DataFrames
4. Preparing Data for Modeling
5. Machine Learning with MLlib
6. Machine Learning with the ML Module
7. Structured Streaming with PySpark
8. GraphFrames – Graph Theory with PySpark