Practical Data Analysis Cookbook

Ebook · 884 pages

About this ebook

About This Book
  • Clean dirty data, extract accurate information, and explore the relationships between variables
  • Forecast the output of an electric plant and the water flow of American rivers using pandas, NumPy, Statsmodels, and scikit-learn
  • Find and extract the most important features from your dataset using the most efficient Python libraries
Who This Book Is For

This book is for everyone who wants to get into the data science field and needs to build up their skills on a set of examples that aim to tackle the problems faced in the corporate world. More advanced practitioners might also find some of the examples refreshing and the more advanced topics covered interesting.

Language: English
Release date: Apr 29, 2016
ISBN: 9781783558513
    Book preview

    Practical Data Analysis Cookbook - Tomasz Drabas

    Table of Contents

    Practical Data Analysis Cookbook

    Credits

    About the Author

    Acknowledgments

    About the Reviewers

    www.PacktPub.com

    Support files, eBooks, discount offers, and more

    Why Subscribe?

    Free Access for Packt account holders

    Preface

    What this book covers

    What you need for this book

    Who this book is for

    Sections

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Conventions

    Reader feedback

    Customer support

    Downloading the example code

    Downloading the color images of this book

    Errata

    Piracy

    Questions

    1. Preparing the Data

    Introduction

    Reading and writing CSV/TSV files with Python

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Reading and writing JSON files with Python

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Reading and writing Excel files with Python

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Reading and writing XML files with Python

    Getting ready

    How to do it…

    How it works…

    Retrieving HTML pages with pandas

    Getting ready

    How to do it…

    How it works…

    Storing and retrieving from a relational database

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Storing and retrieving from MongoDB

    Getting ready

    How to do it…

    How it works…

    See also

    Opening and transforming data with OpenRefine

    Getting ready

    How to do it…

    See also

    Exploring the data with OpenRefine

    Getting ready

    How to do it…

    Removing duplicates

    Getting ready

    How to do it…

    Using regular expressions and GREL to clean up data

    Getting ready

    How to do it…

    See also

    Imputing missing observations

    Getting ready

    How to do it…

    How it works…

    There's more…

    Normalizing and standardizing the features

    Getting ready

    How to do it…

    How it works…

    Binning the observations

    Getting ready

    How to do it…

    How it works…

    There's more…

    Encoding categorical variables

    Getting ready

    How to do it…

    How it works…

    2. Exploring the Data

    Introduction

    Producing descriptive statistics

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also…

    Exploring correlations between features

    Getting ready

    How to do it…

    How it works…

    See also…

    Visualizing the interactions between features

    Getting ready

    How to do it…

    How it works…

    See also…

    Producing histograms

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also…

    Creating multivariate charts

    Getting ready

    How to do it…

    How it works…

    See also…

    Sampling the data

    Getting ready

    How to do it…

    How it works…

    There's more…

    Splitting the dataset into training, cross-validation, and testing

    Getting ready

    How to do it…

    How it works…

    There's more…

    3. Classification Techniques

    Introduction

    Testing and comparing the models

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Classifying with Naïve Bayes

    Getting ready

    How to do it…

    How it works…

    See also

    Using logistic regression as a universal classifier

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Utilizing Support Vector Machines as a classification engine

    Getting ready

    How to do it…

    How it works…

    There's more…

    Classifying calls with decision trees

    Getting ready

    How to do it…

    How it works…

    There's more…

    Predicting subscribers with random tree forests

    Getting ready

    How to do it…

    How it works…

    There's more…

    Employing neural networks to classify calls

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    4. Clustering Techniques

    Introduction

    Assessing the performance of a clustering method

    Getting ready

    How to do it…

    How it works…

    See also…

    Clustering data with k-means algorithm

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also…

    Finding an optimal number of clusters for k-means

    Getting ready

    How to do it…

    How it works…

    There's more…

    Discovering clusters with mean shift clustering model

    Getting ready

    How to do it…

    How it works…

    See also…

    Building fuzzy clustering model with c-means

    Getting ready

    How to do it…

    How it works…

    Using hierarchical model to cluster your data

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also…

    Finding groups of potential subscribers with DBSCAN and BIRCH algorithms

    Getting ready

    How to do it…

    How it works…

    See also…

    5. Reducing Dimensions

    Introduction

    Creating three-dimensional scatter plots to present principal components

    Getting ready

    How to do it…

    How it works…

    Reducing the dimensions using the kernel version of PCA

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Using Principal Component Analysis to find things that matter

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Finding the principal components in your data using randomized PCA

    Getting ready

    How to do it…

    How it works…

    There's more…

    Extracting the useful dimensions using Linear Discriminant Analysis

    Getting ready

    How to do it…

    How it works…

    Using various dimension reduction techniques to classify calls using the k-Nearest Neighbors classification model

    Getting ready

    How to do it…

    How it works…

    6. Regression Methods

    Introduction

    Identifying and tackling multicollinearity

    Getting ready

    How to do it…

    How it works…

    There's more…

    Building Linear Regression model

    Getting ready

    How to do it…

    How it works…

    There's more…

    Using OLS to forecast how much electricity can be produced

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Estimating the output of an electric plant using CART

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Employing the kNN model in a regression problem

    Getting ready

    How to do it…

    How it works…

    Applying the Random Forest model to a regression analysis

    Getting ready

    How to do it…

    How it works…

    Gauging the amount of electricity a plant can produce using SVMs

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Training a Neural Network to predict the output of a power plant

    Getting ready

    How to do it…

    How it works…

    See also

    7. Time Series Techniques

    Introduction

    Handling date objects in Python

    Getting ready

    How to do it…

    How it works…

    There's more…

    Understanding time series data

    Getting ready

    How to do it…

    How it works…

    There's more…

    Smoothing and transforming the observations

    Getting ready

    How to do it…

    How it works…

    There's more…

    Filtering the time series data

    Getting ready

    How to do it…

    How it works…

    There's more…

    Removing trend and seasonality

    Getting ready

    How to do it…

    How it works…

    There's more…

    Forecasting the future with ARMA and ARIMA models

    Getting ready

    How to do it…

    How it works…

    See also

    8. Graphs

    Introduction

    Handling graph objects in Python with NetworkX

    Getting ready

    How to do it…

    How it works…

    There's more…

    See also

    Using Gephi to visualize graphs

    Getting ready

    How to do it…

    There's more…

    See also

    Identifying people whose credit card details were stolen

    Getting ready

    How to do it…

    How it works…

    There's more…

    Identifying those responsible for stealing the credit cards

    Getting ready

    How to do it…

    How it works…

    See also

    9. Natural Language Processing

    Introduction

    Reading raw text from the Web

    Getting ready

    How to do it…

    How it works…

    Tokenizing and normalizing text

    Getting ready

    How to do it…

    How it works…

    See also

    Identifying parts of speech, handling n-grams, and recognizing named entities

    Getting ready

    How to do it…

    How it works…

    There's more…

    Identifying the topic of an article

    Getting ready

    How to do it…

    How it works…

    Identifying the sentence structure

    Getting ready

    How to do it…

    How it works…

    See also

    Classifying movies based on their reviews

    Getting ready

    How to do it…

    How it works…

    10. Discrete Choice Models

    Introduction

    Preparing a dataset to estimate discrete choice models

    Getting ready

    How to do it…

    How it works…

    There's more…

    Estimating the well-known Multinomial Logit model

    Getting ready

    How to do it…

    How it works…

    See also

    Testing for violations of the Independence from Irrelevant Alternatives

    Getting ready

    How to do it…

    How it works…

    There's more…

    Handling IIA violations with the Nested Logit model

    Getting ready

    How to do it…

    How it works…

    Managing sophisticated substitution patterns with the Mixed Logit model

    Getting ready

    How to do it…

    How it works…

    11. Simulations

    Introduction

    Using SimPy to simulate the refueling process of a gas station

    Getting ready

    How to do it…

    How it works…

    There's more…

    Simulating out-of-energy occurrences for an electric car

    Getting ready

    How to do it…

    How it works…

    Determining if a population of sheep is in danger of extinction due to a wolf pack

    Getting ready

    How to do it…

    How it works…

    Index

    Practical Data Analysis Cookbook



    Copyright © 2016 Packt Publishing

    All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

    Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

    Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

    First published: April 2016

    Production reference: 1250416

    Published by Packt Publishing Ltd.

    Livery Place

    35 Livery Street

    Birmingham B3 2PB, UK.

    ISBN 978-1-78355-166-8

    www.packtpub.com

    Credits

    Author

    Tomasz Drabas

    Reviewers

    Brett Bloomquist

    Khaled Tannir

    Commissioning Editor

    Dipika Gaonkar

    Acquisition Editor

    Prachi Bisht

    Content Development Editor

    Pooja Mhapsekar

    Technical Editor

    Bharat Patil

    Copy Editor

    Tasneem Fatehi

    Project Coordinator

    Francina Pinto

    Proofreader

    Safis Editing

    Indexer

    Mariammal Chettiyar

    Production Coordinator

    Nilesh R. Mohite

    Cover Work

    Nilesh R. Mohite

    About the Author

    Tomasz Drabas is a data scientist working for Microsoft and currently residing in the Seattle area. He has over 12 years of international experience in data analytics and data science in numerous fields, such as advanced technology, airlines, telecommunications, finance, and consulting.

    Tomasz started his career in 2003 with LOT Polish Airlines in Warsaw, Poland, while finishing his master's degree in strategy management. In 2007, he moved to Sydney to pursue a doctoral degree in operations research at the University of New South Wales, School of Aviation; his research crossed boundaries between discrete choice modeling and airline operations research. During his time in Sydney, he worked as a data analyst for Beyond Analysis Australia and as a senior data analyst/data scientist for Vodafone Hutchison Australia, among others. He has also published scientific papers, attended international conferences, and served as a reviewer for scientific journals.

    In 2015, he relocated to Seattle to begin his work for Microsoft. There he works on numerous projects involving solving problems in high-dimensional feature space.

    Acknowledgments

    First and foremost, I would like to thank my wife, Rachel, and daughter, Skye, for encouraging me to undertake this challenge and tolerating long days of developing code and late nights of writing up. You are the best and I love you beyond bounds! Also, thanks to my family for putting up with me (in general).

    Tomasz Bednarz has not only been a great friend but also a great mentor when I was learning programming—thank you! I also want to thank my current and former managers, Mike Stephenson and Rory Carter, as well as numerous colleagues and friends who also encouraged me to finish this book.

    Special thanks go to my two former supervisors, Dr Richard Cheng-Lung Wu and Dr Tomasz Jablonski. The master's project with Tomasz sparked my interest in neural networks—lessons that I will never forget. Without Richard's help, I would not have been able to finish my PhD and will always be grateful for his help, guidance, and friendship.

    About the Reviewers

    Brett Bloomquist holds a BS in mathematics and an MS in computer science, specializing in computer-aided geometric design. He has 26 years of work experience in the software industry with a focus on geometric modeling algorithms and computer graphics. More recently, Brett has been applying his mathematics and visualization background as a principal data scientist.

    Khaled Tannir is a visionary solution architect with more than 20 years of technical experience, focusing on big data technologies, data science, machine learning, and data mining since 2010.

    He is widely recognized as an expert in these fields and has a bachelor's degree in electronics and a master's degree in system information architectures. He is working on completing his PhD.

    Khaled has more than 15 certifications (R programming, big data, and many more) and is a Microsoft Certified Solution Developer (MCSD) and an avid technologist.

    He has worked for many companies in France (and recently in Canada), leading the development and implementation of software solutions and giving technical presentations.

    He is the author of the books RavenDB 2.x Beginner's Guide and Optimizing Hadoop MapReduce, both by Packt Publishing (and both translated into Simplified Chinese), and a technical reviewer of the books Pentaho Analytics for MongoDB, MongoDB High Availability, and Learning Predictive Analytics with R, also by Packt Publishing.

    He enjoys taking landscape and night photos, traveling, playing video games, creating funny electronics gadgets using Arduino, Raspberry Pi, and .Net Gadgeteer, and of course spending time with his wife and family.

    You can connect with him on LinkedIn or reach him at <contact@khaledtannir.net>.

    www.PacktPub.com

    Support files, eBooks, discount offers, and more

    For support files and downloads related to your book, please visit www.PacktPub.com.

    Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at for more details.

    At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

    https://www2.packtpub.com/books/subscription/packtlib

    Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.

    Why Subscribe?

    Fully searchable across every book published by Packt

    Copy and paste, print, and bookmark content

    On demand and accessible via a web browser

    Free Access for Packt account holders

    If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.

    Preface

    Data analytics and data science have garnered a lot of attention from businesses around the world. The amount of data generated these days is mind-boggling, and it keeps growing every day; with the proliferation of mobile devices, access to Facebook, YouTube, Netflix, and other 4K video content providers, and our increasing reliance on cloud computing, we can only expect this growth to continue.

    The task of a data scientist is to clean, transform, and analyze the data in order to provide the business with insights about its customers and/or competitors, monitor the health of the services provided by the company, or automatically present recommendations to drive more opportunities for cross-selling (among many others).

    In this book, you will learn how to read, write, clean, and transform the data—the tasks that are the most time-consuming but also the most critical. We will then present you with a broad array of tools and techniques that any data scientist should master, ranging from classification, clustering, or regression, through graph theory and time-series analysis, to discrete choice modeling and simulations. In each chapter, we will present an array of detailed examples written in Python that will help you tackle virtually any problem that you might encounter in your career as a data scientist.

    What this book covers

    Chapter 1, Preparing the Data, covers the process of reading and writing from and to various data formats and databases, as well as cleaning the data using OpenRefine and Python.

    Chapter 2, Exploring the Data, describes various techniques that aid in understanding the data. We will see how to calculate distributions of variables and correlations between them and produce some informative charts.

    Chapter 3, Classification Techniques, introduces several classification techniques, from simple Naïve Bayes classifiers to more sophisticated Neural Networks and Random Tree Forests.

    Chapter 4, Clustering Techniques, explains numerous clustering models; we start with the most common k-means method and finish with more advanced BIRCH and DBSCAN models.

    Chapter 5, Reducing Dimensions, presents multiple dimensionality reduction techniques, starting with the most renowned PCA, through its kernel and randomized versions, to LDA.

    Chapter 6, Regression Methods, covers many regression models, both linear and nonlinear. We also bring back random forests and SVMs (among others) as these can be used to solve either classification or regression problems.

    Chapter 7, Time Series Techniques, explores the methods of handling and understanding time series data as well as building ARMA and ARIMA models.

    Chapter 8, Graphs, introduces NetworkX and Gephi to handle, understand, visualize, and analyze data in the form of graphs.

    Chapter 9, Natural Language Processing, describes various techniques related to the analytics of free-flow text: part-of-speech tagging, topic extraction, and classification of data in textual form.

    Chapter 10, Discrete Choice Models, explains the choice modeling theory and some of the most popular models: the Multinomial, Nested, and Mixed Logit models.

    Chapter 11, Simulations, covers the concepts of agent-based simulations; we simulate the functioning of a gas station, out-of-power occurrences for electric vehicles, and sheep-wolf predation scenarios.

    What you need for this book

    For this book, you need a personal computer (it can be a Windows machine, Mac, or Linux) with an installed and configured Python 3.5 environment; we use the Anaconda distribution of Python that can be downloaded at https://www.continuum.io/downloads.

    Throughout this book, we use various Python modules: pandas, NumPy/SciPy, SciKit-Learn, MLPY, StatsModels, PyBrain, NLTK, BeautifulSoup, Optunity, Matplotlib, Seaborn, Bokeh, PyLab, OpenPyXl, PyMongo, SQLAlchemy, NetworkX, and SimPy. Most of the modules used come preinstalled with Anaconda, but some of them need to be installed via either the conda installer or by downloading the module and using the python setup.py install command. It is fine if some of those modules are not currently installed on your machine; we will guide you through the installation process.

    We also use several non-Python tools: OpenRefine to aid in data cleansing and analysis, D3.js to visualize data, Postgres and MongoDB databases to store data, Gephi to visualize graphs, and PythonBiogeme to estimate discrete choice models. We will provide detailed installation instructions where needed.

    Who this book is for

    This book is for everyone who wants to get into the data science field and needs to build up their skills on a set of examples that aim to tackle the problems faced in the corporate world. More advanced practitioners might also find some of the examples refreshing and the more advanced topics covered interesting.

    Sections

    In this book, you will find several headings that appear frequently (Getting ready, How to do it, How it works, There's more, and See also).

    To give clear instructions on how to complete a recipe, we use these sections as follows:

    Getting ready

    This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.

    How to do it…

    This section contains the steps required to follow the recipe.

    How it works…

    This section usually consists of a detailed explanation of what happened in the previous section.

    There's more…

    This section consists of additional information about the recipe in order to make the reader more knowledgeable about the recipe.

    See also

    This section provides helpful links to other useful information for the recipe.

    Conventions

    In this book, you will find a number of text styles that distinguish between different kinds of information. Here are some examples of these styles and an explanation of their meaning.

    Code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles are shown as follows: "We can include other contexts through the use of the include directive."

    A block of code is set as follows:

    for p in all_disputed_transactions:
        try:
            transactions[p[0]].append(p[2]['amount'])
        except KeyError:
            transactions[p[0]] = [p[2]['amount']]

    Any command-line input or output is written as follows:

    cd networkx
    python setup.py install

    New terms and important words are shown in bold. Words that you see on the screen, for example, in menus or dialog boxes, appear in the text like this: "We start with using Range on the age filter."

    Note

    Warnings or important notes appear in a box like this.

    Tip

    Tips and tricks appear like this.

    Reader feedback

    Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.

    To send us general feedback, simply e-mail <feedback@packtpub.com>, and mention the book's title in the subject of your message.

    If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

    Customer support

    Now that you are the proud owner of a Packt book, we have a number of things to help you get the most from your purchase.

    Downloading the example code

    You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

    You can download the code files by following these steps:

    Log in or register to our website using your e-mail address and password.

    Hover the mouse pointer on the SUPPORT tab at the top.

    Click on Code Downloads & Errata.

    Enter the name of the book in the Search box.

    Select the book for which you're looking to download the code files.

    Choose from the drop-down menu where you purchased this book from.

    Click on Code Download.

    You can also download the code files by clicking on the Code Files button on the book's webpage at the Packt Publishing website. This page can be accessed by entering the book's name in the Search box. Please note that you need to be logged in to your Packt account.

    Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

    WinRAR / 7-Zip for Windows

    Zipeg / iZip / UnRarX for Mac

    7-Zip / PeaZip for Linux

    The code bundle for this book is also available on GitHub at https://github.com/drabastomek/practicalDataAnalysisCookbook/tree/master/Data.

    Downloading the color images of this book

    We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from https://www.packtpub.com/sites/default/files/downloads/practicaldataanalysiscookbook_ColorImages.pdf.

    Errata

    Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.

    To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.

    Piracy

    Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.

    Please contact us at <copyright@packtpub.com> with a link to the suspected pirated material.

    We appreciate your help in protecting our authors and our ability to bring you valuable content.

    Questions

    If you have a problem with any aspect of this book, you can contact us at <questions@packtpub.com>, and we will do our best to address the problem.

    Chapter 1. Preparing the Data

    In this chapter, we will cover the basic tasks of reading, storing, and cleaning data using Python and OpenRefine. You will learn the following recipes:

    Reading and writing CSV/TSV files with Python

    Reading and writing JSON files with Python

    Reading and writing Excel files with Python

    Reading and writing XML files with Python

    Retrieving HTML pages with pandas

    Storing and retrieving from a relational database

    Storing and retrieving from MongoDB

    Opening and transforming data with OpenRefine

    Exploring the data with OpenRefine

    Removing duplicates

    Using regular expressions and GREL to clean up the data

    Imputing missing observations

    Normalizing and standardizing features

    Binning the observations

    Encoding categorical variables

    Introduction

    For the following set of recipes, we will use Python to read data in various formats and store it in RDBMS and NoSQL databases.

    All the source code and datasets that we will use in this book are available in the GitHub repository for this book. To clone the repository, open your terminal of choice (on Windows, you can use the command line, Cygwin, or Git Bash; on Linux/Mac, open Terminal) and issue the following command (in one line):

    git clone https://github.com/drabastomek/practicalDataAnalysisCookbook.git

    Tip

    Note that you need Git installed on your machine. Refer to https://git-scm.com/book/en/v2/Getting-Started-Installing-Git for installation instructions.

    In the following four sections, we will use a dataset that consists of 985 real estate transactions. The real estate sales took place in the Sacramento area over a period of five consecutive days. We downloaded the data from https://support.spatialkey.com/spatialkey-sample-csv-data/—specifically, http://samplecsvs.s3.amazonaws.com/Sacramentorealestatetransactions.csv. The data was then transformed into the various formats that are stored in the Data/Chapter01 folder of the GitHub repository.

    In addition, you will learn how to retrieve information from HTML files. For this purpose, we will use the Wikipedia list of airports starting with the letter A, https://en.wikipedia.org/wiki/List_of_airports_by_IATA_code:_A.
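    To give a flavor of that technique without touching the network, the sketch below feeds pandas a tiny inline HTML table standing in for the Wikipedia page (the airport rows are illustrative only, not taken from the live page); note that read_html requires an HTML parser such as lxml or html5lib to be installed:

```python
import io
import pandas as pd

# A tiny HTML table standing in for the Wikipedia airports page;
# the rows are illustrative, not scraped from the live page.
html = """
<table>
  <tr><th>IATA</th><th>Airport</th></tr>
  <tr><td>AAL</td><td>Aalborg Airport</td></tr>
  <tr><td>AAR</td><td>Aarhus Airport</td></tr>
</table>
"""

# read_html returns a list of DataFrames, one per <table> element found
tables = pd.read_html(io.StringIO(html))
airports = tables[0]
print(airports)
```

    Pointing read_html at the real URL works the same way: it returns one DataFrame per table found on the page.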

    To clean our dataset, we will use OpenRefine; it is a powerful tool to read, clean, and transform data.

    Reading and writing CSV/TSV files with Python

    CSV and TSV formats are essentially text files formatted in a specific way: the former separates fields with commas and the latter with tab (\t) characters. This makes them highly portable and easy to share between various platforms.
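    A minimal, self-contained sketch of that portability, using in-memory strings instead of the book's data files (the sample rows are made up for illustration):

```python
import io
import pandas as pd

# The same records, once comma-separated and once tab-separated
csv_data = "city,price\nSacramento,59222\nElk Grove,68880\n"
tsv_data = csv_data.replace(",", "\t")

# pandas uses one reader for both formats; only the separator changes
df_csv = pd.read_csv(io.StringIO(csv_data))
df_tsv = pd.read_csv(io.StringIO(tsv_data), sep="\t")
assert df_csv.equals(df_tsv)

# Writing is symmetric: to_csv(sep='\t') emits TSV
print(df_csv.to_csv(index=False, sep="\t"))
```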

    Getting ready

    To execute this recipe, you will need the pandas module installed. It comes with the Anaconda distribution of Python, so no further work is required if you already use that distribution. Otherwise, you will need to install pandas and make sure that it loads properly.

    Note

    You can download Anaconda from http://docs.continuum.io/anaconda/install. If you already have Python installed but do not have pandas, you can download the package from https://github.com/pydata/pandas/releases/tag/v0.17.1 and follow the instructions to install it appropriately for your operating system (http://pandas.pydata.org/pandas-docs/stable/install.html).

    No other prerequisites are required.

    How to do it…

    The pandas module is a library that provides high-performance, high-level data structures (such as DataFrame) and some basic analytics tools for Python.
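    For instance, a toy DataFrame built from made-up values, together with two of the basic analytics tools mentioned above:

```python
import pandas as pd

# A DataFrame is a labeled, two-dimensional table
df = pd.DataFrame({
    "beds": [2, 3, 4],
    "price": [59222, 68880, 91002],
})

print(df["price"].mean())  # column mean: 219104 / 3 ≈ 73034.67
print(df.describe())       # summary statistics for the numeric columns
```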

    Note

    The DataFrame
