Elasticsearch 5.x Cookbook - Third Edition
By Alberto Paro
About this ebook
- Deploy and manage simple Elasticsearch nodes as well as complex cluster topologies
- Write native plugins to extend the functionalities of Elasticsearch 5.x to boost your business
- Packed with clear, step-by-step recipes to walk you through the capabilities of Elasticsearch 5.x
If you are a developer who wants to get the most out of Elasticsearch for advanced search and analytics, this is the book for you. Some understanding of JSON is expected. If you want to extend Elasticsearch, understanding of Java and related technologies is also required.
Table of Contents
Credits
About the Author
About the Reviewer
www.PacktPub.com
eBooks, discount offers, and more
Why subscribe?
Customer Feedback
Dedication
Preface
What this book covers
What you need for this book
Who this book is for
Sections
Getting ready
How to do it…
How it works…
There's more…
See also
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
1. Getting Started
Introduction
Understanding node and cluster
Getting ready
How it works...
There's more...
See also
Understanding node services
Getting ready
How it works...
Managing your data
Getting ready
How it works...
There's more...
Best practices
See also
Understanding cluster, replication, and sharding
Getting ready
How it works...
Best practice
There's more...
Solving the yellow status
Solving the red status
See also
Communicating with Elasticsearch
Getting ready
How it works...
Using the HTTP protocol
Getting ready
How to do it...
How it works...
There's more...
Using the native protocol
Getting ready
How to do it...
How it works...
There's more...
See also
2. Downloading and Setup
Introduction
Downloading and installing Elasticsearch
Getting ready
How to do it...
How it works...
There's more...
See also
Setting up networking
Getting ready
How to do it...
How it works...
See also
Setting up a node
Getting ready
How to do it...
How it works...
There's more...
See also
Setting up for Linux systems
Getting ready
How to do it...
How it works...
Setting up different node types
Getting ready
How to do it...
How it works...
Setting up a client node
Getting ready
How to do it...
How it works...
Setting up an ingestion node
Getting ready
How to do it...
How it works...
Installing plugins in Elasticsearch
Getting ready
How to do it...
How it works...
There's more...
See also
Installing plugins manually
Getting ready
How to do it...
How it works...
Removing a plugin
Getting ready
How to do it...
How it works...
Changing logging settings
Getting ready
How to do it...
How it works...
Setting up a node via Docker
Getting ready
How to do it...
How it works...
There's more...
See also
3. Managing Mappings
Introduction
Using explicit mapping creation
Getting ready
How to do it...
How it works...
There's more...
See also
Mapping base types
Getting ready
How to do it...
How it works...
There's more...
See also
Mapping arrays
Getting ready
How to do it...
How it works...
Mapping an object
Getting ready
How to do it...
How it works...
See also
Mapping a document
Getting ready
How to do it...
How it works...
See also
Using dynamic templates in document mapping
Getting ready
How to do it...
How it works...
There's more...
See also
Managing nested objects
Getting ready
How to do it...
How it works...
There's more...
See also
Managing child document
Getting ready
How to do it...
How it works...
There's more...
See also
Adding a field with multiple mapping
Getting ready
How to do it...
How it works...
There's more...
See also
Mapping a GeoPoint field
Getting ready
How to do it...
How it works...
There's more...
Mapping a GeoShape field
Getting ready
How to do it...
How it works...
See also
Mapping an IP field
Getting ready
How to do it...
How it works...
Mapping an attachment field
Getting ready
How to do it...
How it works...
There's more...
See also
Adding metadata to a mapping
Getting ready
How to do it...
How it works...
Specifying a different analyzer
Getting ready
How to do it...
How it works...
See also
Mapping a completion field
Getting ready
How to do it...
How it works...
See also
4. Basic Operations
Introduction
Creating an index
Getting ready
How to do it...
How it works...
There's more...
See also
Deleting an index
Getting ready
How to do it...
How it works...
See also
Opening/closing an index
Getting ready
How to do it...
How it works...
See also
Putting a mapping in an index
Getting ready
How to do it...
How it works...
There's more...
See also
Getting a mapping
Getting ready
How to do it...
How it works...
See also
Reindexing an index
Getting ready
How to do it...
How it works...
See also
Refreshing an index
Getting ready
How to do it...
How it works...
See also
Flushing an index
Getting ready
How to do it...
How it works...
See also
ForceMerge an index
Getting ready
How to do it...
How it works...
There's more...
See also
Shrinking an index
Getting ready
How to do it...
How it works...
There's more...
See also
Checking if an index or type exists
Getting ready
How to do it...
How it works...
Managing index settings
Getting ready
How to do it...
How it works...
There's more...
See also
Using index aliases
Getting ready
How to do it...
How it works...
There's more...
Rollover an index
Getting ready
How to do it…
How it works...
See also
Indexing a document
Getting ready
How to do it...
How it works...
There's more...
See also
Getting a document
Getting ready
How to do it...
How it works...
There's more...
See also
Deleting a document
Getting ready
How to do it...
How it works...
See also
Updating a document
Getting ready
How to do it...
How it works...
See also
Speeding up atomic operations (bulk operations)
Getting ready
How to do it...
How it works...
Speeding up GET operations (multi GET)
Getting ready
How to do it...
How it works...
See also
5. Search
Introduction
Executing a search
Getting ready
How to do it...
How it works...
There's more...
See also
Sorting results
Getting ready
How to do it...
How it works...
There's more...
See also
Highlighting results
Getting ready
How to do it...
How it works…
See also
Executing a scrolling query
Getting ready
How to do it...
How it works...
There's more...
See also
Using the search_after functionality
Getting ready
How to do it...
How it works...
See also
Returning inner hits in results
Getting ready
How to do it...
How it works...
See also
Suggesting a correct query
Getting ready
How to do it...
How it works...
See also
Counting matched results
Getting ready
How to do it...
How it works...
There's more...
See also
Explaining a query
Getting ready
How to do it...
How it works...
Query profiling
Getting ready
How to do it...
How it works...
Deleting by query
Getting ready
How to do it...
How it works...
There's more...
See also
Updating by query
Getting ready
How to do it...
How it works...
There's more...
See also
Matching all the documents
Getting ready
How to do it...
How it works...
See also
Using a boolean query
Getting ready
How to do it...
How it works...
6. Text and Numeric Queries
Introduction
Using a term query
Getting ready
How to do it...
How it works...
There's more...
Using a terms query
Getting ready
How to do it...
How it works...
There's more...
See also
Using a prefix query
Getting ready
How to do it...
How it works...
There's more...
See also
Using a wildcard query
Getting ready
How to do it...
How it works...
See also
Using a regexp query
Getting ready
How to do it...
How it works...
See also
Using span queries
Getting ready
How to do it...
How it works...
See also
Using a match query
Getting ready
How to do it...
How it works...
See also
Using a query string query
Getting ready
How to do it...
How it works...
There's more...
See also
Using a simple query string query
Getting ready
How to do it...
How it works...
See also
Using the range query
Getting ready
How to do it...
How it works...
There's more...
The common terms query
Getting ready
How to do it...
How it works...
See also
Using IDs query
Getting ready
How to do it...
How it works...
See also
Using the function score query
Getting ready
How to do it...
How it works...
See also
Using the exists query
Getting ready
How to do it...
How it works...
Using the template query
Getting ready
How to do it...
How it works...
There's more...
See also
7. Relationships and Geo Queries
Introduction
Using the has_child query
Getting ready
How to do it...
How it works...
There's more...
See also
Using the has_parent query
Getting ready
How to do it...
How it works...
See also
Using nested queries
Getting ready
How to do it...
How it works...
See also
Using the geo_bounding_box query
Getting ready
How to do it...
How it works...
See also
Using the geo_polygon query
Getting ready
How to do it...
How it works...
See also
Using the geo_distance query
Getting ready
How to do it...
How it works...
See also
Using the geo_distance_range query
Getting ready
How to do it...
How it works...
See also
8. Aggregations
Introduction
Executing an aggregation
Getting ready
How to do it...
How it works...
See also
Executing stats aggregations
Getting ready
How to do it...
How it works...
See also
Executing terms aggregation
Getting ready
How to do it...
How it works...
There's more...
See also
Executing significant terms aggregation
Getting ready
How to do it...
How it works...
Executing range aggregations
Getting ready
How to do it...
How it works...
There's more...
See also
Executing histogram aggregations
Getting ready
How to do it...
How it works...
There's more...
See also
Executing date histogram aggregations
Getting ready
How to do it...
How it works...
See also
Executing filter aggregations
Getting ready
How to do it...
How it works...
There's more...
See also
Executing filters aggregations
Getting ready
How to do it...
How it works...
Executing global aggregations
Getting ready
How to do it...
How it works...
Executing geo distance aggregations
Getting ready
How to do it...
How it works...
See also
Executing children aggregations
Getting ready
How to do it...
How it works...
Executing nested aggregations
Getting ready
How to do it...
How it works...
There's more...
Executing top hit aggregations
Getting ready
How to do it...
How it works...
See also
Executing a matrix stats aggregation
Getting ready
How to do it...
How it works...
Executing geo bounds aggregations
Getting ready
How to do it...
How it works...
See also
Executing geo centroid aggregations
Getting ready
How to do it...
How it works...
See also
9. Scripting
Introduction
Painless scripting
Getting ready
How to do it...
How it works...
There's more...
See also
Installing additional script plugins
Getting ready
How to do it...
How it works...
There's more...
Managing scripts
Getting ready
How to do it...
How it works...
There's more...
See also
Sorting data using scripts
Getting ready
How to do it...
How it works...
There's more...
Computing return fields with scripting
Getting ready
How to do it...
How it works...
See also
Filtering a search via scripting
Getting ready
How to do it...
How it works...
There's more...
See also
Using scripting in aggregations
Getting ready
How to do it...
How it works...
Updating a document using scripts
Getting ready
How to do it...
How it works...
There's more...
Reindexing with a script
Getting ready
How to do it...
How it works...
10. Managing Clusters and Nodes
Introduction
Controlling cluster health via an API
Getting ready
How to do it...
How it works...
There's more...
See also
Controlling cluster state via an API
Getting ready
How to do it...
How it works...
There's more...
See also
Getting nodes information via API
Getting ready
How to do it...
How it works...
There's more...
See also
Getting node statistics via the API
Getting ready
How to do it...
How it works...
There's more...
Using the task management API
Getting ready
How to do it...
How it works...
There's more...
See also
Hot thread API
Getting ready
How to do it...
How it works...
Managing the shard allocation
Getting ready
How to do it...
How it works...
There's more...
See also
Monitoring segments with the segment API
Getting ready
How to do it...
How it works...
See also
Cleaning the cache
Getting ready
How to do it...
How it works...
11. Backup and Restore
Introduction
Managing repositories
Getting ready
How to do it...
How it works...
There's more...
See also
Executing a snapshot
Getting ready
How to do it...
How it works...
There's more...
Restoring a snapshot
Getting ready
How to do it...
How it works...
Setting up a NFS share for backup
Getting ready
How to do it...
How it works...
Reindexing from a remote cluster
Getting ready
How to do it...
How it works...
See also
12. User Interfaces
Introduction
Installing and using Cerebro
Getting ready
How to do it...
How it works...
There's more...
Installing Kibana and X-Pack
Getting ready
How to do it...
How it works...
Managing Kibana dashboards
Getting ready
How to do it...
How it works...
Monitoring with Kibana
Getting ready
How to do it...
How it works...
See also
Using Kibana dev-console
Getting ready
How to do it...
How it works...
There's more...
Visualizing data with Kibana
Getting ready
How to do it...
How it works...
Installing Kibana plugins
Getting ready
How to do it...
How it works...
Generating graph with Kibana
Getting ready
How to do it...
How it works...
13. Ingest
Introduction
Pipeline definition
Getting ready
How to do it...
How it works...
There's more...
Put an ingest pipeline
Getting ready
How to do it...
How it works...
Get an ingest pipeline
Getting ready
How to do it...
How it works...
There's more...
Delete an ingest pipeline
Getting ready
How to do it...
How it works...
Simulate an ingest pipeline
Getting ready
How to do it...
How it works...
There's more...
Built-in processors
Getting ready
How to do it...
How it works...
See also
Grok processor
Getting ready
How to do it...
How it works...
See also
Using the ingest attachment plugin
Getting ready
How to do it...
How it works...
Using the ingest GeoIP plugin
Getting ready
How to do it...
How it works...
See also
14. Java Integration
Introduction
Creating a standard Java HTTP client
Getting ready
How to do it...
How it works...
See also
Creating an HTTP Elasticsearch client
Getting ready
How to do it...
How it works...
See also
Creating a native client
Getting ready
How to do it...
How it works...
There's more...
See also
Managing indices with the native client
Getting ready
How to do it...
How it works...
See also
Managing mappings
Getting ready
How to do it...
How it works...
There's more...
See also
Managing documents
Getting ready
How to do it...
How it works...
See also
Managing bulk actions
Getting ready
How to do it...
How it works...
Building a query
Getting ready
How to do it...
How it works...
There's more...
Executing a standard search
Getting ready
How to do it...
How it works...
See also
Executing a search with aggregations
Getting ready
How to do it...
How it works...
See also
Executing a scroll search
Getting ready
How to do it...
How it works...
See also
15. Scala Integration
Introduction
Creating a client in Scala
Getting ready
How to do it...
How it works...
See also
Managing indices
Getting ready
How to do it...
How it works...
See also
Managing mappings
Getting ready
How to do it...
How it works...
See also
Managing documents
Getting ready
How to do it...
How it works...
There's more...
See also
Executing a standard search
Getting ready
How to do it...
How it works...
See also
Executing a search with aggregations
Getting ready
How to do it...
How it works...
See also
16. Python Integration
Introduction
Creating a client
Getting ready
How to do it...
How it works…
See also
Managing indices
Getting ready
How to do it…
How it works…
There's more…
See also
Managing mappings
Getting ready
How to do it…
How it works…
See also
Managing documents
Getting ready
How to do it…
How it works…
See also
Executing a standard search
Getting ready
How to do it…
How it works…
See also
Executing a search with aggregations
Getting ready
How to do it…
How it works…
See also
17. Plugin Development
Introduction
Creating a plugin
Getting ready
How to do it...
How it works...
There's more...
Creating an analyzer plugin
Getting ready
How to do it...
How it works...
There's more...
Creating a REST plugin
Getting ready
How to do it...
How it works...
See also
Creating a cluster action
Getting ready
How to do it...
How it works...
See also
Creating an ingest plugin
Getting ready
How to do it...
How it works...
18. Big Data Integration
Introduction
Installing Apache Spark
Getting ready
How to do it...
How it works...
There's more...
Indexing data via Apache Spark
Getting ready
How to do it...
How it works...
See also
Indexing data with meta via Apache Spark
Getting ready
How to do it...
How it works...
There's more...
Reading data with Apache Spark
Getting ready
How to do it...
How it works...
Reading data using SparkSQL
Getting ready
How to do it...
How it works...
Indexing data with Apache Pig
Getting ready
How to do it...
How it works...
Elasticsearch 5.x Cookbook Third Edition
Copyright © 2017 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: December 2013
Second edition: January 2015
Third edition: February 2017
Production reference: 1310117
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-78646-558-0
www.packtpub.com
Credits
About the Author
Alberto Paro is an engineer, project manager, and software developer. He currently works as freelance trainer/consultant on big data technologies and NoSQL solutions. He loves to study emerging solutions and applications mainly related to big data processing, NoSQL, natural language processing, and neural networks. He began programming in BASIC on a Sinclair Spectrum when he was eight years old, and to date, has collected a lot of experience using different operating systems, applications, and programming languages.
In 2000, he graduated in computer science engineering from Politecnico di Milano with a thesis on designing multiuser and multidevice web applications. He assisted professors at the university for about a year. He then came in contact with The Net Planet Company and loved their innovative ideas; he started working on knowledge management solutions and advanced data mining products. In summer 2014, his company was acquired by a big data technologies company, where he worked until the end of 2015 mainly using Scala and Python on state-of-the-art big data software (Spark, Akka, Cassandra, and YARN). In 2013, he started freelancing as a consultant for big data, machine learning, Elasticsearch and other NoSQL products. He has created or helped to develop big data solutions for business intelligence, financial, and banking companies all over the world. A lot of his time is spent teaching how to efficiently use big data solutions (mainly Apache Spark), NoSQL datastores (Elasticsearch, HBase, and Accumulo) and related technologies (Scala, Akka, and Playframework). He is often called to present at big data or Scala events. He is an evangelist on Scala and Scala.js (the transcompiler from Scala to JavaScript).
In his spare time, when he is not playing with his children, he likes to work on open source projects. When he was in high school, he started contributing to projects related to the GNOME environment (gtkmm). One of his preferred programming languages is Python, and he wrote one of the first NoSQL backends on Django for MongoDB (Django-MongoDB-engine). In 2010, he began using Elasticsearch to provide search capabilities to some Django e-commerce sites and developed PyES (a Pythonic client for Elasticsearch), as well as the initial part of the Elasticsearch MongoDB river. He is the author of Elasticsearch Cookbook as well as a technical reviewer of Elasticsearch Server-Second Edition, Learning Scala Web Development, and the video course, Building a Search Server with Elasticsearch, all of which are published by Packt Publishing.
It would have been difficult for me to complete this book without the support of a large number of people.
First, I would like to thank my wife, my children and the rest of my family for their support.
A personal thanks to my best friends, Mauro and Michele, and to all the people that helped me and my family.
I'd like to express my gratitude to everyone at Packt Publishing involved in the development and production of this book. I'd like to thank Amrita Noronha for guiding this book to completion and Deepti Tuscano and Marcelo Ochoa for patiently going through the first draft and providing their valuable feedback. Their professionalism, courtesy, good judgment, and passion for books are much appreciated.
About the Reviewer
Marcelo Ochoa works at the system laboratory of Facultad de Ciencias Exactas of the Universidad Nacional del Centro de la Provincia de Buenos Aires and is the CTO at Scotas.com, a company that specializes in near real-time search solutions using Apache Solr and Oracle. He divides his time between university jobs and external projects related to Oracle and big data technologies. He has worked on several Oracle-related projects, such as the translation of Oracle manuals and multimedia CBTs. His background is in database, network, web, and Java technologies. In the XML world, he is known as the developer of the DB Generator for the Apache Cocoon project. He has worked on the open source projects DBPrism and DBPrism CMS, the Lucene-Oracle integration using the Oracle JVM Directory implementation, and the Restlet.org project, where he worked on the Oracle XDB Restlet Adapter, which is an alternative to writing native REST web services inside a database-resident JVM.
Since 2006, he has been part of an Oracle ACE program. Oracle ACEs are known for their strong credentials as Oracle community enthusiasts and advocates, with candidates nominated by ACEs in the Oracle technology and applications communities.
He has coauthored Oracle Database Programming using Java and Web Services by Digital Press and Professional XML Databases by Wrox Press, and has been the technical reviewer for several books by Packt Publishing such as Apache Solr 4 Cookbook and ElasticSearch Server.
www.PacktPub.com
eBooks, discount offers, and more
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at customercare@packtpub.com for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
Why subscribe?
Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser
Customer Feedback
Thank you for purchasing this Packt book. We take our commitment to improving our content and products to meet your needs seriously—that's why your feedback is so valuable. Whatever your feelings about your purchase, please consider leaving a review on this book's Amazon page. Not only will this help us, more importantly it will also help others in the community to make an informed decision about the resources that they invest in to learn.
You can also review for us on a regular basis by joining our reviewers' club. If you're interested in joining, or would like to learn more about the benefits we offer, please contact us: customerreviews@packtpub.com.
Dedication
To Giulia and Andrea, my extraordinary children.
Preface
Search and analytics capabilities are among the most common requirements of today's applications. The market offers many solutions to meet these needs, both commercial and open source. One of the most widely used search libraries is Apache Lucene, which underpins a large number of search solutions such as Apache Solr, Indextank, and Elasticsearch.
Elasticsearch is one of the most powerful of these solutions, written with cloud and distributed computing in mind. Its main author, Shay Banon, famous for having developed Compass (http://www.compass-project.org), released the first version of Elasticsearch in March 2010.
While the main purpose of Elasticsearch is to be a search engine, it also provides many features that allow it to be used as a data store and as an analytics engine via its aggregation framework.
Elasticsearch offers many innovative features: a JSON-over-REST interface, native distribution of both search and analytics in a map/reduce fashion, easy setup, and extensibility via plugins. From the start of its development in 2010 to the latest version (5.x), the product has evolved enormously, becoming one of the most widely used datastores in many markets. In this book, we will go in depth into these changes and features, and into the many other capabilities available in Elasticsearch.
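To give a taste of the JSON-over-REST interface mentioned above, the following commands index a document and search for it. This is a minimal sketch, not taken from the book's recipes: it assumes an Elasticsearch 5.x node running locally on the default port 9200, and the index, type, and field names (`myindex`, `mytype`, `name`) are purely illustrative.

```sh
# Index a document with ID 1 (assumes a local node on the default port):
curl -XPUT 'http://localhost:9200/myindex/mytype/1?pretty' \
  -d '{"name": "Elasticsearch 5.x Cookbook"}'

# Retrieve the document by ID:
curl -XGET 'http://localhost:9200/myindex/mytype/1?pretty'

# Run a full-text match query against the "name" field:
curl -XPOST 'http://localhost:9200/myindex/_search?pretty' \
  -d '{"query": {"match": {"name": "cookbook"}}}'
```

Every operation, from cluster administration to search, goes through the same HTTP/JSON interface, which is what makes Elasticsearch so easy to script against from any language.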
Elasticsearch is also a product in continuous evolution: new functionality is released both by the Elasticsearch company (the company founded by Shay Banon to provide commercial support for Elasticsearch) and by Elasticsearch users as plugins (mainly available on GitHub). Today, many of the major players in the IT industry (see some use cases at https://www.elastic.co/use-cases) use Elasticsearch for its simplicity and advanced features.
In my opinion, Elasticsearch is probably one of the most powerful and easy-to-use search solutions on the market. In writing this book and its recipes, the reviewers and I have tried to pass on our knowledge, our passion, and the best practices for managing it well.
What this book covers
Chapter 1, Getting Started, gives the reader an overview of the basic concepts of Elasticsearch and the ways to communicate with it.
Chapter 2, Downloading and Setup, covers the basic steps to start using Elasticsearch, from a simple install to cloud-based ones.
Chapter 3, Managing Mappings, covers the correct definition of the data fields to improve both indexing and searching quality.
Chapter 4, Basic Operations, teaches the most common actions that are required to ingest data in Elasticsearch and to manage it.
Chapter 5, Search, talks about executing searches, sorting, and related API calls. The APIs discussed in this chapter are the main entry points for querying Elasticsearch.
Chapter 6, Text and Numeric Queries, talks about the part of the search DSL dealing with text and numeric fields, the core of the search functionalities of Elasticsearch.
Chapter 7, Relationships and Geo Queries, talks about queries that work on related documents (child/parent, nested) and geo-located fields.
Chapter 8, Aggregations, covers another capability of Elasticsearch: the possibility to execute analytics on search results, both to improve the user experience and to drill down into the information contained in Elasticsearch.
Chapter 9, Scripting, shows how to customize Elasticsearch with scripting and how to use the scripting capabilities in different parts of Elasticsearch (search, aggregation, and ingest) using different languages. The chapter mainly focuses on Painless, the new scripting language developed by the Elastic team.
Chapter 10, Managing Clusters and Nodes, shows how to analyze the behavior of a cluster/node to understand common pitfalls.
Chapter 11, Backup and Restore, covers one of the most important components of managing data: backup. It shows how to manage distributed backup and restore via snapshots.
Chapter 12, User Interfaces, describes two of the most common user interfaces for Elasticsearch 5.x: Cerebro, mainly used for admin activities, and Kibana with X-Pack as a common UI extension for Elasticsearch.
Chapter 13, Ingest, talks about the new ingest functionality introduced in Elasticsearch 5.x to import data into Elasticsearch via an ingestion pipeline.
Chapter 14, Java Integration, describes how to integrate Elasticsearch into a Java application using both the REST and native protocols.
Chapter 15, Scala Integration, describes how to integrate Elasticsearch in Scala using elastic4s: an advanced, type-safe, and feature-rich Scala library based on the native Java API.
Chapter 16, Python Integration, covers the usage of the official Elasticsearch Python client.
Chapter 17, Plugin Development, describes how to create native plugins to extend Elasticsearch functionalities. Some examples show the plugin skeletons, the setup process, and their building.
Chapter 18, Big Data Integration, covers how to integrate Elasticsearch in common big data tools such as Apache Spark and Apache Pig.
What you need for this book
For this book you will need a computer, of course. In terms of software, you don't have to worry: all the components we use are open source and available for every platform.
For all the REST examples, the cURL tool (http://curl.haxx.se/) is used to execute the commands from the command line. It comes preinstalled on Linux and Mac OS X operating systems. For Windows, it can be downloaded from its site and put in a path that can be called from the command line.
Chapter 14, Java Integration and Chapter 17, Plugin Development require the Maven build tool (http://maven.apache.org/), which is a standard for managing builds, packaging, and deploys in Java. It is natively supported in Java IDEs such as Eclipse and IntelliJ IDEA.
For Chapter 15, Scala Integration, SBT (http://www.scala-sbt.org/) is required to compile the Scala projects, but they can also be used with IDEs that support Scala, such as Eclipse and IntelliJ IDEA.
Chapter 16, Python Integration, requires a Python interpreter. It is available by default on Linux and Mac OS X; for Windows it can be downloaded from the official Python site (http://www.python.org). The examples use version 2.x.
Who this book is for
This book is for developers who want to start using Elasticsearch and, at the same time, improve their Elasticsearch knowledge. The book covers all aspects of using Elasticsearch and provides solutions and hints for everyday usage. The recipes are kept small in complexity, to focus the reader on the discussed Elasticsearch aspect and to make the functionalities easy to memorize.
The later chapters, which discuss Elasticsearch integration with Java, Scala, Python, and big data tools, show the user how to integrate the power of Elasticsearch into their applications.
The chapter on plugin development shows an advanced usage of Elasticsearch and its core extension points, so some solid Java know-how is required.
Sections
In this book, you will find several headings that appear frequently (Getting ready, How to do it, How it works, There's more, and See also).
To give clear instructions on how to complete a recipe, we use these sections as follows:
Getting ready
This section tells you what to expect in the recipe, and describes how to set up any software or any preliminary settings required for the recipe.
How to do it…
This section contains the steps required to follow the recipe.
How it works…
This section usually consists of a detailed explanation of what happened in the previous section.
There's more…
This section consists of additional information about the recipe in order to make the reader more knowledgeable about the recipe.
See also
This section provides helpful links to other useful information for the recipe.
Conventions
In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.
Code words in text are shown as follows: "After the name and the type parameters, usually a river requires an extra configuration that can be passed in the _meta property."
A block of code is set as follows:
cluster.name: elasticsearch
node.name: My wonderful server
network.host: 192.168.0.1
discovery.zen.ping.unicast.hosts: ["192.168.0.2", "192.168.0.3[9300-9400]"]
Any command-line input or output is written as follows:
curl -XDELETE 'http://127.0.0.1:9200/_river/my_river/'
Note
Warnings or important notes appear in a box like this.
Tip
Tips and tricks appear like this.
Reader feedback
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.
To send us general feedback, simply send an e-mail to feedback@packtpub.com, and mention the book title via the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on http://www.packtpub.com/authors.
Customer support
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
Downloading the example code
You can download the example code files for this book from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
You can download the code files by following these steps:
Log in or register to our website using your e-mail address and password.
Hover the mouse pointer on the SUPPORT tab at the top.
Click on Code Downloads & Errata.
Enter the name of the book in the Search box.
Select the book for which you're looking to download the code files.
Choose from the drop-down menu where you purchased this book from.
Click on Code Download.
You can also download the code files by clicking on the Code Files button on the book's webpage at the Packt Publishing website. This page can be accessed by entering the book's name in the Search box. Please note that you need to be logged in to your Packt account.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR / 7-Zip for Windows
Zipeg / iZip / UnRarX for Mac
7-Zip / PeaZip for Linux
The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Elasticsearch-5x-Cookbook-Third-Edition. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!
Errata
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.
Piracy
Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at copyright@packtpub.com with a link to the suspected pirated material.
We appreciate your help in protecting our authors, and our ability to bring you valuable content.
Questions
You can contact us at questions@packtpub.com if you are having a problem with any aspect of the book, and we will do our best to address the problem.
Chapter 1. Getting Started
In this chapter, we will cover the following recipes:
Understanding node and cluster
Understanding node services
Managing your data
Understanding cluster, replication, and sharding
Communicating with Elasticsearch
Using the HTTP protocol
Using the native protocol
Introduction
To use Elasticsearch efficiently, it is very important to understand its design and how it works.
The goal of this chapter is to give readers an overview of the basic concepts of Elasticsearch and to serve as a quick reference for them. Understanding these concepts well is essential to avoid the common pitfalls caused by a lack of knowledge about Elasticsearch's architecture and internals.
The key concepts that we will see in this chapter are node, index, shard, type/mapping, document, and field.
Elasticsearch can be used in several ways, such as:
A search engine, which is its main usage
An analytics framework, via its powerful aggregation system
A data store, mainly for logs
A brief description of the Elasticsearch logic helps the user improve performance and search quality, and decide when and how to optimize the infrastructure to improve scalability and availability. Some details about data replication and basic node communication processes are also explained in the upcoming section, Understanding cluster, replication, and sharding.
At the end of this chapter, the protocols used to manage Elasticsearch are also discussed.
Understanding node and cluster
Every instance of Elasticsearch is called a node. Several nodes are grouped in a cluster. This is the base of the cloud nature of Elasticsearch.
Getting ready
To better understand the following sections, some knowledge of basic concepts such as node and cluster is required.
How it works...
One or more Elasticsearch nodes can be set up on a physical or virtual server, depending on the available resources such as RAM, CPUs, and disk space.
A default node allows us to store data in it and to process requests and responses. (In Chapter 2, Downloading and Setup, we will see details on how to set up different nodes and cluster topologies).
When a node is started, several actions take place during its startup, such as:
Configuration is read from the environment variables and from the elasticsearch.yml configuration file
A node name is set from the configuration file or chosen from a list of built-in random names
Internally, the Elasticsearch engine initializes all the modules and plugins that are available in the current installation
After startup, the node searches for the other cluster members and checks the status of its indices and shards.
To join two or more nodes in a cluster, these rules must be matched:
The version of Elasticsearch must be the same (2.3, 5.0, and so on); otherwise, the join is rejected
The cluster name must be the same
The network must be configured so that the nodes can discover and communicate with each other; in Elasticsearch 5.x, unicast discovery is the default (refer to the Setting up networking recipe in Chapter 2, Downloading and Setup)
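For example, two nodes join the same cluster when their elasticsearch.yml files agree on the cluster name and can reach each other over the network; the cluster name and host addresses below are illustrative values, not defaults:

```yaml
# elasticsearch.yml -- must be identical on every member node
cluster.name: es-cookbook-cluster

# Unicast discovery: the other cluster members this node contacts
discovery.zen.ping.unicast.hosts: ["192.168.0.2", "192.168.0.3"]
```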
A common approach in cluster management is to have one or more master nodes, which are the main reference for all cluster-level actions, and other nodes, called secondary nodes, that replicate the master's data and actions.
To be consistent in write operations, all the update actions are first committed in the master node and then replicated to the secondary ones.
In a cluster with multiple nodes, if a master node dies, a master-eligible node is elected as the new master. This approach allows automatic failover to be set up in an Elasticsearch cluster.
There's more...
In Elasticsearch, we have four kinds of nodes:
Master nodes, which are able to process REST (https://en.wikipedia.org/wiki/Representational_state_transfer) responses and all the other search operations. During every action execution, Elasticsearch generally works with a MapReduce approach (https://en.wikipedia.org/wiki/MapReduce): the non-data node is responsible for distributing the action to the underlying shards (map) and for collecting/aggregating the shard results (reduce) into a final response. These nodes may use a huge amount of RAM due to operations such as aggregations, collecting hits, and caching (for example, scan/scroll queries).
Data nodes, which are able to store data. They contain the index shards that store the indexed documents as Lucene indexes.
Ingest nodes, which are able to process ingestion pipelines (new in Elasticsearch 5.x).
Client nodes (neither master nor data), which are used as processing frontends: if something bad happens (out of memory or bad queries), they can be killed/restarted without data loss or reduced cluster stability. With the standard configuration, a node is master-eligible, a data container, and an ingest node at the same time.
In big cluster architectures, having some nodes as simple client nodes, with a lot of RAM and no data, reduces the resources required by the data nodes and improves search performance thanks to their local memory cache.
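As a sketch, in Elasticsearch 5.x the node roles described above are controlled by three boolean settings in elasticsearch.yml; a client (coordinating-only) node disables all of them:

```yaml
# elasticsearch.yml for a client (coordinating-only) node:
# it routes requests and aggregates results, but holds no data
# and is not master-eligible.
node.master: false
node.data: false
node.ingest: false
```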
See also
The Setting up a single node, Setting up a multi-node cluster, and Setting up different node types recipes in Chapter 2, Downloading and Setup.
Understanding node services
When a node is running, many services are managed by its instance. Services provide additional functionalities to a node, covering different behaviors such as networking, indexing, analyzing, and so on.
Getting ready
When you start an Elasticsearch node, a lot of output is printed; this output is produced as the services start up. Every running Elasticsearch server provides these services.
How it works...
Elasticsearch natively provides a large set of functionalities that can be extended with additional plugins.
During a node's startup, many required services are automatically started. The most important ones are:
Cluster services: This helps you manage the cluster state, intra-node communication, and synchronization
Indexing service: This helps you to manage all the index operations, initializing all active indices and shards
Mapping service: This helps you to manage the document types stored in the cluster (we'll discuss mapping in Chapter 3, Managing Mappings)
Network services: This includes services such as the HTTP REST service (by default on port 9200) and the internal transport protocol used for node-to-node communication (by default on port 9300)
Plugin service: This manages the installed plugins (we will discuss installation in Chapter 2, Downloading and Setup, and detailed usage in Chapter 12, User Interfaces)
Aggregation services: This provides advanced analytics on stored Elasticsearch documents such as statistics, histograms, and document grouping
Ingesting services: This provides support for document preprocessing before ingestion, such as field enrichment, NLP processing, type conversion, and automatic field population
Language scripting services: This allows adding new language scripting support to Elasticsearch
Tip
Throughout the book, we'll see recipes that interact with Elasticsearch services. Every base functionality or extended functionality is managed in Elasticsearch as a service.
Managing your data
If you are going to use Elasticsearch as a search engine or a distributed data store, it's important to understand how Elasticsearch stores and manages your data.
Getting ready
To work with Elasticsearch data, a user must have basic knowledge of data management and of the JSON (https://en.wikipedia.org/wiki/JSON) data format, which is the lingua franca for working with Elasticsearch data and services.
How it works...
Our main data container is called an index (plural indices), and it can be considered similar to a database in the traditional SQL world. In an index, the data is grouped into data types, called mappings in Elasticsearch. A mapping describes how the records are composed (their fields). Every record that must be stored in Elasticsearch must be a JSON object.
Natively, Elasticsearch is a schema-less data store: when you put records into it, during the insert it processes the records, splits them into fields, and updates the schema to manage the inserted data.
To manage huge volumes of records, Elasticsearch uses the common approach of splitting an index into multiple parts (shards) so that they can be spread over several nodes. Shard management is transparent to the user: all common record operations are managed automatically in Elasticsearch's application layer.
Every record is stored in only one shard; the sharding algorithm is based on the record ID, so many operations that require loading and changing records/objects can be achieved without hitting all the shards, but only the shard (and its replicas) that contains the object.
The following schema compares Elasticsearch structure with SQL and MongoDB ones:
The following screenshot is a conceptual representation of an Elasticsearch cluster with three nodes, one index with four shards and replica set to 1 (primary shards are in bold):
There's more...
To ensure safe operations on indices/mappings/objects, Elasticsearch internally has rigid rules about how to execute operations.
In Elasticsearch the operations are divided into:
Cluster/index operations: All write actions are locking; they are first applied to the master node and then to the secondary ones. The read operations are typically broadcast to all the nodes.
Document operations: All write actions are locking only for the single hit shard. The read operations are balanced over all the shard replicas.
When a record is saved in Elasticsearch, the destination shard is chosen based on:
The unique identifier (ID) of the record. If the ID is missing, it is auto-generated by Elasticsearch
If routing or parent parameters are defined (we'll see them in the parent/child mapping), the correct shard is chosen by the hash of these parameters
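The selection rules above can be sketched in Python. This is a simplified stand-in, not Elasticsearch's implementation: Elasticsearch hashes the routing value with Murmur3, while MD5 is used here only to keep the example self-contained, and choose_shard is a hypothetical helper name:

```python
import hashlib

def choose_shard(doc_id, num_primary_shards, routing=None):
    """Pick the destination shard for a document.

    The principle: hash the routing value (the document ID by
    default, or an explicit routing/parent parameter) modulo the
    number of primary shards.
    """
    routing_value = routing if routing is not None else doc_id
    digest = hashlib.md5(routing_value.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_primary_shards

# The same ID always maps to the same shard, so operations by ID
# only need to hit that shard (and its replicas), not all of them.
```

Passing the same routing value for different documents places them in the same shard, which is exactly the trick used later in the chapter to keep one customer's data together.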
Splitting an index into shards allows you to store your data on different nodes, because Elasticsearch tries to balance the shard distribution across all the available nodes.
Every shard can contain up to 2³¹ documents (about 2.1 billion, the Lucene limit), so the real limit on shard size is the storage size.
Shards contain your data, and during the search process all the shards are used to calculate and retrieve results: Elasticsearch performance on big data thus scales horizontally with the number of shards.
All native record operations (index, search, update, and delete) are managed at the shard level.
Shard management is completely transparent to the user. Only advanced users tend to change the default shard routing and management to cover custom scenarios; for example, if there is a requirement to put a customer's data in the same shard to speed up their operations (search/index/analytics).
Best practices
It's best practice not to have shards that are too big (over 10 GB), to avoid poor indexing performance due to the continuous merging and resizing of index segments.
While indexing (a record update is equivalent to indexing a new element), Lucene, the Elasticsearch engine, writes the indexed documents in blocks (segments/files) to speed up the write process. Over time, the small segments are deleted and their contents are merged and written as a new segment. Big segments, caused by big shards with a lot of data, slow down indexing performance.
It is also not good to over-allocate the number of shards, to avoid poor search performance: because Elasticsearch distributes search natively in a map/reduce way, the shards are the workers that do the indexing/searching job, while the master/client nodes do the reduce part (collecting the results from the shards and computing the result to be sent to the user). Having a huge number of empty shards in indices consumes memory and increases search times, due to the overhead in the network and results aggregation phases.
See also
You can find more information about shards at http://en.wikipedia.org/wiki/Shard_(database_architecture)
Understanding cluster, replication, and sharding
Related to shard management, there are the key concepts of replication and cluster status.
Getting ready
You need one or more nodes running to have a cluster. To test an effective cluster, you need at least two nodes (that can be on the same machine).
How it works...
An index can have one or more replicas (full copies of your data, automatically managed by Elasticsearch): the shards are called primary if they are part of the primary replica, and secondary if they are part of the other replicas.
To maintain consistency in write operations, the following workflow is executed:
The write is first executed in the primary shard
If the primary write is successfully completed, it is propagated simultaneously to all the secondary shards
If a primary shard becomes unavailable, a secondary one is elected as primary (if available) and the flow is re-executed
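The first two steps of this workflow can be sketched in Python with toy stand-in objects (Shard and index_document are invented names for illustration, not Elasticsearch APIs):

```python
class Shard:
    """A toy stand-in for an Elasticsearch shard."""
    def __init__(self):
        self.docs = {}

    def write(self, doc_id, doc):
        self.docs[doc_id] = doc

def index_document(primary, replicas, doc_id, doc):
    """Sketch of the write workflow: the write is applied to the
    primary shard first; only then is it propagated to every
    secondary (replica) shard."""
    primary.write(doc_id, doc)        # step 1: primary first
    for replica in replicas:          # step 2: fan out to replicas
        replica.write(doc_id, doc)

primary = Shard()
replicas = [Shard(), Shard()]
index_document(primary, replicas, "1", {"title": "Test"})
```

After the call, the document is present on the primary and on every replica, which is what makes reads balanceable across all copies.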
During search operations, if replicas are present, a valid set of shards is chosen randomly between primary and secondary ones to improve performance. Elasticsearch has several allocation algorithms to better distribute shards on nodes. For reliability, replicas are allocated in such a way that if a single node becomes unavailable, there is always at least one replica of each shard still available on the remaining nodes.
The following figure shows some examples of possible shard and replica configurations:
Replicas have a cost: they increase indexing time, due to data node synchronization and the time spent propagating the message to the secondary shards (mainly in an asynchronous way).
Best practice
To prevent data loss and to have high availability, it's good to have at least one replica; this way, your system can survive a node failure without downtime and without loss of data.
A typical approach for scaling search performance when the number of users grows is to increase the replica number.
There's more...
Related to the concept of replication, there is the cluster status, an indicator of the health of your cluster.
It can be in three different states:
Green: This state means that everything is OK.
Yellow: This state means that some (replica) shards are missing, but you can still work with your cluster.
Red: This state means that, Houston, we have a problem: some primary shards are missing. The cluster will not accept writes, and errors and stale reads may occur due to the missing shards. If a missing shard cannot be restored, you have lost your data.
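The three states can be derived from per-shard allocation, as in this illustrative sketch (the function name and dictionary keys are invented for the example, not a real Elasticsearch API):

```python
def cluster_status(shards):
    """Derive the health color from per-shard-group state.

    `shards` is a list of dicts, one per shard group, with two
    illustrative keys:
      'primary_active'      -- is the primary shard assigned?
      'all_replicas_active' -- are all its replicas assigned?
    """
    if any(not s["primary_active"] for s in shards):
        return "red"      # data is missing: writes may fail
    if any(not s["all_replicas_active"] for s in shards):
        return "yellow"   # working, but with reduced redundancy
    return "green"        # everything is allocated
```

This mirrors the logic above: a single unassigned primary is enough to turn the whole cluster red, while unassigned replicas only degrade it to yellow.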
Solving the yellow status
A yellow status is mainly due to some shards not being allocated.
If your cluster is in recovery status (this means that it's starting up and checking the shards before putting them online),