Apache Kafka Quick Start Guide

Leverage Apache Kafka 2.0 to simplify real-time data processing for distributed applications

Nonfiction, Computers, Database Management, Data Processing, Internet, Web Development, Java, General Computing
Author: Raúl Estrada
ISBN: 9781788992251
Publisher: Packt Publishing
Publication: December 27, 2018
Imprint: Packt Publishing
Language: English

Process large volumes of data in real time while building a high-performance, robust data stream processing pipeline using the latest Apache Kafka 2.0

Key Features

  • Solve practical large-scale data and processing challenges with Kafka
  • Tackle data processing challenges such as late events, windowing, and watermarking
  • Understand real-time stream processing applications using the Schema Registry, Kafka Connect, Kafka Streams, and KSQL

Book Description

Apache Kafka is a great open source platform for handling your real-time data pipeline, ensuring high-speed filtering and pattern matching on the fly. In this book, you will learn how to use Apache Kafka for efficient processing of distributed applications, and you will get familiar with solving everyday problems in fast data processing pipelines.

This book focuses on programming rather than the configuration management of Kafka clusters or DevOps. It starts off with installing and setting up the development environment, before quickly moving on to fundamental messaging operations such as validation and enrichment.
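
As a rough illustration of the kind of enrichment operation described here, below is a minimal sketch using the standard Kafka Java client (org.apache.kafka:kafka-clients); the broker address, topic name, and JSON payload are assumptions made for this example, not details taken from the book:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class EnrichingProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String rawEvent = "{\"customerId\":\"42\",\"amount\":99.90}";
                // Enrichment step: attach a processing timestamp before publishing
                String enriched = rawEvent.replaceFirst("\\{",
                        "{\"processedAt\":" + System.currentTimeMillis() + ",");
                // "enriched-events" is a hypothetical topic name for this sketch
                producer.send(new ProducerRecord<>("enriched-events", "42", enriched));
            }
        }
    }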

Here you will learn about message composition with the pure Kafka API and Kafka Streams. You will look into the transformation of messages in different formats, such as text, binary, XML, JSON, and Avro. Next, you will learn how to expose the schemas contained in Kafka with the Schema Registry. You will then learn how to work with all the relevant connectors with Kafka Connect. While working with Kafka Streams, you will perform various interesting operations on streams, such as windowing, joins, and aggregations. Finally, through KSQL, you will learn how to retrieve, insert, modify, and delete data streams, and how to manipulate watermarks and windows.
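
As a taste of the Kafka Streams operations mentioned above, here is a minimal sketch that counts events per key over one-minute tumbling windows. It assumes a Kafka Streams client from the 2.x line (the windowing API has shifted slightly across releases), a local broker, and an input topic named events; none of these details come from the book itself:

    import java.time.Duration;
    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class WindowedCounts {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "windowed-counts");   // assumed application id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed local broker
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> events = builder.stream("events"); // assumed input topic

            // Count events per key over one-minute tumbling windows
            events.groupByKey()
                  .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
                  .count()
                  .toStream()
                  .foreach((windowedKey, count) ->
                          System.out.println(windowedKey + " -> " + count));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }

Tumbling windows are only one option; hopping and session windows follow the same pattern of grouping, windowing, and then aggregating.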

What you will learn

  • Validate data with Kafka
  • Add information to existing data flows
  • Generate new information through message composition
  • Perform data validation and versioning with the Schema Registry
  • Perform message serialization and deserialization
  • Process data streams with Kafka Streams
  • Understand the duality between tables and streams with KSQL

Who this book is for

This book is for developers who want to quickly master the practical concepts behind Apache Kafka. The audience need not have come across Apache Kafka previously; however, familiarity with Java or any other JVM language will be helpful in understanding the code in this book.

More books from Packt Publishing

  • Go: Design Patterns for Real-World Projects
  • R Deep Learning Essentials
  • Prezi HOTSHOT
  • Learn Qt 5
  • Pentaho 8 Reporting for Java Developers
  • Mastering Machine Learning with Spark 2.x
  • Hands-On Full Stack Web Development with Angular 6 and Laravel 5
  • Mastering Redis
  • Instant jQuery Flot Visual Data Analysis
  • ModSecurity 2.5
  • Rust Quick Start Guide
  • Data Lake Development with Big Data
  • Learning Cascading
  • Arduino Development Cookbook
  • Microsoft Exchange 2013 Cookbook