Overview of HDInsight Storm

Next in my series on HDInsight, today I’ll be talking about Storm. HDInsight Storm is a distributed, real-time stream processing framework. It uses spouts, which define information sources, and bolts, which are the processing steps, to enable distributed processing of streaming data.

Think of its topology as a directed acyclic graph (DAG). It’s a DAG where the edges are named streams that direct the data from node to node. When you put it all together, it forms a data transformation pipeline.

When you break it down, its topology is similar to that of a map/reduce job; the difference is that a map/reduce job runs as an individual batch and eventually completes, while a Storm topology processes data continuously, in real time.
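To make the DAG concrete, here’s a minimal sketch of wiring up a topology with Storm’s Java TopologyBuilder. The builder calls are the standard Storm API, but SentenceSpout, SplitSentenceBolt, and WordCountBolt are hypothetical placeholder classes standing in for your own spout and bolts:

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class WordCountTopology {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Spout: the information source, the root node of the DAG.
        // SentenceSpout is a placeholder for your own spout class.
        builder.setSpout("sentences", new SentenceSpout(), 1);

        // Bolts: the processing steps. Each grouping call names the
        // stream (a DAG edge) that carries tuples from the previous node.
        builder.setBolt("split", new SplitSentenceBolt(), 2)
               .shuffleGrouping("sentences");
        builder.setBolt("count", new WordCountBolt(), 2)
               .fieldsGrouping("split", new Fields("word"));

        // Unlike a map/reduce batch job, this topology runs until killed.
        StormSubmitter.submitTopology("word-count", new Config(),
                builder.createTopology());
    }
}
```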

The Storm cluster has 2 different types of nodes. The master node runs a daemon called Nimbus, which assigns tasks to machines and monitors their performance. Each worker node runs a daemon called the Supervisor, which listens for the work assigned to its machine and starts and stops worker processes as needed.

Nimbus and the Supervisors don’t keep track of the cluster’s state and health themselves, so the cluster deploys Zookeeper nodes to coordinate between Nimbus and the Supervisors and keep an eye on things.

The 3 main components of Storm are:

1. The topology, which is basically a network of spouts and bolts.

2. The stream, which is an unbounded sequence of tuples.

3. The spout, which is the source of the data; it converts the data into streams of tuples and sends them to the bolts to be processed (see the spout sketch below).
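As a sketch of that third component, here’s what a minimal spout can look like with the Storm Java API (Storm 2.x signatures); the synthetic sensor reading is just a stand-in for a real external source:

```java
import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class SensorReadingSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private long readingId = 0;

    @Override
    public void open(Map<String, Object> conf, TopologyContext context,
                     SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        // A real spout would read from an external source; here we emit
        // a synthetic reading. Passing a message id lets Storm track
        // (and replay) the tuple through the topology.
        double value = Math.random() * 100;
        collector.emit(new Values(value), readingId++);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("reading"));
    }
}
```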

What makes this effective is that the processing engine guarantees every tuple will be fully processed and delivered, which is backed by a 99.9% uptime SLA from Microsoft. It does this by tracking the lineage of each tuple as it makes its way through the topology. It works like a queuing system, in that messages can be replayed if there’s a failure in delivery.
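In a bolt, that lineage tracking hinges on anchoring and acking. A minimal sketch, assuming the standard Storm 2.x bolt API:

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class UppercaseBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            String word = input.getStringByField("word");
            // Anchoring the emitted tuple to the input adds it to the
            // input's lineage tree, so a downstream failure triggers a
            // replay from the originating spout.
            collector.emit(input, new Values(word.toUpperCase()));
            // Acking tells Storm this tuple was fully processed here.
            collector.ack(input);
        } catch (Exception e) {
            // Failing marks the tuple for replay from the spout.
            collector.fail(input);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
```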

Some use cases for Storm:

    • Writing data to an Azure Data Lake Store after it has been processed.
    • Reading from and writing to Azure Event Hubs. Vehicle sensor data, for instance, can be ingested through Event Hubs, processed by Storm, then sent on to Cosmos DB or an Azure Storage blob (see the sketch after this list).
    • Twitter is using Storm in a variety of ways. They use it for discovery on their data and for running real-time analytics and personalization, so when you log into Twitter it knows your preferences based on past visits. It also powers their real-time search and their internal revenue optimization.
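For the Event Hubs scenario above, the wiring might look like the following sketch. It assumes the EventHubSpout from Apache Storm’s storm-eventhubs module; the connection values, ParseTelemetryBolt, and CosmosDbWriterBolt are hypothetical placeholders for your own settings and sink logic:

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.eventhubs.spout.EventHubSpout;
import org.apache.storm.eventhubs.spout.EventHubSpoutConfig;

public class VehicleSensorTopology {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details: shared access policy name/key,
        // Event Hubs namespace, event hub name, and partition count.
        EventHubSpoutConfig spoutConfig = new EventHubSpoutConfig(
                "listen-policy", "policy-key",
                "my-namespace", "vehicle-telemetry", 8);

        TopologyBuilder builder = new TopologyBuilder();

        // One spout task per Event Hubs partition is a common starting point.
        builder.setSpout("events", new EventHubSpout(spoutConfig), 8);

        // Hypothetical bolts standing in for the parsing and sink stages.
        builder.setBolt("parse", new ParseTelemetryBolt(), 8)
               .shuffleGrouping("events");
        builder.setBolt("sink", new CosmosDbWriterBolt(), 4)
               .shuffleGrouping("parse");

        StormSubmitter.submitTopology("vehicle-sensors", new Config(),
                builder.createTopology());
    }
}
```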

As with other HDInsight components, Storm is used alongside various technologies to satisfy big data requirements and workloads. For example, if you were doing real-time customer churn analysis based on a Twitter feed, this is a technology you would use alongside Hadoop.
