Category Archives: Strategy

What is Azure Application Insights?

Monitoring your website is key to gaining insight into your customers and users, as well as keeping an eye on its performance. In today’s post I’d like to tell you about Azure Application Insights.

Application Insights is an application performance management service for web applications that enables you to do all of your website performance monitoring in Azure. It’s designed to ensure you’re getting optimal performance and a best-in-class user experience from your website. It also has a powerful analytics tool that helps you diagnose issues and gain an understanding of how people are actually using your web application.

You can use it with many web platforms, and although you’re sending information about your website to Azure, the website or application itself doesn’t have to be hosted in Azure. For those who work on DevOps processes, it helps you enable continuous improvement of your web application through connectivity to a bunch of development tools.

How does it work?

Application Insights works by inserting a small package into your application and setting up an Application Insights resource within Azure, so the app sends its data to Azure for collection. The web app is monitored and sends telemetry data to the Application Insights portal (the portal itself is in Azure but, as I mentioned, the application can be pretty much anywhere).

Along with the telemetry from the web app, you can pull in data from the host environment, allowing you to look at performance logs, Azure diagnostics and container logs. That gives you a full view of what’s going on inside the application, as well as in the environment where it lives.

You can also set up periodic web tests that send requests to the web server to ensure that it’s responding properly and that the website is working the way it’s supposed to. It’s a very straightforward implementation with a light footprint in code: the tracking calls are non-blocking, and the telemetry is batched together and sent on separate threads.
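
To make that concrete, here’s a minimal sketch using Microsoft’s applicationinsights package for Python (other platforms have equivalent SDKs). The instrumentation key is a placeholder you’d copy from your own Application Insights resource:

```python
# Minimal sketch using the applicationinsights Python package
# (pip install applicationinsights). The instrumentation key below
# is a placeholder from your Application Insights resource in Azure.
from applicationinsights import TelemetryClient

tc = TelemetryClient('00000000-0000-0000-0000-000000000000')

# Tracking calls are non-blocking: items are queued locally first.
tc.track_trace('Home page rendered')
tc.track_event('UserSignedIn', {'plan': 'trial'})

try:
    1 / 0  # stand-in for an operation that can fail
except ZeroDivisionError:
    tc.track_exception()  # captures the active exception and stack trace

# Telemetry is sent to Azure in batches; flush() forces the queue to drain.
tc.flush()
```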

Some of the things you can track or collect are:

  • What are the most popular webpages in your application, at what time of day and where is that traffic coming from?
  • Dependency rates, response times and failure rates, to find out whether an external service is causing performance issues in your app. For instance, maybe users reach your application through a portal, and the response-time problems are happening there.
  • Exceptions for both server and browser information, as well as page views and load performance from the end users’ side.
  • Session info – who, what, when, where.
  • Performance and host diagnostics – giving you a complete picture of what’s happening in your application.
  • Trace logs for correlating trace events with requests to help you get a deeper insight into the data and dig deeper into the diagnostics to improve performance.

It also gives you flexibility, so you can write custom snippets of code to collect other pieces of data that aren’t part of the usual telemetry. And all your reports can be viewed through the Azure suite of reporting tools, such as Power BI, to get visualizations and fine-grained analytical info about your application.
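
For instance, sticking with the Python SDK and the tc client from the earlier sketch, a custom snippet might record a business-specific metric or event that the standard collectors don’t capture; the names here are invented for illustration:

```python
# Hypothetical custom telemetry: the metric and event names are
# invented for illustration. Anything tracked this way lands in the
# same Application Insights store as the built-in telemetry.
tc.track_metric('checkout_queue_depth', 17)
tc.track_event('ReportExported',
               properties={'format': 'pdf'},
               measurements={'rows': 1250.0})
tc.flush()
```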

Application Insights is an incredibly useful tool for anyone who has an application or website and wants to track and manage all the info that’s put out there – who’s viewing what, what’s the most popular, etc.

Azure SQL Database Reserved Capacity

Last week I posted about Azure Reserved VM Instances, where you can save some money in Azure. Another similar way to save is with Azure SQL Database Reserved Capacity. With this you can save up to 33% compared to license-included pricing by pre-buying SQL Database vCores for a 1- or 3-year term.

This can be applied to a single subscription or shared across your enrollments, so you can control how many subscriptions can use the benefit, as well as how the reservation is applied to the specific subscriptions you choose.

Scoping the reservation to a single subscription applies it to the SQL Database resources within that selected subscription. A reservation with a shared scope can be shared across subscriptions in the enrollment, and there’s some added flexibility, like with Managed Instances, where you can scale up/down.

Some other points about SQL Database Reserved Capacity I’d like to share:

  • It provides vCores with the size flexibility you need. You can scale those vCores up/down within a performance tier and region without impacting your reserved capacity pricing; just note that you must stay within the same performance tier and the same region.
  • You can temporarily move your hot databases between pools and single databases as part of your normal operations; again, within the same region and performance tier without losing the reserved capacity benefit.
  • You can keep an unapplied buffer in your reservation, so you can effectively manage performance spikes without exceeding your budget. Just another way to keep an eye on your Azure spend.
  • You can stack savings, so with the Azure Hybrid benefit licensing, you can save up to an additional 55%, bringing you to a total savings of over 80% by stacking these benefits.
  • With the hybrid benefit for Enterprise Edition, customers with Software Assurance can use four cores in the cloud for every one core they’ve purchased.

A couple things to note:

    • It cannot be applied to an MSDN subscription or other non-pay-as-you-go subscriptions; so basically, it applies to Enterprise and pay-as-you-go subscriptions.
    • It currently applies only to single databases and elastic pools. Managed Instances are still in Preview; when they reach GA (by the end of 2018 as of this post), Managed Instances will be covered as well.

For questions about how this licensing works, contact your Microsoft rep.

New Options for SQL 2008 End of Support

If you’re using SQL Server 2008 or 2008 R2, you need to be aware that extended support for those versions ends on July 9, 2019. That means the end of regular security updates, which leads to more vulnerabilities; the software won’t be patched; and you’ll face out-of-compliance risks. As such, here are some new options for SQL 2008 end of support:

The best option would be a migration or an upgrade, but Microsoft has some options in place to help people out, as they understand this can be easier said than done when you have applications that need to be upgraded and you must figure out how best to handle that.

That being said, upgrading provides better performance, efficiency, security features and updates, as well as new technology features and capabilities within the whole stack of SQL products (SSIS, SSRS, SSAS).

Other Options

Here are some options that Microsoft is offering to help with the end of support of 2008/2008R2:

    • First, they are going to offer extended security updates for free in Azure for 2008/2008 R2 for 3 more years. So, if you simply move your workload to an IaaS VM in Azure, you’ll be supported without requiring application changes. You’ll have to pay the virtual machine costs, but it’s still a good deal to get you started.
    • You can migrate your workloads to Managed Instances, which will be in GA by the end of 2018. This will be able to support all applications out there, so you can start the transition up into Azure.
    • You can take advantage of the Azure hybrid licensing model to migrate to save money on licensing. With this you can save up to 55% on some of your PaaS SQL Server costs, but only if you have Enterprise Edition and Software Assurance.
    • For on-premises servers that need more time to upgrade, you’ll be able to purchase extended security update plans to extend coverage 3 years past the July 2019 date. So, if you’re struggling to get an application upgraded and validated, they’ll extend that out for a fee. Again, this is for customers with Software Assurance or subscription licenses under an Enterprise Agreement. These can be purchased annually, covering only the servers that need the updates.
    • Extended security updates will also be available for purchase as they get closer to the end of support, for both SQL Server and Windows Server.

Again, the first choice would be to upgrade or migrate those databases and move to Azure, but there are some challenges with doing so, and if none of those options work, there are some great options to extend your support.

Replicate Data Into Azure Database for MySQL

In today’s post I’ll talk about replicating data into Azure Database for MySQL. Data-in Replication lets you synchronize data from a MySQL server running on-premises, in virtual machines, or in database services hosted by other cloud providers into Azure Database for MySQL.

Data-in Replication is based on the binary log (binlog) file position-based replication that’s native to MySQL. So, this is the same as if you were running binlog replication on-prem, but for an enterprise-class database service.

The information in the binlog is stored in different formats according to the database changes being recorded. Replicas are configured to read the binary log from the master and to execute the events in the binlog against the replica’s local database. The log is written on the primary server, and the replica, which knows where the primary is, pulls that information over and executes it on the secondary database.
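
To give a rough idea of how the replica side gets wired up, Azure Database for MySQL exposes stored procedures for this. The sketch below assumes the documented mysql.az_replication_* procedures; the host names, credentials and binlog coordinates are placeholders you’d take from SHOW MASTER STATUS on your primary:

```python
# Rough sketch of configuring Data-in Replication, assuming the
# documented mysql.az_replication_* stored procedures on the Azure
# replica. Host names, credentials and binlog coordinates are
# placeholders; run SHOW MASTER STATUS on the primary to get the
# real log file name and position.
import pymysql  # pip install pymysql

replica = pymysql.connect(host='myserver.mysql.database.azure.com',
                          user='admin@myserver',
                          password='<replica-admin-password>')
with replica.cursor() as cur:
    # Point the Azure replica at the primary's binlog position
    # (last argument is the SSL CA; '' means no SSL, for brevity).
    cur.execute(
        "CALL mysql.az_replication_change_master("
        "'primary.contoso.com', 'repl_user', 'repl_password', 3306, "
        "'mysql-bin.000003', 154, '')"
    )
    # Start pulling and applying binlog events from the primary.
    cur.execute("CALL mysql.az_replication_start")
replica.commit()
```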

Use cases

The best use case for Data-in Replication is a hybrid data solution. With it, you can keep data synchronized between your on-premises servers and Azure Database for MySQL, giving you a cloud-based secondary for failover replication, disaster recovery and business continuity.

When you have an application that is part cloud-based and part local, this synchronization is useful for creating hybrid applications. It’s also appealing when you have an existing local database server but want to move the data to a region closer to your end users if they’re geographically distributed.

Another common use case is multi-cloud synchronization: for complex cloud solutions, you can use Data-in Replication to synchronize data between Azure Database for MySQL and other cloud providers, including virtual machines and database services hosted in those clouds. So, this is a great option for adding another layer of redundancy to those cloud deployments.

A few things to keep in mind:

    • The MySQL system database is not replicated, so if you make changes to accounts and permissions on the primary, they will not be replicated. These changes must be done manually on the replica server.
    • The server version must be 5.6 or later.
    • Primary and replica versions must match.
    • Each table in the database must have a primary key, for consistency (a quick way to check for this appears right after this list).
    • Global transaction identifiers are not supported.
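
Since the primary key requirement trips people up, here’s a quick way to look for tables without one before you enable replication (same pymysql approach as above; the connection details are placeholders):

```python
# Find tables on the primary that lack a primary key; these would
# break consistency under Data-in Replication. Connection details
# are placeholders.
import pymysql

conn = pymysql.connect(host='primary.contoso.com', user='repl_user',
                       password='repl_password')
QUERY = """
SELECT t.table_schema, t.table_name
FROM information_schema.tables AS t
LEFT JOIN information_schema.table_constraints AS c
       ON  c.table_schema = t.table_schema
       AND c.table_name   = t.table_name
       AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN
      ('mysql', 'information_schema', 'performance_schema', 'sys')
"""
with conn.cursor() as cur:
    cur.execute(QUERY)
    for schema, table in cur.fetchall():
        print(f'Missing primary key: {schema}.{table}')
```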

So, there are some things to consider, but it’s still a great option for scalability, redundancy and business continuity.

Overview of Azure Reserved VM Instances

We’re all looking for ways to save money within our Azure subscriptions and resources. How does a savings of up to 72% sound? Today I’d like to give you an overview of Azure Reserved Virtual Machine Instances, a payment option which gets you that savings off the standard pay-as-you-go rate by pre-committing to a 1- or 3-year term of virtual machine compute usage.

If you know you’re going to use Azure virtual machines for an extended period for your cloud workloads, then this is worth looking at. Just keep in mind that this only covers the virtual machine compute; networking, other software, Azure services and storage, as well as Windows and SQL Server licensing, are not covered by the reservation.

That said, people who have purchased on-prem licensing for their servers can use the Azure Hybrid Benefit, which allows you to bring your own on-prem Windows and SQL licenses to Azure. If you choose to go with Azure Reserved VM Instances, the cost is deducted against your Enterprise Agreement, or, on a pay-as-you-go plan, the credit card you use is billed according to what you’re using.

Purchasing Reserved Instances is instantaneous; you just go in and specify the machine type and the term (1 or 3 years). It will detect those machine types in your current subscriptions, or if you’re adding new machine types, it will apply the savings to those.

So, if you know you’re going to use a particular machine type for the next year, say for migration, you’ll experience a good savings by pre-committing up front. And the scope of the Reserved Instance can go across multiple subscriptions and apply the discount to each of them.

Gotchas

A couple things to note: first, when the term expires, it does not auto-renew and your discount ends. You can renew your contract and choose whatever hardware you need; you’re not stuck with the hardware you originally specified. And second, Reserved Instances cannot be used with Enterprise Dev/Test subscriptions or for virtual machines in Preview.

Overview of Azure Elastic Database Jobs Service

Today I’ll give an overview of Microsoft’s newly released (in preview) Elastic Database Jobs service. This is a fully hosted Azure service, whereas the previous iteration was a customer-hosted and managed version available for SQL DB and SQL DW within Azure.

It’s similar in capability to on-prem SQL Server Agent, but it can reach across multiple servers, subscriptions and regions, whereas SQL Agent is limited to the instance it runs on. This gives you a much wider range across all your different Azure services.

Other benefits and features:

  • Adds significant capability to enable automation and execution of T-SQL jobs against a group of databases, using PowerShell, REST or T-SQL APIs.
  • Can be used for a wide variety of maintenance tasks, such as rebuilding indexes, schema changes, collecting query results and performance monitoring. Think of a developer who’s managing many databases across multiple subscriptions, supporting multiple lines of business or web applications with the same database schema, and who wants to make a change to that schema.
  • The capability to maintain a large number of databases with similar operations, managing whichever databases you specify, which helps ensure an optimal customer experience. You also get maximum efficiency in maintaining your databases without having to set up specific jobs on each server: you can tap into them to make changes during off hours, scale up/down when you need to, and change a schema across all those databases through a simple interface.
  • Schedule administrative tasks that otherwise would have to be manually done.
  • Allows for small schema changes, credential management, performance data collection, or even telemetry collection if you want insight into what people are doing on the databases.
  • Build indexes off hours.
  • Collect query results from multiple databases for central performance management, so you can collect this info into one place, then render info into a portal like Power BI.

Basically, it reduces management and maintenance overhead with its ability to go across subscriptions. Normally, you’d have to run a job on a specific server; but now within Azure, where you’re running managed databases, you can run operations across those databases without having to set up separate jobs.
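
To give a feel for it, here’s a sketch of defining a job through the preview’s jobs.* stored procedures in the job database, driven from Python with pyodbc; the server, group, credential and job names are all invented for illustration:

```python
# Sketch of defining an Elastic Database Job, assuming the preview's
# jobs.* stored procedures in the job database. Server, group,
# credential and job names are invented for illustration.
import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=jobserver.database.windows.net;DATABASE=jobdb;'
    'UID=jobadmin;PWD=<password>', autocommit=True)
cur = conn.cursor()

# A target group fans the job out across servers and databases.
cur.execute("EXEC jobs.sp_add_target_group @target_group_name = 'CustomerDbs'")
cur.execute("""EXEC jobs.sp_add_target_group_member
    @target_group_name = 'CustomerDbs',
    @target_type = 'SqlServer',
    @refresh_credential_name = 'refreshcred',
    @server_name = 'customers.database.windows.net'""")

# One job, one T-SQL step, run against every database in the group.
cur.execute("EXEC jobs.sp_add_job @job_name = 'NightlyIndexMaintenance'")
cur.execute("""EXEC jobs.sp_add_jobstep
    @job_name = 'NightlyIndexMaintenance',
    @command = N'ALTER INDEX ALL ON dbo.Orders REBUILD',
    @credential_name = 'jobcred',
    @target_group_name = 'CustomerDbs'""")
cur.execute("EXEC jobs.sp_start_job @job_name = 'NightlyIndexMaintenance'")
```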

So, a cool feature – it’s now only in preview so it’s sure to grow and I’m excited about the direction.

 

Overview of Azure Operations Management Suite (OMS)

In this post I’d like to give an overview of what Azure Operations Management Suite is and what it can be used for. First, Operations Management Suite, or OMS, is a collection of management services designed for the Azure cloud. As new services are added to Azure, more capabilities are built into OMS to allow for integration.

OMS gives you one central place to collect from the many Azure services that need deeper insight and manageability, all in one portal, and lets you set up different groups and different ways of viewing your data. OMS can also be used with on-prem resources through the Windows and Linux agents, so you can collect logs or back up your servers or files to Azure, for example.

The key Operations Management Suite services are:

  • Log Analytics allows you to monitor and analyze the availability and performance of different resources, including physical and virtual machines, Azure Data Factory and other Azure services.
  • Proactive alerting for when an issue or problem in your environment is detected, so you can either take corrective action or have a preprogrammed corrective action.
  • Ability to automate manual processes and enforce configuration for physical and virtual machines, like automating clean-up operations you do on servers. You do this through runbooks, which are based on PowerShell scripts or PowerShell workflows, so you can programmatically do what you need to within OMS.
  • Integrated backup: the agent allows backing up at the server or file level, whatever you need for critical data, and you can run those restores whether the resources are on-prem or cloud-based.
  • Azure Site Recovery runs through OMS and helps you provide high availability for apps and servers that you’re running.
  • Orchestrate replication up into Azure. This can be done from physical servers, Hyper-V or VMware servers, running Windows or Linux.

It also provides management solutions: prepackaged sets of templates from Microsoft and/or partners that implement multiple OMS services at once. One example is the Update Management solution, which creates a log search, dashboard and alerting inside Log Analytics, while at the same time creating an automation runbook for installing updates on a server. It will tell you when updates are available and needed, and then let you automate the install of those updates.
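
To give you a taste of pulling data back out, Log Analytics exposes a REST query API that accepts a Kusto query. In this sketch the workspace ID and Azure AD bearer token are placeholders from your own environment:

```python
# Sketch of querying a Log Analytics workspace through its REST
# query API. The workspace ID and Azure AD bearer token are
# placeholders you'd supply from your own environment.
import requests

WORKSPACE_ID = '00000000-0000-0000-0000-000000000000'
TOKEN = '<azure-ad-bearer-token>'

# Last heartbeat per computer -- the kind of log search a management
# solution like Update Management builds on.
query = 'Heartbeat | summarize LastSeen = max(TimeGenerated) by Computer'

resp = requests.post(
    f'https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query',
    headers={'Authorization': f'Bearer {TOKEN}'},
    json={'query': query})
resp.raise_for_status()

# Results come back as tables of columns and rows.
for table in resp.json()['tables']:
    for row in table['rows']:
        print(row)
```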

There is a lot of power and capability that comes with the Operations Management Suite. It’s a great centralized management solution within Azure that is quick to configure and start using.

 

Business Continuity Strategies in Azure

Keeping businesses online and operational is a key concern, no matter the nature of your downtime. Most companies don’t focus on business continuity until it’s too late or have incomplete, untested barebones recovery plans. High Availability, Disaster Recovery and Backup are all critical to a complete business continuity solution. In a recent webinar, Senior Principal Architect Chris Seferlis discussed how leveraging Azure for disaster recovery and business continuity is the most effective way to ensure you’re protected.

If your business’s data is in the cloud, nothing is more pivotal than your cloud backup, recovery and migration procedures. Only 18% of decision makers feel fully prepared to recover their data center in the event of a site failure or disaster. The usual culprits are out-of-date recovery plans and limited backup and recovery testing.

Most disaster situations are caused by system failures, power failures, natural disasters and cyber-attacks. The challenges businesses face in disaster recovery are significant, including cost, complexity and reliability. To have a successful business continuity strategy, organizations must prioritize high availability, disaster recovery and data back-ups.

Disaster recovery is important; there is always a risk of failure with your data, whether from software bugs, hardware failure or human error. Two important factors to consider are the Recovery Time Objective (RTO), the targeted duration of time and service level within which a business process must be restored after a disaster, and the Recovery Point Objective (RPO), the maximum targeted period in which data might be lost from an IT service due to a major incident. Both RTO and RPO are business decisions.

Azure can protect against planned and unplanned events by distributing the placement of VMs across the infrastructure. Azure also helps with Disaster Recovery through consistent backup for Windows Azure VMs and file-system backup for Linux Azure VMs. Additionally, it provides efficient and reliable backups to the cloud with no infrastructure maintenance.

Watch the full webinar here to gain more insight into:

  • Azure Backup Server
  • VMware VM Backup
  • Azure Site Recovery
  • Disaster Recovery for Hyper-V
  • Azure Migration

Click here to view my slides from this presentation. If you’d like to learn more about business continuity using Azure or need help with any Azure project from discussions and planning to implementation, click the link below and talk to us today. We can help no matter where you are on your cloud journey.

Azure Blob Storage Lifecycle Management

When we talk about blob storage, we talk about the three different tiers, hot, cool or archive, for designating how important the data is and how accessible it needs to be. The challenge has been that once we picked a tier, that was pretty much the end of the story.

What we want is to have our data accessible when and where we need it, since it can take some time to pull from the cool and archive tiers, as well as being costlier to retrieve. Also, with the more expensive hot tier, data can sit there unnecessarily, and we need a way to move it out after it becomes static or stale.

Here’s some good news! Microsoft recently introduced the public preview of Blob Storage Lifecycle Management. This now makes it easier to manage and automate that movement of data by offering a rule-based policy which you can use to transition your data to the best access tier, as well as expire data at the end of its lifecycle.

This great new toolset gives you the capability and flexibility to define rules for transitioning blobs to cooler storage tiers. You can also delete blobs by defining how long a blob should live, define rules to be executed daily, or apply rules to storage containers or subsets of blobs, letting you transition certain blob containers and delete others that you specify, based on how you’re moving that data around.

So, you can set up a scenario where data hasn’t been accessed in 3 months and it’s set to be transitioned from hot storage to cool, but then it sits there for 6 more months. You then want to be able to move that data off to archive. These are settings you can change based on the last modification date of the file.

You can also delete blob snapshots that have become stale after a defined period of time. Maybe you set snapshots to delete after 120 days, or delete blobs that haven’t been accessed for several years (seven years being the magic number for audits and such).
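
Pulling that scenario together, the rule-based policy is just JSON; here’s roughly what it looks like, written as a Python dict, with the container prefix and day thresholds as placeholders:

```python
# Rough shape of a lifecycle management policy for the scenario
# above, written as a Python dict (the service takes it as JSON).
# The container prefix and day thresholds are placeholders.
import json

policy = {
    "rules": [{
        "name": "age-off-log-data",
        "enabled": True,
        "type": "Lifecycle",
        "definition": {
            "filters": {
                "blobTypes": ["blockBlob"],
                "prefixMatch": ["logs/"]
            },
            "actions": {
                "baseBlob": {
                    # ~3 months untouched: hot -> cool
                    "tierToCool": {"daysAfterModificationGreaterThan": 90},
                    # ~9 months total: cool -> archive
                    "tierToArchive": {"daysAfterModificationGreaterThan": 270},
                    # ~7 years: expire for good
                    "delete": {"daysAfterModificationGreaterThan": 2555}
                },
                "snapshot": {
                    # stale snapshots age out after 120 days
                    "delete": {"daysAfterCreationGreaterThan": 120}
                }
            }
        }
    }]
}

print(json.dumps(policy, indent=2))
```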

Microsoft is great at listening to what users have to say and to keep evolving and adding more capability to the technology. If you love data, Azure and Azure Blob Storage as much as I do, let me know by sharing this video.

 

Overview of HDInsight Kafka

Continuing with my HDInsight series, today I’ll be talking about Kafka. HDInsight Kafka will sound much like Storm, but as I get into the nuts and bolts you’ll see the differences. Kafka is an open-source, distributed streaming platform that can be used to build real-time data streaming pipelines and applications with message broker functionality, like a message queue.

Some specific Kafka improvements with HDInsight:

  • 99.9% uptime from HDInsight
  • You get 16-terabyte managed disks, which increases the scale and reduces the number of required nodes compared to traditional Kafka clusters, which would have a 1-terabyte limit.
  • Kafka takes a single-rack view, but Azure is designed in two dimensions for update and fault domains. So, Microsoft designed special tools to rebalance the partitions and replicas. Once you scale out, you’d repartition your data and then be able to take advantage of the additional nodes, and the same applies when you scale down.
  • Kafka allows you to change the number of worker nodes for scaling up/down, depending on the workload and this can be done through the portal or PowerShell or any automation tool within Azure.
  • Direct integration with Azure Log Analytics. This looks at virtual-machine-level information, like the disks and the network. The importance of this is that it allows you to roll that up into the Microsoft OMS suite for global log analytics. So, when you’re looking at all your resources in Azure through OMS, you can see things at a high level and also drill in for more details.
  • ZooKeeper manages the state of the cluster, which helps with concurrency, resiliency and low-latency transactions, as well as the orchestration of the data through the nodes and clusters.
  • Records are stored in topics; they’re produced by producers and consumed by consumers. The producers send records to Kafka brokers, and each worker node in the cluster is a broker. These brokers are what help the data move around inside the cluster (a short producer/consumer sketch follows this list).
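
Here’s the producer/consumer sketch I mentioned, using the open-source kafka-python client; the broker address and topic name are placeholders for your own HDInsight cluster’s worker nodes:

```python
# Minimal producer/consumer sketch with the kafka-python client
# (pip install kafka-python). The broker address and topic name are
# placeholders for your HDInsight cluster's worker nodes.
from kafka import KafkaProducer, KafkaConsumer

BROKERS = ['wn0-kafka.contoso.internal:9092']

# Producer: sends records to a topic; the brokers (worker nodes)
# take care of partitioning and replication.
producer = KafkaProducer(bootstrap_servers=BROKERS)
producer.send('clickstream', key=b'user-42', value=b'page=/home')
producer.flush()  # block until the batch is delivered

# Consumer: reads the same topic from the beginning, timing out
# after 5 seconds with no new records.
consumer = KafkaConsumer('clickstream',
                         bootstrap_servers=BROKERS,
                         auto_offset_reset='earliest',
                         consumer_timeout_ms=5000)
for record in consumer:
    print(record.key, record.value)
```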

Again, Kafka and Storm sound relatively similar; here are some major differences:

    • Storm was invented by Twitter; Kafka by LinkedIn. But both build on the Hadoop platform and are open source, so each company could build its own iterations.
    • Storm is meant more for real time message processing; Kafka is for distributed messaging processing.
    • Storm can take data from Kafka and other database systems and process the data; Kafka takes in streams from things like Facebook, Twitter and LinkedIn.
    • Kafka is a message broker; Storm’s primary use is stream processing.
    • In Storm there is no data storage; you can only stream data through it. Kafka stores the data on the file system. As streams are processed, Storm can work much faster, at a micro-batch processing level; Kafka works in small batches, larger than micro.
    • As far as dependency, Kafka requires Zookeeper for all the orchestration; Storm does not depend on anything externally.
    • Storm has a latency of milliseconds; with Kafka it depends on the source of the data, but it typically takes a bit less than 1-2 seconds. So, with Kafka you’re keeping the data local, processing it, then pushing it somewhere else; whereas with Storm, you’re processing the data in motion as you push it somewhere else.

Basically, two different ways to solve similar problems depending on the use case. It apparently worked better for LinkedIn to design it this way as opposed to the way that Twitter handles their data.