What is Azure Application Insights?

Monitoring your website is key to gaining insight into your customers and users, as well as keeping an eye on its performance. In today’s post I’d like to tell you about Azure Application Insights.

Application Insights is an application performance management service for web applications that lets you do all your website performance monitoring in Azure. It’s designed to help you ensure optimal performance and a best-in-class user experience for your website. It also has powerful analytics tools that help you diagnose issues and understand how people are actually using your web application.

You can use it with many web platforms, and although you’re sending information about your website to Azure, the website or application itself doesn’t have to be hosted in Azure. For those who work on DevOps processes, it helps you enable continuous improvement of your web application through connections to a bunch of development tools.

How does it work?

Here’s how Application Insights works: you insert a small instrumentation package into your application and set up an Application Insights resource in Azure, which is where the collected data is sent. The web app is monitored and sends telemetry data to the Application Insights portal (the portal itself is in Azure but, as I mentioned, the application can be pretty much anywhere).

Along with the telemetry from the web app, you can pull in data from the host environment, allowing you to look at performance counters, Azure diagnostics and container logs. This gives you a full view of what’s going on inside the application, as well as in the environment where it lives.

You can set up periodic web tests that send requests to the web server to ensure it’s responding properly and that the website is working the way it’s supposed to. Implementation is very straightforward: a light set of instrumentation code whose tracking calls are non-blocking, batched together and sent on separate threads.
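
As an example, here’s a minimal sketch of what that instrumentation looks like from Python using the applicationinsights package; the instrumentation key and the do_checkout function are placeholders, and .NET, Java, Node.js and other platforms have equivalent SDKs:

```python
from applicationinsights import TelemetryClient

def do_checkout():
    """Stand-in for real application logic."""
    pass

# The instrumentation key comes from your Application Insights resource in Azure.
tc = TelemetryClient('<your-instrumentation-key>')

# Track a custom event with a property you can filter on later.
tc.track_event('HomePageVisited', {'source': 'organic'})

try:
    do_checkout()
except Exception:
    tc.track_exception()  # captures the active exception and its stack trace

# Tracking calls are non-blocking; telemetry is batched and sent on a
# background thread, so flush before the process exits.
tc.flush()
```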

Some of the things you can track or collect are:

  • What are the most popular webpages in your application, at what time of day and where is that traffic coming from?
  • Dependency rates, response times and failure rates, to find out whether an external service is causing performance issues in your app; for instance, a user may be coming through a portal to reach your application, and that portal may be introducing response-time issues.
  • Exceptions from both the server and the browser, as well as page views and load performance from the end users’ side.
  • Session info – who, what, when, where.
  • Performance and host diagnostics – giving you a complete picture of what’s happening in your application.
  • Trace logs, correlated with requests, to help you get deeper insight into the data and dig into the diagnostics to improve performance.

It also gives you flexibility, so you can write custom snippets of code to collect other pieces of data that aren’t part of the standard telemetry. And all your reports can be viewed through the Azure suite of reporting tools, such as Power BI, to get visualizations and fine-grained analytical info about your application.
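
For instance, here’s a hedged sketch of custom telemetry with that same Python SDK; the metric and event names are made up for illustration:

```python
from applicationinsights import TelemetryClient

tc = TelemetryClient('<your-instrumentation-key>')

# Custom metric: something the standard telemetry doesn't collect,
# e.g. the size of a user's shopping cart (hypothetical metric name).
tc.track_metric('cart_item_count', 7)

# Custom event with properties you can slice on later in Analytics.
tc.track_event('CouponApplied', {'coupon_code': 'SPRING18', 'region': 'us-east'})

tc.flush()
```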

Application Insights is an incredibly useful tool for anyone who has an application or website and wants to track and manage all the info that’s put out there – who’s viewing what, what’s the most popular, etc.

Azure SQL Database Reserved Capacity

Last week I posted about Azure Reserved VM Instances, where you can save some money in Azure. Another similar way to save is with Azure SQL Database Reserved Capacity. With this you can save up to 33% compared to license-included pricing by pre-buying SQL Database vCores for a 1- or 3-year term.

This can be applied to a single subscription or shared across your enrollments, so you can control how many subscriptions can use the benefit, as well as how the reservation is applied to the specific subscriptions you choose.

Scoping the reservation to a single subscription applies it to the SQL Database resource(s) within that subscription. A reservation with a shared scope can be shared across subscriptions in the enrollment, and there’s some flexibility involved, such as with Managed Instances, where you can scale up and down.

Some other points about SQL Database Reserved Capacity I’d like to share:

  • It provides vCores with the size flexibility you need. You can scale up or down without impacting your reserved capacity pricing, as long as you stay within the same performance tier and the same region.
  • You can temporarily move your hot databases between pools and single databases as part of your normal operations; again, within the same region and performance tier without losing the reserved capacity benefit.
  • You can keep an unapplied buffer in your reservation, so you can effectively manage performance spikes without exceeding your budget. Just another way to keep an eye on your Azure spend.
  • You can stack savings: with Azure Hybrid Benefit licensing, you can save up to an additional 55%, bringing you to a total savings of over 80% by stacking these benefits.
  • With the Azure Hybrid Benefit for Enterprise Edition, customers with Software Assurance can use four cores in the cloud for every one core they’ve purchased.

A couple things to note:

    • It cannot be applied to MSDN or other non-pay-as-you-go subscriptions; basically, it applies to Enterprise Agreement and pay-as-you-go subscriptions.
    • It currently applies only to single databases and elastic pools. Managed Instances are still in preview; when they reach GA (by the end of 2018 as of this post), reserved capacity will cover them as well.

For questions about how this licensing works, contact your Microsoft rep.

New Options for SQL 2008 End of Support

If you’re using SQL Server 2008 or 2008 R2, you need to be aware that extended support for both ends on July 9, 2019. That means the end of regular security updates, which leads to more vulnerabilities; the software won’t be patched, and you’ll face out-of-compliance risks. With that in mind, here are some new options for SQL Server 2008 end of support:

The best option would be a migration or an upgrade, but Microsoft understands this can be easier said than done when you have applications that need to be upgraded and validated, so they have some options in place to help people out.

That being said, upgrading provides better performance, efficiency, security features and updates, as well as new technology features and capabilities across the whole stack of SQL products (SSIS, SSRS, SSAS).

Other Options

Here are some options that Microsoft is offering to help with the end of support of 2008/2008R2:

    • First, they’re going to extend security updates, available for free in Azure, for 2008/2008 R2 for three more years. So if you simply move your workload to an IaaS VM in Azure, you’ll be supported without requiring application changes. You’ll have to pay the virtual machine costs, but it’s still a good deal to get you started.
    • You can migrate your workloads to Managed Instances, which will be in GA by the end of 2018 and offer near-complete compatibility with on-premises SQL Server, so you can start the transition up into Azure.
    • You can take advantage of the Azure Hybrid Benefit licensing model to save money on licensing when you migrate. With this you can save up to 55% on some of your PaaS SQL Server costs, but only if you have Enterprise Edition and Software Assurance.
    • For on-premises servers that need more time to upgrade, you’ll be able to purchase extended security service plans covering up to three years past the July 2019 date. So if you’re struggling to get an application upgraded and validated, they’ll extend support for a fee. Again, this is for customers with Software Assurance or subscription licenses under an Enterprise Agreement; plans can be purchased annually, and only for the servers that need the updates.
    • Extended security updates will also be available for purchase as they get closer to the end of support, for both SQL Server and Windows Server.

Again, the first choice is to upgrade or migrate those databases into Azure; but there are challenges in doing so, and if that doesn’t work for you, there are some great ways to extend your support.

Replicate Data Into Azure Database for MySQL

In today’s post I’ll talk about replicating data into Azure Database for MySQL. Data-in Replication allows you to synchronize data from a MySQL server running on-prem, in virtual machines, or in database services hosted by other cloud providers into Azure Database for MySQL.

Data-in Replication is based on MySQL’s native binary log (binlog) file position-based replication. So this is the same as running binlog replication on-prem for an enterprise-class database service.

The information in the binlog is stored in different formats according to the database changes being recorded. Replicas are configured to read the binary log from the master and to execute the events in the binlog against the replica’s local database. You write the log on the primary server, and then the replica, knowing where the primary is, pulls that information over and executes it on the secondary database.
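
To make that concrete, here’s a rough sketch in Python with the mysql-connector-python package: read the current binlog coordinates on the primary, then point the Azure replica at them. The mysql.az_replication_* stored procedures are the ones documented for Data-in Replication as best I recall, and the hosts and credentials are placeholders, so verify the exact parameters against the current docs:

```python
import mysql.connector

# On the primary (on-prem or other cloud): capture binlog coordinates.
primary = mysql.connector.connect(
    host='onprem-mysql.example.com', user='repl_admin', password='...')
cur = primary.cursor()
cur.execute('SHOW MASTER STATUS')
log_file, log_pos = cur.fetchone()[:2]

# On the Azure Database for MySQL replica: configure and start replication.
replica = mysql.connector.connect(
    host='myserver.mysql.database.azure.com', user='admin@myserver',
    password='...')
rcur = replica.cursor()
rcur.execute(
    "CALL mysql.az_replication_change_master("
    "'onprem-mysql.example.com', 'repl_user', 'repl_password', 3306, "
    "%s, %s, '')",
    (log_file, log_pos))
rcur.execute('CALL mysql.az_replication_start')
```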

Use cases

The best use case for Data-in Replication is a hybrid data solution. With it, you can keep data synchronized between your on-premises servers and Azure Database for MySQL, giving you a cloud-based secondary for failover replication, disaster recovery and business continuity.

When you want an application that is part cloud-based and part local, this synchronization is useful for creating those hybrid applications. It’s also appealing when you have an existing local database server but want to move data to a region closer to your end users if they’re geo-distributed.

Another common use case is multi-cloud synchronization: for complex cloud solutions, you can use Data-in Replication to synchronize data between Azure Database for MySQL and other cloud providers, including virtual machines and database servers hosted there. So this is a great option for adding another layer of redundancy to those cloud deployments.

A few things to keep in mind:

    • The MySQL system database is not replicated, so if you make changes to accounts and permissions on the primary, they will not be replicated. These changes must be done manually on the replica server.
    • The server version must be 5.6 or later.
    • Primary and replica versions must match.
    • Each table in the database must have a primary key for consistency (the query sketched below can help you find tables that don’t).
    • Global transaction identifiers are not supported.
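
To check for tables that would violate the primary key requirement, here’s a sketch of an information_schema query run from Python (server and credentials are placeholders):

```python
import mysql.connector

# Find base tables outside the system schemas that have no primary key.
QUERY = """
SELECT tab.table_schema, tab.table_name
FROM information_schema.tables AS tab
LEFT JOIN information_schema.table_constraints AS tco
  ON tco.table_schema = tab.table_schema
 AND tco.table_name = tab.table_name
 AND tco.constraint_type = 'PRIMARY KEY'
WHERE tab.table_type = 'BASE TABLE'
  AND tab.table_schema NOT IN
      ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND tco.constraint_name IS NULL
"""

conn = mysql.connector.connect(
    host='onprem-mysql.example.com', user='repl_admin', password='...')
cur = conn.cursor()
cur.execute(QUERY)
for schema, table in cur.fetchall():
    print(f'Missing primary key: {schema}.{table}')
```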

So, some things to consider, but it’s still a great option for scalability, redundancy and business continuity.

Overview of Azure Reserved VM Instances

We’re all looking for ways to save money within our Azure subscriptions and resources. How does a savings of up to 72% sound? Today I’d like to give you an overview of Azure Reserved Virtual Machine Instances, a payment option that gets you that savings off the standard pay-as-you-go rate by pre-committing to a 1- or 3-year term of virtual machine compute usage.

If you know you’re going to use Azure virtual machines for an extended period for your cloud workloads, then this is worth looking at. Just keep in mind that this covers only the virtual machine compute; networking, storage, other software and Azure services, as well as Windows and SQL Server licensing, are not covered by the reservation.

That said, if you’ve purchased on-prem licensing for your servers, the Azure Hybrid Benefit allows you to bring your own on-prem Windows and SQL Server licenses to Azure. Whether you’re on an Enterprise Agreement or a pay-as-you-go plan, if you choose Azure Reserved VM Instances, the cost is either reduced against your Enterprise Agreement or billed, according to what you use, to the credit card on your pay-as-you-go plan.

Purchasing your Reserved Instances is instantaneous; you just go in and specify your machine type and the term (1 or 3 years). Azure will detect those machine types in your current subscriptions, and if you add new machines of that type, it will apply the savings to them as well.

So, if you know you’re going to use a particular machine type for the next year, say for a migration project, you’ll see good savings by pre-committing up front. And the scope of a Reserved Instance can span multiple subscriptions, applying the discount to each of them.

Gotchas

A couple of things to note: first, when the term expires, it does not auto-renew and your discount ends. You can renew your contract and choose the hardware you need; you’re not stuck using the same hardware you originally specified. And second, Reserved Instances cannot be used for Enterprise Dev/Test subscriptions or virtual machines in preview.

Overview of Azure Elastic Database Jobs Service

Today I’ll give an overview of Microsoft’s newly released (in preview) Elastic Database Jobs service. This is a fully hosted Azure service, whereas the previous iteration was a custom-hosted, customer-managed version available for SQL DB and SQL DW within Azure.

It’s similar in capability to on-prem SQL Server Agent, but it can reach across multiple servers, subscriptions and regions, whereas SQL Agent is limited to the instance hosting the databases you’re managing. This gives you a much wider range across all your different Azure services.

Other benefits and features:

  • Significant new capability for automating and executing T-SQL jobs, using PowerShell, the REST API or the T-SQL API, against a group of databases (see the sketch after this list).
  • Can be used for a wide variety of maintenance tasks, such as rebuilding indexes, schema changes, collecting query results and performance monitoring. Think of a developer managing many databases across multiple subscriptions, supporting multiple lines of business or web applications with the same database schema, who wants to make a change to all of them.
  • Lets you maintain a large number of databases with similar operations, managing whichever databases you specify, to ensure an optimal customer experience. You get maximum efficiency in maintaining your databases without setting up specific jobs on each server, you can make changes during off hours and scale up or down when you need to, and you can change the schema across all those databases through a simple interface.
  • Schedule administrative tasks that otherwise would have to be manually done.
  • Allows for small schema changes, credential management, performance data collection, or even telemetry collection if you want insight into what people are doing in the databases.
  • Rebuild indexes during off hours.
  • Collect query results from multiple databases for central performance management, so you can collect this info into one place, then render info into a portal like Power BI.
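
Here’s a hedged sketch of the T-SQL API, driven from Python with pyodbc against the job database. The jobs.sp_* stored procedures are the documented ones as best I recall, and the server, credential and target names are placeholders, so check the current docs for the exact parameters:

```python
import pyodbc

# Connect to the Elastic Jobs *job database* (placeholder server/database).
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=jobserver.database.windows.net;DATABASE=jobdb;'
    'UID=jobadmin;PWD=...')
conn.autocommit = True
cur = conn.cursor()

# Define a target group covering every database on a server.
cur.execute("EXEC jobs.sp_add_target_group @target_group_name = N'AllAppDbs'")
cur.execute("""
    EXEC jobs.sp_add_target_group_member
         @target_group_name = N'AllAppDbs',
         @target_type = N'SqlServer',
         @refresh_credential_name = N'refreshcred',
         @server_name = N'appserver.database.windows.net'
""")

# Create a job with one T-SQL step, e.g. nightly maintenance, and start it.
cur.execute("EXEC jobs.sp_add_job @job_name = N'NightlyMaintenance'")
cur.execute("""
    EXEC jobs.sp_add_jobstep
         @job_name = N'NightlyMaintenance',
         @credential_name = N'jobcred',
         @target_group_name = N'AllAppDbs',
         @command = N'EXEC sp_updatestats'
""")
cur.execute("EXEC jobs.sp_start_job @job_name = N'NightlyMaintenance'")
```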

Basically, it reduces management and maintenance overhead with its ability to go across subscriptions. Normally you’d have to run a job on a specific server; but now, within Azure, where you’re running managed databases, you can run operations across those databases without having to set up separate jobs.

So, a cool feature; it’s only in preview now, so it’s sure to grow, and I’m excited about the direction.


Overview of Azure Operations Management Suite (OMS)

In this post I’d like to give an overview of what Azure Operations Management Suite is and what it can be used for. First, Operations Management Suite, or OMS, is a collection of management services designed for the Azure cloud. As new services are added to Azure, more capabilities are built into OMS to allow for integration.

OMS gives you one central place to manage the many Azure services that need deeper insight and manageability, all from one portal, and lets you set up different groups and different ways of viewing your data. OMS can also be used with on-prem resources via the Windows and Linux agents, so you can collect logs or back up your servers or files to Azure, for example.

The key Operations Management Suite services are:

  • Log Analytics, which allows you to monitor and analyze the availability and performance of different resources, including physical and virtual machines, Azure Data Factory and other Azure services.
  • Proactive alerting when an issue or problem is detected in your environment, so you can take corrective action or trigger a preprogrammed response.
  • The ability to automate manual processes and enforce configuration for physical and virtual machines, like automating the clean-up operations you do on servers. You do this through runbooks, based on PowerShell scripts or PowerShell Workflows (Python runbooks are supported too; see the sketch after this list), which let you programmatically do what you need to within OMS.
  • Integrated backup: the agent allows for backing up at the service or file level, whatever you need for your critical data, and for running restores, whether the resources are on-prem or cloud-based.
  • Azure Site Recovery runs through OMS and helps you provide high availability for apps and servers that you’re running.
  • Orchestrated replication up into Azure, from physical servers, Hyper-V or VMware servers, running Windows or Linux.
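
Runbooks are usually PowerShell, but Azure Automation supports Python runbooks as well. Here’s a minimal, hedged sketch of the kind of clean-up task mentioned above; the container name and Automation variable are hypothetical, and it assumes the azure-storage-blob package is available to the runbook:

```python
from datetime import datetime, timedelta, timezone

import automationassets  # module provided inside the Azure Automation sandbox
from azure.storage.blob import BlobServiceClient

# Connection string kept as an Automation variable (name is hypothetical).
conn_str = automationassets.get_automation_variable('storage_connection_string')
service = BlobServiceClient.from_connection_string(conn_str)
container = service.get_container_client('logs')  # hypothetical container

# Clean-up: delete log blobs older than 30 days.
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
for blob in container.list_blobs():
    if blob.last_modified < cutoff:
        print('Deleting ' + blob.name)
        container.delete_blob(blob.name)
```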

Mainly, OMS delivers its functionality through management solutions: prepackaged sets of templates, provided by Microsoft and/or partners, that implement multiple OMS services at one time. One example is the Update Management solution, which creates a log search, dashboard and alerting inside Log Analytics, and at the same time creates an automation runbook for installing updates on the server. This tells you when updates are available and needed, then lets you automate installing them.

There is a lot of power and capability that comes with the Operations Management Suite. It’s a great centralized management solution within Azure that is quick to configure and start using.


Key Terminology in Azure Databricks

In a previous post, I talked about Azure Databricks and what it is. In review, Azure Databricks is a managed platform for running Apache Spark jobs. As it’s managed, that means you don’t have to worry about managing the cluster or running performance maintenance to use Spark, like you would if you were going to deploy a full HDInsight Spark cluster.

Databricks provides a simple-to-operate user interface for data scientists and analysts building models, as well as a powerful API that allows for automation. You can also use role-based access control with Active Directory integration for more granular user management. And unlike a full HDInsight cluster, you don’t have to tear anything down to stop running Spark jobs: you can pause (or start) your resources on demand and scale up or out as needed.

In this post, I’ll run through some key Databricks terms to give you an overview of the different points you’ll use when running Databricks jobs:

    • Workspace – This is the central place that allows you to organize all the work being done. Think of it as a folder structure where you can save the Notebooks and Libraries you use to operate on and manipulate data, and then share them securely with other users. The workspace is not meant for storing data; data should be kept in data storage.
    • Notebooks – A set of any number of cells that let you execute commands in a programming language such as Scala, Python, R or SQL; you specify the language when you open a cell at the top of the Notebook (see the sketch after this list). From a Notebook you can also create a dashboard, so the output of the code can be shared rather than the code itself, and Notebooks can be scheduled as jobs for running pipelines, updating models or refreshing dashboards.
    • Libraries – Packages or modules that provide additional functionality for developing various models for different types of analysis, similar to a traditional IDE such as Visual Studio, where you have libraries you can plug in and add.
    • Tables – Where the structured data that you and your team use for analysis is stored. Tables can live in cloud storage or in the cluster being used, or they can be cached in memory for faster processing of the data.
    • Clusters – Essentially a group of compute resources used for operations like executing code from Notebooks or Libraries. You can also pull in raw data from cloud sources, structured/semi-structured data, or the data in the Tables mentioned above. Clusters can be controlled via access policies using Active Directory integration.
    • Jobs – A tool used to schedule execution within a cluster. Jobs can be Python scripts or JAR assemblies, and you can kick them off with manual triggers or through a REST API.
    • Apps – Think of these as the third-party components that can tap into your Databricks cluster. A good scenario is visualizing the data with apps like Tableau or Power BI. You can consume the modules that you built and the output of the Notebooks or script that you ran to visualize that data.
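
To make a few of these terms concrete, here’s a hedged sketch of a Notebook cell in Python (PySpark). The table and column names are made up for illustration, and spark and display come predefined in a Databricks notebook:

```python
from pyspark.sql import functions as F

# 'spark' and 'display' are predefined in a Databricks notebook.
sales = spark.read.table('sales')  # read a registered Table into a DataFrame

# A simple aggregation: revenue by region, computed on the cluster.
revenue = (sales
           .groupBy('region')
           .agg(F.sum('amount').alias('total_revenue'))
           .orderBy(F.desc('total_revenue')))

# display() renders a sortable grid or chart in the notebook; you can pin the
# output to a dashboard to share results without sharing the code itself.
display(revenue)
```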

Azure Data Factory V2 in GA and New Features

Today I’m excited to talk about the general availability of Azure Data Factory V2, as well as some new features that have been added over the last couple months. If you don’t know, Azure Data Factory Version 2 added some new features that V1 didn’t have.

With ADF V2 you get a browser-based interface using drag and drop technology; V1 was primarily done in the Visual Studio IDE. It also added triggers for scheduling, so you can schedule your jobs when required and in additional ways (which I’ll discuss further in a bit).

Some other features of ADF V2 that came out as it became generally available:

  • Lift-and-shift for your SSIS packages: if you have SSIS packages running locally, you can now lift and shift them into compute with the Integration Runtime service in Data Factory.
  • The Integration Runtime also allows for cloud-to-cloud, cloud-to-prem and prem-to-prem movement, and some third-party tools are supported within that as well.
  • Control flow activities like branching, looping, conditional execution and parameterization.
  • Integration with HDInsight Spark and Databricks for big data workloads and data science.

Some features that have come out more recently:

  • Integration with Key Vault, which gives you the ability to encrypt keys and small secrets like passwords. You can create a linked service to a Key Vault and reference the passwords you need, rather than leaving them sitting in the open in source code, text files or a PowerShell script. So you can use Key Vault references to run workloads without exposing those passwords.
  • The ability to monitor Data Factory using OMS, Microsoft’s cloud-based management solution that helps you manage and protect your on-prem and cloud infrastructure. This is quick and easy to set up and lets you reach into different types of applications in Azure, giving you additional visibility and control for things like log analytics, automation, data protection and recovery, as well as security and compliance.
  • You can monitor the overall health of your data factories, drill into the details and troubleshoot if you’re having problems (see the sketch after this list for the programmatic angle). This is all enabled through Log Analytics: you turn on diagnostics for your Data Factory, hook it into your OMS suite, and monitor everything from that central management point.
  • Event-based triggering in Data Factory. This enables an event-driven architecture, a common data integration pattern: instead of scheduling a timed trigger, you can watch for a blob being created or deleted and trigger your pipeline based on that event.
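
On the programmatic side, here’s a hedged sketch that kicks off a pipeline run and checks its status from Python; the resource names are placeholders, and it assumes the azure-identity and azure-mgmt-datafactory packages:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder identifiers for illustration.
SUBSCRIPTION_ID = '<subscription-id>'
RESOURCE_GROUP = 'my-rg'
FACTORY_NAME = 'my-data-factory'

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a pipeline run (pipeline name and parameter are hypothetical).
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, 'CopySalesData',
    parameters={'inputPath': 'raw/2018/07'})

# Check on the run; in practice you'd poll until it completes.
status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
print(status.status)  # e.g. 'InProgress' or 'Succeeded'
```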

Azure Data Factory V2 is a neat technology and I’m interested to see where it goes as I’m sure that more features will be coming. If you have questions about Azure Data Factory or any of the new Azure resources, we are the people to talk with. We’re doing a lot of work with our clients using Azure tools and we’d love to talk to you about how we can get you using Azure in your organization.

Business Continuity Strategies in Azure

Keeping businesses online and operational is a key concern, no matter the nature of your downtime. Most companies don’t focus on business continuity until it’s too late, or they have incomplete, untested, barebones recovery plans. High Availability, Disaster Recovery and Backup are all critical to a complete business continuity solution. In a recent webinar, Senior Principal Architect Chris Seferlis discussed how leveraging Azure for disaster recovery and business continuity is the most effective way to ensure you’re protected.

If your business’s data is in the cloud, nothing is more pivotal than your cloud backup, recovery and migration procedures. Only 18% of decision makers feel fully prepared to recover their data center in the event of a site failure or disaster. The usual issues are out-of-date recovery plans and limited backup and recovery testing.

Most disaster situations are caused by system failures, power failures, natural disasters and cyber-attacks. The challenges businesses face in disaster recovery are significant, including cost, complexity and reliability. To have a successful business continuity strategy, organizations must prioritize high availability, disaster recovery and data back-ups.

Disaster recovery is important; there is always a risk of failure with your data, including software bugs, hardware failure and human error. Important factors to consider are the Recovery Time Objective (RTO), the targeted duration of time and service level within which a business process must be restored after a disaster, and the Recovery Point Objective (RPO), the maximum targeted period of data that might be lost from an IT service due to a major incident. Both RTO and RPO are business decisions.

Azure can protect against planned and unplanned events by distributing the placement of VMs across the infrastructure. Azure also helps with Disaster Recovery through application-consistent backup for Windows Azure VMs and file-consistent backup for Linux Azure VMs. Additionally, it provides efficient and reliable backups to the cloud with no infrastructure maintenance.

Watch the full webinar here to gain more insight into:

  • Azure Backup Server
  • VMWare VM Backup
  • Azure Site Recovery
  • Disaster Recovery for Hyper-V
  • Azure Migration

Click here to view my slides from this presentation. If you’d like to learn more about business continuity using Azure or need help with any Azure project from discussions and planning to implementation, click the link below and talk to us today. We can help no matter where you are on your cloud journey.