What is Azure Automation?

So, what do you know about Azure Automation? In this post, I’ll fill you in on this cool, cloud-based automation service that lets you configure process automation, update management and system configuration across both your on-premises resources and your Azure cloud-based resources.

Azure Automation gives you complete control over the deployment, operation and decommissioning of workloads and resources in your hybrid environment, so you can have a single pane of glass for managing all your resources through automation.

Some features I’d like to point out are:

  • It allows you to automate those mundane, error-prone activities that you perform as part of your system configuration and maintenance.
  • You can create runbooks in PowerShell or Python that help you reduce the chance of misconfiguration errors. They also help lower the operational cost of maintaining those systems, since you can script the work to run when you need it instead of doing it manually.
  • Runbooks can be developed for on-premises or Azure resources, and they support webhooks that let you trigger automation from things such as ITSM, DevOps and monitoring systems. So, you can run them remotely and trigger them from wherever you need to (a minimal runbook sketch follows this list).
  • On the configuration management side, you can build desired state configurations for your enterprise environment. These help you set a baseline for how your systems should operate, identify variances from the initial system configuration and alert you to any anomalies that could be problematic.
  • It has a rich reporting back end and alerting interface for full visibility into what’s happening in your Windows and Linux systems – on-premises and in Azure.
  • It gives you update management (for Windows and Linux) so administrators can define how updates are applied, specify which updates should or should not be deployed to systems, and report on successful and unsuccessful deployments, all through PowerShell or Python scripts.
  • It has shared capabilities, so when you’re building multiple runbooks you can reuse the same shared resources over and over – role-based access control, variables, credentials, certificates, connections, schedules, PowerShell modules and access to source control. You can check runbooks in and out of source control like any other code-based project.
  • Lastly, and one of the coolest features in my opinion: since these runbooks are essentially templates and everyone faces similar challenges, there’s a community gallery where you can download runbooks others have created or upload ones you’ve created to share. With a few basic configuration tweaks and a review to make sure they’re secure, this is a great way to speed things up by finding an existing script, cleaning it up and deploying it in your systems and environment.
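
To make the runbook idea concrete, here’s a minimal sketch of what one might look like. This is a hypothetical example, assuming an Automation account with the Az modules imported and the standard Run As connection; the resource group and tag names are placeholders.

    # Hypothetical runbook: stop every VM in a resource group that carries an 'AutoShutdown' tag.
    param(
        [Parameter(Mandatory = $true)]
        [string] $ResourceGroupName
    )

    # Authenticate with the Automation account's Run As connection (assumed to exist).
    $conn = Get-AutomationConnection -Name 'AzureRunAsConnection'
    Connect-AzAccount -ServicePrincipal `
        -Tenant $conn.TenantId `
        -ApplicationId $conn.ApplicationId `
        -CertificateThumbprint $conn.CertificateThumbprint | Out-Null

    # Find tagged VMs and stop them, logging each one to the job output.
    Get-AzVM -ResourceGroupName $ResourceGroupName |
        Where-Object { $_.Tags -and $_.Tags['AutoShutdown'] -eq 'true' } |
        ForEach-Object {
            Write-Output "Stopping $($_.Name)..."
            Stop-AzVM -ResourceGroupName $ResourceGroupName -Name $_.Name -Force
        }

Once published, a runbook like this can be put on a schedule or wired up to a webhook so an ITSM or monitoring system can kick it off remotely.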

So, there’s a lot you can do with this service and I think it’s worth checking out as it can make your maintenance and management much simpler.

Shell Chooses Azure Platform for AI

Artificial Intelligence (AI) is making its way into many industries today, helping to solve business problems and improve efficiency. In this post, I’d like to share an interesting story about Shell choosing Azure for their AI platform. Shell Oil Company chose C3 IoT for their IoT device management and Azure for their predictive analytics.

Let’s look at how Shell is using this technology:

  • The operation required to fix a drill or piece of equipment in the field is much more costly when the failure is unexpected. Shell can use AI to predict when maintenance is required on compressors, valves and other equipment used for oil drilling, which helps reduce unplanned downtime and repair effort. If they can keep up with maintenance before equipment fails, they can plan the downtime and do so at much less cost.
  • They’ll use AI to help steer the drill bits through shale to find the best quality deposits.
  • Failures of large equipment, such as drilling equipment, can cause significant collateral damage and danger. This technology will improve the safety of employees and customers by helping to reduce unexpected failures.
  • AI-enabled drills will help chart a course for the well itself as it’s being drilled, while providing constant data from the drill bits on what type of material is being drilled through. The benefits here are two-fold: they get data on quality deposits and reduce the wear and tear on the drill. If an IoT device on the drill detects a harder material, they’ll know to drill in a different area or figure out the best path to reduce the wear and tear.
  • It will free up the geologists and engineers to manage more drills at one time, making them more efficient, as well as more responsive to problems as they arise while drilling.

As with everything in Azure, this is a highly scalable platform that will allow Shell to grow with what is required, plus have the flexibility to take on new workloads. With IoT and AI, these workloads are easily scaled using Azure as a platform and all the services available with it.

I wanted to share this interesting use case about Shell because it really displays the capabilities of the Azure Platform to solve the mundane and enable the unthinkable.

What is Azure Data Box and Data Box Disk?

Are you looking to move large amounts of data into Azure? How does doing it for free, with an easier process, sound? Today I’m here to tell you how to do just that with the Azure Data Box.

Picture this: you have a ton of data, let’s say 50 terabytes on-prem, and you need to get that into Azure because you’re going to start doing incremental backups of a SQL database, for instance. You have two options to get this done.

The first option is to move that data manually, which means you have to chunk it up, push it into blob storage using AzCopy or a similar Azure data tool, then extract it and continue with the process. Sounds pretty painful, right?
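
For a sense of what the manual route looks like, here’s a hedged sketch using the Az.Storage PowerShell cmdlets; the storage account, container and local path are placeholders, and at 50 terabytes you’d be running something like this (or AzCopy) for a very long time.

    # Hypothetical sketch of the manual route: push a local folder into blob storage.
    $ctx = New-AzStorageContext -StorageAccountName 'mystorageacct' `
                                -StorageAccountKey '<account-key>'

    # Make sure the target container exists.
    New-AzStorageContainer -Name 'sqlbackups' -Context $ctx -ErrorAction SilentlyContinue

    # Upload every file under the local folder, preserving the relative path as the blob name.
    Get-ChildItem -Path 'D:\Backups' -Recurse -File | ForEach-Object {
        $blobName = $_.FullName.Substring('D:\Backups\'.Length) -replace '\\', '/'
        Set-AzStorageBlobContent -File $_.FullName `
                                 -Container 'sqlbackups' `
                                 -Blob $blobName `
                                 -Context $ctx -Force
    }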

Your second option is to use Azure Data Box which allows you to move large chunks of data up into Azure. Here’s how simple it is:

  • You order the Data Box through Azure (currently available in the US and EU)
  • Once received, you connect it to your environment however you plan to move that data
  • It uses standard protocols like SMB and CIFS (a quick copy sketch follows this list)
  • You copy the data you want to move, ship the Data Box back to Microsoft, and they upload the data into your storage container(s)
  • Once the data is uploaded, they securely erase the Data Box
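
Here’s a rough idea of what the copy step might look like from a Windows machine; the device IP, share name and credentials are placeholders you’d pull from the Data Box’s local management UI.

    # Hypothetical sketch: map one of the Data Box SMB shares and copy data onto it.
    New-SmbMapping -LocalPath 'Z:' `
                   -RemotePath '\\10.0.0.50\databoxshare' `
                   -UserName 'databoxuser' `
                   -Password '<share-password>'

    # Robocopy restarts cleanly and copies with multiple threads, which helps with large data sets.
    robocopy 'D:\Backups' 'Z:\' /E /MT:32 /R:2 /W:5 /LOG:C:\Temp\databox-copy.log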

With the Data Box you get:

  • 256-bit encryption
  • A super tough, hardened box that can withstand drops or water, etc.
  • Data can be pushed into Azure Blob storage
  • You can copy data to up to 10 storage accounts
  • There are two 1 gigabit/second and two 10 gigabit/second connections to allow quick movement of data off your network onto the box

In addition, Microsoft has recently announced the Data Box Disk, a small 8 terabyte disk that you can order in sets of up to five.

With Data Box Disk you get:

  • 35 terabytes of usable capacity per order
  • Supports Azure Blobs
  • A USB/SATA II and III interface
  • Uses 128-bit encryption
  • Like Data Box, it’s a simple process: connect it, unlock it, copy the data onto the disk and send it back, and the data is copied into a single storage account for you

Here comes the best part—while Azure Data Box and Data Box Disk are in Preview, this is a free service. Yes, you heard it right, Microsoft will send you the Data Box or Data Box Disk for free and you can move your data up into Azure for no cost.

Sure, it will cost you money when you buy your storage account and start storing large sums of data, but storage is cheap in Azure, so that won’t break the bank.

 

What is Azure Virtual WAN?

In today’s post I’d like to talk about a site-to-site networking service. Azure already has a site-to-site VPN service, but Azure Virtual WAN is a newer service currently in Preview. This networking service is optimized for branch connectivity into Azure and offers the capability to use devices supplied by preferred partners (currently Riverbed and Cisco) or the ability to manually configure this connectivity with your environment.

Azure Virtual WAN has some big differences to consider:

  • Automated setup and configuration of devices from preferred partners makes them much easier to configure. You simply set up the connections, export the configuration directly from the device into Azure, and it sets everything up for you automatically.
  • It is designed for large scalability and more throughput. The site-to-site VPN service is great for smaller workloads, but this new service opens the pipe and allows the data to crank through much faster.
  • It’s designed as a hub-and-spoke model, with Azure as the hub and your branch offices as the spokes – all managed within Azure.

Let’s look at the 4 main components of this service:

  • The Virtual WAN Service itself – This asset is where the resources are collected, and it represents a virtual overlay of the Azure network. Think of it as a top down view of the connectivity between all the components in Azure and in your offices.
  • A site represents the on-premises VPN device and its settings. I mentioned those preferred devices from Riverbed and Cisco (with more to come); if you’re using a supported device, you can easily drop that configuration into Azure.
  • The hub is the connection point in Azure for those sites. The site connects to the hub and the virtual WAN is overlooking all of these components.
  • The hub virtual network connection is the connection point between your hub and your virtual network.

So, your hub and your virtual network are connected through that virtual network connection. This allows the communication between your virtual networks in Azure and your site to site virtual WAN.
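
For a rough picture of how these four components come together, here’s a hedged sketch using the Az.Network PowerShell cmdlets; every name, address range and IP below is a placeholder, and the exact parameters may differ from what’s shown while the service is in Preview.

    # Hypothetical sketch: a virtual WAN, a hub in Azure and a branch office site.
    $rg = 'rg-networking'
    New-AzResourceGroup -Name $rg -Location 'westus' -Force

    # The virtual WAN - the overlay that collects all the resources.
    $wan = New-AzVirtualWan -ResourceGroupName $rg -Name 'corp-vwan' -Location 'westus'

    # The hub - the connection point in Azure for your sites.
    $hub = New-AzVirtualHub -ResourceGroupName $rg -Name 'corp-hub-westus' `
                            -Location 'westus' -VirtualWan $wan -AddressPrefix '10.10.0.0/24'

    # A site - the on-premises VPN device for one branch office.
    $site = New-AzVpnSite -ResourceGroupName $rg -Name 'branch-seattle' -Location 'westus' `
                          -VirtualWan $wan -IpAddress '203.0.113.10' `
                          -AddressSpace @('192.168.10.0/24')

    # A VPN gateway in the hub (New-AzVpnGateway) and a connection from the site to that
    # gateway (New-AzVpnConnection) would complete the branch-to-Azure picture.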

This offering changes the landscape a bit for how people are doing connectivity into Azure and connecting their remote offices, by consolidating what that network looks like and by making it easier through these preferred devices.

Again, this is still in Preview but definitely something I would suggest checking out.

Informatica Enterprise Data Catalog in Azure

If you’re like many Azure customers, you’ve been on the lookout for a data catalog and data lineage tool with all the key capabilities you’re looking for. Today, I’d like to tell you more about the Informatica Data Catalog, which was discussed briefly in a previous Azure Every Day post.

The Informatica tool helps you to analyze, consolidate and understand large volumes of metadata in your enterprise. It allows you to extract both physical and business metadata for objects and organize it based on business concepts, as well as view data lineage and relationships for each of those objects.

Sources include databases, data warehouses, business glossaries, data integration and Business Intelligence reports and more – anything data related. The catalog maintains an indexed inventory of all the data objects or ‘assets’ in your enterprise, such as tables, columns, reports, views and schemas.

Metadata and statistical information in the catalog include things like profile results, as well as info about data domains and data relationships. It’s really the who, what, when, where and how of the data in your enterprise.

Informatica Data Catalog can be used for tasks such as:

  • Discover assets by scouring your network or cloud space for assets that aren’t yet cataloged.
  • View lineage for those assets, as well as relationships between assets.
  • Enrich assets by tagging them with additional attributes, possibly tag a specific report as a critical item.

There are lots of useful features in the Data Catalog. Some key ones are:

  • Data Discovery – Do a semantic search, dynamic filtering, data lineage and relationships for assets across your enterprise.
  • Data Classification – Automatically or manually annotate data classifications to help with governance and discovery – who should have access to what data and what does the data contain.
  • Resource Administration – Like resource, schedule and attribute management, as well as connection or profile configuration management. All the items that surround the data that help you manage the data and the metadata around it.
  • Create and edit reusable profile definition settings.
  • Monitor resources and tasks within your environment.
  • Data domain management, where you can create and edit domains and group together like data and reports.
  • Assign logical data domains to data groups.
  • Build composite data domains for management purposes.
  • Monitor the status of tasks in progress and look at some transformation logic for assets.

On top of this, you can look at how frequently the data is accessed and how valuable it is to your business users, so you can, for instance, trim reports that aren’t being used.

When we talk about modern data warehousing in the Azure cloud, this is something we’ve been looking for. It’s a useful and valuable tool for those who want those data governance and lineage tools.

Simplified Managed Disk Migration in Azure

In the past, migrating to managed disks could be a bit of a challenge. Today I’d like to talk about how Azure has simplified the process. Microsoft recently added the ability to migrate the disks through the portal instead of having to use a command line interface or a PowerShell script (a sketch of the PowerShell route appears after the steps below).

First off, why would you want a managed disk over an unmanaged one?

  • Greater scalability due to much higher IOPs and storage limits. There’s no longer the need to add additional storage accounts when you’re adding disk space, which has been a challenge for users that were using large virtual machines and required large storage space.
  • Better availability and reliability, since disks are isolated from each other in different storage scale units.
  • Managed disks offer over 99.99% availability and are always stored as three replicas of the data.
  • More granular access control by employing role-based access control (RBAC) security. You have granular capability to assign access to various people in your organization.

Here’s how it works:

    • When looking at the overview of a VM that’s using unmanaged disks, you’ll see a ribbon or banner at the top alerting you that you’re not using managed disks and that you should. Sure, they cost a bit more, but the payback is better resiliency and reliability.
    • When you click on that banner, a wizard walks you through how to perform the migration. It will also remind you that the migration is one-way – you can’t go back. Your virtual machine’s contents remain unchanged, but you’ll want to take that into account.
    • It will reboot your VM once complete, so keep this in mind and plan to do this during off hours.
    • Another note: if your VM is in an availability set, you’ll be prompted to migrate that availability set first, then migrate the VM.
    • Once you’re done and back up and running, you’ll see the new disks; the old unmanaged disks will still be there, even though they can no longer be mounted. You can clean those up and delete them later.
    • You’ll have a disk for the OS and each data disk in that resource group and you’re ready to go, with better availability plus the comfort of knowing you have stronger business continuity.
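
If you’d rather script it than click through the wizard, here’s a hedged sketch of the PowerShell route using the Az cmdlets; the resource group, VM and availability set names are placeholders, and the same one-way caveat applies.

    # Hypothetical sketch: convert a VM's unmanaged disks to managed disks.
    $rg = 'rg-legacy-vms'
    $vmName = 'app-server-01'

    # If the VM is in an availability set, convert the availability set to 'Aligned' first:
    # Update-AzAvailabilitySet -AvailabilitySet (Get-AzAvailabilitySet -ResourceGroupName $rg -Name 'as-app') -Sku Aligned

    Stop-AzVM -ResourceGroupName $rg -Name $vmName -Force        # the VM must be deallocated
    ConvertTo-AzVMManagedDisk -ResourceGroupName $rg -VMName $vmName
    Start-AzVM -ResourceGroupName $rg -Name $vmName              # bring it back up if the conversion left it stopped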

So, look at your virtual machines and do that migration when you have a chance. This great wizard-based feature makes it much easier. The reliability benefits will greatly outweigh the added cost.

 

What is Azure Application Insights?

Monitoring your website is key to gaining insight into your customers and users, as well as keeping an eye on the site’s performance. In today’s post I’d like to tell you what Azure Application Insights is.

Application Insights is an application performance management service for web applications that enables you to do all the monitoring of your website’s performance in Azure. It’s designed to ensure you’re getting optimal performance and a best-in-class user experience from your website. It also has a powerful analytics tool that helps you diagnose issues and understand how people are using your web application.

You can use it with many web platforms, and although you’re sending the information about your website to Azure, the website or application itself doesn’t have to be hosted in Azure. For those who work on DevOps processes, it helps you enable continuous improvement on your web application, with connectivity to a bunch of development tools.

How does it work?

You insert a small instrumentation package into your application and set up an Application Insights resource within Azure; that’s where the collected data gets sent. The web app is monitored and sends telemetry data to that resource (the portal itself is in Azure but, as I mentioned, the application can be hosted pretty much anywhere).
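
The Azure side of that setup can be scripted. Here’s a hedged sketch assuming the Az.ApplicationInsights PowerShell module; the resource group, resource name and region are placeholders.

    # Hypothetical sketch: create the Application Insights resource and grab its instrumentation key.
    New-AzResourceGroup -Name 'rg-monitoring' -Location 'eastus' -Force

    $ai = New-AzApplicationInsights -ResourceGroupName 'rg-monitoring' `
                                    -Name 'my-webapp-insights' `
                                    -Location 'eastus'

    # The instrumentation key is what the small package inside your application uses
    # to send its telemetry to this resource.
    $ai.InstrumentationKey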

Along with the Application Insights from the web app, you can pull in your host environmental data, allowing you to look at performance logs, Azure diagnostics and container logs, giving you a full look at what’s going on inside the application, as well as in the environment where it lives.

You can set up periodic web tests that send requests to the web server to ensure it’s responding properly and that the website is working the way it’s supposed to. It’s a very straightforward implementation with a light set of code: the tracking calls are non-blocking and are batched and sent in separate threads.

Some of the things you can track or collect are:

  • What are the most popular webpages in your application, at what time of day and where is that traffic coming from?
  • Dependency response times and failure rates, to find out if an external service is causing performance issues in your app – for instance, a portal that users go through to reach your application might be where the response time issues are.
  • Exceptions for both server and browser information, as well as page views and load performance from the end users’ side.
  • Session info – who, what, when, where.
  • Performance and host diagnostics – giving you a complete picture of what’s happening in your application.
  • Trace logs for correlating trace events with requests to help you get a deeper insight into the data and dig deeper into the diagnostics to improve performance.

It also gives you flexibility, so you can write custom snippets of code to collect other pieces of data that aren’t part of the usual telemetry. And all your reports can be viewed through the Azure suite of reporting tools, such as Power BI, to get visualizations and fine-grained analytical info about your application.

Application Insights is an incredibly useful tool for anyone who has an application or website and wants to track and manage all the info that’s put out there – who’s viewing what, what’s the most popular, etc.

Azure SQL Database Reserved Capacity

Last week I posted about the Azure Reserved VM Instance, where you could save some money in Azure. Another similar way to save is with Azure SQL Database Reserved Capacity. With this you can save 33% compared to license-included pricing by pre-buying SQL Database vCores for a 1- or 3-year term.

This can be applied to a single subscription or shared across your enrollments, so you can control how many subscriptions can use the benefit, as well as how the reservation is applied to the specific subscriptions you choose.

Scoping the reservation to a single subscription applies it to SQL Database resources within that subscription. A reservation with a shared scope can be shared across subscriptions in the enrollment, and there’s some flexibility involved, such as Managed Instances that you can scale up and down.

Some other points about SQL Database Reserved Capacity I’d like to share:

  • It provides vCores with the size flexibility you need. You can scale up and down within a performance tier and region without impacting your reserved capacity pricing; just note that you must stay within the same performance tier and the same region.
  • You can temporarily move your hot databases between pools and single databases as part of your normal operations; again, within the same region and performance tier without losing the reserved capacity benefit.
  • You can keep an unapplied buffer in your reservation, so you can effectively manage performance spikes without exceeding your budget. Just another way to keep an eye on your Azure spend.
  • You can stack savings, so with Azure Hybrid Benefit licensing you can save up to an additional 55%, bringing you to a total savings of over 80% by stacking these benefits.
  • With the hybrid benefit for Enterprise Edition, customers using Software Assurance can use 4 cores in the cloud for every one core they’ve purchased.

A couple things to note:

    • It cannot be applied to an MSDN subscription or a non-pay-as-you-go subscription; so basically, it applies to Enterprise and pay-as-you-go subscriptions.
    • Currently it applies only to single databases and elastic pools. Managed Instance is still in Preview; when it reaches GA (expected by the end of 2018 as of this post), reserved capacity will cover Managed Instances as well.

For questions about how this licensing works, contact your Microsoft rep.

New Options for SQL 2008 End of Support

If you’re using SQL 2008 or 2008 R2, you need to be aware that extended support for those versions ends on July 9, 2019. That means the end of regular security updates, which leads to more vulnerabilities, no further software updates, and out-of-compliance risks. As such, here are some new options for SQL 2008 end of support:

The best option would be either a migration or an upgrade, but Microsoft has some options in place to help people out, as they understand this can be easier said than done when you have applications that need to be upgraded and you must figure out how best to handle that.

That being said, upgrading provides better performance, efficiency, security features and updates, as well as new technology features and capabilities within the whole stack of SQL products (SSIS, SSRS, SSAS).

Other Options

Here are some options that Microsoft is offering to help with the end of support of 2008/2008R2:

    • First, they are going to make extended security updates available for free in Azure for 2008/2008 R2 for 3 more years. So, if you simply move your workload to an IaaS VM in Azure, you’ll be supported without requiring application changes. You’ll have to pay the virtual machine costs, but it’s still a good deal to get you started.
    • You can migrate your workloads to Managed Instances, which will be in GA by the end of 2018. This will be able to support all the applications out there, so you can start the transition into Azure.
    • You can take advantage of the Azure hybrid licensing model to migrate to save money on licensing. With this you can save up to 55% on some of your PaaS SQL Server costs, but only if you have Enterprise Edition and Software Assurance.
    • For on-premises servers that need more time to upgrade, you’ll be able to purchase extended security service plans to extend support 3 years past the July 2019 date. So, if you’re struggling to get an application upgraded and validated, they’ll extend that out for a fee. Again, this is for customers with Software Assurance or subscription licenses under an Enterprise Agreement. These can be purchased annually and cover only the servers that need the updates.
    • Extended security updates will also be available for purchase as they get closer to the end of support, for both SQL Server and Windows Server.

Again, the first choice would be to upgrade or migrate those databases and move to Azure, but there are some challenges with doing so, and if none of those options work, there are some great options to extend your support.
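
Whichever route you take, the first step is knowing which instances are actually affected. Here’s a hedged sketch for taking that inventory, assuming the SqlServer PowerShell module, Windows authentication and placeholder server names.

    # Hypothetical sketch: check the version and edition of a list of SQL Server instances.
    $servers = 'sqlprod01', 'sqlprod02', 'sqlreporting01'

    foreach ($server in $servers) {
        Invoke-Sqlcmd -ServerInstance $server `
            -Query "SELECT SERVERPROPERTY('MachineName') AS ServerName, SERVERPROPERTY('ProductVersion') AS ProductVersion, SERVERPROPERTY('Edition') AS Edition" |
            Where-Object { $_.ProductVersion -like '10.*' }   # 10.0 = 2008, 10.50 = 2008 R2
    }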

Replicate Data Into Azure Database for MySQL

In today’s post I’ll talk about replicating data into Azure Database for MySQL. Data-in Replication lets you synchronize data from a MySQL server running on-premises, in virtual machines or in database services hosted by other cloud providers into Azure Database for MySQL.

Data-in Replication is based on the binary log (binlog) file position-based replication that’s native to MySQL. So, this is the same as running binlog replication on-premises for an enterprise-class database service.

The information in the binlog is stored in different formats according to the database changes being recorded. Replicas are then configured to read the binary log from the master and to execute the events in the binlog against their own local database. You write to the log on the primary server, and the replica knows where the primary is, pulls that information over and executes it on the secondary database.

Use cases

The best use case for Data-in Replication is a hybrid data solution. With it, you can keep data synchronized between your on-premises servers and Azure Database for MySQL, giving you a cloud-based secondary for failover, disaster recovery and business continuity.

When you want an application that is part cloud-based and part local, this synchronization is useful for creating those hybrid applications. It’s also appealing when you have an existing local database server but want to move data to a region closer to your end users if they’re geographically distributed.

Another common use case is multi-cloud synchronization, so for complex cloud solutions you can use the data in replication to synchronize data between Azure Database for MySQL and different cloud providers, including virtual machines and database servers hosted in the cloud. So, this is a great option for another layer of redundancy for those cloud deployments.

A few things to keep in mind:

    • The MySQL system database is not replicated, so if you make changes to accounts and permissions on the primary, they will not be replicated. These changes must be done manually on the replica server.
    • The server version must be 5.6 or later.
    • Primary and replica versions must match.
    • Each table in the database must have a primary key for consistency.
    • Global transaction identifiers are not supported.

So, there are some things to consider, but it’s still a great option for scalability, redundancy and business continuity.