If your current SQL Database service tier is not well suited to your
needs, I’m excited to tell you about a newly created service tier in
Azure called Hyperscale. Hyperscale is a highly scalable storage and
compute performance tier that leverages the Azure architecture to scale
out resources for Azure SQL Database beyond the current limits of the
General Purpose and Business Critical service tiers.
The Hyperscale service tier provides the following capabilities:
Support for up to 100 terabytes of database size (and this will grow over time)
Faster large database backups which are based on file snapshots
Faster database restores (also based on file snapshots)
Higher overall performance due to higher log throughput and faster transaction commit times, regardless of data volume
The ability to rapidly scale out: you can provision one or more
read-only nodes to offload your read workload or to serve as hot standbys.
You can also rapidly scale your compute resources up and down (in
constant time) to accommodate heavy workloads as needed, just like
Azure SQL Data Warehouse
Who should consider moving over to the Hyperscale tier? This is not
an inexpensive tier, but it’s a great choice for companies that have
large databases and have not been able to use Azure SQL Database in the
past due to its 4-terabyte limit, as well as for customers who run into
performance and scalability limitations with the other two service tiers.
It is primarily designed for transactional or OLTP workloads.
It does support hybrid and OLAP workloads as well, but that’s something
to keep in mind when designing your databases and services. It’s also
important to note that elastic pools do not support the Hyperscale tier.
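If you want to try it out, here’s a minimal sketch of provisioning a Hyperscale database with the Az PowerShell module. The resource group, server, and database names are placeholders, and the service objective name is an assumption; check what’s available in your region and module version:

# A sketch, assuming the Az.Sql module is installed, you've signed in with
# Connect-AzAccount, and the logical SQL server already exists.
New-AzSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sql-server" `
    -DatabaseName "my-hyperscale-db" `
    -Edition "Hyperscale" `
    -RequestedServiceObjectiveName "HS_Gen5_4"   # 4 vCores on Gen5 hardware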
How does it work?
Hyperscale separates compute and storage into four types of nodes, similar to Azure Data Warehouse.
The compute node is where the relational engine lives and where query processing happens.
The page server node is where the scaled-out storage engine resides.
Database pages are served out to the compute nodes on demand, and the
pages are kept up to date as transactions change data, so these nodes
move the data around for you.
The log service node is where log records are kept as they come in
from the compute node, held in a durable cache, and then forwarded
along to the other compute nodes and caches to ensure consistency.
Once the log records have been consistently propagated across the
compute nodes, they are written to Azure Storage for long-term
retention of your logs.
Lastly, the Azure Storage node is where all the data is pushed from
the page servers. All the data that eventually lands in the database
gets pushed over to Azure Storage, which is also the storage used for
backups, as well as where replication between availability groups
happens.
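One practical note on those read-only compute nodes: you don’t address them directly; you just add ApplicationIntent=ReadOnly to your connection string and the gateway routes you to a replica. Here’s a quick sketch using the SqlServer PowerShell module, where the server, database, and credentials are placeholders:

# DATABASEPROPERTYEX returns READ_ONLY when you land on a replica,
# READ_WRITE when you're connected to the primary.
Invoke-Sqlcmd -ConnectionString ("Server=tcp:my-sql-server.database.windows.net,1433;" +
        "Database=my-hyperscale-db;User ID=myadmin;Password=<password>;" +
        "Encrypt=True;ApplicationIntent=ReadOnly") `
    -Query "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');"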
This Hyperscale tier is an exciting opportunity for customers whose
requirements aren’t fulfilled by the earlier service tiers. It’s another
great Microsoft offering that’s worth checking out if you’ve run into
these service tier issues before. And it draws a clear line of
distinction between Azure Data Warehouse and Azure SQL Database: you can
now scale out and up with tons of data, but Hyperscale is still built
for transactional processing, whereas Azure Data Warehouse is aimed at
analytical, massively parallel processing workloads.
What do you know about Azure Automation? In this post, I’ll fill you in
on this cool, cloud-based automation service that lets you configure
process automation, update management and system configuration, managed
across your on-premises resources as well as your Azure cloud-based
resources.
Azure Automation provides complete control over the deployment,
operation and decommissioning of workloads and resources in your hybrid
environment, so we can have a single pane of glass for managing all our
resources through automation.
Some features I’d like to point out are:
It allows you to automate those mundane, error-prone activities that
you perform as part of your system configuration and maintenance.
You can create runbooks in PowerShell or Python that help you
reduce the chance of misconfiguration errors (there’s a sample runbook
sketch after this list). They also help lower the operational costs of
maintaining those systems, since you can script tasks to run when you
need them instead of doing them manually.
Runbooks can be developed for on-premises or Azure resources,
and they use webhooks that allow you to trigger automation from things
such as ITSM, DevOps and monitoring systems. So you can run them
remotely and trigger them from wherever you need to.
On the configuration management side, you can build desired state
configurations for your enterprise environment. These set a baseline
for how your systems should operate and identify when there’s a
variance from the initial system configuration, alerting you to any
anomalies that could be problematic.
It has a rich reporting back end and alerting interface for full
visibility into what’s happening in your Windows and Linux systems –
on-premises and in Azure.
It gives you update management (for Windows and Linux) to help you
define how updates are applied. Administrators can specify which
updates will be deployed, report on successful or unsuccessful
deployments, and specify which updates should not be deployed to
systems, all done through PowerShell or Python scripts.
It has shared capabilities, so when you’re using multiple resources
or building runbooks for automation, it lets you share resources to
simplify management. You can build multiple scripts but reuse the same
resources over and over as references, things like role-based access
control, variables, credentials, certificates, connections, schedules
and PowerShell modules, with access to source control. You can check
these in and out of source control like any code-based project.
Lastly, one of the coolest features in my opinion: since these
runbooks are essentially templates you deploy into your systems, and
everyone faces similar challenges, there’s a community gallery where
you can download runbooks others have created or upload ones you’ve
created to share. With a few basic configuration tweaks and a review to
make sure they’re secure, this is a great way to speed things up: find
an existing script, clean it up and deploy it in your systems and
environment.
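To make the runbook idea concrete, here’s a minimal sketch of a PowerShell runbook that stops tagged dev VMs on a schedule, the kind of mundane task the service is built for. The connection name, tag, and imported modules are assumptions; your automation account may use a different authentication setup:

# Sketch of a runbook body. Assumes the automation account has a Run As
# connection named "AzureRunAsConnection" and the Az modules imported.
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Connect-AzAccount -ServicePrincipal -Tenant $conn.TenantId `
    -ApplicationId $conn.ApplicationId `
    -CertificateThumbprint $conn.CertificateThumbprint
# Stop every VM tagged environment = dev (placeholder tag name/value).
Get-AzVM | Where-Object { $_.Tags["environment"] -eq "dev" } |
    ForEach-Object { Stop-AzVM -Name $_.Name -ResourceGroupName $_.ResourceGroupName -Force }

And on the configuration management side, a desired state configuration is just PowerShell as well. A trivial example that declares IIS should always be present on a node:

Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node "localhost" {
        # If IIS is ever removed, DSC reports the drift and can reinstall it.
        WindowsFeature IIS {
            Name   = "Web-Server"
            Ensure = "Present"
        }
    }
}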
So, there’s a lot you can do with this service and I think it’s worth
checking out, as it can make your maintenance and management much
easier.
I’d like to discuss the recently announced Azure Firewall service, which has just been released to general availability (GA). Azure Firewall is a managed, cloud-based network security service that protects your Azure Virtual Network resources. It is a fully stateful PaaS firewall with built-in high availability and unrestricted cloud scalability.
Because it lives in the cloud and the Azure ecosystem, much of that
capability is built in. With Azure Firewall you can centrally create,
enforce and log application and network connectivity policies across
subscriptions and virtual networks, giving you a lot of flexibility.
It is also fully integrated with Azure Monitor for log analytics.
That’s big, because a lot of firewalls are not, which means you can’t
centralize their logs in OMS, for instance, where a single pane of
glass lets you monitor many of the technologies being used in Azure.
Some of the key features:
Built-in high availability, so there are no additional load balancers to build and nothing to configure.
Unrestricted cloud scalability. It can scale up as much as you need
to accommodate changing network traffic flows, so there’s no need to
budget for your peak traffic; it will accommodate any peaks or valleys.
It has application FQDN filtering rules. You can limit outbound
HTTP/S traffic to a specified list of fully qualified domain names,
including wildcards, and the feature does not require SSL termination.
(There’s a sketch of configuring such a rule after this list.)
There are network traffic filtering rules, so you can centrally
create allow or deny network filtering rules by source and destination
IP address, port and protocol. Those rules are enforced and logged
across multiple subscriptions and virtual networks. This is another
great example of the availability and elasticity you get for managing
many components at one time.
It has fully qualified domain name tagging. If you’re running
Windows Update across multiple servers, for example, you can tag that
service as allowed through the firewall, and it becomes a set standard
for all your services behind that firewall.
There’s outbound SNAT and inbound DNAT support: traffic originating
from your virtual network to remote Internet destinations is identified
and allowed (Source Network Address Translation), and inbound network
traffic to your firewall’s public IP address is translated (Destination
Network Address Translation) and filtered to the private IP addresses
on your virtual networks.
Then there’s that integration with Azure Monitor I mentioned, in which
all events are integrated with Azure Monitor, allowing you to archive
logs to a storage account, stream events to your Event Hub, or send
them to Log Analytics.
Another nice thing to note: when you set up ExpressRoute or a VPN
from your on-premises environment to Azure, you can use this as your
single firewall for all those virtual networks, allow traffic in and
out from there, and monitor it all from that single place.
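To give you a feel for the rules, here’s a sketch of adding an application (FQDN) rule collection to an existing firewall with the Az PowerShell cmdlets. The firewall name, resource group, source range, and target FQDN are placeholders, and the exact object model can shift between module versions:

# Allow outbound HTTPS to GitHub from one subnet; everything else
# stays denied by the firewall's default behavior.
$rule = New-AzFirewallApplicationRule -Name "AllowGitHub" `
    -SourceAddress "10.0.1.0/24" -Protocol "Https:443" -TargetFqdn "*.github.com"
$collection = New-AzFirewallApplicationRuleCollection -Name "AllowedOutbound" `
    -Priority 100 -ActionType "Allow" -Rule $rule
$fw = Get-AzFirewall -Name "my-firewall" -ResourceGroupName "my-rg"
$fw.AddApplicationRuleCollection($collection)   # method per the Az.Network object model
Set-AzFirewall -AzureFirewall $fw               # push the updated config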
This was just released to GA, so there are a few hiccups, but if none of the current service challenges affect you, I suggest you give it a try. It will only continue to improve, as with all the Azure services. I think it’s going to be a great firewall service option for many.
The operations required to fix a drill or piece of equipment in the field are much more significant when the failure is unexpected. Shell can use AI to predict when maintenance is required on compressors, valves and other equipment used for oil drilling. This will help reduce unplanned downtime and repair effort. If they can keep up with maintenance before equipment fails, they can plan the downtime and do so at much lower cost.
They’ll also use AI to help steer the drill bits through shale to find the best quality deposits.
Failures of large equipment, such as drilling rigs, can cause a lot of collateral damage and danger. This technology will improve the safety of employees and customers by helping to reduce unexpected failures.
AI-enabled drills will help chart a course for the well itself as it’s being drilled, while providing constant data from the drill bits on what type of material is being drilled through. The benefits here are twofold: they get data on quality deposits, and they reduce wear and tear on the drill. If the drill’s IoT sensors detect a harder material, they’ll have the knowledge to drill in a different area or to figure out the best path to reduce wear and tear.
It will also free up geologists and engineers to manage more drills at one time, making them more efficient, as well as more responsive to problems as they arise while drilling.
As with everything in Azure, this is a highly scalable platform that will allow Shell to grow with what’s required, plus the flexibility to take on new workloads. With IoT and AI, these workloads are easily scaled using Azure as a platform and all the services available with it.
I wanted to share this interesting use case about Shell because it really displays the capabilities of the Azure Platform to solve the mundane and enable the unthinkable.