
An Overview of Azure File Sync

I have a question… Who is still using a file server? No need to answer, I know that most of us still are and need to use them for various reasons. We love them—well, we also hate them, as they are a pain to manage.

The pains with Windows File Server:

  • They never seem to have enough storage.
  • They never seem to be properly cleaned up; users don’t delete the files they’re supposed to.
  • The data never seems accessible when and where you need it.

In this blog, I’d like to walk you through Azure File Sync, so you can see for yourself how much better it is.

    • Let’s say I have a file server in my Seattle headquarters and that file server begins having problems; maybe I’m running out of space, for example.
    • I decide to hook this server up to a file share in Azure.
    • I can enable cloud tiering and set a free-space threshold (say 50%), so that once the server passes that threshold, files start moving up into Azure.
    • When I set this threshold, cloud tiering starts taking the least recently used files and graying them out as far as users are concerned. The files still appear to be there, but they’ve been pushed off to the cloud, so that space has now been freed up on the file server.
    • If users ever need those files, they can click on them and redownload.
    • Now, let’s say I want to bring on another server at a branch office. I can simply bring up that server and synchronize it against the files already in Azure.
    • From here, I can hook up my SMB and NFS shares for my users and applications, as well as my Work Folders, using multi-site technology. All my files are synchronized, and I get direct cloud access to them.
    • I can hook up my IaaS and PaaS solutions with my REST API or my SMB shares to be able to access these files.
    • With everything synchronized, I’m able to have a rapid file server disaster/data recovery. If my server in Seattle goes down, I simply remove it; my files are already up in Azure.
    • I bring on a new server and sync it back to Azure. My folders start to populate, and as they get used, people will download the files back and the tiering rules that were set up will be maintained.
    • The great thing is it can be used with Windows Server 2012 R2, as well as Windows Server 2016.
    • Now I have an all-encompassing solution (with integrated cloud backup within Azure) with better availability, better DR capability and essentially bottomless storage. Backups go automatically to an Azure Backup vault, and the storage is super cheap.

With Azure File Sync I get:

1. A centralized file service in Azure Storage.

2. Cache in multiple locations for fast, local performance.

3. Cloud-based backup and fast data/disaster recovery.

Overview and Benefits of Azure Cognitive Services

With Artificial Intelligence and Machine Learning, the possibilities for your applications are endless. Would you like to be able to infuse your apps, websites and bots with intelligent algorithms to see, hear, speak, understand and interpret your user needs through natural methods of communication, all without having any data science expertise?


What is Azure Cosmos DB?

Are you familiar with Azure Cosmos DB? Cosmos DB is Microsoft’s globally distributed, multi-model database. With the click of a button, it allows you to elastically and independently scale throughput and storage across any number of Azure’s geographic regions, so you can put the data where your customers are.

Cosmos DB has custom-built APIs that let you work with it through a multitude of familiar interfaces, like SQL, MongoDB and Azure Tables, and it offers five consistency models. It also offers comprehensive Service Level Agreements (SLAs) with money-back guarantees for availability (99.99% to be exact), latency, consistency and throughput; a big deal when you need to serve your customers at optimum performance.
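To make that a little more concrete, here’s a minimal sketch using the azure-cosmos Python SDK. The account URL, key, database, container and sample item below are hypothetical placeholders, not anything from a real deployment:

```python
# A minimal sketch with the azure-cosmos Python SDK; the account URL, key,
# database/container names and the sample item are hypothetical placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/",
                      credential="<your-key>")
db = client.create_database_if_not_exists("catalog")
container = db.create_container_if_not_exists(
    id="parts",
    partition_key=PartitionKey(path="/category"))

# Items are schema-free JSON, so parts with different properties can live side by side
container.upsert_item({"id": "brake-pad-123", "category": "brakes", "material": "ceramic"})

for item in container.query_items(
        query="SELECT * FROM c WHERE c.category = 'brakes'",
        enable_cross_partition_query=True):
    print(item["id"], item.get("material"))
```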

Cosmos DB is a great option for many different use cases:

  • Companies that are doing IoT and telematics. Cosmos DB can ingest huge bursts of data, and process and analyze that data in near real-time. Then it will automatically archive all the data it ingests.
  • Retail and Marketing. Take an auto parts product catalog, for example, with tons of parts within the catalog, each with its own properties (some unique and some shared across parts). The next year, new vehicle or parts models come out, with some similar and some different properties. All that data adds up very quickly. Cosmos DB offers a very flexible schema in a hierarchical structure that makes it easy to change the data around as things change.
  • Gaming Industry. Games like Halo 5 by Microsoft are built on Cosmos DB, because they need performance that is quickly and dynamically scalable. You’ve got things like millisecond read times, which avoid any lag in gameplay. You can index player-related data, and a social graph database is easily implemented with a flexible schema for all the social aspects of gaming.

Azure Cosmos DB ensures that your data gets there and gets there fast, with a wealth of features and benefits to make your life easier. And it’s easy to set up and manage.

 

Overview of Azure Databricks

I’d like to tell you about Azure Databricks. If you don’t know what that is, Azure Databricks provides an end-to-end, managed Apache Spark platform optimized for the cloud. It’s a fast, easy and collaborative analytics platform designed to help bridge the gap between data scientists, data engineers and business decision-makers using the power of Databricks on Azure.

Azure Databricks uses Microsoft Azure Active Directory as its security infrastructure and it’s optimized for ease of use, as well as ease of deployment within Azure. It features optimized connectors to Azure storage platforms (e.g. Data Lake and Blob Storage) for the fastest possible data access, and one-click management directly from the Azure console.

Some key features are:

Auto-scaling – This feature makes scaling much quicker and allows you to scale up or down as you need.

Auto-termination – Helps you control the costs of your compute time, as well as assists you in preventing cost overruns (a concern for many cloud users).

Notebook Platform – The Notebook Platform supports standard languages (SQL, Python and R for example) and it builds a whole discussion environment around those languages, enhancing collaboration amongst teams.

Here are some simple steps to get you started (a short code sketch follows the list):

  • First, you’re going to prepare your data by ingesting it from your Azure storage platform, which Azure Databricks supports natively.
  • Next, you’re going to do any kind of transformation you need on your ingested data and store it in a Data Warehouse.
  • From here, you’ll want to start performing analytics on your data. The platform is built for large amounts of data, giving you the capability to explore large data sets in real time and to explore them very quickly.
  • Lastly, you’re going to display the data. Databricks has native support for tools like Power BI to build your dashboards and analytics models.
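To give you a feel for how those steps look in practice, here’s a minimal PySpark sketch you might run in a Databricks notebook. The storage account, container and column names are hypothetical placeholders:

```python
# A minimal PySpark sketch of the ingest -> transform -> store flow above.
# Storage account, container and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in a Databricks notebook

# 1. Ingest raw CSV files from Blob Storage into a DataFrame
raw = (spark.read
       .option("header", "true")
       .csv("wasbs://sales@mystorageacct.blob.core.windows.net/raw/"))

# 2. Transform: fix types and aggregate to daily totals
daily_sales = (raw
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_sales")))

# 3. Persist the result as a table that downstream analytics and Power BI can query
daily_sales.write.mode("overwrite").saveAsTable("analytics_daily_sales")
```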

So, Azure Databricks provides an end-to-end data solution. You can quickly spin up a cluster or do advanced analytics with this powerful platform. And with it, you can create and monitor robust pipelines that will help you dig deep and better understand your data, allowing you to make better business decisions.

3 Reasons Why You Should Move Your Business to the Cloud

Cyber security is on everyone’s mind these days and it can be a challenge for many organizations. If this sounds like you and you haven’t moved to the cloud, it’s something you should think about. I’d like to tell you why you should move your business to the cloud and why it could be more secure there.

1.  When you’re in the cloud business, having a secure cloud drives more business. That’s why cloud companies are willing to invest more to hire the best and brightest. So, the top security people in the world are going to the top cloud companies in the world.

2.  When moving to the cloud, the customer typically only has to focus on one aspect of security because the rest is already taken care of; you’re secure by default. You’d have to intentionally unlock something to make yourself less secure.

3.  Regulatory and certification requirements are more easily satisfied. With a foundation in place that’s already secure and certified, you can focus on your app, your infrastructure or whatever you need to satisfy those regulatory compliance requirements.

So, make this your year to move to the cloud and take some of the cyber security challenges off your mind.

Overview of Power BI Embedded

Everyone is familiar with Power BI Desktop, Cloud and On-Prem, but not as many are familiar with Power BI Embedded. So, what is it? Power BI Embedded allows your company to embed dashboards and reports in your in-house developed applications, and you only need one Power BI account to have a Power BI Embedded environment.

This Azure service is separate from Power BI Premium or Pro and is priced by compute capacity, rather than per user as with other Power BI offerings. The design lets you focus on your applications and your customers, instead of the management and maintenance of things.

You have options when setting up your Azure tenant. You can use your existing tenant ID, create a new application for the tenant or a tenant for a specific customer. There are 3 straightforward steps to get you up and running:

1.  Set up your Power BI Embedded environment within Azure. Then set up your tenants, user requirements and workspaces.

2.  Next, you’re going to embed your content: go to your backend, set up your application and connect to Azure through the REST API that Azure provides. This is all secure, encrypted traffic going over SSL. If you’re using authentication when displaying your reports and dashboards, you handle it through your backend application’s authentication system, rather than the Azure application authentication system (see the sketch after these steps).

3.  Lastly, you’re going to release your reports and dashboards to production. You’ll need to decide what compute requirements you need and then set up your tiered pricing, pick your plan and you’re ready to go.
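For the curious, here’s a minimal sketch (in Python, using the requests library) of the backend call described in step 2: generating an embed token through the Power BI REST API. The workspace ID, report ID and Azure AD access token are placeholders your own application would supply.

```python
# A minimal sketch of step 2: the backend asks the Power BI REST API for an embed
# token for one report. The IDs and the Azure AD token below are placeholders.
import requests

group_id = "<workspace-id>"           # the Power BI workspace (group) that holds the report
report_id = "<report-id>"
aad_token = "<azure-ad-access-token>" # acquired by your backend's own authentication flow

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
       f"/reports/{report_id}/GenerateToken")

resp = requests.post(url,
                     headers={"Authorization": f"Bearer {aad_token}"},
                     json={"accessLevel": "View"})  # read-only token for embedding
resp.raise_for_status()

embed_token = resp.json()["token"]    # hand this to the client-side embedding code
print(embed_token[:20], "...")
```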

Most Important Components of Azure Data Factory

Are you new to Azure and don’t know what Azure Data Factory is? Azure Data Factory is Microsoft’s cloud version of an ETL or ELT tool that helps you get your data from one place to another and transform it along the way. Today, I’d like to tell you about the high-level components within Azure Data Factory. These components pull together a data factory that helps your data flow from its source into an end product ready for consumption.

  • Pipeline – A pipeline is a logical grouping of activities that together perform a unit of work. For example, you might copy on-premises data from a data source to the cloud (Azure Data Lake, for instance), then run it through an HDInsight Hadoop cluster for further processing and analysis, and finally land it in a reporting area. Those activities are contained inside the pipeline and chained together to create a sequence of events, depending on your specific requirements (see the sketch after this list).
  • Linked Service – This is very similar to the concept of a connection string in SQL Server, where you’re saying what is the source and destination of your data.
  • Trigger – A trigger is a unit of processing that determines when a pipeline needs to be run. These can be scheduled or set off (triggered) by a different event.
  • Parameter – Essentially, read-only values you define inside a pipeline that are passed in as arguments at run time, for example to fill in which dataset or linked service to use.
  • Control Flow – The control flow in a data factory is what’s orchestrating how the pipeline is going to be sequenced. This includes activities you’ll be performing with those pipelines, such as sequencing, branching and looping.
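To show how these components hang together, here’s a simplified, hypothetical sketch of a pipeline definition, written as a Python dict that mirrors the JSON you author in Data Factory; the pipeline, dataset and parameter names are made up for illustration.

```python
# A simplified, hypothetical pipeline definition showing how the components relate:
# the pipeline groups activities, the activities reference datasets (which in turn
# point at linked services), and a parameter is filled in when the pipeline runs.
pipeline_definition = {
    "name": "CopyOnPremToLakePipeline",
    "properties": {
        "parameters": {
            "folderPath": {"type": "string"}  # supplied as an argument at run time
        },
        "activities": [
            {
                "name": "CopyToDataLake",
                "type": "Copy",               # a copy activity inside the pipeline
                "inputs":  [{"referenceName": "OnPremSqlDataset", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "DataLakeDataset",  "type": "DatasetReference"}],
            }
        ],
    },
}
```

A trigger attached to this pipeline would supply the folderPath value and kick off a run on a schedule or in response to an event, and the control flow determines how the activities inside it are sequenced.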

Top 5 Takeaways from the Microsoft ICA Boot Camp

I was a recent attendee at the Microsoft International Cloud Architect Boot Camp, where I had the opportunity to participate in hands-on sessions, working closely with Microsoft teams and specialists, as well as other Microsoft Partners. This boot camp contained exclusive content that Pragmatic Works gets access to as a partner and as a preferred service within the Microsoft stack.

Here, I’d like to share my top 5 takeaways from this event:

1. Commitment to Security – As a cloud solution architect, I’m asked many questions around security and Microsoft Azure. One thing that amazed me was the commitment that Microsoft has made to security. They spend over a billion dollars each year on security to ensure they are secure from all threats. Microsoft is also the #1 attacked surface in the world. They are truly committed to making sure that your data and services are secure.

2. Security Certifications – Microsoft has passed over 70 regulatory and government certifications when it comes to security and standardized processes. Their second-place competitor, AWS, has only completed 44 of these certifications. Getting these certifications and adhering to certain security and regulatory standards can be expensive, but there is a significant benefit for enterprise, government and small/medium-sized businesses.

3. Right-sizing Their Environment – This can be a challenge for many companies. Microsoft’s internal teams have gone completely to Azure and are managing their platforms within Azure for SQL databases, virtual machines and all the other services Azure offers. By doing some specific right-sizing and keeping watch on what’s offered, they lowered their workloads and kept their CPU utilization at the 95th percentile, and more importantly, they were able to cut down on spending for their internal needs – to the tune of over 3 million dollars a month!

4. Differentiators from AWS – AWS is currently the #1 cloud platform as far as revenue and volume. But Microsoft is quickly catching up, and they’ve identified several differentiators from AWS. Some key services that have been slow to arrive, such as Azure Recovery Zones, will be released to general availability by the end of 2018, and Microsoft does not see any other differentiators that will allow AWS to continue to hold that lead.

5. Connections/Partnerships – Having Office 365, Dynamics 365, and Skype and LinkedIn connections, as well as commitments to partners and ISVs, gives Microsoft a competitive advantage over AWS in what its ecosystem looks like. A common complaint is that AWS doesn’t work well with, or cater to, partners, leaving them to figure things out themselves.

Power View Drill Down/Up Bug, or by Design?

While rehearsing my demo for my upcoming session at SQL Saturday Boston this weekend (#sqlsat500), where I will be lecturing and demonstrating Scratching the Surface of Power View, I noticed a quirky issue where I wasn’t able to drill through my column data, but was fine with row data.  Further, after some playing, I was able to drill down if I changed the order of my column values or tried using other fields.  Everyone hates NULL data, and I think this is just another reason why.  As it turns out, there are some NULLs that happen to show up as the first column when you drill down, so the ability to drill “up” is lost, and the only way to get back to your top-level data is to close and re-open the report.

The Scenario:

Using the AdventureWorksDW2014 Data, and building a basic example of creating a matrix, then adding the Drill Down properties, my selections look like this:

As you can see, it’s a fairly simple example, just trying to capture the essence of drilling down and back out again.  When I click to drill down into the “Black” column, I get the following:

Note, the first column heading has no title, and there isn’t an available “Drill Up” arrow either.  At this point, I’m stuck.  I can’t go back up to my original report, and now need to close and re-open to start over.  If I change the order of the columns so that “Style” is listed first, above “Color”, the Drill Down and Drill Up work fine.  To work around the issue, I replaced all the NULLs in the table with “NA” and voila! It works perfectly.  I’m not sure if this is by design from Microsoft in order to force the data to have values, but since it’s their dataset to begin with, I’m assuming it’s a bug.

Hope this helps anyone else running into this issue!

Pragmatic Works SSRS Master Class Review

A little while back, the boys from Pragmatic Works (www.pragmaticworks.com) came up to Boston for their “Master” level training series on SSRS. I attended the class, and wanted to share my experience so other interested people can get a preview of what to expect from taking a PW class.

History…

So, up front, I want to be honest about the fact that I am somewhat biased about the training services provided by PW due to my own past experiences in taking their classes. They have a plethora of offerings in relation to the SQL Server Stack and Data topics. I have attended several of their virtual and on-site classes in the past, and am always pleased with the course material, humorous injections, and interesting nuggets they always provide.

Content…

Ok, now that the free paid advertisement is over, let’s get to the meat and potatoes of the class.  Taught by Devin Knight (@knight_devin) and Mike Davis (@MikeDavisSQL), the focus of the class was to look at some of the deeper components of how to really expand the capabilities of SSRS beyond the “canned” options and features.  One thing I really like about the presentation style of the courses, beyond the humor, is the fact that they will talk about best practices, give demonstrations and examples of them, then follow-up with tips and tricks to get around some of the nuances of the technology being taught.  This particular course focused on several areas of particular interest to me, and some others that don’t apply to my situation, so they were just good for informational purposes.  The areas within SSRS covered were:

    • Good Report Design
    • Custom Code
    • Reporting from Cubes
    • Utilizing Reports
    • Subscriptions
    • IDEs
    • Configuration and Security

As well as a section on Power View.

Takeaways…

Overall, I thought the delivery of the class and the ability of the presenters to break up the material in order to keep it interesting was very good.  Devin and Mike clearly know their stuff, and very obviously love doing it as well as sharing their insight and the various “Microsoftisms” that can occur.  Below is a bit of detail on what worked and didn’t work as well for me about the class specifically.

Good stuff:

  • The examples on displaying numbers in a way that makes them more readable, by highlighting numbers that fall in certain ranges (in-report KPIs, more specifically), were very helpful.
  • Getting into some of the deeper security and configuration topics offered some different techniques on establishing better security alternatives.
  • The modules on linking and mapping within reports were good to have in this course, as those areas have caused some headaches for me and my peers in the past.

Neutral stuff:

  • Personally, I don’t do a whole lot with SSAS and Cubes, as most of my work surrounds SSIS and SSRS, but there were some interesting nuggets revealed and “ah ha” moments, as this appears to be somewhat of a tricky subject.

Not as useful stuff:

  • Given the class is at the “Master” level, I’m not sure an entire module and lab needed to be dedicated to Good Report Design.  My assumption is that the majority of the people in the class were there because they have a fair amount of experience in delivering reports and were looking for ways to better extend them.  Maybe instead of a full module, poor report design could have been used as a “pop quiz” throughout the presentation to break up the monotony and add some humor.
  • I found the preparation materials for the class to be lacking a bit.  Many of the students in the class (I believe there were around 75 of us) did not have their environments set up correctly on the first day, so Mike and Devin spent much of the day running from person to person to make sure the configuration was correct.  I especially remember this being the case for the SSAS and Cube sections.

Conclusion…

I’ve been a big fan of Pragmatic Works products and training for a while now and recommend them to anyone that is looking to brush up on their SQL skills, or might have a need for their suite of tools.  I found this class to be helpful and was able to use some of the topics covered almost immediately after taking the class.