Category Archives: SQL Server

Introducing Azure SQL Database Hyperscale Service Tier

If your current SQL Database service tier is not well suited to your needs, I'm excited to tell you about a new service tier in Azure called Hyperscale. Hyperscale is a highly scalable storage and compute performance tier that leverages the Azure architecture to scale out resources for Azure SQL Database beyond the current limitations of the General Purpose and Business Critical service tiers.

The Hyperscale service tier provides the following capabilities:

  • Support for up to 100 terabytes of database size (and this will grow over time)
  • Faster large database backups which are based on file snapshots
  • Faster database restores (also based on file snapshots)
  • Higher overall performance due to higher log throughput and faster transaction commit time regardless of the data volumes
  • The ability to rapidly scale out. You can provision one or more read-only replicas for offloading your read workload or for use as hot standbys.
  • The ability to rapidly scale up your compute resources (in constant time) to accommodate heavy workloads, so you can scale compute up and down as needed, much like Azure Data Warehouse.

Who should consider moving over to the Hyperscale tier? This is not an inexpensive tier, but it's a great choice for companies that have large databases and have not been able to use Azure SQL Database in the past due to its 4-terabyte limit, as well as for customers who are hitting performance and scalability limitations with the other two service tiers.

It is primarily designed for transactional (OLTP) workloads. It does support hybrid and OLAP workloads as well, but that's something to keep in mind when designing your databases and services. It's also important to note that elastic pools do not support the Hyperscale service tier.

How does it work?

  • Compute and storage are separated into four types of nodes, similar to Azure Data Warehouse.
  • The compute node is where the relational engine lives, which is where query processing happens.
  • The page server nodes are where the scaled-out storage engine resides. They serve database pages out to the compute nodes on demand and keep the pages updated as transactions change data, so these nodes are moving the data around for you.
  • The log service node takes the log records as they come in from the compute node, keeps them in a durable cache, and forwards them along to the other compute nodes and caches to ensure consistency. Once everything has been consistently propagated across the compute nodes, the log records are pushed to Azure storage for long-term retention.
  • Lastly, the Azure storage node is where all the data is pushed from the page servers. All the data that eventually lands in the database ends up in Azure storage, and this is also the storage used for backups and for replication.
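If you want to try it on an existing single database, switching tiers is just a change of edition and service objective. Here's a minimal T-SQL sketch, assuming a Gen5, 4-vCore target (the database name and sizing are placeholders); the same move can be made from the portal, PowerShell or the CLI:

-- Run while connected to the Azure SQL logical server (master or the database itself).
-- Note: at the time of writing, moving a database to Hyperscale is a one-way migration.
ALTER DATABASE [MyDatabase]
    MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');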

This Hyperscale tier is an exciting opportunity for customers whose requirements weren't met by the prior service tiers. It's another great Microsoft offering that's worth checking out if you've run into these service tier issues up to now. It also helps draw a line of distinction between Azure Data Warehouse and Azure SQL Database: you can now scale out/up and handle tons of data, but Hyperscale is still built for transactional processing, as opposed to Azure Data Warehouse, which is geared toward analytical, massively parallel processing workloads.

Azure SQL Database Reserved Capacity

Last week I posted about Azure Reserved VM Instances, where you can save some money in Azure. Another similar way to save is with Azure SQL Database Reserved Capacity. With this you can save up to 33% compared to license-included pricing by pre-buying SQL Database vCores for a 1- or 3-year term.

This can be applied to a single subscription or shared across your enrollments, so you can control how many subscriptions can use the benefit, as well as how the reservation is applied to the specific subscriptions you choose.

Scoping the reservation to a single subscription applies it to the SQL Database resources within that subscription. A reservation with a shared scope can be shared across subscriptions in the enrollment, and there is some flexibility involved, such as with Managed Instances, where you can scale up/down.

Some other points about SQL Database Reserved Capacity I’d like to share:

  • It provides vCore size flexibility. You can scale up/down within a performance tier and region without impacting your reserved capacity pricing; just note that you must stay within the same performance tier and the same region.
  • You can temporarily move your hot databases between pools and single databases as part of your normal operations; again, within the same region and performance tier without losing the reserved capacity benefit.
  • You can keep an unapplied buffer in your reservation, so you can effectively manage performance spikes without exceeding your budget. Just another way to keep an eye on your Azure spend.
  • You can stack savings, so with the Azure Hybrid benefit licensing, you can save up to an additional 55%, bringing you to a total savings of over 80% by stacking these benefits.
  • With the hybrid benefit for Enterprise Edition, customers using Software Assurance can use 4 cores in the cloud for every one core they've purchased.

A couple things to note:

    • It cannot be applied to an MSDN subscription or a non-pay-as-you-go subscription; so basically, it applies to Enterprise and pay-as-you-go subscriptions.
    • Currently it only applies to single databases and elastic pools. Managed Instances are still in preview; when they reach GA (by the end of 2018 as of this post), reserved capacity will cover Managed Instances as well.

For questions about how this licensing works, contact your Microsoft rep.

New Options for SQL 2008 End of Support

If you're using SQL Server 2008 or 2008 R2, you need to be aware that extended support for those versions ends on July 9, 2019. This means the end of regular security updates, which leads to more vulnerabilities, no further software updates, and out-of-compliance risks. As such, here are some new options for SQL 2008 end of support:

The best option would be either a migration or an upgrade, but Microsoft has some options in place to help people out, as they understand this can be easier said than done when you have applications that also need to be upgraded and you must figure out how best to handle that.

That being said, upgrading provides better performance, efficiency, security features and updates, as well as new technology features and capabilities within the whole stack of SQL products (SSIS, SSRS, SSAS).
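Before planning anything, it helps to confirm exactly which instances are still on 2008 or 2008 R2. A quick check like this on each instance tells you (product version 10.0.x is 2008, 10.50.x is 2008 R2):

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- 10.0.x = 2008, 10.50.x = 2008 R2
       SERVERPROPERTY('ProductLevel')   AS ServicePackLevel,
       SERVERPROPERTY('Edition')        AS Edition;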

Other Options

Here are some options that Microsoft is offering to help with the end of support of 2008/2008R2:

    • First, they are going to make extended security updates available for free in Azure for 2008/2008 R2 for 3 more years. So, if you simply move your workload to an IaaS VM in Azure, you'll be supported without requiring application changes. You'll have to pay for the virtual machine costs, but it's still a good deal to get you started.
    • You can migrate your workloads to Managed Instances, which will be in GA by the end of 2018. Managed Instance offers near-complete compatibility with on-premises SQL Server, so most applications can start the transition up into Azure without major changes.
    • You can take advantage of the Azure hybrid licensing model to migrate to save money on licensing. With this you can save up to 55% on some of your PaaS SQL Server costs, but only if you have Enterprise Edition and Software Assurance.
    • For on-premises servers that need more time to upgrade, you'll be able to purchase extended security update plans covering up to 3 years past the July 2019 date. So, if you're struggling to get an application upgraded and validated, they'll extend support for a fee. Again, this is for customers with Software Assurance or subscription licenses under an Enterprise Agreement. These can be purchased annually and can cover only the servers that need the updates.
    • Extended security updates will also be available for purchase as they get closer to the end of support, for both SQL Server and Windows Server.

Again, the first choice would be to upgrade or migrate those databases and move to Azure, but there are some challenges with doing so; if that isn't possible for you yet, these are some great ways to extend your support.

How and When to Scale Up/Out Using Azure Analysis Services

Some of you may not know when or how to scale up your queries or scale out your processing in Azure Analysis Services. Today I'd like to help with understanding when and how. First, you need to decide which tier you should be using. You can do that by looking at the QPUs (Query Processing Units) each tier offers in Azure. Here's a quick breakdown:

  • Developer Tier – gives you up to 20 QPUs
  • Basic Tier – is a mid-scale tier, not meant for heavy loads
  • Standard Tier (currently the highest available) – allows you more capability and flexibility

Let's start with when to scale up your queries. You need to scale up when your reports are slow, for example when you're reporting out of Power BI and the throughput isn't meeting your needs. What you're doing when scaling up is adding more resources; the QPU is a combination of your CPU, memory and other factors like the number of users.

Memory checks are straightforward. You run the metrics in the Azure portal and you can see what your memory usage is and whether your memory limit or hard memory limit settings are being saturated. If so, you need to either upgrade your tier or adjust the level within your current tier.

CPU bottlenecks are a bit tougher to figure out. You can get an idea by watching your QPU metrics and the logs within the Azure portal to see if you're saturating them. Then you want to watch your processing pool job queue length and your processing pool busy non-I/O threads. This should give you an idea of how it's performing.

For the most part, you’re going to want to scale up when the processing engine is taking too long to process the data to build your models.

Next up, scaling out. You'll want to scale out if you're having problems with report responsiveness because the reporting demand is saturating what you currently have available. Typically, in cases with a large number of users, you can fix this by scaling out and adding more nodes.

You can add up to 7 additional query replicas. These are read-only replicas that you can report off; the processing is handled on the primary instance of Azure Analysis Services, while queries are distributed across the query replicas. Hence, processing does not affect the responsiveness of the reports.

Once the model processing is separated from the query engine, you can measure performance by watching Log Analytics and the query processing units to see how they're performing. If you're still saturating those, you'll need to re-evaluate whether you need additional QPUs or to upgrade your tier.

Something to keep in mind is that once you've processed your data, you must synchronize it across all of those query replicas. So, if you're going to be processing data throughout the day, it's a good idea not only to run that processing, but also to schedule the synchronizations strategically as well.

Also important to know is that scale-out does require the Standard tier; Basic and Developer will not work for this purpose. There are some interesting resources out there that allow you to auto-scale on a schedule using a PowerShell runbook. It uses your Azure Automation account to schedule when it's going to scale up or out based on the needs of the environment. For example, if you know that on Monday mornings you're going to need additional processing power to run your queries efficiently, you'll want to set up a schedule for that time and then scale it back afterwards.

Another note is that you can scale up to a higher tier, but you cannot automatically scale back down to a lower tier from a script. Still, this ability does allow you to be prepared for additional requirements in that environment.

I hope this helped with questions you have about scaling up and out.

 

Azure Common Data Services

What do you know about Azure Common Data Services? Today I'd like to talk about this product for apps, which was recently reworked by Microsoft to expand upon the product's vision. Common Data Services is an Azure-based business application platform that enables you to easily build and extend applications with your customers' business data.

Common Data Services helps you bring together your data from across the Dynamics 365 suite (CRM, AX, NAV, GP) and use this common data service to more easily extract data rather than having to get into the core of those applications. It also allows you to focus on building and delivering the apps you want, along with the insights and process automation that will help you run more efficiently. Plus it integrates nicely with PowerApps, Power BI and Microsoft Flow.

Some other key things:

  • If you want to build Power BI reports from your Dynamics 365 CRM data, there are pre-canned entities provided by Microsoft.
  • Data within the Common Data Services (CDS) is stored within a set of entities. An entity is just a set of records that’s used to store data, similar to how a table stores data within a database.
  • CDS should be thought of as a managed database service. You don't have to create indexes or do any kind of database tuning; you're not managing a database server as you would with SQL Server or a data warehouse. It's designed to be somewhat of a centralized data repository from which you can report or do further work.
  • PowerApps is quickly becoming a good replacement for things like Microsoft Access, given the functionality and feature sets it comes with. A common use for PowerApps is extending that data rather than having to dig into the underlying platform.
  • This technology is easy to use, to share and to secure. You set up your user account as you would with Azure Services, giving specific permissions/access based on the user.
  • It gives you the metadata you need based on that data and you can specify what kind of field or column you’re working with within that entity.
  • It gives you logic and validation capabilities; you can create business process rules and data flows from entity to entity, or from an app to an entity to PowerApps.
  • You can create workflows that automate business processes, such as data cleansing or record updates; these workflows can run in the background without having to be managed manually.
  • It has good connectivity with Excel, which makes it user-friendly for people comfortable with that platform.
  • For power users, there’s an SDK available for developers, which allows you to extend the product and build some cool custom apps.

I don’t think of this as a replacement for Azure SQL DW or DB but it does give you the capability to have table-based data in the cloud that has some nice hooks into the Dynamics 365 space, as well as outputting to PowerApps and Power BI.

What is Azure Cosmos DB?

Are you familiar with Azure Cosmos DB? Cosmos DB is Microsoft’s globally distributed, multi-model database. With the click of a button, it allows you to elastically and independently scale throughput and storage across any number of Azure’s geographic regions, so you can put the data where your customers are.

Cosmos DB has custom-built APIs that let you work with your data through multiple models and protocols, such as SQL, MongoDB and Azure Table storage, and it offers 5 consistency models. It also offers comprehensive Service Level Agreements (SLAs) with money-back guarantees for availability (99.99% to be exact), latency, consistency and throughput; a big deal when you need to serve your customers at optimum performance.

Cosmos DB is a great option for many different use cases:

  • Companies that are doing IoT and telematics. Cosmos DB can ingest huge bursts of data, and process and analyze that data in near real-time. Then it will automatically archive all the data it ingests.
  • Retail and Marketing. Take an auto parts product catalog, for example, with tons of parts in the catalog, each with its own properties (some unique and some shared across parts). The next year, new vehicles or new part models come out, with some similar and some different properties. All that data adds up very quickly. Cosmos DB offers a very flexible, hierarchical schema that can easily accommodate the data as things change.
  • Gaming Industry. Games like Halo 5 from Microsoft are built on the Cosmos DB platform, because they need performance that is quickly and dynamically scalable. You've got things like millisecond read times, which avoid any lag in gameplay. You can index player-related data, and a social graph is easily implemented with the flexible schema for all the social aspects of gaming.

Azure Cosmos DB ensures that your data gets there and gets there fast, with a wealth of features and benefits to make your life easier. And it’s easy to set up and manage.

 

Top 5 Takeaways from the Microsoft ICA Boot Camp

I was a recent attendee at the Microsoft International Cloud Architect Boot Camp, where I had the opportunity to participate in hands-on sessions, working closely with Microsoft teams and specialists, as well as other Microsoft Partners. This boot camp contained exclusive content that Pragmatic Works gets access to as a partner and as a preferred service within the Microsoft stack.

Here, I’d like to share my top 5 takeaways from this event:

1. Commitment to Security – As a cloud solution architect, I'm asked many questions about security and Microsoft Azure. One thing that amazed me was the commitment that Microsoft has made to security. They spend over a billion dollars each year on security to protect against threats. Microsoft is also the #1 attack target in the world. They are truly committed to making sure that your data and services are secure.

2. Security Certifications – Microsoft has passed over 70 regulatory and government certifications when it comes to security and standardized processes. Their second-place competitor, AWS, has only completed 44 of these certifications. Getting these certifications and adhering to certain security and regulatory standards can be expensive, but there is a significant benefit for enterprise, government and small/medium-sized businesses.

3. Right-sizing Their Environment – This can be a challenge for many companies. Microsoft's internal teams have gone completely to Azure and are managing their platforms within Azure for SQL databases, virtual machines and all the other services Azure offers. By doing some specific right-sizing and keeping watch on what's offered, they lowered their workloads and kept their CPU utilization at the 95th percentile, and more importantly, they were able to cut down on spending for their internal needs – to the tune of over 3 million dollars a month!

4. Differentiators from AWS – AWS is currently the #1 cloud platform as far as revenue and volume. But Microsoft is quickly catching up, and they've identified several differentiators from AWS. Some key services that have been slow to arrive, such as Azure Recovery Zones, will be released to general availability by the end of 2018. Microsoft does not see any other differentiators that will allow AWS to continue to hold that lead.

5. Connections/Partnerships – Having Office 365, Dynamics 365, and the Skype and LinkedIn connections, as well as the commitments to partners and ISVs, gives Microsoft a competitive advantage over AWS in what their ecosystem looks like. A common complaint is that AWS doesn't work well with, or cater to, partners, leaving them to figure it out themselves.

Top 10 Best Practices about SSIS and SSRS I’ve learned the hard way

Over the past 7 or 8 years, I’ve gone from “0 to 60” when it comes to database design, development, ETL and BI.  Most of the skills I’ve learned were a result of mistakes I’ve made, and I’ve been mostly self-taught with the exception of some more recent “formalized” learning with programs from Pragmatic Works.  As I’ve gotten more and more satisfaction from the process, I’ve gone on and started working towards my MCSA (2 of 3 complete) in SQL 2012, as well as speaking at SQL Saturdays and local user groups in New England.  It’s become one of the most rewarding, exciting and challenging aspects of my career.  As a result, I’ve posted some blog articles about some of the challenges I’ve overcome, though not frequently enough, and attempted to become more active in some SQL forums.  The list below is far from complete when it comes to all of the best practices I’ve learned over the years; however, many of these lessons and best practices have really helped me to stay organized when it comes to good BI architecture.  I hope to provide at least one item that will benefit a newbie or even a seasoned pro.  So, without further ado, here are the top 10 best practices I’ve learned the hard way…

  1. Use Source Control
    1. For anyone who was a developer in their past life, or is one now, this is a no-brainer, no-alternative best practice.  In my case, because I come from a management and systems background, I’ve had to learn this the hard way.  If this is your first foray into development, get ready, because you’re in for some mistakes, and you’re going to delete or change some code you really wish you didn’t.  Whether it be for reference purposes on something you want to change, or something you do by accident, you’re going to need that code you just got rid of yesterday, and we both know you didn’t back up your Visual Studio jobs… Hence, source control.  GitHub and Microsoft offer great solutions for Visual Studio, and Redgate offers a great solution for SSMS.  I highly recommend checking them out and using the tools!  There are some other options out there that are free, or will save your code to local storage locations, but the cloud is there for a reason, and many of us are on the go, so having it available from all locations is very helpful.
  2. Standardize Transform Names
    1. Leaving the default names for the OLE DB connector or the Execute SQL Task seems like something very silly a person might do because it’s so easy to label them for what they actually are, but I have to admit here, I’m guilty of doing so.  I’ve found myself in situations where I’ve thrown together a quick ETL package for the purpose of testing, or further work on the database side, and then I’ve forgotten to go back and fix them.  Fast-forward 6 months, and I find myself saying, “I think I had a package like that once before”, only to find it, open it, and not have a clue what it’s actually doing.  This of course requires me to go through each component and refresh my memory on what I did.  Truth be told, in this day of cheap resources, memory, etc., there is absolutely no reason not to give all the details needed within the name of the tool being used. Don’t be lazy, you never know when it might bite you!
  3. Document inside your code and ETL workflow
    1. If you’re using the script transforms, it opens a pretty standard development window anyone familiar with Visual Studio will recognize.  As with any development, good code comes with good documentation.  Use it!  Your successors, if not you yourself, will be very appreciative down the road.  Name your functions appropriately, and explain what you’re doing throughout the code.  Further, as you build your workflows, you have the ability to document what each step of the process is doing, so use it!  This also goes back to point 2.  With standardized names for your transforms alongside documenting the workflow as you go, it paints a very nice picture of what your workflow is doing.
  4. Set up detailed alerts for failures
    1. The traditional workflows in SSIS allow you to create mail notifications for successful and unsuccessful steps within the workflow.  Of course, depending on how your packages are being run, you could get the same type of notification directly from the SQL Server Agent job that runs the package; however, why use SQL Agent alerts that only tell you “Job A” failed, with no real useful information, when you can have your package tell you exactly which transform or component failed and what the error was when it failed?  There are some limitations to the canned SSIS mail component as to the methods you can use to send an email, but there are also some workarounds, like the one here on Stack Overflow that shows how to use the script task to connect to and send mail through Gmail (and see the alerting sketch after this list for another simple option).  Either way, there is plenty of functionality that will help you be informed of exactly what is happening inside a job and where there are warnings and errors for each component.  Taking the time to do this is much better than getting a phone call with complaints about data not being updated properly!
  5. Standardize File and Folder Names and Locations
    1. Ok, ok, I know this is getting a bit redundant… but remember, these are the mistakes I made as a newb, and I really want to help you out as you start to use the software more and more, and get more and more complex with workflows.  This one was a really big one for me.  Because I do all of my work in Visual Studio, and all of the BI jobs look the same when it comes to the file level, I really needed to be able to show the separation between my ETL jobs, SSRS jobs and SSAS jobs.  This also helped me out with my source control structure as well.  I separated SSIS, SSRS, and SSAS jobs (you can even go as far as separating Tabular and Multidimensional if necessary) into separate folders, then labeled each type with “ETL” or “reports” as part of the file name.  It saves me time when I’m opening a recent job because typically I’m working on tweaking the ETL at the same time as developing the reports to get just the right mix.
  6. Have a “Report Request Form”
    1. When you first start writing reports, it’s really exciting.  You’re delivering value to the business and really helping people do their jobs, especially when you start transforming and aggregating data… but soon, you become more and more relied upon for those reports, and it would seem no two reports are alike.  A common best practice for people who are spitting out report after report is to have a report requirements request form like the one here from SQL Chick.  This request form is pretty in-depth, so tweak as necessary, but it will really help you to prioritize and design reports going forward.
  7. Experiment to fine tune and improve performance
    1. So, this best practice item is really a whole blog post unto itself, but it’s something to be aware of.  Just as a quick for instance, the “OLE DB Command” transform is a great tool in theory; however, because of some of the nuances of the tool, if you’re using a large dataset, it can be significantly slower than using the “Execute SQL Task”.  The only way to know this is to compare them side by side, which I had to do, and I found the SQL Task took about 3 minutes while the OLE DB Command took about 45 minutes.  Moral of the story: if something seems to take a long time, there may be, and most likely is, a better way to do it, so go out and play!
  8. Set your recovery model to Simple or Bulk-logged for staging databases
    1. Ok, so normally I would stress the importance of backups (always do them automatically, including logs, and before you change anything), but that would be more of a blog post about maintenance and configuration; this one is focused on ETL and reporting, so let’s talk about the effect loading lots of data has on your database, or better yet, check out this article that has a very good description of the 3 different recovery models.  The basics are just this… if it’s a staging table or database, you can probably get away with the simple recovery model.  If it’s a critical database but you’re doing lots of data loads, the bulk-logged model minimally logs SELECT INTO, bcp, INSERT SELECT and BULK INSERT operations so your transaction log doesn’t get huge, fast (see the recovery model sketch after this list).
  9. Temp Tables will likely make your ETL run faster than staging tables
    1. I can’t really take credit for this little nugget here.  When doing a project with one of my coding buddies, he came into my office one day and said “Hey, did you know that using temp tables in SQL will allow you to use multiple processors on the server at one time?”  I did not know this, and man did it make a difference.  What a huge performance boost it was for my project (there’s a small example of the pattern after this list).  Now, like everything else, there are exceptions to the rule.  Some guys who are much smarter than me had this discussion on a forum that sheds some more light on the topic, and here are some more scenarios where it might not make sense.  I think it’s good information overall, and the more information you have, the better off you’ll be when you’re designing your BI architecture.
  10. Have a development and testing platform
    1. As with some of the other best practices listed above, for some, this is a no-brainer.  For others… we have no brains when it comes to this stuff and we need to have it beaten into our heads (I’m the latter if that wasn’t already clear).  I can’t stress enough how much this will save you.  You should never, ever, be doing development on a production environment.  There are just too many things that can go wrong.  Even those “quick” or “minor” changes can really cause a calamity and ruin your day quickly.  Now, there can be challenges if you don’t have a proper production/dev/test environment at your office or your client’s location; however, with SQL Server Developer Edition now being a free tool, and PCs these days having tons of resources, you should be able to do your testing on even the simplest of computers and get a warm and fuzzy that you’re going to be able to deploy this latest package, report, or code successfully.  True performance-tuning comparisons against a beefy production server might not be possible, but you should be able to establish a baseline and have a general idea of how performance will be for various configurations.
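To put number 4 into practice, here's one simple alerting sketch: an Execute SQL Task in the package's OnError event handler (or a SQL Agent job step) that calls Database Mail. The profile name, recipient and message text below are made up; in a real package you'd feed the subject and body from SSIS system variables like System::PackageName, System::SourceName and System::ErrorDescription through parameter mapping or an expression.

-- Requires Database Mail to be configured with a profile; names here are hypothetical.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'ETL Alerts',
    @recipients   = N'bi-team@example.com',
    @subject      = N'SSIS failure in MyNightlyLoad',            -- build from System::PackageName
    @body         = N'Task DimCustomer Load failed: <error>';    -- build from System::SourceName + System::ErrorDescription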
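For number 8, the recovery model switch itself is a one-liner per database. A rough sketch with made-up database names and a made-up backup path, assuming you're allowed to change recovery models in your environment:

-- Staging database that can simply be reloaded from source if something goes wrong
ALTER DATABASE StagingDB SET RECOVERY SIMPLE;

-- Critical database with heavy bulk loads: switch to bulk-logged around the load window
ALTER DATABASE SalesDW SET RECOVERY BULK_LOGGED;
-- ... run the BULK INSERT / SELECT INTO loads here ...
ALTER DATABASE SalesDW SET RECOVERY FULL;
-- Take a log backup afterwards so the log chain and point-in-time recovery pick back up
BACKUP LOG SalesDW TO DISK = N'D:\Backups\SalesDW_after_load.trn';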
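And for number 9, a small example of the temp table pattern (table and column names are invented): pull the incremental data into a temp table, index it, then insert into the real target.

-- Pull today's raw rows into a temp table (SELECT INTO tempdb is cheap and can run in parallel)
SELECT r.CustomerID, r.OrderDate, r.Amount
INTO #OrdersStage
FROM dbo.RawOrders AS r
WHERE r.LoadDate >= CAST(GETDATE() AS date);

-- An index on the temp table helps the insert/join that follows
CREATE CLUSTERED INDEX IX_OrdersStage ON #OrdersStage (CustomerID, OrderDate);

INSERT INTO dbo.FactOrders (CustomerID, OrderDate, Amount)
SELECT CustomerID, OrderDate, Amount
FROM #OrdersStage;

DROP TABLE #OrdersStage;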

Well, that’s it for this post.  I really hope I was able to provide even the slightest hint of something new here, because that’s always my goal.  If you have questions, you can follow me or send me a message on Twitter @bizdataviz, as I’m always happy to hear how I can write better blog posts to help people who are just getting their feet wet.

Power View Date Filter Troubles

Now that I’m doing more development of dashboards in the Power View interface, I thought it might be helpful to post about some of the date “gotchas” I’ve run into with the development interface.  Power View is very nice about handling date filters if you’re able to specify a date to use as a start point, an end point, or two dates to show data between; however, if you’re looking to determine whether a date falls in a range like “This Week”, “This Month” or “This Quarter”, you’ll have quite a bit more trouble.  The workaround I’ve found to solve this issue is to create a new column that returns a boolean (true/false) answer.  Then, within your Power View filtering, you are able to choose “IsPastWeek” = True, etc.  For example, in your formula bar (or whatever tool you’re using: Excel, Power BI, or SSAS Tabular), you can add a calculated column like the one below.
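Here Table[Date] is just a placeholder for whatever your date column is called, so adjust it to your own model:

IsPastWeek =if(Table[Date]>NOW()-7,TRUE,FALSE)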

The above formula, simply put, says: if the date is later than today minus 7 days, then return true; otherwise, return false. Below is a view of the filters in my Power View dashboard:

Setting this filter makes it easier to report graphically on specific dates or time periods.  Also, the ability to filter separate objects within the same canvas allows us to easily compare similar time periods side by side.

Some other options for comparing date ranges are:

IsCurrentMonth =if(MONTH(Table[Date])=MONTH(NOW()),TRUE,FALSE)

IsCurrentYear =if(YEAR(Table[Date])=YEAR(NOW()),TRUE,FALSE)

And many more can be found here: Time Intelligence in Power Pivot in Excel

SSMS Database Diagram Crash

I was attempting to create a database diagram with the SSMS 2012 diagramming tool today and ran into the following issue:

“Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Microsoft.VisualStudio.OLE.Interop)”

with the details:

===================================

Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Microsoft.VisualStudio.OLE.Interop)

——————————

Program Location:

at Microsoft.VisualStudio.OLE.Interop.IOleCommandTarget.QueryStatus(Guid& pguidCmdGroup, UInt32 cCmds, OLECMD[] prgCmds, IntPtr pCmdText)

at Microsoft.VisualStudio.Platform.WindowManagement.DocumentObjectSite.QueryStatus(Guid& pguidCmdGroup, UInt32 cCmds, OLECMD[] prgCmds, IntPtr pCmdText)

at Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.QueryStatus(Guid& pguidCmdGroup, UInt32 cCmds, OLECMD[] prgCmds, IntPtr pCmdText)

at Microsoft.Internal.VisualStudio.Shell.Interop.IVsTrackSelectionExPrivate.Register()

at Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.ConnectSelectionContext()

at Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.Activate()

at Microsoft.VisualStudio.Platform.WindowManagement.WindowManagerService.viewManager_ActiveViewChanged(Object sender, ActiveViewChangedEventArgs e)

at System.EventHandler`1.Invoke(Object sender, TEventArgs e)

at Microsoft.VisualStudio.PlatformUI.ExtensionMethods.RaiseEvent[TEventArgs](EventHandler`1 eventHandler, Object source, TEventArgs args)

at Microsoft.VisualStudio.PlatformUI.Shell.ViewManager.SetActiveView(View view, ActivationType type)

at Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.ShowInternal(ShowFlags showFlags)

at Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.<Show>b__26()

at Microsoft.VisualStudio.ErrorHandler.CallWithCOMConvention(Func`1 method)

at Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.Show()

at Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.MarshalingWindowFrame.<Microsoft.VisualStudio.Shell.Interop.IVsWindowFrame.Show>b__7a()

at Microsoft.VisualStudio.Shell.ThreadHelper.Invoke[TResult](Func`1 method)

at Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.MarshalingWindowFrame.Microsoft.VisualStudio.Shell.Interop.IVsWindowFrame.Show()

at Microsoft.SqlServer.Management.UI.VSIntegration.Editors.VirtualProject.CreateDesigner(Urn origUrn, DocumentType editorType, DocumentOptions aeOptions, IManagedConnection con, String fileName)

at Microsoft.SqlServer.Management.UI.VSIntegration.Editors.VirtualProject.Microsoft.SqlServer.Management.UI.VSIntegration.Editors.ISqlVirtualProject.CreateDesigner(Urn origUrn, DocumentType editorType, DocumentOptions aeOptions, IManagedConnection con, String fileName)

at Microsoft.SqlServer.Management.UI.VSIntegration.Editors.ISqlVirtualProject.CreateDesigner(Urn origUrn, DocumentType editorType, DocumentOptions aeOptions, IManagedConnection con, String fileName)

at Microsoft.SqlServer.Management.UI.VSIntegration.Editors.VsDocumentMenuItem.CreateDesignerWindow(IManagedConnection mc, DocumentOptions options)

at Microsoft.SqlServer.Management.UI.VSIntegration.Editors.VsDocumentMenuItem.InvokeDesigner(IManagedConnection connection)

at Microsoft.SqlServer.Management.UI.VSIntegration.Editors.VsDocumentMenuItem.Invoke()

at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ToolsMenuItemBase.MenuInvokedHandler(Object sender, EventArgs args)

So, after some digging, I found a post that suggested I upgrade my tools to SSMS 2014, which yielded the same error and did not fix the problem.

The next suggestion I found recommended replacing the DLLs in the following locations:

C:\Program Files (x86)\Common Files\microsoft shared\Visual Database Tools\

and

C:\Program Files (x86)\Common Files\microsoft shared\MSDesigners8\

Which also did not work.

If anyone knows of a better workaround, I’d like some input!