SQL Server 2014 pushes the pedal to the metal

SQL Server 2014 is a significant release with two overarching themes: cloud and speed -- or, to be specific, Azure integrations and in-memory OLTP (online transaction processing). Truth be told, I'm more excited about the speed features than the cloud stuff, but I also understand there is a growing portion of the customer base that is heading to cloud-based operations, and these shops will find the cloud features useful.

For all SQL Server shops, I don't think we've seen a release this significant since SQL Server 2008 RTM. Looking back, you might remember that SQL Server 2008 brought Change Data Capture, data compression, and most important, PowerShell. Then SQL Server 2008 R2 brought us PowerPivot plug-ins for self-service BI, along with StreamInsight and Master Data Services. SQL Server 2012 gave us Availability Groups, columnstore indexes, and several T-SQL enhancements. You might classify SQL Server 2008 as a data warehouse release, SQL Server 2008 R2 as a BI release, and SQL Server 2012 as an HA release.

SQL Server 2014 brings enhancements to a number of these areas, including Availability Groups and columnstore indexes, and others, such as security (encrypted backups). The bulk of this review will focus on the cloud and performance features.

Your database on Azure

SQL Server 2014 has two ways to work with Azure storage. First, you can back up a database to Azure blob storage. While this feature (officially called SQL Server Backup to URL) was introduced in 2012, it gets a boost in SQL Server 2014 with SQL Server Managed Backup to Windows Azure -- or simply Managed Backup, since it only works with Azure at this point. Managed Backup is well named because it literally manages your backups for you. SQL Server looks at your retention period and your transaction workload, figures out when the best time would be to take the backups, and just handles it for you. You can set it up at the instance level or the database level, so you have decent control over which databases get backed up to the cloud.
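
To give you a feel for the plumbing, here's a rough sketch. The storage account, container, and database names are placeholders, and the exact parameter list of the Managed Backup procedure may differ slightly from what's shown, so treat this as a sketch rather than a recipe:

-- Backup to URL: the credential stores the storage account name and access key.
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage account access key>';

BACKUP DATABASE SalesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;

-- Managed Backup is configured through procedures in msdb's smart_admin schema.
EXEC msdb.smart_admin.sp_set_db_backup
     @database_name = 'SalesDB',
     @credential_name = 'AzureBackupCredential',
     @retention_days = 14,
     @enable_backup = 1;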

While Managed Backup can help simplify your backup strategy, especially for smaller shops without a full-time DBA, there are a couple of caveats here. First, you have to be careful about the size of the databases you're pushing across the Internet. Your success will depend largely on your bandwidth and on any number of factors beyond your control once the traffic leaves your network, so expect the occasional intermittent performance problem.

Second, you need to keep an eye on your usage. If your databases get too big or if someone drops a large database on the server and it gets backed up to the cloud, you could blow up your service agreement and wind up paying a lot more for storage than you intended. That said, for smaller databases, Managed Backup is a beautiful replacement for shipping tapes offsite.

Backups aren't the only role for Azure to play with SQL Server. You can also define a Windows Azure VM as a secondary replica to an Availability Group. This can be a great option for shops that need a reliable HA solution but don't have a second data center to fall back on. SQL Server even provides you with a wizard to aid in setup. This feature comes with some stiff prereqs, such as a site-to-site VPN between your on-premise subnet and Azure, so make sure your infrastructure team can make it happen before you bet the farm on it.
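
If you script it rather than lean on the wizard, adding the Azure replica looks something like this. The availability group, server, and endpoint names are placeholders, and I'm assuming the VM is already domain-joined across that VPN with its database mirroring endpoint listening on port 5022:

ALTER AVAILABILITY GROUP [SalesAG]
ADD REPLICA ON N'AZUREVM01'
WITH (
    ENDPOINT_URL = N'TCP://AZUREVM01.corp.contoso.com:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,  -- asynchronous commit suits a cross-WAN replica
    FAILOVER_MODE = MANUAL
);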

One more Azure feature worth knowing about is SQL Server Data Files in Windows Azure. The names are getting longer, aren't they? I'm far less excited about this feature, which allows you to create an on-premise database and store its files on Azure blob storage, because it's so open to abuse (by shops that will be sorely tempted to run huge databases with heavy transaction workloads from the cloud), but I can see some benefits in certain situations.

For example, when you create a database on a local server, you normally store the database files on either local storage or a SAN. Data Files in Windows Azure allows you to instead put the files in a URL-accessible location on Azure storage. This could be beneficial if you wind up doing a migration from one server to another because you simply have to attach the database files to the new server -- there's no data movement involved. That means you don't have to worry about restoring a database after a failure, or copy files across the network, or wait for four hours for your tapes to be brought back on-site. The blob storage is always there.
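
The setup itself is surprisingly mundane. You create a credential named after the container URL (using a shared access signature) and then point the database files at blob storage; the storage account, container, and database names below are placeholders:

-- The credential name must match the container URL.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<shared access signature token>';

CREATE DATABASE HybridDB
ON (NAME = HybridDB_data,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/HybridDB_data.mdf')
LOG ON (NAME = HybridDB_log,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/HybridDB_log.ldf');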

For some shops, Data Files in Windows Azure could be a critical piece in their disaster recovery strategy. It is a true hybrid scenario where you manage every aspect of your SQL Server on-premise, but the actual data resides in the cloud. It also provides you with high availability that you don't have to manage. Unfortunately, it also puts your database at the mercy of the ebbs and flows of the Internet. Troubleshooting database performance is more complicated because you don't know if there's an issue with the local server, the indexes, the queries, or the Internet. We've had the ability to attach database files across local networks for years now, and it's been discouraged for performance reasons. I have a hard time believing that adding the open Internet to the equation is suddenly a good idea.

Feeding the need for speed

These days, even the smallest of online startups is instantly thrown into a global marketplace and could find itself inundated with server requests. When it comes to performance, Microsoft has risen to the challenge with some nice features for this release.

We'll start with what I consider to be the flagship feature of SQL Server 2014, Hekaton. It's Microsoft's flavor of memory-optimized tables: All you have to do is define a table as being memory-optimized, and the engine will take care of the rest. And unlike other in-memory database solutions, Hekaton allows you to bring individual tables into memory instead of the entire database. This follows a more realistic business scenario too, because you typically have only a handful of tables that need this kind of boost, and forcing the entire database into memory for the sake of a few tables isn't a good use of resources.
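
Here's a bare-bones example of what that looks like; the database, filegroup, and table names are mine, not anything prescribed. The database needs a MEMORY_OPTIMIZED_DATA filegroup before you can declare a table memory-optimized:

ALTER DATABASE SalesDB
ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
ADD FILE (NAME = 'SalesDB_mod', FILENAME = 'C:\Data\SalesDB_mod')
TO FILEGROUP SalesDB_mod;

-- A durable memory-optimized table with a hash index on the primary key.
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);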

Hekaton achieves its turbo boost through a combination of optimized algorithms, optimistic concurrency, eliminating physical locks and latches, and oh yeah, storing the table in memory. If you have stored procedures that hit only the memory-optimized table, you can convert them into Hekaton procedures to get even more speed. This compiles them into native C code, which optimizes their efficiency.
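
A natively compiled procedure against that hypothetical table looks something like this; note the mandatory SCHEMABINDING, the EXECUTE AS clause, and the atomic block:

CREATE PROCEDURE dbo.usp_AddToCart
    @CartId INT, @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, CustomerId, CreatedDate)
    VALUES (@CartId, @CustomerId, SYSUTCDATETIME());
END;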

There are limitations to the types of objects that can be converted, so you should read up and test your code before going whole hog with this solution. As a quick example, optimized stored procedures can't contain cursors, subqueries, or even CTEs (common table expressions). Those are a few examples in a long list, so you need to do your homework; expect database and table requirements as well. Given that this is a brand new feature, I would expect these restrictions to diminish over time.

What kind of speeds are we talking about here? Will it be enough to make a conversion worthwhile? I've seen some pretty impressive demos, and Microsoft's site boasts a 10- to 30-fold performance boost. The exact speeds depend on many factors of course, but from what I've seen in the demos and in my own testing, these are realistic numbers. The truth on the ground is that if you have a highly transactional OLTP scenario and it meets the requirements for Hekaton, you'll fall in love.

SQL Server 2012 introduced columnstore indexes for dramatically faster data warehouse queries. I've personally seen queries that took several minutes with a traditional index reduced to subsecond times with the addition of a columnstore index. The problem was that columnstore indexes weren't updateable. In order to load data into the table, you had to drop the indexes first and re-create them afterward. SQL Server 2014 solves that problem with updateable clustered columnstore indexes -- nice job, Microsoft.
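
Converting a big fact table is about as simple as index creation gets. The table name here is a placeholder, and this assumes the table is a heap or that you've dropped its existing clustered index first:

CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;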

I always tell my junior DBAs that there are only two ways to resolve a bottleneck: Either reduce the workload or increase the throughput. While the previous features address the throughput, these next two features deal with the workload.

Resource Governor has finally gained physical I/O control. Disk is quite often the bottleneck on any given system because it's the one component with moving mechanical parts, and disks get overworked when some rogue query sucks up tons of extra I/O. In fact, I've seen warehouses brought down by queries issuing billions upon billions of I/Os. Now you can finally put those queries into their own resource pool and limit the amount of I/O per volume they're allowed. This means no more ridiculous queries taking up all of your precious disk resources, and reducing the I/O load on the disk system can have a tremendous impact on overall performance.
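
Here's a minimal sketch using placeholder names. MAX_IOPS_PER_VOLUME (and its MIN counterpart) is the new knob, and a classifier function, not shown here, would route the offending sessions into the workload group:

CREATE RESOURCE POOL ReportingPool
WITH (MAX_IOPS_PER_VOLUME = 500);

CREATE WORKLOAD GROUP ReportingGroup
USING ReportingPool;

ALTER RESOURCE GOVERNOR RECONFIGURE;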

Taking a shortcut

This next feature, called Delayed Durability, doesn't actually speed up the database, but it makes the database seem faster from the end-user point of view. Let's walk through a typical transaction (simplified) to illustrate this.

Let's say the end-user updates a record. The update first gets written to the log in memory, and the log entry gets written to the disk. After the log entry is written to the disk, the application is informed that the transaction is complete and the user can continue working. The actual update to the database happens behind the scenes at a later time. Now, with Delayed Durability, the application is informed that the transaction is complete before the log is written to the disk. This means the end-user doesn't have to wait for the transaction to be hardened to the much slower disk media before they can continue working. I've seen many systems where the log was the bottleneck, and Delayed Durability can definitely help address that.

But like I said, Delayed Durability is not reducing the workload at all. Everything that used to be done is still being done. The database engine just releases control back to the client sooner than it used to, so the end-user can get back to work faster. This feature is configurable as well -- you can control it at the database level, the transaction level, or even at the atomic block level (for an entire stored procedure). Nevertheless, be careful when considering this feature. If your system should fail before these transactions get hardened to disk, you will lose them. This is what durability means: The transaction is recoverable. If you delay the durability of your transactions to get better client response times, be aware that it's possible to lose some data.
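
The knobs look like this, using a placeholder database and table. ALLOWED lets individual transactions opt in, FORCED turns it on across the board, and a commit can request it explicitly:

ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 1;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);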

And finally, SQL Server 2014 brings a couple of security enhancements. Of course we now have backup encryption, which is one more nail in the coffin of the third-party backup products. The final nail will be object-level recovery. There are also a couple of new server-level permissions -- CONNECT ANY DATABASE and SELECT ALL USER SECURABLES -- that allow you to manage security much more easily than ever before, because you can assign these permissions once for all current and future databases. You no longer have to keep reassigning them as databases come and go.
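
Both are straightforward to use. The certificate and login names below are placeholders, and backup encryption assumes you've already created a database master key and a certificate in master:

BACKUP DATABASE SalesDB
TO DISK = 'D:\Backups\SalesDB.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);

GRANT CONNECT ANY DATABASE TO [CORP\MonitoringTeam];
GRANT SELECT ALL USER SECURABLES TO [CORP\MonitoringTeam];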

SQL Server 2014 is a solid release with a good mix of new features and enhancements to existing ones. While I'm personally not drawn to the cloud features, I'm absolutely thrilled with the work Microsoft has done on OLTP, and this story is sure to get better. Hekaton is brand new with this release, so you can expect that some of the restrictions will start melting away in subsequent releases.

I didn't even come close to mentioning all of the new features. For some good practical reading, I suggest looking up buffer pool extension, incremental statistics, and managing lock priority of online operations. All of these can have a significant impact on your system performance. However, it's also worth noting that Integration Services, Reporting Services, and replication didn't receive any enhancements in this release at all, and while there are some minor T-SQL enhancements, there's nothing that will change the way DBAs or developers do their jobs.

Still, SQL Server 2014 gives you plenty of ways to improve performance, and a few ways to leverage the Azure cloud for backups and high availability. SQL Server shops wrestling with these issues should take a close look at this release.
