Michael J. Swart

February 13, 2024

Modeling Resource Governor Behavior

Filed under: Miscelleaneous SQL,Technical Articles — Michael J. Swart @ 4:02 pm

For T-SQL Tuesday, Brent Ozar asked us to write about the most recent ticket we closed. I’m going to write a bit about the most recent project I wrapped up. Although this is a project and not a ticket, the story is recent and it really gives an idea of “what exactly it is that I do”, so I figure it’s a fair story.

I just finished up a project to consolidate servers by using Resource Governor.

The problem

How do we predict whether it’s safe to put workloads from two servers onto one?

We use Availability Groups to create readable secondary replicas (which I’ll call mirrors). The mirrors are used to offload reporting workloads. The mirrors are mostly bound by IOPS and the primaries are mostly bound by CPU, so I wondered “Is there any wiggle room that lets us consolidate these servers?”

Can we point the reporting workloads (queries) at the primary replica safely? To do that we’d have to use something like Resource Governor to throttle IO (MAX_IOPS_PER_VOLUME) because we don’t want to overwhelm the primary.

Some questions I want to answer:

  • What value should I use for MAX_IOPS_PER_VOLUME?
  • Is there a safe value at all?
  • If I consider any given threshold value X, how much longer will it take to generate reports?
  • Since we have dozens of mirrors, which servers can we decommission?

Think about that for a second. How would you answer these questions? What data would you want to collect to help you answer these questions?

First visualize the workload

We visualized the existing reporting workload (read operations) for the mirrors. For some of them, the answers were obvious. For example, look at the following graph.

The light traffic server (blue line) would never even notice if we applied a maximum threshold of 2000 IOPS. So that mirror is safe to throttle and point to the primary. Meanwhile, the heavy traffic server (orange line) could never do the same amount of work if we throttled it.

But what about a server with a workload like the following? It’s not as clear.

Next, model a throttled workload

On the assumption that the reads still needed to happen even when throttled, we wanted to know how long a stretch the reads would be saturated for. That is, if we throttle at 2000 IOPS, would the IO be saturated for longer than, say, 10 minutes?

Using Excel, I added three new calculated columns: work_to_be_done, throttled_work_done, and work_left_to_do. If there was any work left to do, it fed into the work to be done of the next row, like this:
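
Here’s roughly the same carry-over calculation sketched in T-SQL instead of Excel. It’s a minimal sketch that assumes a hypothetical table dbo.observed_io holding one row per minute of the captured workload and the count of IO operations issued in that minute (the table and column names are mine, not part of the original model):

DECLARE @ThresholdIOPS int = 2000;
DECLARE @CapacityPerMinute bigint = @ThresholdIOPS * 60; /* IO the volume is allowed per minute */
 
WITH numbered AS (
    SELECT minute_id, io_count,
           ROW_NUMBER() OVER (ORDER BY minute_id) AS rn
    FROM dbo.observed_io
),
model AS (
    /* first minute: nothing carried over */
    SELECT rn, minute_id,
           CAST(io_count AS bigint) AS work_to_be_done,
           CAST(CASE WHEN io_count < @CapacityPerMinute THEN io_count ELSE @CapacityPerMinute END AS bigint) AS throttled_work_done,
           CAST(CASE WHEN io_count < @CapacityPerMinute THEN 0 ELSE io_count - @CapacityPerMinute END AS bigint) AS work_left_to_do
    FROM numbered
    WHERE rn = 1
 
    UNION ALL
 
    /* every later minute: new work plus whatever was left over from the previous minute */
    SELECT n.rn, n.minute_id,
           n.io_count + m.work_left_to_do,
           CASE WHEN n.io_count + m.work_left_to_do < @CapacityPerMinute
                THEN n.io_count + m.work_left_to_do ELSE @CapacityPerMinute END,
           CASE WHEN n.io_count + m.work_left_to_do < @CapacityPerMinute
                THEN 0 ELSE n.io_count + m.work_left_to_do - @CapacityPerMinute END
    FROM numbered n
    JOIN model m ON n.rn = m.rn + 1
)
SELECT minute_id, work_to_be_done, throttled_work_done, work_left_to_do
FROM model
ORDER BY minute_id
OPTION (MAXRECURSION 0);
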

So now I can visualize what the “throttled” work would look like

After that, it wasn’t too hard to calculate the longest stretch of saturated IO as a kind of proxy for the maximum duration of report generation.

This model isn’t perfect. It’s inaccurate because we made a ton of assumptions, but it was useful. It helped us identify reports on mirrors that could be run on the primary replica.

In our case we were happily surprised. After we applied the RG settings to the various servers we deemed safe, the resulting behavior matched the model, and it let us consolidate the servers the way we wanted.

So that was fun.

This size and type of project pops up all the time for me. Not daily of course; the start-to-finish duration of this project is measured in months. Next up, I’m in the middle of trying to figure out how to maximize ONLINE-ness while using Standard Edition. Wish me luck.

December 15, 2023

A Quick SQL Server Puzzle About MIN_ACTIVE_ROWVERSION()

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 4:11 pm

MIN_ACTIVE_ROWVERSION() is a system function that returns the lowest active rowversion value in the current database. Its use then is very similar to @@DBTS.

In fact, the docs for MIN_ACTIVE_ROWVERSION() (currently) say:

If there are no active values in the database, MIN_ACTIVE_ROWVERSION() returns the same value as @@DBTS + 1.

Does it though? You may be tempted then to replace some of your @@DBTS expressions with expressions like this:

Broken example

/* This is not always equivalent to @@DBTS */
SELECT CAST(MIN_ACTIVE_ROWVERSION() - 1 AS rowversion);

Try to figure out why this is broken before reading further.

The problem

The problem occurs when the values get large. In fact you can reproduce this behavior with:

/* This may also give unexpected results */
SELECT CAST(0x017fffffff - 1 AS rowversion);
/* 0x000000007FFFFFFE? Where did that leading one go? */

But why?

The issue is in the expression MIN_ACTIVE_ROWVERSION() - 1. SQL Server will try to subtract an int from a binary(8). To do that, it converts only the last four bytes of the binary(8) value to an int. It does that happily without any errors or warnings, even if the first four bytes are not zeros.
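
Here’s a quick way to see that silent left-truncation on its own:

/* Converting a binary value wider than four bytes to int keeps only the rightmost four bytes */
SELECT CAST(0x0A00000001 AS int);
/* 1 -- the leading 0x0A byte is silently discarded */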

A fix

When we subtract, we want bigint arithmetic:

/* This gives the value we want*/
SELECT CAST(CAST(MIN_ACTIVE_ROWVERSION() AS BIGINT) - 1 AS rowversion);

There may be more elegant solutions.

October 5, 2023

Watch Out For This Use Case When Using Read Committed Snapshot Isolation

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 9:00 am

Takeaway: If you want to extract rows from a table periodically as part of an ETL operation and if you use Read Committed Snapshot Isolation (RCSI), be very careful or you may miss some rows.

David Rose thinks we were looking for "Mister Rose" not "missed rows".

Yesterday, Kendra Little talked a bit about Lost Updates under RCSI. It’s a minor issue that can pop up after turning on RCSI as the default behavior for the Read Committed isolation level. But she doesn’t want to dissuade you from considering the option and I agree with that advice.

In fact, even though we turned RCSI on years ago, by a bizarre coincidence, we only came across our first RCSI-related issue very recently. But it wasn’t update related. Instead, it had to do with an ETL process. To explain it better, consider this demo:

Set up a database called TestRCSI

CREATE DATABASE TestRCSI;
ALTER DATABASE TestRCSI SET READ_COMMITTED_SNAPSHOT ON;

Set up a table called LOGS

use TestRCSI;
 
CREATE TABLE LOGS (
    LogId INT IDENTITY PRIMARY KEY,
    Value CHAR(100) NOT NULL DEFAULT '12345'
);
 
INSERT LOGS DEFAULT VALUES;
INSERT LOGS DEFAULT VALUES;
INSERT LOGS DEFAULT VALUES;

Create a procedure to extract new rows

We want to extract rows from a table whose LogId is greater than any LogId we’ve already seen. That can be done with this procedure:

CREATE PROCEDURE s_FetchLogs ( @AfterLogId INT ) 
AS
    SELECT LogId, Value
    FROM LOGS
    WHERE LogId > @AfterLogId;
GO

That seems straightforward. Now every time you perform that ETL operation, just remember the largest LogId from the results. That value can be used the next time you call the procedure. Such a value is called a “watermark”.
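
A typical caller might look something like this. It’s just a sketch; the dbo.ETL_Watermark table (one row holding the last LogId we processed) is hypothetical:

DECLARE @LastSeen INT;
SELECT @LastSeen = LastLogId FROM dbo.ETL_Watermark;
 
CREATE TABLE #NewLogs (LogId INT PRIMARY KEY, Value CHAR(100) NOT NULL);
 
INSERT #NewLogs (LogId, Value)
EXEC s_FetchLogs @AfterLogId = @LastSeen;
 
/* ... process the rows in #NewLogs ... */
 
/* remember the largest LogId from the results for next time */
UPDATE dbo.ETL_Watermark
SET LastLogId = (SELECT ISNULL(MAX(LogId), @LastSeen) FROM #NewLogs);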

Multiple sessions doing INSERTs concurrently

Things can get a little tricky if we insert rows like this:
Session A:

    INSERT LOGS DEFAULT VALUES; /* e.g. LogId=4  */

Session B:

    BEGIN TRAN
    INSERT LOGS DEFAULT VALUES; /* e.g. LogId=5  */
    /* No commit or rollback, leave this transaction open */

Session A:

    INSERT LOGS DEFAULT VALUES; /* e.g. LogId=6  */
    EXEC s_FetchLogs @AfterLogId = 3;

You’ll see:

Results showing two rows with LogId=4 and LogId=6

And you may start to see what the issue is. Row 5 hasn’t been committed yet and if you’re wondering whether it will get picked up the next time the ETL is run, the answer is no. The max row in the previous results is 6, so the next call will look like this:

    EXEC s_FetchLogs @AfterLogId = 6;

It will leave the row with LogId = 5 behind entirely. This ETL process has missed a row.

What’s the deal?

It’s important to realize that there’s really no defect here. No isolation level guarantees the “sequentiality” or “contiguousness” of inserted values this way, and neither do any of the letters in ACID. But it’s still behavior that we want to understand and do something about.

Transactions do not really occur at a single point in time; they have beginnings and ends, and we can’t assume the duration of a transaction is zero. Single-statement transactions are no exception. The important point is that the time a row is created is not the same time as it’s committed. And when several rows are created by many sessions concurrently, the order that rows are created is not necessarily the order that they’re committed!

With any version of READ COMMITTED, the rows created by other sessions only become visible after they’re committed, and if the rows are not committed sequentially, they don’t become visible sequentially. This behavior is not particular to identity column values; it also applies to other ever-increasing values used the same way, such as SEQUENCE values, rowversion columns, or datetime columns populated with a default like GETUTCDATE().

So if:

  • columns like these are used as watermarks for an ETL strategy
  • and the table experiences concurrent inserts
  • and Read Committed Snapshot Isolation is enabled

then the process is vulnerable to this missed row issue.

This issue feels like some sort of phantom read problem, but it’s not exactly that. Something more interesting is going on. Rows are inserted into a table with the expectation that the column values always increase. When transactions are committed “out of order”, those rows become visible out of order. That expectation is broken, and that’s the issue.

Solutions (pessimistic locking)

If you turn off RCSI and run the demo over again, you’ll notice that running s_FetchLogs in Session A will be blocked until the transaction in Session B is committed. When Session A is finally unblocked, we get the full results (including row 5) as expected:

Results of a query which contain three rows with LogIds 4, 5 and 6

Here’s why this works. Any newly created (but uncommitted) row will exist in the table. But the transaction that created it still has an exclusive lock on it. Without RCSI, if another session tries to scan that part of the index it will wait to grab a shared lock on that row. Problem solved.

But turning off RCSI is overkill. We can be a little more careful. For example, instead of leaving RCSI off all together, do it just for the one procedure like this:

CREATE OR ALTER PROCEDURE s_FetchLogs ( @AfterLogId INT ) 
AS
    SELECT LogId, Value
    FROM LOGS WITH(READCOMMITTEDLOCK)
    WHERE LogId > @AfterLogId;
GO

In the exact same way, this procedure will wait to see whether any uncommitted rows it encounters will be rolled back or committed. No more missing rows for your ETL process!

August 16, 2023

Deploying Resource Governor Using Online Scripts

Filed under: Miscelleaneous SQL,SQL Scripts,SQLServerPedia Syndication,Technical Articles — Michael J. Swart @ 12:07 pm

When I deploy database changes, I like my scripts to be quick, non-blocking, rerunnable and resumable. I’ve discovered that:

  • Turning on Resource Governor is quick and online
  • Turning off Resource Governor is quick and online
  • Cleaning or removing configuration is easy
  • Modifying configuration may take some care

Turning on Resource Governor

Just like sp_configure, Resource Governor is configured in two steps. The first step is to specify the configuration you want, the second step is to ALTER RESOURCE GOVERNOR RECONFIGURE.
But unlike sp_configure, which has a “config_value” column and a “run_value” column, there’s no single view that makes it easy to determine which values are configured and which values are in use. It turns out that the catalog views hold the configured values and the dynamic management views hold the current values in use:

Catalog Views (configuration)

  • sys.resource_governor_configuration
  • sys.resource_governor_external_resource_pools
  • sys.resource_governor_resource_pools
  • sys.resource_governor_workload_groups

Dynamic Management Views (running values and stats)

  • sys.dm_resource_governor_configuration
  • sys.dm_resource_governor_external_resource_pools
  • sys.dm_resource_governor_resource_pools
  • sys.dm_resource_governor_workload_groups

When a reconfigure is pending, these views can contain different information and getting them straight is the key to writing rerunnable deployment scripts.
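
For example, this query puts the configured state (catalog view) next to the running state (DMV); both views return a single row:

SELECT c.is_enabled                 AS configured_is_enabled,
       d.is_reconfiguration_pending AS reconfiguration_pending
FROM sys.resource_governor_configuration c
CROSS JOIN sys.dm_resource_governor_configuration d;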

Turning on Resource Governor (Example)

Despite Erik Darling’s warning, say you want to restrict SSMS users to MAXDOP 1:

Plot a Course

use master;
 
IF NOT EXISTS (
	SELECT *
	FROM sys.resource_governor_resource_pools
	WHERE name = 'SSMSPool'
)
BEGIN
	CREATE RESOURCE POOL SSMSPool;
END
 
IF NOT EXISTS (
	SELECT *
	FROM sys.resource_governor_workload_groups
	WHERE name = 'SSMSGroup'
)
BEGIN
	CREATE WORKLOAD GROUP SSMSGroup 
	WITH (MAX_DOP = 1)
	USING SSMSPool;
END
 
IF ( OBJECT_ID('dbo.resource_governor_classifier') IS NULL )
BEGIN
	DECLARE @SQL NVARCHAR(1000) = N'
CREATE FUNCTION dbo.resource_governor_classifier() 
	RETURNS sysname 
	WITH SCHEMABINDING
AS
BEGIN
 
	RETURN 
		CASE APP_NAME()
			WHEN ''Microsoft SQL Server Management Studio - Query'' THEN ''SSMSGroup''
			ELSE ''default''
		END;
END';
	exec sp_executesql @SQL;
END;
 
IF NOT EXISTS (
	SELECT *
	FROM sys.resource_governor_configuration /* config */
	WHERE classifier_function_id = OBJECT_ID('dbo.resource_governor_classifier') )
   AND OBJECT_ID('dbo.resource_governor_classifier') IS NOT NULL
BEGIN
	ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.resource_governor_classifier); 
END

And when you’re ready, RECONFIGURE:

Make it so

IF EXISTS (
	SELECT *
	FROM sys.dm_resource_governor_configuration
	WHERE is_reconfiguration_pending = 1
) OR EXISTS (
	SELECT *
	FROM sys.resource_governor_configuration
	WHERE is_enabled = 0
)
BEGIN
	ALTER RESOURCE GOVERNOR RECONFIGURE;
END
GO

Turning off Resource Governor

Pretty straightforward, the emergency stop button looks like this:

ALTER RESOURCE GOVERNOR DISABLE;

If you ever find yourself in big trouble (because you messed up the classifier function for example), use the Dedicated Admin Connection (DAC) to disable Resource Governor. The DAC uses the internal workload group regardless of how Resource Governor is configured.

After you’ve disabled Resource Governor, you may notice that the resource pools and workload groups are still sitting there. The configuration hasn’t been cleaned up or anything.

Cleaning Up

Cleaning up doesn’t start out too bad, deal with the classifier function, then drop the groups and pools:

ALTER RESOURCE GOVERNOR DISABLE
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL); 
DROP FUNCTION IF EXISTS dbo.resource_governor_classifier;
 
IF EXISTS (
	SELECT *
	FROM sys.resource_governor_workload_groups
	WHERE name = 'SSMSGroup'
)
BEGIN
	DROP WORKLOAD GROUP SSMSGroup;
END
 
IF EXISTS (
	SELECT *
	FROM sys.resource_governor_resource_pools
	WHERE name = 'SSMSPool'
)
BEGIN
	DROP RESOURCE POOL SSMSPool;
END

You’ll be left in a state where is_reconfiguration_pending = 1 but since Resource Governor is disabled, it doesn’t really matter.

Modifying Resource Governor configuration

This is kind of a tricky thing and everyone’s situation is different. My advice would be to follow this kind of strategy:

  • Determine if the configuration is correct, if not:
    • Turn off Resource Governor
    • Clean up
    • Configure correctly (plot a course)
    • Turn on (make it so)

Somewhere along the way, if you delete a workload group that some session is still using, then ALTER RESOURCE GOVERNOR RECONFIGURE may give this error message:

Msg 10904, Level 16, State 2, Line 105
Resource governor configuration failed. There are active sessions in workload groups being dropped or moved to different resource pools.
Disconnect all active sessions in the affected workload groups and try again.

You have to wait for those sessions to end (or kill them) before trying again. But which sessions? These ones:

SELECT 
	dwg.name [current work group], 
	dwg.pool_id [current resource pool], 
	wg.name [configured work group], 
	wg.pool_id [configured resource pool],
	s.*
FROM 
	sys.dm_exec_sessions s
INNER JOIN 
	sys.dm_resource_governor_workload_groups dwg /* existing groups */
	ON dwg.group_id = s.group_id
LEFT JOIN 
	sys.resource_governor_workload_groups wg /* configured groups */
	ON wg.group_id = s.group_id
WHERE 
	isnull(wg.pool_id, -1) <> dwg.pool_id
ORDER BY 
	s.session_id;

If you find your own session in that list, reconnect.
Once that list is empty feel free to try again.

January 3, 2023

Can your application handle all BIGINT values?

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 12:24 pm

In the past I’ve written about monitoring identity columns to ensure there’s room to grow.

But there’s a related danger that’s a little more subtle. Say you have a table whose identity column is an 8-byte bigint. An application that converts those values to a 4-byte integer will not always fail! Those applications will only fail if the value is larger than 2,147,483,647.

If the conversion of a large value is done in C#, you’ll get an Overflow Exception or an Invalid Cast Exception, and if the conversion is done in SQL Server you’ll get this error message:

Msg 8115, Level 16, State 2, Line 21
Arithmetic overflow error converting expression to data type int.
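
Here’s a one-line way to reproduce that error:

/* a bigint value beyond the int range fails to convert */
SELECT CAST(CAST(2147483648 AS bigint) AS int);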

The danger

If such conversions exist in your application, you won’t see any problems until the bigint identity values are larger than 2,147,483,647. My advice then is to test your application with large identity values in a test environment. But how?

Use this script to set large values on BIGINT identity columns

On a test server, run this script to get commands to adjust bigint identity values to beyond the maximum value of an integer:

-- increase bigint identity columns
select 
	'DBCC CHECKIDENT(''' + 
	QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.' +
	QUOTENAME(object_Name(object_id)) + ''', RESEED, 2147483648);
' as script
from 
	sys.identity_columns
where 
	system_type_id = 127
	and object_id in (select object_id from sys.tables);
 
-- increase bigint sequences
select 
	'ALTER SEQUENCE ' +
	QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.' +
	QUOTENAME(object_Name(object_id)) + ' 
	RESTART WITH 2147483648 INCREMENT BY ' + 
	CAST(increment as sysname) +
	' NO MINVALUE NO MAXVALUE;
' as script
from 
	sys.sequences
where 
	system_type_id = 127;

Prepared for testing

The identity columns in your test database are now prepared for testing. And hopefully you have an automated way to exercise your application code to find sneaky conversions to 4-byte integers. I found several of these hidden defects myself and I’m really glad I had the opportunity to tackle these before they became an issue in production.

November 25, 2022

Use RCSI to tackle most locking and blocking issues in SQL Server

Filed under: Miscelleaneous SQL,Technical Articles — Michael J. Swart @ 12:54 pm

What’s the best way to avoid most blocking issues in SQL Server? Turn on Read Committed Snapshot Isolation (RCSI). That’s it.

Configuring the Read Committed Snapshot isolation level

To see if it’s enabled on your database, use the is_read_committed_snapshot_on column in sys.databases like this:

select is_read_committed_snapshot_on
from sys.databases
where database_id = db_id();

To enable the setting, alter the database like this:

ALTER DATABASE CURRENT
SET READ_COMMITTED_SNAPSHOT ON

Is it that easy?

Kind of. For the longest time at work, we ran our databases with this setting off. Mostly because that’s the default setting for SQL Server. As a result, we encountered a lot of blocking and deadlocks. I got really really good at interpreting deadlocks and blocking graphs. I’ve written many blog posts on blocking and I even wrote a handy tool (the blocked process report viewer) to help understand who the lead blocker was in a blocking traffic jam.

Eventually after a lot of analysis we turned on RCSI. Just that setting change probably gave us the biggest benefit for the least effort. We rarely have to deal with blocking issues. I haven’t made use of the blocked process report viewer in years.

Be like Severus Snape

I’m reminded of a note that Snape (from the Harry Potter books) wrote in his textbook on poison antidotes: “Just shove a bezoar down their throats.” The idea was that you didn’t have to be good at diagnosing and creating antidotes because a bezoar was simply an “antidote to most poisons”.

In the same way, I’ve found that RCSI is an antidote to most blocking.

October 12, 2022

You Can Specify Two Indexes In Table Hint?

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 12:00 pm

Yes, it turns out that you can specify two indexes in a table hint:

SELECT Id, Reputation
FROM dbo.Users WITH (INDEX (IX_Reputation, PK_Users_Id))
WHERE Reputation > 1000

And SQL Server obeys. It uses both indexes even though the nonclustered index IX_Reputation is covering:
Two Indexes

But Why?

I think this is a solution looking for a problem.

Resolving Deadlocks?
My team wondered if this could be used to help with a concurrency problem. We recently considered using it to resolve a particular deadlock, but we had little success.

It’s useful to think that SQL Server takes locks on index rows instead of table rows. And so the idea we had was that perhaps taking key locks on multiple indexes could help control the order in which locks are taken. But after some effort, it didn’t help us avoid deadlocks. For me, I’ve had better luck using the simpler sp_getapplock.
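
For what it’s worth, here’s a minimal sketch of the sp_getapplock approach: serialize the critical section with an application lock instead of relying on index key lock order. The resource name is made up for the example:

BEGIN TRAN;
 
EXEC sp_getapplock
    @Resource = 'Users_Reputation_Update',
    @LockMode = 'Exclusive',
    @LockOwner = 'Transaction',
    @LockTimeout = 5000; /* milliseconds */
 
/* ... do the work that used to deadlock ... */
 
COMMIT; /* the application lock is released when the transaction ends */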

Forcing Index Intersection?
Brent Ozar wrote about index intersection a while ago. Index intersection is a rare thing to find in a query plan. Brent can “count on one hand the number of times [he’s] seen this in the wild”.

In theory, I could force index intersection (despite the filter values):

SELECT Id
FROM dbo.Users WITH (INDEX (IX_UpVotes, IX_Reputation))
WHERE Reputation > 500000
AND UpVotes > 500000

But I wouldn’t. SQL Server choosing index intersection is already so rare. And so I think the need to force that behavior will be even rarer. This is not a tool I would use for tuning queries. I’d leave this technique alone.

Have You Used More Than One Index Hint?

I’d love to hear about whether specifying more than one index in a table hint has ever helped solve a real world problem. Let me know in the comments.

October 6, 2022

The Tyranny Of Cumulative Costs (Save and Forget Build Up)

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 12:00 pm

50:50 Triangle

Using the right triangle above, draw a vertical line separating the area of the triangle into two parts with the same area.
The triangle on the left is 70.7% of the width of the original triangle.
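
Here’s where the 70.7% comes from: the area under the graph up to a fraction x of the total width grows as x² (the height grows linearly with x), so half the total area is reached when x² = 1/2, i.e. x = 1/√2 ≈ 0.707. The remaining 29.3% of the width accounts for the other half of the area.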

Cumulative Storage Costs

Think of this another way. The triangle above is a graph of the amount of data you have over time. If you pay for storage as an operational expense, such as when you’re renting storage in the cloud (as opposed to purchasing physical drives), then the cost of storage is the area of the graph. The monthly bills are ever-increasing, so half of the total cost of storage will be for the most recent 29% of the time.

Put yet another way: If you started creating a large file in the cloud every day since March 2014, then the amount you paid to the cloud provider before the pandemic started is the same amount you paid after the pandemic started (as of August 2022).

How Sustainable is This?

If the amount of data generated a day isn’t that much, or the storage you’re using is cheap enough then it really doesn’t matter too much. As an example, AWS’s cheapest storage, S3 Glacier Deep Archive, works out to about $0.001 a month per GB.

But if you’re using Amazon’s Elastic Block Storage like the kind of storage needed for running your own SQL Servers in the cloud, the cost can be closer to $.08 a month per GB.

The scale on the triangle graph above really matters.

Strategies

This stresses the need for a data life-cycle policy. An exit story for large volumes of data. Try to implement Time-To-Live (TTL) or clean up mechanisms right from the beginning of even the smallest project. Here’s one quick easy example from a project I wrote that collects wait stats. The clean-up is a single line.
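
As a sketch, a TTL clean-up really can be that small. The table name and retention period below are hypothetical, not taken from that project:

/* keep only the most recent 90 days of collected data */
DELETE dbo.WaitStatsHistory
WHERE CollectionTime < DATEADD(DAY, -90, SYSUTCDATETIME());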

Look at how Netflix approaches this issue. I like how they put it: “Data storage has a lot of usage and cost momentum (i.e. save-and-forget build-up).”

Netflix stresses the importance of “cost visibility” and they use that to offer focused recommendations for cleaning up unused data. I recommend reading that whole article. It’s fascinating.

It’s important to implement such policies before that triangle graph gets too large.

September 28, 2022

When are Non-Updating Updates Treated Like Regular Updates?

Filed under: SQLServerPedia Syndication — Michael J. Swart @ 12:00 pm

Takeaway: I look at different features to see whether non-updates are treated the same as other updates. Most of the time they are.

According to Microsoft’s documentation, an UPDATE statement “changes existing data in a table or view”. But what if the values don’t actually change? What if affected rows are “updated” with the original values? Some call these updates non-updating. This leads to a philosophical question: “If an UPDATE statement doesn’t change any column to a different value, has the row been updated?”

I answer yes to that question. I consider all affected rows as “updated” regardless of whether the values are different. I think of the UPDATE statement as more of an OVERWRITE statement. I also think of “affected rows” rather than “changed rows”. In most cases SQL Server thinks along the same lines.

I list some features and areas of SQL Server and whether non-updating updates are treated the same or differently than other updates:

The Performance of Non-Updates: treated differently than other Updates

In 2010, Paul White wrote The Impact of Non-Updating Updates where he points out optimizations Microsoft has made to avoid unnecessary logging when performing some non-updating updates. It’s a rare case where SQL Server actually does pay attention to whether values are not changing to avoid doing unnecessary work.

In the years since, I’ve noticed that this optimization hasn’t changed much except that Microsoft has extended these performance improvements to cases where RCSI or SI is enabled.

Regardless of this performance optimization, it’s still wise to limit affected rows as much as possible. In other words, I still prefer

UPDATE FactOnlineSales 
SET DiscountAmount = NULL
WHERE CustomerKey = 19036
AND DiscountAmount IS NOT NULL;

over this logically equivalent version:

UPDATE FactOnlineSales 
SET DiscountAmount = NULL
WHERE CustomerKey = 19036;

Although the presence of triggers and cascading foreign keys requires extra care, as we’ll see.

Triggers: Non-Updates are treated the same as Updates

Speaking of triggers, remember that inside a trigger, non-updating rows are treated exactly the same as any other affected row. In particular:

  • Triggers are always invoked, even when there are zero rows affected or even when the table is empty.
  • For UPDATE statements, the UPDATE() function only cares about whether a column appeared in the SET clause. It can be useful for short-circuit logic.
  • The virtual tables inserted and deleted are filled with all affected rows (not just changed rows).

ON UPDATE CASCADE: Non-Updates are treated the same as Updates

When foreign keys have ON UPDATE CASCADE set, Microsoft says “corresponding rows are updated in the referencing table when that row is updated in the parent table”.

Non-updating updates are no exception. To demonstrate, I create an untrusted foreign key and perform a non-updating update. It’s not a “no-op”, the constraint is checked as expected.

CREATE TABLE dbo.TestReferenced (
	Id INT PRIMARY KEY
);
 
INSERT dbo.TestReferenced (Id) VALUES (1), (2), (3), (4);
 
 
CREATE TABLE dbo.TestReferrer (
	Id INT NOT NULL
);
 
INSERT dbo.TestReferrer (Id) VALUES (2), (4), (6), (8);
 
ALTER TABLE dbo.TestReferrer 
WITH NOCHECK ADD FOREIGN KEY (Id) 
REFERENCES dbo.TestReferenced(Id)
ON UPDATE CASCADE;
 
-- trouble with this non-updating update:
UPDATE dbo.TestReferrer
SET Id = Id
WHERE Id = 8;
-- The UPDATE statement conflicted with the FOREIGN KEY constraint ...

@@ROWCOUNT: Non-Updates are treated the same as Updates

SELECT @@ROWCOUNT returns the number of affected rows in the previous statement, not the number of changed rows.
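
For example, reusing the FactOnlineSales table from earlier in this post, a non-updating update still reports every affected row:

UPDATE FactOnlineSales
SET DiscountAmount = DiscountAmount
WHERE CustomerKey = 19036;
 
SELECT @@ROWCOUNT; /* the number of affected rows, even though no value changed */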

Temporal Tables: Non-Updates are treated the same as Updates

Non-updating updates still generate new rows in the history table. This can lead to puzzling results if you’re not prepared for them. For example, I can make some changes and query the history like this:

INSERT MyTest(Value) VALUES ('Mike')
UPDATE MyTest SET Value = 'Michael';
UPDATE MyTest SET Value = 'Michael';
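
For completeness, the demo above assumes MyTest is a system-versioned temporal table. A minimal definition that would support it might look like this (the period columns and history table name are mine, not from the original demo):

CREATE TABLE dbo.MyTest (
    Id INT IDENTITY PRIMARY KEY,
    Value NVARCHAR(100) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyTestHistory));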

When I take the union of rows in the table and the history table, I might see this output:

It reminds me of when my GPS says something like “In two miles, continue straight on Highway 81.” The value didn’t change, but there are still two distinct ranges.

Change Tracking: Non-Updates are treated the same as Updates

Change tracking could be called “Overwrite Tracking” because all non-updating updates are tracked:

ALTER DATABASE CURRENT
set change_tracking = ON  
(change_retention = 2 days, auto_cleanup = on);
GO
 
create table dbo.test (id int primary key);
insert dbo.test (id) values (1), (2), (3); 
 
alter table dbo.test enable change_tracking with (track_columns_updated = on)  
 
-- This statement produces 0 rows:
SELECT t.id, c.*
FROM CHANGETABLE (CHANGES dbo.Test, 0) AS c  
JOIN dbo.Test AS t ON t.id = c.id ;
 
-- "update"
update dbo.test set id = id; 
 
-- This statement produces 3 rows:
SELECT t.id, c.*
FROM CHANGETABLE (CHANGES dbo.Test, 0) AS c  
JOIN dbo.Test AS t ON t.id = c.id;

Change Data Capture (CDC): Non-Updates treated differently than other Updates

Here’s a rare exception where a SQL Server feature is named properly. CDC does indeed capture data changes only when data is changing.
Paul White provided a handy setup for testing this kind of stuff. I reran his tests with CDC turned on. I found that:

  • When CDC is enabled, an update statement is always logged and the data buffers are always marked dirty.
  • But non-updating updates almost never show up as captured data changes, not even when the update was on a column in the clustering key.
  • I was able to generate some CDC changes for non-updates by updating the whole table with an idempotent expression (e.g. SET some_column = some_column * 1)
    CREATE TABLE dbo.SomeTable
    (
        some_column integer NOT NULL,
        some_data integer NOT NULL,
    	index ix_sometable unique clustered (some_column)
    );
     
    UPDATE dbo.SomeTable SET some_column = some_column*1;

If you’re using this feature, this kind of stuff is important to understand! If you’re using CDC for DIY replication (God help you), then maybe the missing non-updates are acceptable. But if you’re looking for a kind of audit, or a way to analyze user-interactions with the database, then CDC doesn’t give the whole picture and is not the tool for you.

September 21, 2022

Batching Follow-Up

Filed under: Miscelleaneous SQL,SQL Scripts,Technical Articles — Michael J. Swart @ 12:00 pm

When I wrote Take Care When Scripting Batches, I wanted to guard against a common pitfall when implementing a batching solution (n-squared performance). I suggested a way to be careful. But I knew that my solution was not going to be universally applicable to everyone else’s situation. So I wrote that post with a focus on how to evaluate candidate solutions.

But we developers love recipes for problem solving. I wish it were the case that for whatever kind of problem you’ve got, you could just stick the right formula in and problem solved. But unfortunately everyone’s situation is different, and the majority of questions I get are of the form “What about my situation?” I’m afraid that without extra details, the best advice remains to do the work to set up the tests and find out for yourself.

Your Own Batches

But despite that, I’m still going to answer some common questions I get. And I’m going to continue to focus on how I evaluate each solution.
(Before reading further, you might want to re-familiarize yourself with the original article Take Care When Scripting Batches).

Here are some questions I get:

What if the clustered index is not unique?

Or what if the clustered index has more than one column such that the leading column is not unique? For example, imagine the table was created with this clustered primary key:

ALTER TABLE dbo.FactOnlineSales
ADD CONSTRAINT PK_FactOnlineSales
PRIMARY KEY CLUSTERED (DateKey, OnlineSalesKey)

How do we write a batching script in that case? It’s usually okay if you just use the leading column of the clustered index. The careful batching script looks like this now:

DECLARE
  @LargestKeyProcessed DATETIME = '20000101',
  @NextBatchMax DATETIME,
  @RC INT = 1;
 
WHILE (@RC > 0)
BEGIN
 
  SELECT TOP (1000) @NextBatchMax = DateKey
  FROM dbo.FactOnlineSales
  WHERE DateKey > @LargestKeyProcessed
    AND CustomerKey = 19036
  ORDER BY DateKey ASC;
 
  DELETE dbo.FactOnlineSales
  WHERE CustomerKey = 19036
    AND DateKey > @LargestKeyProcessed
    AND DateKey <= @NextBatchMax;
 
  SET @RC = @@ROWCOUNT;
  SET @LargestKeyProcessed = @NextBatchMax;
 
END

The performance is definitely comparable to the original careful batching script:

Logical Reads Per Delete

But is it correct? A lot of people wonder if the non-unique index breaks the batching somehow. And the answer is yes, but it doesn’t matter too much.

By limiting the batches by DateKey instead of the unique OnlineSalesKey, we are giving up batches that are exactly 1000 rows each. In fact, most of the batches in my test process somewhere between 1000 and 1100 rows and the whole thing requires three fewer batches than the original script. That’s acceptable to me.

If I know that the leading column of the clustering key is selective enough to keep the batch sizes pretty close to the target size, then the script is still accomplishing its goal.

What if the rows I have to delete are sparse?

Here’s another situation. What if instead of customer 19036, we were asked to delete customer 7665? This time, instead of deleting 45100 rows, we only have to delete 379 rows.

I try the careful batching script and see that all rows are deleted in a single batch. SQL Server was looking for batches of 1000 rows to delete. But since there aren’t that many, it scanned the entire table to find just 379 rows. It completed in one batch, but that single batch performed as poorly as the straight algorithm.

One solution is to create an index (online!) for these rows. Something like:

CREATE INDEX IX_CustomerKey 
ON dbo.FactOnlineSales(CustomerKey) 
WITH (ONLINE = ON);

Most batching scripts are one-time use. So maybe this index is one-time use as well. If it’s a temporary index, just remember to drop it after the script is complete. A temp table could also do the same trick.

With the index, the straight query only needed 3447 logical reads to find all the rows to delete:

DELETE dbo.FactOnlineSales WHERE CustomerKey = 7665;


Can I use the Naive algorithm if I use a new index?

How do the Naive and other algorithms fare with this new index on dbo.FactOnlineSales(CustomerKey)?

The rows are now so easy to find that the Naive algorithm no longer has the n-squared behavior we worried about earlier. But there is some extra overhead. We have to delete from more than one index. And we’re doing many b-tree lookups (instead of just scanning a clustered index).

Remember the Naive solution looks like this:

DECLARE	@RC INT = 1;
 
WHILE (@RC > 0)
BEGIN
 
  DELETE TOP (1000) dbo.FactOnlineSales
  WHERE CustomerKey = 19036;
 
  SET @RC = @@ROWCOUNT
 
END

But now with the index, the performance looks like this (category “Naive with Index”):
Logical Reads Per Delete

The index definitely helps. With it, the Naive algorithm looks much better than it did without the index, but it still looks worse than the careful batching algorithm.

But look at that consistency! Each batch processes 1000 rows and reads exactly the same amount. I might choose to use Naive batching with an index if I don’t know how sparse the rows I’m deleting are. There are a lot of benefits to having a constant runtime for each batch when I can’t guarantee that rows aren’t sparse.

Explore new solutions on your own

There are many different solutions I haven’t explored. This list isn’t comprehensive.

But it’s all tradeoffs. When faced with a choice between candidate solutions, it’s essential to know how to test and measure each solution. SQL Server has more authoritative answers about the behavior of SQL Server than I do or anyone else does. Good luck.
