Use UPDATE STATISTICS to fake table & index sizes
DECLARE @object_id int=OBJECT_ID('dbo.tablename'),
        @rowcount bigint=10000000;

SELECT --- Name of the table
       N'UPDATE STATISTICS '+QUOTENAME(OBJECT_SCHEMA_NAME(@object_id))+N'.'+QUOTENAME(OBJECT_NAME(@object_id))+
       --- Name of the index (unless it's a heap)
       ISNULL(N' ('+QUOTENAME(i.[name])+N')', N'')+N' '+
--- New row count:
N'WITH ROWCOUNT='+CAST(@rowcount AS nvarchar(20))+', '+
--- New page count:
       N'PAGECOUNT='+CAST(CAST(@rowcount*(CASE WHEN SUM(p.[rows])>1000
            --- Inherit fill factor, compression, etc from existing data if we have at least 1000 existing rows:
            THEN 1.0*SUM(ps.used_page_count)/SUM(p.[rows])
            --- Calculate new values if there are no current rows to model on:
            ELSE c.index_width*ISNULL(AVG(x.compression_coefficient), 1.0)/x.fill_factor*(CASE WHEN i.filter_definition IS NOT NULL THEN 0.5 ELSE 1.0 END)/8060
END) AS int) AS nvarchar(20))+N';' AS [T-SQL],
--- Here's what we have today:
N'-- '+CAST(SUM(p.[rows]) AS nvarchar(20))+N' rows, '+
CAST(SUM(ps.used_page_count) AS nvarchar(20))+N' pages used, '+
CAST(SUM(ps.reserved_page_count) AS nvarchar(20))+N' reserved'+
(CASE WHEN MAX(p.partition_number)>1 THEN N', '+CAST(MAX(p.partition_number) AS nvarchar(20))+N' partitions' ELSE N'' END) AS [Current],
--- Here's what we have to work with:
CAST(CAST(c.index_width AS int) AS nvarchar(10))+' bytes per row'+
ISNULL(N', '+CAST(CAST(100-AVG(100*x.compression_coefficient) AS int) AS nvarchar(10))+N'% compression', N'')+
N', '+CAST(CAST(100*x.fill_factor AS int) AS nvarchar(10))+'% fill factor'+
(CASE WHEN i.filter_definition IS NOT NULL THEN N', index is filtered' ELSE N'' END) AS Calculation
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.partitions AS p ON ps.[partition_id]=p.[partition_id]
LEFT JOIN sys.indexes AS i ON p.[object_id]=i.[object_id] AND p.index_id=i.index_id
--- Fill factor and compression coefficient for this index/partition:
CROSS APPLY (
    SELECT (CASE WHEN i.fill_factor=0 THEN 1.0 ELSE 0.01*i.fill_factor END) AS fill_factor,
(CASE p.[data_compression]
WHEN 1 THEN 0.8 -- row compression
WHEN 2 THEN 0.6 -- page compression
WHEN 3 THEN 0.3 -- columnstore
WHEN 4 THEN 0.2 -- columnstore archive
END) AS compression_coefficient
) AS x
--- How wide (in bytes) will the index be? Assume 50% usage for var* columns:
CROSS APPLY (
    SELECT SUM((CASE WHEN t.[name] LIKE 'var%' THEN 0.5 ELSE 1.0 END)*c.max_length) AS index_width
FROM sys.columns AS c
INNER JOIN sys.types AS t ON c.system_type_id=t.user_type_id
WHERE c.[object_id]=@object_id
AND (i.index_id IN (0, 1) -- Clustered index or heap will include all columns
OR i.index_id NOT IN (0, 1) AND c.column_id IN (
--- Index keys and included columns for non-clustered indexes:
SELECT column_id
FROM sys.index_columns AS ic
WHERE ic.[object_id]=@object_id AND ic.index_id=i.index_id))
) AS c
WHERE p.[object_id]=@object_id
AND p.[rows]!=@rowcount
GROUP BY i.index_id, i.[name], i.filter_definition, c.index_width, x.fill_factor
ORDER BY i.index_id;
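As a sanity check of the ELSE branch (the estimate used when there aren't enough existing rows to model on), here is the page-count arithmetic with hypothetical numbers: a 100-byte-per-row index, page compression (coefficient 0.6), a 90% fill factor, no filter, and the default 10 million rows:

```sql
--- Worked example of the computed-estimate branch (all numbers hypothetical):
--- 100 bytes/row, page compression (0.6), 90% fill factor, unfiltered:
SELECT CAST(10000000*(100*0.6/0.9*1.0/8060) AS int) AS estimated_pagecount;
```

That works out to a little over 82,000 pages: 60 effective bytes per row, inflated by the fill factor, divided by the 8060 usable bytes per page, times the faked row count.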

This little script generates the T-SQL statements you need to fake a table's size, taking into account fill factor, compression, and existing row density (if the table has at least 1000 existing rows).
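For illustration, one generated statement might look like the following (the table name, index name, and page count are hypothetical). When you're done testing, DBCC UPDATEUSAGE with COUNT_ROWS restores the real row and page counts:

```sql
--- A generated statement might look like this (names and numbers are hypothetical):
UPDATE STATISTICS dbo.tablename ([PK_tablename]) WITH ROWCOUNT=10000000, PAGECOUNT=45872;

--- Afterwards, restore the real row and page counts:
DBCC UPDATEUSAGE (0, 'dbo.tablename') WITH COUNT_ROWS;
```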


@dhmacher there's a typo. Line 61 has one ")" too many. Removed it and it worked. Thanks, this is useful for an issue we're having with SQL Server 2017.


@mbourgon, thanks! I've updated the gist accordingly.
