Are you scripting out tables from SSMS?

Hi Folks,

Today I will share a scenario where we need to be really cautious. As the title says, scripting out tables from SSMS deserves extra attention.

Coming back to the scenario: we needed to run an ALTER statement on a very large table (50 GB), changing one of its columns from NULL to NOT NULL.
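For reference, the change in question looks like this (the table and column names below are illustrative, not the actual ones):

```sql
-- Illustrative names only; the real table/column differ.
-- Changing a column from NULL to NOT NULL must validate every row,
-- which is why it is painful on a 50 GB table.
ALTER TABLE [dbo].[BigTable]
    ALTER COLUMN [SomeColumn] INT NOT NULL;
```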

I explained how to achieve minimal logging while performing this activity in the post linked below.

As part of that, I needed to create a temporary table, so I scripted out the main table from SSMS and started dumping the data:

CREATE TABLE [dbo].[table_temp](
[ldd_id] [bigint] NULL,
[ld_iso_id] [int] NOT NULL,
[ld_voucher_id] [int] NOT NULL,
[ld_qualified] [int] NULL,
[ld_lastVATOnPurchase] [int] NULL,
[ld_ranking] [int] NULL,
[ld_expired] [int] NULL,
[Hash_Check] [varbinary](8000) NULL
)

After the operation completed, I realized I had made a mistake. Can you spot it?

Well, keep this small tip in mind forever: when we script out a table, the script should show the filegroup on which it is placed. If it doesn't, check whether the table is partitioned. As far as I have observed, when we script out partitioned tables the script doesn't show a filegroup, because they are associated with partition schemes instead. Use EXEC sp_help 'tablename' to find out where a table is placed (filegroup or partition scheme).
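For example, sp_help reports the data space, and the same information can be pulled from the catalog views (the table name below is just an example):

```sql
EXEC sp_help 'dbo.Trans';

-- Alternatively, query the catalog views: type_desc shows
-- ROWS_FILEGROUP for a regular table and PARTITION_SCHEME
-- for a partitioned one.
SELECT i.[name]       AS index_name,
       ds.[name]      AS data_space_name,
       ds.[type_desc] AS data_space_type
FROM   sys.indexes i
JOIN   sys.data_spaces ds
       ON ds.data_space_id = i.data_space_id
WHERE  i.[object_id] = OBJECT_ID(N'dbo.Trans');
```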

CREATE TABLE [dbo].[Trans](
[LedgerId] [int] NULL,
[LedgerDetailId] [int] NULL,
[CountryId] [smallint] NULL,
[VoucherId] [int] NULL,
[TransactionAmountRounded] [decimal](18, 6) NULL,
[TransactionDateKey] [int] NULL
)

Now our temporary table sits on the PRIMARY filegroup instead of the partition scheme. In my case the source table is a heap, so to move the heap back onto the partition scheme we need to create a clustered index on that scheme and then drop it. Below is the link which we can make use of.
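The trick can be sketched as follows. Creating the clustered index on the partition scheme moves the data; dropping it afterwards leaves the table as a heap that stays partitioned. The partition scheme name and key column here are assumptions for illustration:

```sql
-- [ps_Temp] and the key column choice are assumptions, not real names.
CREATE CLUSTERED INDEX [cix_move_temp]
    ON [dbo].[table_temp] ([ld_iso_id])
    ON [ps_Temp] ([ld_iso_id]);

-- Dropping the clustered index (without MOVE TO) converts the table
-- back to a heap that remains on the partition scheme.
DROP INDEX [cix_move_temp] ON [dbo].[table_temp];
```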

I would like to share one more very useful technique. Believe me, this is very much needed when you want to estimate how many rows have been processed during operations like the following:

1) INSERT INTO ... SELECT * FROM ...

2) SELECT * INTO ...

3) NonClustered Indexes on a Heap

4) Clustered Index (no NonClustered Indexes exist)

5) NonClustered Indexes on the Clustered Index/Table

6) Clustered Index when NonClustered Indexes already exist

7) Unique NonClustered Indexes on the Clustered Index/Table

8) ALTER TABLE [schema_name].[table_name] REBUILD; (only Clustered Index shows up when using this method)

9) ALTER INDEX ALL ON [schema_name].[table_name] REBUILD;

10) ALTER INDEX [index_name] ON [schema_name].[table_name] REBUILD;

All you need to do is enable SET STATISTICS PROFILE ON in the window where you execute the above commands.
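A minimal sketch of that setup in the session doing the work (the INSERT shown is just an example operation):

```sql
-- Enable profiling first so sys.dm_exec_query_profiles
-- reports row counts for this session's statements.
SET STATISTICS PROFILE ON;

-- Then run the long-running operation in this same window, e.g.:
INSERT INTO [dbo].[table_temp]
SELECT * FROM [dbo].[Trans];
```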

Now, to track the progress from another session, you can use the below query (credits to Solomon Rutzky). It reports the current step, percent complete, and an estimated completion time.


DECLARE @SPID INT = 51; -- replace with the session_id of the window running the operation

;WITH agg AS
(
     SELECT SUM(qp.[row_count]) AS [RowsProcessed],
            SUM(qp.[estimate_row_count]) AS [TotalRows],
            MAX(qp.last_active_time) - MIN(qp.first_active_time) AS [ElapsedMS],
            MAX(IIF(qp.[close_time] = 0 AND qp.[first_row_time] > 0,
                    qp.[physical_operator_name],
                    N'')) AS [CurrentStep]
     FROM sys.dm_exec_query_profiles qp
     WHERE qp.[physical_operator_name] IN (N'Table Scan', N'Clustered Index Scan',
                                           N'Index Scan', N'Index Insert', N'Sort')
     AND   qp.[session_id] = @SPID
), comp AS
(
     SELECT *,
            ([TotalRows] - [RowsProcessed]) AS [RowsLeft],
            ([ElapsedMS] / 1000.0) AS [ElapsedSeconds]
     FROM   agg
)
SELECT [CurrentStep],
       CONVERT(DECIMAL(5, 2),
               (([RowsProcessed] * 1.0) / [TotalRows]) * 100) AS [PercentComplete],
       (([ElapsedSeconds] / [RowsProcessed]) * [RowsLeft]) AS [EstimatedSecondsLeft],
       DATEADD(SECOND,
               (([ElapsedSeconds] / [RowsProcessed]) * [RowsLeft]),
               GETDATE()) AS [EstimatedCompletionTime]
FROM   comp;

I hope you will start utilizing the above query from now on.