Category: T-SQL

Results from multiple rows in a single field

May 16, 2014 19:12

Excuses, excuses
A short one for the first time in a very long time. After starting work for a new company I, for one, don’t have much time to do any blogging and, for another, I don’t do much on databases any more (unfortunately), other than performance tuning by means of indexing.

Introduction
Today, for the first time in quite a while, I wrote a stored procedure! The requirement was to build an electronic invoice ledger from invoice data in an accounting system. Once I understood the data structure of the accounts, it was simple enough to extract the required data. One of the trickier parts, since I wanted to keep it “set based”, was listing all invoice attachments in a single column in the final result. This meant taking values from a column in multiple rows and concatenating them into a single entry. That is simple enough (yet very clunky) when working in a loop, but in a single select it is a bit tricky. This has been done often enough and there are plenty of solutions out on the net; in fact, I am sure that what I did has been blogged about many times. So, really just for my records and future reference, here is what I did.

One of the requirements was to get all attachments for an invoice and put them into a single field, separated by a single space. Each attachment was a single record in a table. I am going to keep it simple here and only work with a pseudo attachment table, without bothering to join in the invoice data too. Suffice it to say that the common key between an invoice and its attachments is the barcode field.

Just do it
The DDL and DML for the test data:

use Work;

if (OBJECT_ID('dbo.attachments') is null)

	create table dbo.attachments (
		id int identity(1,1),
		barcode varchar(22),
		atdate date,
		attachment varchar(20)
		);
else
	truncate table dbo.attachments;

insert into dbo.attachments
	(barcode, atdate, attachment)
values
	('11111111111111111', GETDATE(), '9900'),
	('11111111111111111', GETDATE(), '9800'),
	('11111111111111111', GETDATE(), '9700'),
	('11111111111111111', GETDATE(), '9600'),
	('11111111111111111', GETDATE(), '9500'),
	('11111111111111111', GETDATE(), '9400'),
	('11111111111111111', GETDATE(), '9300'),
	('11111111111111111', GETDATE(), '9200'),
	('11111111111111111', GETDATE(), '9100'),
	('11111111111111111', GETDATE(), '9000'),
	('22222222222222222', GETDATE(), '8900'),
	('22222222222222222', GETDATE(), '8800'),
	('22222222222222222', GETDATE(), '8700'),
	('22222222222222222', GETDATE(), '8600'),
	('22222222222222222', GETDATE(), '8500'),
	('22222222222222222', GETDATE(), '8400'),
	('33333333333333333', GETDATE(), ''), --intentionally left blank to compensate for XML blank
	('33333333333333333', GETDATE(), '7800'),
	('33333333333333333', GETDATE(), '7700'),
	('33333333333333333', GETDATE(), '7600'),
	('33333333333333333', GETDATE(), '7500'),
	('33333333333333333', GETDATE(), '7400'),
	('33333333333333333', GETDATE(), '7300'),
	('33333333333333333', GETDATE(), '7200'),
	('33333333333333333', GETDATE(), '7100'),
	('33333333333333333', GETDATE(), '7000');
go

Now the extraction DML:

use Work;

select at1.barcode, attachments =
	rtrim((
		select distinct attachment + ' '
		from dbo.attachments at2
		where at2.barcode = at1.barcode
		for xml path ('')
	))
from dbo.attachments at1
group by at1.barcode

I used the for xml path clause to get the attachments out of dbo.attachments as a single string. Because I concatenate a blank (' ') onto the column, the expression loses its column name, so no XML node structure is returned, just all the attachment values in one string of characters. If you don’t concatenate, each value comes back wrapped in an element named after the column, something like this (shown here for the first barcode):
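
	<attachment>9000</attachment><attachment>9100</attachment> … <attachment>9900</attachment>

(The order of the values within the string is not guaranteed, although distinct typically returns them sorted.)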

Now, because we are converting the values to XML, the blank value for the attachment (the first entry for barcode '33333333333333333') is returned as the following:

	&#x20;

&#x20; being an entity reference for the blank space character.

Now this is obviously not desired, so it needs to be considered in the sub-select:

use Work;

select at1.barcode, attachments =
	rtrim(
			(
				select distinct attachment + ' '
				from dbo.attachments at2
				where at2.barcode = at1.barcode
				and isnull(at2.attachment, '') != ''
				for xml path ('')
			)
	)
from dbo.attachments at1
group by at1.barcode

Here is the final result (within each field the order is not guaranteed, although distinct typically returns the values sorted):
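
	barcode            attachments
	-----------------  --------------------------------------------------
	11111111111111111  9000 9100 9200 9300 9400 9500 9600 9700 9800 9900
	22222222222222222  8400 8500 8600 8700 8800 8900
	33333333333333333  7000 7100 7200 7300 7400 7500 7600 7700 7800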

A common requirement is to build a comma-separated list. This can be done by concatenating ', ' before each value being selected and then using substring to remove the leading comma and space.

The DML could look like this:

use Work;

select at1.barcode, attachments =
	substring(
		rtrim(
				(
					select distinct ', ' + attachment
					from dbo.attachments at2
					where at2.barcode = at1.barcode
					and isnull(at2.attachment, '') != ''
					for xml path ('')
				)
			),
			3, 1000)
from dbo.attachments at1
group by at1.barcode
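
As a footnote for anyone reading this on a newer platform: from SQL Server 2017 onwards the STRING_AGG function does this kind of concatenation natively, so the whole FOR XML detour can be skipped. A minimal sketch against the same table (note that STRING_AGG has no DISTINCT, so duplicates would have to be removed in a derived table first):

use Work;

select barcode,
	string_agg(attachment, ' ') within group (order by attachment) as attachments
from dbo.attachments
where isnull(attachment, '') != ''
group by barcode;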

And finally
I do hope that this may help someone or other out with the problem of getting multiple values into a single field.
Thanks for reading.

Displaying Included fields for a NONCLUSTERED Index

May 12, 2012 08:58
The standard stored procedure sp_helpindex unfortunately doesn’t display the included columns that can be used with NONCLUSTERED indexes.
I have written a user stored procedure to get around this. I wanted to use a set-based solution but couldn’t manage it within a reasonable amount of time, so I reverted to a cursor (something I hadn’t written in a long time).
The user stored procedure saves the results of sp_helpindex to a temporary table and then proceeds to gather the included columns from the metadata views provided by SQL Server.
CREATE PROCEDURE up_helpindex
 @object_name sysname
AS

SET NOCOUNT ON;

IF OBJECT_ID('tempdb.dbo.#HelpIndexResults') IS NOT NULL
 DROP TABLE #HelpIndexResults;

CREATE TABLE #HelpIndexResults (
index_name sysname,
index_description VARCHAR(210),
index_keys NVARCHAR(2078),
include_keys NVARCHAR(2078) DEFAULT ''
);

--why reinvent the wheel? - use standard procedures
INSERT INTO #HelpIndexResults (index_name, index_description, index_keys)
 EXECUTE sp_helpindex @object_name;

IF @@ROWCOUNT = 0
 RETURN;

DECLARE @index_name sysname = '';
DECLARE @col_order_reversed INT;
DECLARE @column_name  sysname;

--now get the INCLUDE keys (filtered to the object we were asked about,
--otherwise like-named indexes on other tables would pollute the results)
DECLARE include_field_cursor CURSOR FOR
SELECT
 ROW_NUMBER() OVER (PARTITION BY ic.index_id ORDER BY ic.index_column_id DESC),
 i.name AS index_name,
 c.name AS column_name
FROM sys.indexes i INNER JOIN sys.index_columns ic
 ON ic.object_id = i.object_id
 AND ic.index_id = i.index_id
INNER JOIN sys.columns c
 ON c.object_id = ic.object_id
 AND c.column_id = ic.column_id
WHERE ic.is_included_column = 1
 AND i.object_id = OBJECT_ID(@object_name)
ORDER BY ic.index_column_id;

OPEN include_field_cursor;

FETCH NEXT FROM include_field_cursor
 INTO @col_order_reversed, @index_name, @column_name;

WHILE @@FETCH_STATUS = 0
BEGIN
	UPDATE #HelpIndexResults
	SET include_keys += CASE
							WHEN @col_order_reversed > 1 THEN @column_name + ', '
							WHEN @col_order_reversed = 1 THEN @column_name
						END
	WHERE index_name = @index_name;

	FETCH NEXT FROM include_field_cursor
		INTO @col_order_reversed, @index_name, @column_name;
END

CLOSE include_field_cursor;
DEALLOCATE include_field_cursor;

--rows without INCLUDE columns simply keep the default '' from the table definition

SELECT index_name, index_description, index_keys, include_keys
FROM #HelpIndexResults
ORDER BY index_name;

SET NOCOUNT OFF;
GO
I would be interested to see any comments about getting this done in a set-based fashion; a first sketch of my own is below.
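
For the record, here is a set-based sketch built on the FOR XML PATH concatenation trick (the same one described in the newer post above). It is only lightly tested, so treat it as a starting point rather than a finished replacement; @object_name works just like the parameter of up_helpindex, with dbo.attachments as an arbitrary example:

--set-based sketch; index_description is left out for brevity
DECLARE @object_name sysname = 'dbo.attachments';

SELECT i.name AS index_name,
 STUFF((SELECT ', ' + c.name
		FROM sys.index_columns ic
		INNER JOIN sys.columns c
		 ON c.object_id = ic.object_id
		 AND c.column_id = ic.column_id
		WHERE ic.object_id = i.object_id
		 AND ic.index_id = i.index_id
		 AND ic.is_included_column = 0
		ORDER BY ic.key_ordinal
		FOR XML PATH('')), 1, 2, '') AS index_keys,
 ISNULL(STUFF((SELECT ', ' + c.name
		FROM sys.index_columns ic
		INNER JOIN sys.columns c
		 ON c.object_id = ic.object_id
		 AND c.column_id = ic.column_id
		WHERE ic.object_id = i.object_id
		 AND ic.index_id = i.index_id
		 AND ic.is_included_column = 1
		ORDER BY ic.index_column_id
		FOR XML PATH('')), 1, 2, ''), '') AS include_keys
FROM sys.indexes i
WHERE i.object_id = OBJECT_ID(@object_name)
 AND i.name IS NOT NULL --skip the heap entry
ORDER BY i.name;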
Thanks for reading.

BIT Data Type

February 23, 2011 20:18

I stumbled upon (or rather over) a little something this morning while writing a procedure, one of whose parameters was of type bit.

Books Online states that the bit data type is “An integer data type that can take a value of 1, 0, or NULL”.

Anyway, I wrote the procedure and while testing it, I was expecting to get an error when passing anything other than 1, 0 or NULL to the bit parameter. I was surprised to find that the bit will always have the value 1 when any value other than 0 or NULL is passed to it, even if the value is not an integer but of type numeric (i.e. decimal or float). Even negative values are interpreted as 1.

SET NOCOUNT ON

DECLARE @nullbit BIT
DECLARE @falsebit BIT
DECLARE @truebit BIT
DECLARE @posbit BIT
DECLARE @negbit BIT
DECLARE @decimalbit BIT

SET @falsebit = 0
SET @truebit = 1
SET @posbit = CAST(POWER(2.0, 62) AS BIGINT) --2^62, a comfortably large positive bigint
SET @negbit = CAST(POWER(-2.0, 63) AS BIGINT) -- -2^63, the smallest bigint
SET @decimalbit = -0.1

SELECT @nullbit AS [NULL], @truebit AS [TRUE], @falsebit AS [FALSE],  @posbit AS positive, @negbit AS negative, @decimalbit AS [DECIMAL]

SET NOCOUNT OFF

Discovering that even bigint values were permitted, I decided to push it a little. Try the following and you’ll see that all values between -2^122 and 2^122 are interpreted as 1 (with, of course, 0 as the exception).

SET NOCOUNT ON
DECLARE @results TABLE (expression FLOAT(53), bitvalue BIT)
DECLARE @i FLOAT(53)  = -122.0

WHILE 1=1
BEGIN
	BEGIN TRY
		INSERT INTO @results
			SELECT	@i, 
					CASE 
						WHEN @i < 0 THEN CAST(POWER(-2.0,@i * -1) AS BIT)
						WHEN @i = 0 THEN CAST(@i AS BIT)
						ELSE CAST(POWER(2.0,@i) AS BIT)
					END
		SET @i += 1
	END TRY
	BEGIN CATCH
		SELECT * FROM @results
		RETURN
	END CATCH
END
SET NOCOUNT OFF

Conclusion
I was expecting my procedure to report some sort of error when an invalid value was passed to it. I think the thing to learn here is: always test to see if things work the way you expect them to.
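
If a procedure really must reject such values, one option is to accept the parameter as a wider type and validate it before assigning it to a bit. A minimal sketch (the procedure and parameter names are made up for illustration):

CREATE PROCEDURE up_StrictBitDemo
	@flag_input INT
AS
SET NOCOUNT ON;

--reject anything other than 0 or 1 before it is silently coerced;
--NULL passes through, as NOT IN evaluates to UNKNOWN for NULL
IF @flag_input NOT IN (0, 1)
BEGIN
	RAISERROR('@flag_input must be 0, 1 or NULL.', 16, 1);
	RETURN;
END

DECLARE @flag BIT = @flag_input;
SELECT @flag AS validated_bit;
GO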

Running Totals

February 7, 2011 19:02

This morning I read a blog post where a solution for calculating running totals was presented. I wanted to post a code correction as a comment to the post itself, but after three attempts (three site errors) to submit my comment, I gave up and decided to do a quick post of my own.

I don’t want to drag anybody through the mud on this, so I won’t post a link to the blog, but this is how the code was presented (I have edited it so that it can’t be used to search for the original post).

DECLARE @RunningTotal TABLE (Amount INT)

INSERT INTO @RunningTotal VALUES (1);
INSERT INTO @RunningTotal VALUES (2);
INSERT INTO @RunningTotal VALUES (3);
INSERT INTO @RunningTotal VALUES (4);
INSERT INTO @RunningTotal VALUES (5);

SELECT
     a.Amount
     ,(
		SELECT
          Sum(b.Amount)
		FROM @RunningTotal b
		WHERE b.Amount <= a.Amount
	  ) rs
FROM
     @RunningTotal a

I found the idea fairly nifty, but what I wanted to challenge was the fact that this script is entirely dependent on the values stored in Amount being unique: if a particular amount were entered more than once, the results would be incorrect, because the condition b.Amount <= a.Amount counts every duplicate of an amount into that amount’s running total.

Try the above with the following values.

INSERT INTO @RunningTotal VALUES (1);
INSERT INTO @RunningTotal VALUES (2);
INSERT INTO @RunningTotal VALUES (3);
INSERT INTO @RunningTotal VALUES (4);
INSERT INTO @RunningTotal VALUES (5);
INSERT INTO @RunningTotal VALUES (2);

Even if pre-sorted, the results would be partially incorrect.
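
With the extra 2 inserted, the sub-select version returns something like this:

Amount      rs
----------- -----------
1           1
2           5
3           8
4           12
5           17
2           5

Both rows with Amount = 2 report 5, and every total from there on silently includes the duplicate; a true running total over the insertion order would read 1, 3, 6, 10, 15, 17.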

It did, however, make me think about how I would get the required results. My first idea was to use ROW_NUMBER() and I ended up having to use a CTE too. But it works no matter what order the rows are in, because every row gets its own row number, so duplicate amounts are no longer lumped together!

;WITH RunningTotal_CTE( RowNo, Amount)
AS
(
	SELECT
	ROW_NUMBER() OVER (ORDER BY (SELECT 1)) as RowNo,
	Amount
	FROM @RunningTotal
)

SELECT	
		a.Amount,
		(
			SELECT SUM (b.Amount)
			FROM RunningTotal_CTE b
			WHERE b.RowNo <= a.RowNo
		) RunningTotal
 FROM RunningTotal_CTE a
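
For completeness: from SQL Server 2012 onwards the windowing extensions make the correlated sub-select unnecessary, as a running total is just a windowed SUM. A sketch against the same table variable (the ORDER BY (SELECT 1) is the same “I don’t care about the order” trick as in the CTE):

SELECT	Amount,
		SUM(Amount) OVER (
			ORDER BY (SELECT 1)
			ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
		) AS RunningTotal
FROM @RunningTotal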

Enabling and Disabling Triggers

August 18, 2010 20:47

Yesterday, a colleague of mine proudly told me about a workaround that he had put together in the form of a trigger. “The workaround,” he said, “can simply be switched on and off by creating or dropping the trigger.” He had a glint in his eye until I asked him whether he was aware that triggers can be disabled while still leaving the code in place. His solution would have meant keeping the T-SQL for the trigger on the server’s file system in case it had been “switched off” and needed to be “switched on” again, which does not really conform to best practices.

So I provided him with the code below, which I wrote for a project I dealt with a little while ago. For the sake of testing, I am also including a little bit of DDL in this post. Note that the test trigger I include deals with all rows affected by the triggering statement and not just one; forgetting this is a common mistake (I have made it myself) when writing triggers.

use ThinkTank
go

if OBJECT_ID('dbo.TriggerTest') is not null
	drop table dbo.TriggerTest

create table dbo.TriggerTest (
	id int identity(1,1) primary key,
	guid_field uniqueidentifier ,
	calculated_field bigint
	)
go

Now the code for the trigger. Note the inner join between the logical table “inserted” and the table dbo.TriggerTest. This ensures that the trigger handles all rows affected by the triggering statement, not just a single one.

create trigger dbo.tr_GenerateCalculatedFieldValue on dbo.TriggerTest
after insert
as

update tt
set calculated_field = i.id + 111 --an arbitrary data manipulation for testing
	from dbo.TriggerTest tt
	inner join inserted i
	on i.id = tt.id
go

Ok, so that is the DDL required to test what this post is actually about. Now for the code snippet which I have packed into a user stored procedure:

use ThinkTank
go

create procedure dbo.up_ReverseTriggerStatus
	@TriggerName sysname,
	@TableName sysname,
	@SchemaName sysname = 'dbo'
as

declare @IsDisabled bit
declare @ChangeStatusCommand nvarchar(500)
declare @QualifiedTable nvarchar(260)
declare @QualifiedTrigger nvarchar(260)
declare @UserMessageVar varchar(8)

--validate values passed in to the procedure and prevent sql injection
--First check to see that we are dealing with a valid schema
if schema_id(@SchemaName) is null --schema_id returns null for unknown schemas
begin
	raiserror ('"%s" is not a valid schema!', 16, 1, @SchemaName)
	return
end

--build the qualified names
set @QualifiedTable = quotename(@SchemaName) + '.' + quotename(@TableName)
set @QualifiedTrigger = quotename(@SchemaName) + '.' + quotename(@TriggerName)

-- Now check to see that we have a trigger with this schema and name
if objectpropertyex(object_id(@QualifiedTrigger), 'IsTrigger') != 1
or objectpropertyex(object_id(@QualifiedTrigger), 'IsTrigger') is null
begin
	raiserror ('%s is not a known trigger!', 16, 1, @QualifiedTrigger)
	return
end

--Then check to see that the table exists in the database
if objectpropertyex(object_id(@QualifiedTable), 'IsTable') != 1
or objectpropertyex(object_id(@QualifiedTable), 'IsTable')  is null
begin
	raiserror('%s is not a valid table object!', 16, 1, @QualifiedTable)
	return
end
else if objectpropertyex(object_id(@QualifiedTable), 'IsTable') = 1
begin
--OK, we've gotten this far, so the parameters aren't injected.
--It is also necessary to check that the trigger is actually installed on the table
	select @IsDisabled = is_disabled
		from sys.triggers
			where name = @TriggerName --unqualified
			and parent_id = object_id(@QualifiedTable)
end

if @IsDisabled is null
begin
	raiserror('The trigger %s is not installed on %s', 16, 1, @QualifiedTrigger, @QualifiedTable)
	return
end

--now we can get started with the actual work
--this would normally be done in one step, but for the sake of keeping it on the monitor...
set @ChangeStatusCommand = N'&command trigger &trigger on &table'
set @ChangeStatusCommand = replace(@ChangeStatusCommand,  '&table', @QualifiedTable)
set @ChangeStatusCommand = replace(@ChangeStatusCommand, '&trigger', @QualifiedTrigger)

if @IsDisabled = 1
begin
--here we want to re-enable the trigger
	set @ChangeStatusCommand = replace(@ChangeStatusCommand, '&command', 'enable')
	set @UserMessageVar = 'enabled'
end
else if @IsDisabled = 0
begin
--here we want to disable the trigger
	set @ChangeStatusCommand = replace(@ChangeStatusCommand, '&command', 'disable')
	set @UserMessageVar = 'disabled'
end

execute sp_executesql @ChangeStatusCommand
raiserror('The trigger %s was %s on %s.', 10, 1, @QualifiedTrigger, @UserMessageVar, @QualifiedTable)
go

Testing

Of course we need to test this solution. The standard status for a trigger is enabled, so the following code fires the trigger and we get the expected results.

use ThinkTank
go

insert into dbo.TriggerTest (guid_field)
select top 10 NEWID() from sys.columns a cross join sys.columns b

If we disable the trigger using the procedure “up_ReverseTriggerStatus”, the field “calculated_field” is not filled when the above statement is next run. Try it out and see for yourself.
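
A quick test could look like this, using the objects created above (running the procedure a second time re-enables the trigger):

use ThinkTank
go

execute dbo.up_ReverseTriggerStatus
	@TriggerName = 'tr_GenerateCalculatedFieldValue',
	@TableName = 'TriggerTest'

insert into dbo.TriggerTest (guid_field)
select top 10 NEWID() from sys.columns a cross join sys.columns b

--calculated_field remains null for the rows just inserted
select * from dbo.TriggerTest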

In Conclusion

I haven’t included any error handling or the like, as I would really like to leave that up to each user, and I didn’t want to bloat this post too much. It already contains more code than I intended when I originally started writing (I really just wanted to blog about the fact that enable/disable trigger exists). It is, however, interesting to note that the bulk of the code is dedicated to parameter validation and that the core functionality makes up less than half of it. It might actually be worthwhile looking into reducing the validation code. Perhaps I’ll do that at a later date.

Insert, Update or Delete Trigger

May 17, 2010 18:44

This one is a bit of a no-brainer and is really just for me, but if it does help anyone out there then I suppose it was worth posting.

I had to write a trigger today to monitor changes to data in a table. The client application does DML for the login language of the current user, and the trigger has to do the same for the other languages for which the system is configured, so that clients logging on in another language also see the current data. For example: if a user in England inserts a new part number (along with its description etc.), the same information has to be available to users in Romania and Germany. The same applies to updated and deleted data.

I generally don’t write many triggers, as I have had bad (really bad) experiences in the past, with undocumented triggers. But, the requirement was that a trigger be implemented.

Because the code that has to be run varies depending on which DML event fired the trigger, I had to establish which event was occurring. I could have written three triggers, one each for insert, update and delete, but I didn’t want to have three db objects essentially linked by functionality and purpose, yet acting as separate entities. That can get rather confusing for whoever has to maintain and/or support the database and the application using it. (I have yet to meet the support technician who reads system documentation at 3am.) So I figured: just give them one trigger to deal with.

This is how I went about finding out which DML event was firing the trigger:

create trigger TR_TriggerName
on Table
after insert, update, delete
as

declare @Update bit = 0
declare @Insert bit = 0
declare @Delete bit = 0

--statements that affect zero rows still fire the trigger; without this
--guard they would fall through to the delete branch below
if not exists (select * from inserted) and not exists (select * from deleted)
	return;

if exists (select * from inserted) --could be update or insert, but not delete
begin
	if exists (select * from deleted) --this is an update statement
	begin
		select	@Update = 1,
				@Insert = 0,
				@Delete = 0;
	end
	else --this is an insert statement
	begin
		select	@Insert = 1,
				@Update = 0,
				@Delete = 0;
	end
end
else --can only be a delete because nothing was in inserted
begin
	select	@Delete = 1,
			@Insert = 0,
			@Update = 0;
end

if (@Insert = 1)
begin
--insert code for insert procedure
end
else if (@Update = 1)
begin
--insert code for update procedure
end
else if (@Delete = 1)
begin
--insert code for delete procedure
end
go

The ever elusive Dynamic Pivot

April 10, 2010 17:47

Introduction
Recently, I have read many forum entries by frustrated SQL users who have had problems dynamically building a list of columns and rows when the number of each is variable. I have also read, in the same forums, the argument that this sort of functionality belongs in the reporting application and not at the database level. Although that makes perfect sense, there are situations in which there simply is no reporting application available or, as weird as it may seem, the reporting services built into MSSQL Server since version 2005 are explicitly not to be implemented. These are the typical day-to-day problems many of us are faced with in our jobs.

Some time ago I was faced with exactly this problem and was required to build a “report” for a customer who needed a list of ordered part numbers and their required quantities for any given day. At first this sounded like a simple select and group by, but it turned out that what the customer wanted was a list with part numbers as column names, dates as row names and the accumulated quantities of parts per date as the body of information.

At the time, I was unable to find any useful information in the usual forums (although I generally try to find a solution without help from outside, this doesn’t always work), so this is what I came up with.

Let’s go for it
First let’s do a bit of DDL. Ordinarily there would be a series of other tables along with FK relationships between the tables as well as a range of primary/unique keys and indexes. For the sake of simplicity, I am not going to reproduce the entire range of DDL here.

use ThinkTank
go

if  OBJECT_ID('tempdb..#PartNumbers') is not null
 drop table #PartNumbers
go
create table #PartNumbers (
 [ID] int identity (1,1),
 [PartNo] varchar(35),
 --[PartUsage] etc...
 )
go

if  OBJECT_ID('tempdb..#VehicleOrders') is not null
 drop table #VehicleOrders
go
create table #VehicleOrders (
 [ID] int identity(1,1),
 [ProdNr] varchar(35),
 [DeliveryDate] datetime,
 --[OrderStatus] etc...
 )
go

if  OBJECT_ID('tempdb..#VehicleOrdersItem') is not null
 drop table #VehicleOrdersItem
go
create table #VehicleOrdersItem (
 [VehicleOrders_ID] int, --int to match #VehicleOrders.ID, which it references
 [PartNo_ID] int
 )
go

Now for the data

--let's work with 10 part numbers
insert into #PartNumbers (PartNo)
values
('AW93 54502B98 ADWAMN'),
('AW93 54502B98 ADWNUG'),
('AW93 54502B98 AF0AMN'),
('AW93 54502B98 AF8NUG'),
('AW93 54502B98 AG0AMN'),
('AW93 54502B98 AG8NUG'),
('AW93 54502B98 BD0AMN'),
('AW93 54502B98 BDWAMN'),
('AW93 54502B98 BDWNUG'),
('AW93 54502B98 BF0AMN')
go

--Each vehicle can consist of 1 - n parts.
insert into #VehicleOrders (ProdNr, DeliveryDate)
 select top 150000
  --Random production number
  'P1021' + cast(dbo.ufn_getRandomInt(100000,250000) as Varchar(35)),
  --add random number of days to date so that we have a 10 day range
  DATEADD(dd, dbo.ufn_getRandomTinyint(1,10), GETDATE())
  from ufn_MillionTally()
go

insert into #VehicleOrdersItem( VehicleOrders_ID, PartNo_ID)
 select top 1000000
  --Random production number id between 1 and 150000
  dbo.ufn_getRandomint(1,150000),
  --random id from #PartNumbers we want to avoid partno. 7
  replace(dbo.ufn_getRandomTinyint(1,10), 7, 1)
  from ufn_MillionTally()
go
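
The helper functions used above are not reproduced in this post (see the credits at the end). If you want to run the script without them, inline stand-ins along the following lines should produce comparable data; the ABS(CHECKSUM(NEWID())) expressions are my assumption of what the random functions do, not their actual code:

--alternative versions of the two inserts above, without the helper functions:
--ABS(CHECKSUM(NEWID())) serves as the per-row random source and a cross join
--on master.sys.all_columns replaces the tally function
insert into #VehicleOrders (ProdNr, DeliveryDate)
 select top 150000
  'P1021' + cast(100000 + abs(checksum(newid())) % 150001 as varchar(35)),
  DATEADD(dd, 1 + abs(checksum(newid())) % 10, GETDATE())
 from master.sys.all_columns a cross join master.sys.all_columns b
go

insert into #VehicleOrdersItem (VehicleOrders_ID, PartNo_ID)
 select top 1000000
  1 + abs(checksum(newid())) % 150000,
  replace(1 + abs(checksum(newid())) % 10, 7, 1) --still avoiding part no. 7
 from master.sys.all_columns a cross join master.sys.all_columns b
go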

The core problem here is that there could be any number of part numbers (an ever-changing list in the automobile industry) and the number of dates is also variable (no fixed Monday-to-Friday schedule: trade union meetings, public holidays, strikes, extra shifts etc.). Furthermore, there could be part numbers that have not yet been ordered, or that have been discontinued but are still in the master data table #PartNumbers.

This was my first simple select and group by mentioned earlier.

--we only want the date info, time is not relevant for this report
select cast(vo.DeliveryDate as date) as [Date],
	pn.PartNo,
	count(vi.PartNo_ID) as Quantity
from #VehicleOrdersItem vi
	inner join #VehicleOrders vo on vo.ID = vi.VehicleOrders_ID
	inner join #PartNumbers pn on pn.ID = vi.PartNo_ID
group by cast(vo.DeliveryDate as date), pn.PartNo
order by [Date], pn.PartNo
go

Essentially, the information returned is what was required, but not formatted in a way that is easy to use.

What was required was the information from the above select, but formatted in such a way that each date is cumulated into a single row and the part numbers are the names of the columns. As it is impossible to predict what and when the customer is going to order, both of these are variable (well, the “what” is predictable to a certain degree, but I don’t want to delve into the ins and outs of suppliers’ nightmares in the automobile industry). Thus it is not possible to define a fixed set of column and row names.

Enter the dynamic pivot!

I used dynamic T-SQL to build the actual PIVOT statement, so that the variables could be inserted in a separate work-step and executed at the end.

declare @PartNoList nvarchar(max) = ''
declare @PartNoListIsNull nvarchar(max) = ''
declare @PivotString nvarchar(max)  =
'select cast(DeliveryDate as date) as [Date of Usage], ?FieldListIsnull   
 from   
 (   
  select cast(vo.DeliveryDate as date) as [DeliveryDate],
   pn.PartNo, isnull(count(*),0) as Quantity   
   from #VehicleOrdersItem vi inner join #VehicleOrders vo
 on vo.ID = vi.VehicleOrders_ID
   inner join #PartNumbers pn
 on pn.ID = vi.PartNo_ID   
   group by CAST(vo.DeliveryDate as date), pn.PartNo
 ) as RSet
 pivot
 (
  sum(RSet.Quantity)   
  for RSet.PartNo in   
   (
    ?FieldList   
   )
 ) as PTable
 order by DeliveryDate'

Build a list of part numbers:

--@PartNoListIsNull is used to display the part numbers and is required
--to allow for part numbers that have no orders within the viewed time frame.
--Remember, we avoided orders for PartNo_ID = 7 while building test data earlier.
select @PartNoListIsNull =
  @PartNoListIsNull + 'ISNULL(' + QUOTENAME(PartNo) + ', 0) as '
  + QUOTENAME(PartNo) + ', '    
 from #PartNumbers   
order by PartNo   

select @PartNoList =
  @PartNoList + QUOTENAME(PartNo) + ', '    
 from #PartNumbers   
order by PartNo  

--now we cut off the trailing comma
select  @PartNoListIsNull =
  SUBSTRING(@PartNoListIsNull, 1, LEN(@PartNoListIsNull) - 1),
@PartNoList =
  SUBSTRING(@PartNoList, 1, LEN(@PartNoList) - 1)

Replace the variables in the PIVOT command and execute to get the required result:

--two separate steps: '?FieldList' is a substring of '?FieldListIsnull', so the
--longer placeholder must be replaced first, and assigning the same variable
--twice in one select has no guaranteed evaluation order
set @PivotString = REPLACE(@PivotString, '?FieldListIsnull', @PartNoListIsNull)
set @PivotString = REPLACE(@PivotString, '?FieldList', @PartNoList)
execute sp_executesql @PivotString

There are surely better ways of doing this, but it did work for me and fulfilled the customer’s requirements. I have yet to investigate whether or not this sort of code would be prone to SQL injection, but I don’t think so: the part numbers are wrapped in QUOTENAME() when the column list is built, and this code, packed into a stored procedure, would have a typed parameter of type datetime which, as far as I am aware, would not allow malicious code to be injected.

Credit where credit is due:

1. The random function used above was extracted from Eric Fickes’ Blog. Thanks!

2. I wrote the ufn_MillionTally() tally function after closely following an interesting editorial and the resulting discussion on SQLServerCentral.com
This could also be replaced by a simple cross join on the master.sys.all_columns table (see code download below).

Code Download
The code is provided as is and should not be run in a production environment before sufficient testing has taken place. Feel free to use this solution if you see fit.

Finally
Since having the idea to write this article I have briefly read about cross table queries, but haven’t had a chance to compare these approaches to my solution yet. I will do so in the not too distant future and will report on my findings.
