Assign Variables in a Query
- SET is slower than SELECT because each SET statement can assign only one value per execution.
- The entire SELECT statement runs once and assigns all of its variables in one execution.
- SET is the ANSI standard for variable assignment; SELECT is not.
- If the query returns multiple values/rows, SET will raise an error.
- SELECT will assign one of the returned values to the variable and hide the fact that multiple values were returned.
- When assigning from a query, if no value is returned, SET will assign NULL, whereas SELECT will not make the assignment at all, leaving the variable unchanged (see the sketch after the code below).
DECLARE @ID1 INT,
        @ID2 VARCHAR(100),
        @ID3 INT,
        @ID4 VARCHAR(100);

SET @ID1 = 1;
SET @ID2 = 'One';

SELECT @ID3 = 1,
       @ID4 = 'One';
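- A minimal sketch of the other differences above, using a hypothetical #Source temp table: with no rows returned, SET assigns NULL while SELECT leaves the variable untouched; with several rows, SET raises an error while SELECT silently assigns one of the values.
CREATE TABLE #Source (Val INT);

DECLARE @A INT = 99,
        @B INT = 99;

SET @A = (SELECT Val FROM #Source);    -- no rows: @A becomes NULL
SELECT @B = Val FROM #Source;          -- no rows: @B keeps its previous value

SELECT @A AS [SetResult],              -- NULL
       @B AS [SelectResult];           -- 99

INSERT INTO #Source (Val) VALUES (1), (2);

-- SET @A = (SELECT Val FROM #Source); -- two rows: raises an error
SELECT @B = Val FROM #Source;          -- two rows: one value is silently assigned

DROP TABLE #Source;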
Datatypes
- The rule for division in SQL Server is that it looks at the operands and, based on their datatypes, determines the datatype of the result.
- If both the numerator and the denominator are integers, the result is also an integer.
- When 1/2 is calculated arithmetically, the answer is 0.5.
- However, because both operands are integers, SQL Server performs integer division, the fractional part is discarded, and the result is displayed as 0 (zero).
DECLARE @Num1 INT = 1;
DECLARE @Num2 INT = 2;

SELECT 1 / 2            AS [Result1], -- 0
       @Num1 / @Num2    AS [Result2], -- 0
       CAST(0.5 AS INT) AS [Result3]; -- 0

SELECT 1 / 2 * 1.       AS [Result4], -- 0
       1. * 1 / 2       AS [Result5], -- 0.500000
       1. * 1 / 2 * 1.0 AS [Result6]; -- 0.5000000
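- A minimal sketch of the usual workaround, reusing @Num1 and @Num2 from above: cast, or multiply, one operand into a non-integer type before dividing, so the fractional part is preserved.
SELECT CAST(@Num1 AS DECIMAL(10, 4)) / @Num2 AS [Result7], -- 0.5, not 0
       1 * 1.0 / 2                           AS [Result8]; -- 0.5, not 0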
Default Datatype of NULL value in a Query
- In SQL Server, the default datatype of a NULL value is an integer.
SELECT NULL AS [TestNull]
INTO #TempTable;
EXEC tempdb..sp_columns #TempTable;
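- In the sp_columns output, the TYPE_NAME for [TestNull] comes back as int. A minimal sketch of overriding that default (the #TempTable2 name is only for illustration): cast the NULL to the datatype you actually want.
SELECT CAST(NULL AS VARCHAR(100)) AS [TestNull]
INTO #TempTable2;
EXEC tempdb..sp_columns #TempTable2; -- TYPE_NAME now reports varchar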
Dilemmas of SELECT *
- Indigestion: There is a cost for every column of data you request, within both the database and the application.
- When, in an application, you specify all the columns rather than just the ones you need, you will at least initially know, and maybe accept, the extent of the waste.
- If someone subsequently widens the table with more columns, you get a whole lot of extra columns that aren't needed, and your application slows down under the increased memory-management work.
- Torpidity: A query that retrieves information will probably use an index, especially if your query uses a filter.
- SQL Server’s query optimizer will want to get the data from the covering index, if possible, rather than having to rummage around in the clustered index.
- If you use SELECT *, then the chances are high that no index will cover the query (illustrated in the first sketch after this list).
- Even if you were wacky enough to cover the entire table with a non-clustered index, this will be rendered entirely useless if the table is widened still further with more columns.
- Misinterpretation: If you reference the columns by order rather than name, using INSERT INTO … SELECT *, then you must hope the column order never changes.
- If it does, and it is possible to coerce the values into the datatypes you are anticipating, then data can end up in the wrong columns without triggering an error (second sketch below).
- Binding Problems: When you SELECT…INTO with a query that has been sprinkled with asterisk fairy-dust, you can easily hit the problem of a duplicate column name (third sketch below).
- If you specify the columns, you will know about any duplicates up-front and can alias them.
- If, instead, you pass such a query to an application, it has no easy way of knowing which column is the one with the correct value.
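- First sketch, on covering: the dbo.Orders table, its columns, and the IX_Orders_CustomerID index below are all hypothetical. The narrow query can be answered entirely from the nonclustered index; SELECT * also needs columns the index does not carry, so the optimizer has to go back to the clustered index.
CREATE TABLE dbo.Orders
(
    OrderID     INT IDENTITY PRIMARY KEY,
    CustomerID  INT NOT NULL,
    OrderDate   DATE NOT NULL,
    ShipAddress NVARCHAR(200) NOT NULL,
    Notes       NVARCHAR(MAX) NULL
);

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate);

-- Covered: every referenced column is in IX_Orders_CustomerID.
SELECT CustomerID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;

-- Not covered: the * also asks for ShipAddress and Notes, which the index
-- does not carry, so the optimizer falls back to the clustered index.
SELECT *
FROM dbo.Orders
WHERE CustomerID = 42;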
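- Second sketch, on column order, with hypothetical #Staging and #Target temp tables: because both columns share a coercible datatype, INSERT INTO … SELECT * keeps working after the source's column order changes, and the values land in the wrong columns without any error.
CREATE TABLE #Staging (FirstName VARCHAR(50), LastName VARCHAR(50));
CREATE TABLE #Target  (FirstName VARCHAR(50), LastName VARCHAR(50));

INSERT INTO #Staging (FirstName, LastName) VALUES ('Ada', 'Lovelace');

-- Correct while the column order matches.
INSERT INTO #Target SELECT * FROM #Staging;

-- If #Staging is later rebuilt as (LastName, FirstName), the same statement
-- still succeeds, but 'Lovelace' ends up in #Target.FirstName and 'Ada'
-- in #Target.LastName.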
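- Third sketch, on binding, with hypothetical #Customers and #Orders temp tables: SELECT * over the join produces two columns named ID, so SELECT … INTO cannot create the new table; listing the columns exposes the clash up-front and lets you alias one of them.
CREATE TABLE #Customers (ID INT, Name VARCHAR(50));
CREATE TABLE #Orders    (ID INT, CustomerID INT);

-- Fails: the new table would contain two columns named ID.
SELECT *
INTO #CustomerOrders
FROM #Customers AS c
JOIN #Orders    AS o ON o.CustomerID = c.ID;

-- Works: explicit columns, with the duplicates aliased away.
SELECT c.ID   AS CustomerID,
       c.Name AS CustomerName,
       o.ID   AS OrderID
INTO #CustomerOrders2
FROM #Customers AS c
JOIN #Orders    AS o ON o.CustomerID = c.ID;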