Where I last left off, we were discussing a fairly simple way of selecting a random row from a table. A more involved method is to drop the assumption of a contiguous 1..rowcount identity altogether. What we can do is build a temp table of identities and select from it:
DECLARE @foo TABLE
(
    fooId int IDENTITY(1,1),
    quoId int
)

INSERT INTO @foo (quoId)
SELECT quoId FROM quotes_quo

DECLARE @count int
DECLARE @rand int
SET @count = (SELECT COUNT(1) FROM @foo)
SET @rand = ROUND(((@count - 1) * RAND() + 1), 0)

SELECT q.quoText AS Quote, q.quoAuthor AS Author
FROM quotes_quo q
INNER JOIN @foo f ON f.quoId = q.quoId
WHERE f.fooId = @rand
That's it! This is essentially building an identity mapping table, and it comes at a cost in performance. I built a test table and inserted about 60,000 rows of random data. Using a SQL Trace, the first query (discussed in Part I) executes in about 30 milliseconds. This new query takes about 190 milliseconds, and the bulk of that time (about 95%) is the temp table insert. Whether that cost matters depends on your data and how often the query is hit – if you're output caching the data from an ASP.NET page, for example, it's paid rarely. In fact I'm doing exactly that, so performance isn't really an issue for me.
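Outside of T-SQL, the same identity-mapping idea is just "number the rows 1..count, pick a random position, map it back to a real key." Here's a minimal Python sketch of that logic (the function name and the list of ids are placeholders, not part of the original schema):

```python
import random

def random_quote_id(quote_ids):
    """Mimic the temp-table approach: the list index plays the role
    of the contiguous fooId identity, and we map a random position
    back to the real (possibly gappy) quoId."""
    count = len(quote_ids)
    # Same formula as the T-SQL: ROUND((count - 1) * RAND() + 1, 0)
    pos = round((count - 1) * random.random() + 1)
    # fooId is 1-based, so shift down by one to index the list
    return quote_ids[pos - 1]
```

Just as in the SQL version, the real ids can have arbitrary gaps; only the side table (here, the list) needs to be contiguous.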
But in my travels I came across yet another way to do this, and it's clever. Using the newid() function in the ORDER BY, we can simulate a random row function:
SELECT TOP 1 quoText AS Quote, quoAuthor AS Author
FROM quotes_quo
ORDER BY NEWID()
Cool, huh? Although the temp table method is a perfectly feasible approach in many situations, here it's largely unnecessary because we don't need the mapping data for anything else. The cost of this query is very reasonable, too, at about 90 milliseconds.
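Conceptually, ORDER BY NEWID() just tags every row with a random sort key and takes whichever one sorts first. A hedged Python sketch of that idea (names are illustrative only):

```python
import random

def random_row(rows):
    """Analogue of SELECT TOP 1 ... ORDER BY NEWID(): give every row
    a random sort key and return the one that sorts first, with no
    mapping table required. The key function is evaluated once per
    row, so each row is equally likely to win."""
    return min(rows, key=lambda _: random.random())
```

Note that, like the SQL version, this does work proportional to the number of rows on every call; it trades the one-time mapping-table cost for a per-query sort.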
There's no right or wrong method, but knowing the options and their performance will help you make the right choice for the application.