
Showing posts from May, 2020

tempdb

So what is tempdb? Well… it’s a system database used by queries that involve joins, aggregations, cursors, XML variables, triggers, online index builds, sorting, hash matching, temp tables, or table variables. There are other bits that use tempdb, but they’re beyond me right now… How should it be configured? Eight tempdb data files, 1024 MB initial size, 256 MB autogrowth (unlimited), all located in the same directory. The files must be evenly sized, otherwise more traffic will go to the largest file (SQL Server’s proportional-fill algorithm favours the file with the most free space).
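A minimal sketch of that configuration in T-SQL; the logical name tempdev is the SQL Server default for the first data file and the path below is a made-up example, so check sys.master_files for your actual names and locations:

-- Resize the existing first data file
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

-- Add the additional data files (repeat until you have 8, all in one directory)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'D:\SQLData\tempdev2.ndf',  -- hypothetical path
          SIZE = 1024MB, FILEGROWTH = 256MB);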

Integration Testing with Cypress

To give a bit of background… I’m working with a small, cross-skilled team of Ruby and .NET engineers, and we were interested in streamlining our awkward development process. We were feature-branch based, but were hoping to incrementally move to trunk-based CI/CD. One part of our process that we wanted to optimise first was the QA’s manual regression testing. Doing the same tests over and over is not only a waste of their time, but can’t be the least bit fun. For each test our QA was manually conducting, we instead wanted to pair up and write an integration test. This would not only give us better confidence to release, but also expand the automation skill-set of our QA. These tests should automatically run as part of the GitLab pipeline whenever code is committed to source control. If any of the tests fail, then the pipeline is to be marked as failed. What is Cypress? Enter cypress.io. This is an open-source JavaScript framework that can run a set of integration tests from t…

Search for Text in all DB objects

The other day I needed to alter a bunch of sprocs, views, and UDFs, as they had hard-coded references to other databases. I needed to change these to target a different database for testing some new indexes. This bit of SQL saved me a bunch of time:

SELECT OBJECT_NAME(id) AS ObjectName, [text] AS ObjectDefinition
FROM sys.syscomments
WHERE [text] LIKE '%\[Foo\]%' ESCAPE '\'
FOR XML PATH('Row'), ROOT('Table')

It outputs a single XML doc that contains the definition of every database object that contained your search terms. This allowed me to easily search through them and update accordingly.
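Worth noting: sys.syscomments is deprecated and splits long definitions across 4000-character rows, so a sketch of the same search against the newer sys.sql_modules view would be:

SELECT OBJECT_NAME(object_id) AS ObjectName, [definition] AS ObjectDefinition
FROM sys.sql_modules
WHERE [definition] LIKE '%\[Foo\]%' ESCAPE '\'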

sp_blitzIndex

Some more notes after watching Brent’s How I Use the First Responder Kit: sp_BlitzIndex. This sproc analyses every index across all databases on your SQL Server and returns a prioritised list of potential issues:

EXEC sp_BlitzIndex @GetAllDatabases = 1;

The MoreInfo column is interesting. If you copy the SQL out and run it, it will give you more information on the indexing issue in question, e.g.:

EXEC dbo.sp_BlitzIndex @DatabaseName = 'YourDatabase', @SchemaName = 'dbo', @TableName = 'tFoo';

If you run this, the top result set is a list of the indexes on the target table. It shows how many seeks and scans each index has had, the size of the index, etc. Note the URL column for more information on each issue category. Missing indexes are shown in the 2nd result set. The 3rd result set shows the table definition, which is nice to have without having to mess with the SSMS UI.
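If memory serves, sp_BlitzIndex also has a @Mode parameter, where @Mode = 2 returns a flat inventory of every index (usage, sizes, definitions) rather than the prioritised diagnosis, e.g.:

EXEC sp_BlitzIndex @GetAllDatabases = 1, @Mode = 2;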

sp_blitzCache

Some more notes after watching Brent’s How I Use the First Responder Kit: sp_BlitzCache. This one shows you the most resource-intensive queries on SQL Server without running a profiler trace. Sounds pretty neat!

EXEC sp_BlitzCache @SortOrder = 'CPU', @Top = 10;

This outputs the execution plans of the 10 most expensive queries in the query plan cache. By default this is sorted by CPU, but it can commonly be set to reads, writes or duration. There are a bunch of other options, which you can see if you add the @Help = 1 parameter. The Warnings column will give you hints as to what the problem might be, for example missing indexes. These warnings are prioritised in the second result set. For example, my top warning was:

Priority 10 – Execution Plans: Forced Serialization
http://www.brentozar.com/blitzcache/forced-serialization/
“Something in your plan is forcing a serial query. Further investigation is needed if this is not by design.”

If, like me, you still have a way to go…
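The other sort orders mentioned above work the same way; a quick sketch:

EXEC sp_BlitzCache @SortOrder = 'reads', @Top = 10;
EXEC sp_BlitzCache @SortOrder = 'duration', @Top = 10;
EXEC sp_BlitzCache @Help = 1;  -- lists every parameter and valid sort order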

Development query statistics

So you want to improve the performance of a query, and need stats. Add this to the top of your query (remove it before deploying to production!):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

When you run your query, it’ll output some useful stats such as execution time, query compilation time, number of scans on each table, number of reads, etc. Example output:

SQL Server parse and compile time:
   CPU time = 78 ms, elapsed time = 78 ms.
SQL Server Execution Times:
   CPU time = 3 ms, elapsed time = 3 ms.
Table 'tFoo'. Scan count 9, logical reads 18, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'tBar'. Scan count 8, logical reads 16, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

As a side note on this: I had a query that was taking 5+ seconds to execute, but the server-side stats were only showing it taking ~50ms. Our DBA showed…
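A minimal usage sketch (dbo.tFoo and the query itself are hypothetical); turning the settings back off keeps them from polluting the rest of your session:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM dbo.tFoo;  -- the query you're tuning

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;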

Getting the Execution Plan for a given sproc

When you already know which stored procedure is causing you pain, and you need to know the existing execution plan, run this fella:

SELECT TOP 10
    D.name,
    ST.text AS TSQL_Text,
    QS.creation_time,
    QS.execution_count,
    QS.total_worker_time AS total_cpu_time,
    QS.total_elapsed_time,
    QS.total_logical_reads,
    QS.last_logical_reads,
    QS.total_physical_reads,
    PH.query_plan
FROM sys.dm_exec_query_stats QS
CROSS APPLY sys.dm_exec_sql_text(QS.plan_handle) ST
CROSS APPLY sys.dm_exec_query_plan(QS.plan_handle) PH
INNER JOIN sys.databases D ON ST.dbid = D.database_id
WHERE ST.text LIKE '%spYourStoredProc%'
ORDER BY total_elapsed_time DESC

Just so you’re aware: the above query will build its own execution plan and will be included in the result set the second time you run it. I’d recommend having the free SentryOne Plan Explorer installed, so you can open the execution plan…

sp_blitzFirst

I figured I’d write some notes after watching Brent’s How I Use the First Responder Kit: sp_BlitzFirst. This query from Brent’s SQL-Server-First-Responder-Kit essentially gives you a view of what is happening on the database over a 5 second period.

EXEC sp_BlitzFirst @ExpertMode = 1;

There are several sets of data returned. The first set shows which queries are running right now, at the start of the 5 second snapshot, and also includes their execution plans! You can open these just by clicking the plan link, and if you have the free SentryOne Plan Explorer installed, you can open the plan straight from SSMS. The second set of data shows a prioritised list of potential issues with your database; for mine it showed pretty high CPU usage. For Brent’s, it showed a database backup was in progress. The third shows WAIT STATS, i.e. how long SQL Server had to wait for any given reason. My top entry was SOS_SCHEDULER_YIELD, which is in the CPU category and it waited for 70 seconds…
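If you want a longer sample window than the default 5 seconds, sp_BlitzFirst also takes a @Seconds parameter, e.g.:

EXEC sp_BlitzFirst @Seconds = 60, @ExpertMode = 1;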