August 7, 2013 § 1 Comment
SQL SERVER – Disadvantages (Problems) of Triggers – Pinal Dave
One of my team members asked me whether he should use triggers or stored procedures. Both have their uses. I told him about a few issues with triggers; this is a short note from our discussion.
Disadvantages (Problems) of Triggers
- It is easy to view table relationships, constraints, indexes, and stored procedures in a database, but triggers are difficult to discover.
- Triggers execute invisibly to the client application. They are not visible and cannot easily be traced while debugging code.
- Their logic is hard to follow, as they can be fired before or after the database insert/update happens.
- It is easy to forget about triggers, and if there is no documentation, new developers may find it difficult to even realize they exist.
- Triggers run every time the underlying fields are updated, which is overhead on the system and makes it run slower.
I do not use triggers. In my whole career I have been able to get my work done using stored procedures instead of triggers. (Implementation and architecture changes are required depending on which of the two is used to support business logic.)
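For illustration, here is a small SQLite sketch (not SQL Server; the table names and schema are made up) of the "invisible side effect" problem above: the application code issues a plain UPDATE, yet a trigger silently writes to another table.

```python
import sqlite3

# Illustrative sketch (SQLite, not SQL Server): a trigger fires on every
# UPDATE with nothing in the calling code to show it, which is exactly why
# triggers are easy to forget and hard to trace while debugging.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE audit_log (account_id INTEGER, old_balance REAL, new_balance REAL);
    CREATE TRIGGER trg_account_update AFTER UPDATE ON accounts
    BEGIN
        INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
    END;
    INSERT INTO accounts VALUES (1, 100.0);
""")

# The application code only sees a plain UPDATE...
conn.execute("UPDATE accounts SET balance = 150.0 WHERE id = 1")

# ...yet the trigger has silently written to audit_log as a side effect
log_rows = conn.execute("SELECT * FROM audit_log").fetchall()
```

Had the same logging been done in a stored procedure, the write to audit_log would be visible at the call site.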
July 15, 2013 § 3 Comments
What are the Differences Between Relational and Graph Databases?
Say “database” to most people, and the concept of an organized collection of data, neatly stored in rows and columns of tables, comes to mind. This concept of a fixed schema, where each row is a collection of attributes, has been the basis for relational databases, and for query languages such as SQL (Structured Query Language) used to interact with the stored data, since the early 1970s.
In the last 15 years, however, several industries have begun generating data at a rate that greatly exceeds the ability of relational databases to handle it. Companies like Google and Amazon have long been generating massive amounts of data using countless servers. With the resulting data spread across multiple machines, traditional relational SQL JOIN operations are simply not possible.
Enter graph databases, which are defined as any storage system that provides index-free adjacency. What this means is that every element in the database contains a direct link to its adjacent elements. No index lookups are required; every element (or node) knows which node or nodes it is connected to, and each connection is called an edge. This allows graph database systems to apply graph theory to very rapidly examine the connections and interconnectedness of nodes; it is how Netflix can recommend videos for you.
The power of the edge allows a graph database to find results in associative data sets – data where information is the aggregation of links between nodes – faster than relational databases can. Graph databases also scale more naturally to large data sets and to datasets with changing or on-the-fly schemas. On the other hand, relational databases are still better at performing the same operation over large numbers of identical records. When you want your bank balance, you don’t want a rapid list of all your transactions – just your bottom line.
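As a toy illustration of index-free adjacency (all names and data are made up), each node can hold direct references to its neighbours, so a Netflix-style recommendation becomes a pure traversal of edges with no index lookups at all:

```python
# Toy sketch of index-free adjacency: each node object holds direct
# references to its adjacent nodes, so following an edge is a pointer hop,
# not an index lookup. Names and data are illustrative.
class Node:
    def __init__(self, name):
        self.name = name
        self.edges = []  # direct links to adjacent nodes

    def connect(self, other):
        self.edges.append(other)
        other.edges.append(self)

alice = Node("alice")
bob = Node("bob")
inception = Node("Inception")
interstellar = Node("Interstellar")

alice.connect(inception)
bob.connect(inception)
bob.connect(interstellar)

def recommend(user):
    # Titles watched by users who share a watched title with `user`,
    # found purely by walking user -> title -> other user -> other title
    seen = {title.name for title in user.edges}
    suggestions = set()
    for title in user.edges:
        for other_user in title.edges:
            if other_user is not user:
                for other_title in other_user.edges:
                    if other_title.name not in seen:
                        suggestions.add(other_title.name)
    return suggestions

recs = recommend(alice)
```

In a relational schema the same query would need JOINs through a users-to-titles link table; here the links are the data structure itself.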
The use of graph databases is rapidly spreading to many applications through the use of mixed-database approaches, where a graph search is used to identify the extent of the data, and a subsequent relational search is used to provide the detailed analytics. While this approach presently involves developing (and supporting) two database structures, it yields rapid response and targeted data analysis. Some solutions present the graph results to users while the analytics are being pulled and crunched; other systems serve up old results while the new results are being calculated. How analytics and associations coexist is one of the considerations that must be made when architecting your solution.
So is a graph database in your future? If a third or more of your relational tables describe links between data elements, your database is heavily associative, and can be a graph database candidate. The final decision requires a complete analysis of how the data is being used, volume and growth patterns, and not just a review of table structures. If your data is used for statistical analysis, data mining and exploration, or operational research, the relational database approach is still at least part of the architectural solution.
July 4, 2013 § 1 Comment
Ian Plosker shares a number of techniques for establishing the data query patterns from the outset of application development, designing a data model to fit those patterns.
Interesting discussion on choosing a data model. It's really important to understand what our application wants. Very good presentation.
June 21, 2013 § 3 Comments
This is life-saver stuff, as we send HTML-formatted email. Sharing Steve Moore's article below.
Email Formatted HTML Table with T-SQL
By Steve Moore, 2013/06/20
One of the most common tasks for me is to send data from a query as the body of an e-mail. You can do this in SSIS by outputting a query to a flat file, but there’s no easy way to format the content. You have the option of using XML and XSLT transformations, but that is a bit onerous for simple use cases.
This article is about how to produce a table like the one below in an e-mail using just SQL. The only real complexity is formatting alternate rows, which was my goal. I wanted it to look like a .NET DataGrid (shown below).
The basic approach involves three steps:
- Create a query that pulls together the data as HTML.
- Use bcp and xp_cmdshell to write the query results to a file.
- Email the file using Blat, a free SMTP client (www.blat.net).
One of the problems I ran into was the use of a <script> tag in the email, so I had to abandon CSS styling and instead apply inline styles to every row.
In order to find the odd/even rows, you need the ROW_NUMBER() function in a CTE, then a CASE expression with the modulo operator to format the result:
CREATE PROCEDURE [dbo].[procHtmlSql]
AS
BEGIN
    SET NOCOUNT ON;
    WITH htmlresult AS
    (
        SELECT ROW_NUMBER() OVER (ORDER BY AuditTrail.EnteredDate) AS RowNumber,
               '<td>' + AuditTrail.OldValue + '</td><td>'
                      + AuditTrail.NewValue + '</td><td>'
                      + AuditTrail.LoginID + '</td><td>'
                      + CAST((CONVERT(smalldatetime, AuditTrail.EnteredDate)) AS VARCHAR(20))
                      + '</td>' AS HtmlData
        FROM AuditTrail
        WHERE (AuditTrail.EnteredDate > GETDATE() - 1)
    )
    SELECT '<html><head><title></title></head><body><br><br><table><tr style="background-color: #5D7B9D; font-weight: bold; color: white;"><td>Old Value</td><td>New Value</td><td>Login</td><td>Entered Date</td></tr>' AS HtmlData
    UNION ALL
    SELECT HtmlData = CASE RowNumber % 2
                          WHEN 0 THEN '<tr style="background-color: #F7F6F3">' + HtmlData + '</tr>'
                          ELSE '<tr>' + HtmlData + '</tr>'
                      END
    FROM htmlresult
    UNION ALL
    SELECT '</table></body></html>'
END
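The alternating-row ("zebra stripe") logic can be sketched outside T-SQL as well; here is a minimal Python version, where the rows list stands in for the audit-trail query results (column values are made up):

```python
# Minimal sketch of the alternating-row (zebra-stripe) HTML logic:
# number each row starting at 1, as ROW_NUMBER() does, and style
# every even-numbered row with an inline background colour.
rows = [("old1", "new1", "jdoe"), ("old2", "new2", "asmith"), ("old3", "new3", "jdoe")]

EVEN_STYLE = ' style="background-color: #F7F6F3"'

def to_html_table(rows):
    parts = ['<table><tr style="background-color: #5D7B9D; color: white;">'
             '<td>Old Value</td><td>New Value</td><td>Login</td></tr>']
    # ROW_NUMBER() starts at 1, so row_number % 2 == 0 marks every second row
    for row_number, row in enumerate(rows, start=1):
        style = EVEN_STYLE if row_number % 2 == 0 else ""
        cells = "".join(f"<td>{value}</td>" for value in row)
        parts.append(f"<tr{style}>{cells}</tr>")
    parts.append("</table>")
    return "".join(parts)

html = to_html_table(rows)
```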
Once you have the SQL with HTML formatting, you want to use that within another stored procedure to write the result to a file. This is what the second proc should look like:
CREATE PROC [dbo].[procHtmlFile]
AS
BEGIN
    DECLARE @strCommand varchar(255)
    DECLARE @lRet int
    SET @strCommand = 'bcp "EXECUTE MYDB.dbo.procHtmlSql" QUERYOUT C:\MyProject\SqlHtml.html -T -c'
    EXEC @lRet = master..xp_cmdshell @strCommand, NO_OUTPUT
END
Now you should be able to open the file “C:\MyProject\SqlHtml.html” in a browser and it should look similar to a .NET DataGrid.
Finally, send the file as the body of an e-mail with this procedure.
CREATE PROCEDURE [dbo].[procBlatSendMail]
    @to varchar(255) = 'email@example.com',
    @from varchar(255) = 'firstname.lastname@example.org',
    @subject varchar(100) = 'My Example - '
AS
DECLARE @command as varchar(1500)
SET @command = ''
SET @command = @command + 'Blat C:\MyProject\SqlHtml.html'
SET @command = @command + ' -to ' + '"' + @to + '"'
SET @command = @command + ' -f ' + '"' + @from + '"'
SET @command = @command + ' -subject ' + '"' + @subject + CONVERT(nvarchar(25), GETDATE()) + '"'
EXEC master.dbo.xp_cmdshell @command
Simple, effective, and professional looking.
February 24, 2011 § Leave a comment
Use of Identity Property to Resolve Concurrency Issues
Recently, I came across an interesting concurrency problem. A database I work with started experiencing intermittent locking and blocking issues. This resulted in timeouts being observed from an application which used the database as its backend.
To analyse this, I wrote a script and scheduled it as a job to execute during a given timeslot the next day. The script would look for blocking in the database and, when a block was found, log relevant information in a table: the SQL routine causing the blocking, the SQL routine being blocked, the statement within the routine that was blocked, and the database object that was the source of contention.
The trace Script (Main SQL)
SELECT DB_NAME() AS database_name,
       GETDATE() AS audit_time,
       s.spid AS process_id,
       s.blocked AS blocking_process_id,
       s.hostname,
       s.loginame,
       s.program_name,
       blocking_s.hostname AS blocking_hostname,
       blocking_s.loginame AS blocking_loginame,
       blocking_s.program_name AS blocking_program_name,
       REPLACE(REPLACE(buffer.[text], CHAR(10), ''), CHAR(9), '') AS sql_statement,
       SUBSTRING(buffer.[text],
                 request.statement_start_offset / 2,
                 (CASE WHEN request.statement_end_offset = -1
                       THEN LEN(CONVERT(NVARCHAR(MAX), buffer.[text])) * 2
                       ELSE request.statement_end_offset
                  END - request.statement_start_offset) / 2) AS specific_sql,
       REPLACE(REPLACE(blocking_buffer.[text], CHAR(10), ''), CHAR(9), '') AS blocking_sql_statement,
       o.[name] AS blocking_object,
       blocking_tr_locks.request_mode
FROM sys.sysprocesses s
INNER JOIN sys.dm_exec_connections conn ON s.spid = conn.session_id
CROSS APPLY sys.dm_exec_sql_text(conn.most_recent_sql_handle) AS buffer
LEFT JOIN sys.dm_exec_requests request ON conn.session_id = request.session_id
INNER JOIN sys.dm_exec_connections blocking_conn ON s.blocked = blocking_conn.session_id
CROSS APPLY sys.dm_exec_sql_text(blocking_conn.most_recent_sql_handle) AS blocking_buffer
INNER JOIN sys.dm_tran_locks blocking_tr_locks ON s.blocked = blocking_tr_locks.request_session_id
INNER JOIN sys.objects o ON blocking_tr_locks.resource_associated_entity_id = o.object_id
INNER JOIN sys.sysprocesses blocking_s ON s.blocked = blocking_s.spid
WHERE s.blocked <> 0
The next day, I looked at the data logged by the script. It was evident that most blocks were being caused by the execution of a stored procedure. This procedure made use of a transaction, which was getting blocked and causing the timeouts.
Since many SQL sessions were trying to execute the same stored procedure, one session ended up blocking one or more of the other sessions. The locks were always placed on a single table in the database, and the specific SQL statement being blocked within the procedure was always an update statement on this table.
On closely reviewing the data, I found that this table was created by an application developer to serve as a “Key Value Pair” (KVP) generator. The table contained 2 columns, one for a key and the other for the corresponding value. The key would be a static string and the value would be incremented by 1 each time. This incremented value would be used in several insert statements, which affected many tables.
It was obvious that since one procedure call would update this value and hold on to it until the stored procedure completed and executed a commit or a rollback, the other calls coming from the other sessions would get blocked.
For you to visualise this, I have written some sample code and created a sample table, which are simplified versions of the database objects mentioned above.
Here is the “Key Value Pair” (KVP) table and some sample data
CREATE TABLE [dbo].[tbl_kvp](
    [column_key] [nvarchar](50) NOT NULL,
    [column_value] [int] NOT NULL
) ON [PRIMARY]
GO
INSERT tbl_kvp (column_key, column_value) VALUES (N'PR1', 40)
GO
The stored procedure which was used to increment the value for a given key
CREATE PROCEDURE [dbo].[USP_Get_Value_For_Key]
(
    @key NVARCHAR(50),
    @value INT OUTPUT
)
AS
BEGIN
    SELECT @value = 0
    UPDATE tbl_kvp SET column_value += 1 WHERE column_key = @key
    SELECT @value = column_value FROM tbl_kvp WHERE column_key = @key
END
The stored procedure causing the contention
CREATE PROCEDURE [dbo].[USP_Business_Process]
AS
BEGIN
    BEGIN TRANSACTION Business_Process
    BEGIN TRY
        DECLARE @val INT = 0
        -- Call key value incrementing stored procedure
        EXEC USP_Get_Value_For_Key 'PR1', @val OUTPUT
        SELECT @val -- Print key value for display
        -- There would be code present here in the actual stored procedure
        -- to insert the key value into multiple tables. To simulate this and
        -- also give enough time to review the locks, the transaction will
        -- be kept open for 10 seconds
        WAITFOR DELAY '00:00:10'
        COMMIT TRANSACTION Business_Process
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION Business_Process
    END CATCH
END
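As a hypothetical model of why this serialises sessions (a Python sketch, not the author's code; the lock stands in for the row lock on tbl_kvp, held for the whole transaction):

```python
import threading

# Hypothetical model of the contention above: the KVP row acts as a single
# shared counter, and each "business process" holds the lock on it for its
# entire transaction, so concurrent sessions are fully serialised.
kvp = {"PR1": 40}
kvp_lock = threading.Lock()  # stands in for the row lock on tbl_kvp
order = []

def business_process(session_id):
    with kvp_lock:  # held until "commit", like the open transaction
        kvp["PR1"] += 1
        value = kvp["PR1"]
        # ...inserts into multiple tables would happen here, while every
        # other session stays blocked waiting on the same row...
        order.append((session_id, value))

threads = [threading.Thread(target=business_process, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every session got a distinct, sequential value, but only one at a time
values = sorted(v for _, v in order)
```

The fix described next removes the need to hold a lock on one shared row at all.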
Trace Script Output
If you were to execute 2 calls to the stored procedure, the typical output you would see from the trace script would be as shown below.
As you can see, the stored procedure that updates the KVP table (which gets called in the business process procedure) is blocked by another call to the business process procedure. The update statement for the KVP table is blocked and the source of contention is the KVP table.
Now that I knew the source of the problem, I had to find ways of addressing it. The KVP table had been introduced with several keys in it, and there were many areas in the code which were calling the USP_Get_Value_For_Key stored procedure, thereby affecting not just the business process mentioned above, but several other processes.
I was keen on finding a solution that could be implemented for all keys in the KVP table without having to modify much of the existing code. This would mean that all processes suffering from this problem would benefit, and only a few changes would be required, which would make deployment of the fix easier.
Having worked with Oracle, I kept thinking that if I had to implement such an ‘incrementing’ logic, I would probably have made use of sequences. Since SQL Server 2008 doesn’t have sequences, the closest I could get to such an object would be by making use of an identity column (The upcoming version of SQL Server 2011 code name ‘Denali’ does have sequences).
I therefore created a table that had an identity field and a dummy char(1) column into which I would keep inserting a dummy value to generate the next identity value. With this approach, I could convert the single horizontal row for a key into a vertical structure provided by the identity field. Building a vertical structure meant that a transaction could insert records into this new table and hold on to those records, while other transactions were free to insert their own records and get the next value in the sequence.
Here’s some sample code that demonstrates this. First, the vertical table for the PR1 key (the seed value is the next key value in the sequence):
CREATE TABLE tbl_vert_pr1(id int identity(45, 1), dummy_col char(1))
Here is the modification to USP_Get_Value_For_Key to cope with the vertical structure
ALTER PROCEDURE [dbo].[USP_Get_Value_For_Key]
(
    @key NVARCHAR(50),
    @value INT OUTPUT
)
AS
BEGIN
    -- Replaced logic to look up the vertical structure table
    -- Naming convention chosen for the vertical tables was tbl_vert_<key>
    DECLARE @exec_sql NVARCHAR(4000), @param_defn NVARCHAR(1000), @value_OUT INT
    -- By making use of the OUTPUT clause, the identity value can be
    -- obtained directly, instead of having to perform a "SELECT MAX(id)"
    -- operation, which could get blocked if another insert statement was
    -- holding on to a record as part of a transaction
    SELECT @exec_sql = N'DECLARE @op_table TABLE (id INT); '
                     + N'INSERT tbl_vert_' + @key
                     + N' OUTPUT inserted.id INTO @op_table VALUES ('' ''); '
                     + N'SELECT @value_OUT = id FROM @op_table'
    SELECT @param_defn = N'@value_OUT INT OUTPUT'
    EXEC sp_executesql @exec_sql, @param_defn, @value_OUT = @value OUTPUT
END
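The same idea can be sketched outside SQL Server. Here is an illustrative SQLite version (not the author's code; SQLite's AUTOINCREMENT rowid plays the role of the IDENTITY column, and the sqlite_sequence seeding mimics IDENTITY(45, 1)):

```python
import sqlite3

# Illustrative sketch of the "vertical table" fix: each caller INSERTs its
# own row and reads back the generated id, so no caller ever updates a row
# that another transaction is holding.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_vert_pr1 (id INTEGER PRIMARY KEY AUTOINCREMENT, dummy_col TEXT)")
# Seed the sequence so the next generated value is 45, mirroring
# IDENTITY(45, 1) in the original table definition
conn.execute("INSERT INTO sqlite_sequence (name, seq) VALUES ('tbl_vert_pr1', 44)")

def get_value_for_key(conn):
    # Equivalent of the rewritten USP_Get_Value_For_Key: insert a dummy
    # row and return the identity value it produced
    cur = conn.execute("INSERT INTO tbl_vert_pr1 (dummy_col) VALUES (' ')")
    return cur.lastrowid

first = get_value_for_key(conn)
second = get_value_for_key(conn)
```

Because each call appends its own row, two open transactions each hold locks only on the rows they inserted themselves.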
If you were to now execute two calls to the business process stored procedure in separate concurrent sessions and run the trace script in a third session, you would observe that no blocks get picked up.
You will also notice the procedure that was executed first fetches the next available key value in sequence, while the procedure that was executed second fetches the key value after the one fetched by the first, thus proving that both transactions could obtain and make use of their respective keys without blocking each other. The above mentioned logic was applied to all keys in the KVP table, which resolved the concurrency issues associated with it.
Every real-world concurrency problem tends to have its own unique solution. In the example above, the solution involved the identity property, since it could convert a single horizontal record into a vertical structure. The best solution, of course, is to not build database objects that violate good design practices in the first place!
November 5, 2010 § Leave a comment
Sending E-Mail from SQL 2008 Express
By Andrew Prisk, 2010/11/04
This solution is not entirely new; in fact, there is an article that describes a very basic approach, which can be found here: http://www.mssqltips.com/tip.asp?tip=1795. That solution works for simple use cases, but didn't do everything I needed it to, so I went ahead and added to it.
For those of you not familiar with using the Common Language Runtime (CLR), it is basically a way in which to execute, or call, managed code written in C# or VB.NET from within the SQL Server environment. This is basic functionality that is bundled in all versions of SQL (from 2005 on), including Express. There are two parts to this solution:
- The source VB file.
- A SQL Script for: Configuring CLR, Loading the assembly and Creating the Stored Procedure
Let’s have a look at the vb source:
Import the appropriate namespaces
Imports System.Net Imports System.Net.Mail
We’ll call this SQLCLREmail – this will be very important later on.
Public Class SQLCLREmail
    <Microsoft.SqlServer.Server.SqlProcedure()> _
No surprises here, just setting up a basic sub procedure, except that we are accepting arguments that correspond to the stored procedure's parameters…
    Public Shared Sub SendEmail(ByVal recipients As String _
        , ByVal CC As String _
        , ByVal BCC As String _
        , ByVal subject As String _
        , ByVal from As String _
        , ByVal body As String _
        , ByVal strAttachments As String _
        , ByVal strSMTPServer As String _
        , ByVal strSMTPServerPort As String _
        , ByVal strSMTPServerUser As String _
        , ByVal strSMTPServerPwd As String)

        Using MailMsg As New MailMessage()
            MailMsg.From = New MailAddress(from)
            MailMsg.Subject = subject
            MailMsg.Body = body
            MailMsg.IsBodyHtml = True
Gotcha number one, two, three and four: in order to send to multiple mail recipients (CC, BCC and attachments too), they have to be passed in as one long string, then split and added individually.
        If Not recipients.Equals(String.Empty) Then
            Dim strRecip As String
            Dim strTo() As String = recipients.Split(";")
            For Each strRecip In strTo
                MailMsg.To.Add(New MailAddress(strRecip))
            Next
        End If

        If Not CC.Equals(String.Empty) Then
            Dim strCCRecip As String
            Dim strCCTo() As String = CC.Split(";")
            For Each strCCRecip In strCCTo
                MailMsg.CC.Add(New MailAddress(strCCRecip))
            Next
        End If

        If Not BCC.Equals(String.Empty) Then
            Dim strBCCRecip As String
            Dim strBCCTo() As String = BCC.Split(";")
            For Each strBCCRecip In strBCCTo
                MailMsg.BCC.Add(New MailAddress(strBCCRecip))
            Next
        End If

        If Not strAttachments.Equals(String.Empty) Then
            Dim strFile As String
            Dim strAttach() As String = strAttachments.Split(";")
            For Each strFile In strAttach
                MailMsg.Attachments.Add(New Net.Mail.Attachment(strFile.Trim()))
            Next
        End If
I added this purely for my own convenience, but it certainly enhances the assembly. If a custom SMTP server is passed, this will also check for custom credentials and a custom port. Otherwise it will assume that the local server is also the SMTP server using port 25.
        If Not strSMTPServer.Equals(String.Empty) Then
            Dim smtp As New System.Net.Mail.SmtpClient
            With smtp
                If Not strSMTPServerUser.Equals(String.Empty) Then
                    .UseDefaultCredentials = False
                    .Credentials = New System.Net.NetworkCredential(strSMTPServerUser, strSMTPServerPwd)
                End If
                .Host = strSMTPServer
                .Port = strSMTPServerPort
                .Send(MailMsg)
            End With
        Else
            Dim smtp As New System.Net.Mail.SmtpClient
            With smtp
                .Host = "localhost"
                .Port = 25
                .Send(MailMsg)
            End With
        End If
        End Using
    End Sub
End Class
That’s it. Now all we have to do is compile it for use in SQL. To do that, open a command prompt (Start -> Run -> CMD) and type the following:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\vbc /target:library C:\SQL_CLR\SendEmail.vb
Obviously, this snippet is making some assumptions, so change the paths accordingly for your system. After that, voila! In the same directory as the VB source file, there will be a newly minted DLL, ready to be loaded into SQL.
The second part of this is to deploy the DLL to the SQL Server environment and load it as an assembly, thereby making it available for our stored procedure. In order to do that, the first step is to configure CLR in SQL:
-- In the event you want to see all of the various
-- advanced options that can be set, just execute
-- sp_configure *after* 'show advanced options' has
-- been set to 1
exec sp_configure 'show advanced options', 1
reconfigure
GO
-- This simply enables CLR in the SQL environment
exec sp_configure 'clr enabled', 1
reconfigure
GO
-- If this is to be executed on a USER database,
-- run this statement
alter database YourDatabase set trustworthy on
GO
-- In the event this is a USER database and is
-- owned by the user, run this to change it back to 'sa'
--ALTER AUTHORIZATION ON DATABASE::YourDatabase TO sa
GO
use YourDatabase
GO
-- Now we load the assembly in the specified database.
-- Make sure the path to the DLL is correct.
create assembly SendEmail from 'C:\SQL_CLR\SendEmail.dll'
with permission_set = external_access
GO
The second script will create the stored procedure that will call the CLR proc and pass in our values:
CREATE PROCEDURE [dbo].[p_SendEMail]
    @recipients [nvarchar](4000),
    @CC [nvarchar](4000),
    @BCC [nvarchar](4000),
    @subject [nvarchar](4000),
    @from [nvarchar](4000),
    @body [nvarchar](4000),
    @strAttachments [nvarchar](4000),
    @strSMTPServer [nvarchar](300),
    @strSMTPServerPort [nvarchar](10),
    @strSMTPServerUser [nvarchar](300),
    @strSMTPServerPwd [nvarchar](300)
WITH EXECUTE AS CALLER
AS EXTERNAL NAME [SendEmail].[SQLCLREmail].[SendEmail]
Pretty straightforward – notice the naming convention used for the external name. Look familiar? It is the DLL name + class name + CLR proc name. Now for calling the proc:
EXEC p_SendEMail
    @recipients = 'email@example.com'
    , @CC = 'firstname.lastname@example.org'
    , @BCC = 'email@example.com'
    , @subject = 'RE: Email from SQL Express'
    , @from = 'Test@SQLExpress.com'
    , @body = 'This is a test email from SQL Server'
    , @strAttachments = 'C:\test1.txt;C:\test2.txt'
    , @strSMTPServer = 'my.smtpserver.com'
    , @strSMTPServerPort = '25'
    , @strSMTPServerUser = ''
    , @strSMTPServerPwd = ''
That’s it – if the SMTP information is correct, the mail should be on its way.
So, in wrapping up, there are additional features you may need to build in to suit your environment, such as logging and error handling (a SQL TRY/CATCH block would work well). It is also important to note that this has been successfully tested and deployed on both SQL Express 2008 and SQL Express 2008 R2; it may very well work with SQL Express 2005, but that will depend greatly on the version of the .NET Framework installed on the server.
October 26, 2010 § Leave a comment
If you are a SQL Server consultant, there is never a single dull moment in your life. Quite often you are called in to fix something, but you always end up fixing something else!
I was recently working on an offshore project where I was called in to tune a high-transaction OLTP server. During the work, I asked for a server very similar to the live database so that I could inspect all the settings and data, since I might end up running queries that could change server settings. The Sr. DBA agreed and instructed the Jr. DBA accordingly to restore a database from their backup.
In a few minutes, the Jr. DBA came running to us, asking for help. When he attempted to restore the database from their backup, it threw an error. This was very alarming, because it meant their backup files had issues. I suggested trying some other backups, but the same error occurred on all of them.
The CTO, who was also present at the location, got very upset with this situation. He asked when the last successful restore test had been done. As expected, the answer was never; no successful restore test had ever been performed.
During that time, I could clearly see the stress, confusion, carelessness and anger around me. Nobody there wanted that kind of atmosphere.
The next step was to take a backup of their database right away. The CTO wanted to observe the complete process so he could figure out exactly where the error originated.
Here is the error they were receiving when they attempted to restore the database:
Msg 3243, Level 16, State 1, Line 1
The media family on device ‘D:\TestDB.bak’ was created using Microsoft Tape Format version 1.22. SQL Server supports version 1.0.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
The first thing I attempted was to take a backup of the database myself on a spare drive. I then tried to restore from that backup and, as I expected, it restored successfully. That told me there was nothing wrong with how the backup was being taken; the problem was located somewhere else. I asked them about their current process for taking a backup and storing it. They provided me the following diagram:
Looking at the diagram, I decided to test their FTP software. I uploaded the backup to our organization's FTP space; after that was done, I restored the database again and it worked without an error. This led me to think that something was wrong on their UNIX server. Their UNIX administrator informed me that they were running cron jobs to compress the files and move them to a larger drive. After listening to this, I was fairly sure something in that process was corrupting the .bak file so that it would no longer work properly.
After a few tests, the UNIX administrator confirmed that the cron job was changing the structure of the file. In the meantime, he disabled the process to avoid further corruption. The CTO then asked me if there was any way to tell whether a backup that had already been taken was good as gold or had corruption in it.
Here is the command that you can run on your database file so you can check if the database file is intact or if it has any corruption in it:
The question is, “Do you practice this on your production server’s backup?”
The reason I posted this on my blog: there are seriously a lot of issues when it comes to recovering a database, as I have faced earlier. We need to back up (and test restores) every now and then. Hope this is useful!