Bug Me Not - Redgate


Published Friday, July 08, 2011 10:10 AM

Bug metrics are a notoriously erratic way to judge the performance of a development team and project, but despite this almost all software projects use them. There is a lot of data you can get from an electronic bug-tracking system, from bugs per line of code and bugs per component to defect trend graphs and bug-fix rates. It is tempting to try to find meaning in the data, but how useful is it, ultimately, in driving up software quality over the long term?

If you judge software testers on the number of bugs that they find, then more bugs will be found. If you judge developers on the number of bugs for which their code is responsible, then you'll get much less buggy code, but you'll probably struggle to ship a product on any reasonable timescale. Over the course of a project, it's easy for the team, and even individual developers, to feel oppressed by the bugs, and under intense pressure to produce 'better' code. Bugs continue to be logged and reported assiduously, but many of them simply disappear into the backlog, to be fixed "at some later time". As the pressure of the ship date mounts, developers are simply forced to cut corners, to change their perception of what "done" really means, in order to increase their velocity and meet the deadline. Software quality and team morale suffer as a result, and despite being rigorously tracked and reported, bugs fester from release to release, since there is never time to fix them. Before long, the team finds itself mired in the oubliette.

So how can we use bug metrics to drive up software quality over the long term, while enabling the team to ship on a predictable and reasonable timescale? In all likelihood, the surprising answer is "we can't". In fact, the ultimate goal of an agile development team might be to dispense with the use of an electronic bug-tracking system altogether!

Certainly at Red Gate, some teams are using JIRA for incoming customer defects, but they also maintain a more holistic "technical debt wall", consisting of post-it notes describing the most important issues causing "drag" on the team. They then collectively seek to resolve these issues, whilst striving to stay close to zero new internal defects.

The team works to cultivate an atmosphere of zero tolerance to bugs. If you cause a bug, fix it immediately; if you find a bug in the area in which you're working, tidy it up, or find someone on the team who can. If you can't fix a bug as part of the current sprint, decide, very quickly, how and even if it will be fixed. This is not easy to achieve; it requires, among other things, an environment where it is "safe" for the team to stop and fix bugs, where developers and testers work very closely together, and both are strongly aligned with the customer, so they understand what they need from the software, and which bugs are significant and which are not.

However, when you get there, what becomes important is not the number of bugs, and how long they stay open in your bug-tracking system, but a deeper understanding of the types of bugs that are causing the most pain, and their root cause. The team are then judged on criteria such as how quickly they learn from their mistakes: by, for example, tightening up automated test suites so that the same type of bug doesn't crop up time and again, or by improving the acceptance criteria associated with user stories, so that the team can focus on fixing what's important as soon as the problem arises.

These are criteria that really will drive up software quality over the long term, and allow teams to produce software on predictable timescales, and with the freedom to feel they can "do the right thing".

What do you think? Is this a truly achievable goal for most teams, or just pie-in-the-sky thinking?

Cheers, Tony.

by Tony Davis

PowerShell Eventing and SQL Server Restores

05 July 2011 by Laerte Junior

When you're managing a large number of servers, it makes no sense to run maintenance tasks one at a time, serially. PowerShell is able to use events, so it is ideal for, say, restoring fifty databases on different servers at once and being notified when each is finished. Laerte shows you how, with a little help from his friends.

It all began one bright morning, when my good friend and PowerShell Jedi Ravikanth Chaganti (blog | twitter) asked me if I had a PowerShell script to restore databases. This sounded like a pretty simple process, and so I told him that what he needed was available on CodePlex in the form of SQLPSX. However, it turned out the challenge he faced was not so simple, and he elaborated on his real problem:

He actually needed to restore 50 databases in asynchronous mode and, having discovered that the Restore class had events, wanted to use those to trigger a message when the restore process finished.

Now this sounded interesting! But how to do it? Helloooo PowerShell Eventing...

PowerShell Eventing

Eventing is a feature built into PowerShell V2 which lets you respond to the asynchronous notifications that many objects support (as seen on the Windows PowerShell Blog). However, my goal is not to explain what the PowerShell Eventing feature is; I'm here to demonstrate how to implement an effective real-world solution using it.
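As a minimal illustration (a sketch of my own, not from the restore scenario), eventing works with any .NET object that raises events — a timer is the simplest case:

```powershell
# A minimal sketch of PowerShell eventing with a .NET timer.
# The SourceIdentifier "TimerDone" is an arbitrary name chosen here.
$timer = New-Object System.Timers.Timer -Property @{ Interval = 1000; AutoReset = $false }
Register-ObjectEvent -InputObject $timer -EventName Elapsed -SourceIdentifier TimerDone -Action {
    Write-Host "Timer elapsed"
} | Out-Null
$timer.Start()
# ...and when you're done with the subscription:
# Unregister-Event TimerDone
```

The same Register-ObjectEvent / -Action pattern is what we'll apply to the SMO Restore class below.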

Before we get started, I'll explain that I modified Chad Miller's (blog | twitter) original Invoke-SqlRestore function to use the Complete event of the Restore class for our purposes (with Chad's kind permission, naturally). In the course of the article, I'll show you step by step how I got to the final solution, and you can download the finished script from the speech bubble at the top of the article. The altered function is called Invoke-SqlRestoreEventing, inside the PoshTest.psm1 module, and comes with the additional SMO assemblies needed to import it directly into your PowerShell user profile.

Of course, if you want to know more about what PowerShell Eventing is, then I suggest you read the links at the end of the article.

The Problem

I needed an automated and reasonably scalable way to restore 50 databases asynchronously, and be notified when each one was finished.

Step 1 – Just Show a Message

My first step towards Eldorado was to just show a "Restore Completed" message when a restore operation was finished. If we take a look at the MSDN information for the Restore Class, we find the available Events, including Complete:

Figure 1 – The available events on the Restore Class

So I wrote some PowerShell to use that:

$restore = New-Object ("Microsoft.SqlServer.Management.Smo.Restore")
Register-ObjectEvent -InputObject $restore -EventName "Complete" -SourceIdentifier CompleteRestore -Action { Write-Host "Restore Completed" } | Out-Null

And tested it to make sure it works:

Figure 2 – Our initial script, working fine.

That all looked OK. So imagine my surprise when I tried to restore again, and saw this:

Figure 3 – The same simple message script, but something's gone wrong.

Cannot subscribe to event. A subscriber with source identifier 'CompleteRestore' already exists.

I realized that I had created a subscriber named CompleteRestore via the SourceIdentifier parameter of the Register-ObjectEvent cmdlet, so I needed to unregister it before I could run the cmdlet again:

try {
    $restore.SqlRestore($server)
}
catch {
    # blablablabla
}
finally {
    Unregister-Event CompleteRestore
}
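Another defensive option (my own variation, not shown in the original article) is to check for an existing subscription before registering, which avoids the "subscriber already exists" error entirely:

```powershell
# Unregister any leftover subscription before registering a new one.
# Get-EventSubscriber errors if the identifier isn't found, hence SilentlyContinue.
if (Get-EventSubscriber -SourceIdentifier CompleteRestore -ErrorAction SilentlyContinue) {
    Unregister-Event CompleteRestore
}
Register-ObjectEvent -InputObject $restore -EventName "Complete" -SourceIdentifier CompleteRestore -Action {
    Write-Host "Restore Completed"
} | Out-Null
```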

Second Step – Running in an Asynchronous PowerShell Job

With my message script running smoothly, my second thought was "Neat, but it's not much good without being asynchronous". If I have to restore 50 databases, it cannot be done in a serialized way! So I tried:

$server = "Vader"
$dbname = "TestPoshEventing_6"
$filepath = "c:\temp\backup\TestPoshEventing.bak"
$Realocatefiles = @{TestPoshEventing = 'c:\temp\restore\TestPoshEventing_6.mdf'; TestPoshEventing_log = 'c:\temp\restore\TestPoshEventing_6.ldf'}
Start-Job -Name "Restore1" -InitializationScript {Import-Module c:\temp\testes\PoshTest.psm1 -Force} -ScriptBlock {
    Invoke-SqlRestoreEventing -sqlserver $args[0] -dbname $args[1] -filepath $args[2] -relocatefiles $args[3] -force
} -ArgumentList $server, $dbname, $filepath, $Realocatefiles

Aaand... it didn't work. Why not? Because background jobs run in a different runspace, so anything they send to output won't show up in the console. To work around that, I needed to use Receive-Job:

$server = "Vader"
$dbname = "TestPoshEventing_6"
$filepath = "c:\temp\backup\TestPoshEventing.bak"
$Realocatefiles = @{TestPoshEventing = 'c:\temp\restore\TestPoshEventing_6.mdf'; TestPoshEventing_log = 'c:\temp\restore\TestPoshEventing_6.ldf'}
$job = Start-Job -Name "Restore1" -InitializationScript {Import-Module c:\temp\testes\PoshTest.psm1 -Force} -ScriptBlock {
    Invoke-SqlRestoreEventing -sqlserver $args[0] -dbname $args[1] -filepath $args[2] -relocatefiles $args[3] -force
} -ArgumentList $server, $dbname, $filepath, $Realocatefiles
Wait-Job $job | Receive-Job

And the Oscar goes to... PowerShell! Everything now works just fine.

Third Step – Showing a Message and the Database Name

The "Restore Completed" message I put together earlier is handy, but not actually that useful without knowing which database has been restored. To improve that, I added the $dbname element:

Invoke-SqlRestoreEventing -sqlserver Vader -dbname "TestPoshEventing_6" -filepath "c:\temp\backup\TestPoshEventing.bak" -relocatefiles @{TestPoshEventing = 'c:\temp\restore\TestPoshEventing_6.mdf'; TestPoshEventing_log = 'c:\temp\restore\TestPoshEventing_6.ldf'} -force

TestPoshEventing_6 Restore Completed

Figure 4 – The "Restore Complete" message, complete with the database name.

Now you should hopefully be thinking, as I was, that because background jobs run in a different runspace, $dbname will not be displayed when we put these two scripts together. How do we solve this?

Never fear! In this case, I used the -MessageData parameter on Register-ObjectEvent, and got the value we need using $event.MessageData:

Register-ObjectEvent -InputObject $restore -EventName "Complete" -SourceIdentifier CompleteRestore -Action { Write-Host "$($event.MessageData) restore Completed"} -MessageData $dbname | Out-Null

Now let's run the function:

$server = "Vader"
$dbname = "TestPoshEventing_6"
$filepath = "c:\temp\backup\TestPoshEventing.bak"
$Realocatefiles = @{TestPoshEventing = 'c:\temp\restore\TestPoshEventing_6.mdf'; TestPoshEventing_log = 'c:\temp\restore\TestPoshEventing_6.ldf'}
$job = Start-Job -InitializationScript {Import-Module c:\temp\testes\PoshTest.psm1 -Force} -ScriptBlock {
    Invoke-SqlRestoreEventing -sqlserver $args[0] -dbname $args[1] -filepath $args[2] -relocatefiles $args[3] -force
} -ArgumentList $server, $dbname, $filepath, $Realocatefiles
Wait-Job $job | Receive-Job

... and watch the magic happening:

Figure 5 – Asynchronous database restores, complete with "Restore Complete" messages for each database.

Scaling Out the Code

One of the main reasons why I use PowerShell is its inherent capacity to manage multiple servers with just a few lines of script. That is, scaling out my code is relatively easy. Which is just as well, because while the solution as it stands is fine for a test case, it's not quite ready to deal with 50 databases efficiently. The first thing I needed to do was to add the server name into the message, so that I knew exactly which database was being managed at each stage. For this I used -MessageData again, but with a twist: I passed the parameters as properties of a PSObject and read them back via $Event.MessageData.

$pso = New-Object PSObject -Property @{Server = $server; DbName = $dbname}
Register-ObjectEvent -InputObject $restore -EventName "Complete" -SourceIdentifier CompleteRestore -Action {
    Write-Host "Server $($event.MessageData.Server), database $($event.MessageData.DbName) restore Completed"
} -MessageData $pso | Out-Null

And with that in place, let's see how this code deals with restoring 2 databases:

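Following the Start-Job pattern shown earlier, restoring two (or fifty) databases at once comes down to starting one job per database — a sketch, with hypothetical database names and file paths:

```powershell
# Hypothetical sketch: one asynchronous restore job per database, following
# the Start-Job pattern shown earlier. Names and paths are examples only.
$databases = @("TestPoshEventing_1", "TestPoshEventing_2")
$jobs = foreach ($dbname in $databases) {
    Start-Job -Name "Restore_$dbname" `
        -InitializationScript { Import-Module c:\temp\testes\PoshTest.psm1 -Force } `
        -ScriptBlock {
            Invoke-SqlRestoreEventing -sqlserver $args[0] -dbname $args[1] -filepath $args[2] -relocatefiles $args[3] -force
        } -ArgumentList "Vader", $dbname, "c:\temp\backup\TestPoshEventing.bak", @{
            TestPoshEventing     = "c:\temp\restore\$dbname.mdf"
            TestPoshEventing_log = "c:\temp\restore\$dbname.ldf"
        }
}
# Wait for all the restores, collecting each job's completion message.
Wait-Job $jobs | Receive-Job
```

Each job raises its own Complete event, so the "Server ..., database ... restore Completed" message identifies which restore has finished.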
