Archive

Archive for the ‘quality assurance’ Category

Bumper Stickers for Software QA

September 11, 2009

Here are my personal favorites:

* Software Testing: Where failure is always an option.
* Improving the world one bug at a time.
* Software Testing: You make it, we break it.
* Software Testers don’t break software; it’s broken when we get it.
* Software Testers: We break it because we care.
* If developers are so smart, why do testers have such job security?
* Life is too short for manual testing.
* Trust, But Verify.
* The Definition of an Upgrade: Take old bugs out, put new ones in.
* We break software so you don’t have to.
* I used to build software…now I break it! It’s a lot more fun!!
* All code is guilty, until proven innocent.
* It’s Automation, Not Automagic!
* Quality Assurance, we take the blame so you don’t have to.
* In God we trust, and for everything else we test.

Pick yours from the list. =)


Categories: quality assurance

Understanding Performance Test and its Metrics

July 20, 2009

There are a lot of definitions you could draw out of the concept of “performance testing.” Here is one that I found brief and simple:

Performance testing is the process by which software is tested and tuned with the intent of realizing the required performance.

Regardless of the many terms you could relate to “performance,” like load, stress, spike, soak, etc., there are three major categories that you should focus on when you do a performance test:

Speed — Does the application respond quickly enough for the intended users?

Scalability — Will the application handle the expected user load and beyond?

Stability — Is the application stable under expected and unexpected user loads?

And in order for you to objectively measure the above categories, you need to carefully identify the suitable performance metrics to use. To give you an overview of performance metrics, here is some useful information from RadView Software’s white paper, Test Metrics – Which Are Most Valuable?

During a test session, virtual clients generate result data (metrics) as they run scenarios against an application. These metrics determine the application’s performance, and provide specific information on system errors and individual functions. Understanding these different metrics will enable you to match them to the application function and build a more streamlined test plan.

Scalability and Performance

1. Hits per Second

- a hit is a request of any kind made from the virtual client to the application being tested. The higher the hits per second, the more requests the application is handling per second. A virtual client can request an HTML page, an image, a file, etc.
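To make the metric concrete, here is a minimal Python sketch (my own illustration, not part of the RadView paper) that fires sequential requests at a placeholder URL for a fixed interval and reports hits per second. A real load test would of course use many concurrent virtual clients rather than one loop.

```python
# Minimal hits-per-second sketch: sequential requests against a placeholder
# test URL for a fixed interval. Purely illustrative, not a real load generator.
import time
import urllib.request

TEST_URL = "http://localhost:8080/"   # hypothetical application under test
INTERVAL_SECONDS = 10

hits = 0
start = time.monotonic()
while time.monotonic() - start < INTERVAL_SECONDS:
    with urllib.request.urlopen(TEST_URL) as response:
        response.read()               # each completed request counts as one hit
    hits += 1

elapsed = time.monotonic() - start
print(f"{hits} hits in {elapsed:.1f} s -> {hits / elapsed:.1f} hits per second")
```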

2. Pages per Second

- measures the number of pages requested from the application per second. The higher the pages per second, the more work the application is doing per second.

3. Throughput

- this is an important baseline metric and is often used to check that the application and its server connection are working. Throughput measures the average number of bytes per second transmitted from the application being tested to the virtual clients running the test agenda during a specific reporting interval. This metric is the sum of the response data sizes divided by the number of seconds in the reporting interval.
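The calculation itself is simple; here is a small sketch of that formula with made-up numbers purely for illustration:

```python
# Throughput as defined above: sum of the response sizes (bytes) divided by
# the reporting interval (seconds). The values are invented for illustration.
response_sizes = [14_200, 3_480, 22_910, 8_770]  # bytes returned by each hit
reporting_interval = 5                           # seconds

throughput = sum(response_sizes) / reporting_interval
print(f"Throughput: {throughput:.0f} bytes per second")  # -> 9872 bytes per second
```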

4. Rounds

- tells you the total number of times the test agenda was executed versus the total number of times the virtual clients attempted to execute the agenda. The more times the agenda is executed, the more work is done by the test and the application.

Responses and Availability

1. Hit Time

- hit time is the average time in seconds it takes to successfully retrieve an element of any kind (image, HTML, etc.). The time of a hit is the sum of the connect time, send time, response time, and process time. It represents the responsiveness or performance of the application to the end user.
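As a quick illustration of that breakdown (the component values below are invented, not taken from the white paper):

```python
# Hit time as the sum of its component times, all in seconds.
connect_time = 0.05   # open the connection to the server
send_time = 0.01      # transmit the request
response_time = 0.32  # wait for and receive the response
process_time = 0.04   # client-side processing of the result

hit_time = connect_time + send_time + response_time + process_time
print(f"Hit time: {hit_time:.2f} s")  # -> 0.42 s
```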

2. Time to First Byte

- this measurement is important because end users often consider a site to be malfunctioning if it does not respond quickly enough. Time to First Byte measures the number of seconds it takes a request to return its first byte of data to the test software’s load generator.
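A rough client-side approximation of this measurement, assuming a placeholder test URL (a dedicated load generator measures it more precisely):

```python
# Approximate Time to First Byte: start the clock when the request is issued
# and stop it when the first byte of the response body is available.
import time
import urllib.request

TEST_URL = "http://localhost:8080/"   # hypothetical application under test

start = time.monotonic()
with urllib.request.urlopen(TEST_URL) as response:
    response.read(1)                  # block until the first byte arrives
ttfb = time.monotonic() - start
print(f"Time to First Byte: {ttfb * 1000:.1f} ms")
```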

3. Page Time

- page time is the average time in seconds it takes to successfully retrieve a page with all of its content. This statistic is similar to hit time but relates only to pages. In most cases this is a better statistic to work with because it deals with the true dynamics of the application.

When choosing a performance metric, always consider the type of application you’re testing. For an open, public web application, where you expect many concurrent users to hit the system at the same time, hits per second is a valuable metric. For an in-house application such as an accounting system, where you can tell explicitly how many clients will be using it, hits per second would be largely irrelevant.

Rest in Peace IE6!

June 30, 2009

As software quality assurance engineers, we have encountered a lot of UI issues when the application under test (AUT) is run in IE6. And as much as our developers struggle to find solutions to these nearly impossible defects, we defend the users’ interest in experiencing a user-friendly application. So we are united in spreading this news.

Obituary Notice

IE6 Bugs, Problems, Fixes, Solutions, Tips & Tricks, Hints? NO MORE! .:. RIPIE6.com


Categories: quality assurance

Top 25 Most Dangerous Programming Errors for Software Testing

February 25, 2009

Last January, experts from more than 30 US and international cyber security organizations jointly released the consensus list of the 25 most dangerous programming errors that lead to security bugs and that enable cyber espionage and cyber crime.

The experts and organizations that provided substantive input to this project range from Symantec and Microsoft, to DHS’s National Cyber Security Division and NSA’s Information Assurance Division, to OWASP and the Japanese IPA, to the University of California at Davis and Purdue University.

This list can serve as a guide for software testing tool vendors in evaluating and improving their testing tools.

Top 25 Most Dangerous Programming Errors

Unit vs. Acceptance test

January 13, 2009

Reading blogs has been eating up most of my time recently, and I came across this site, which I find technically enlightening on the topic of acceptance testing.

http://www.acceptancetesting.info/

I find it necessary to distinguish unit tests from acceptance tests.

So click here and read on from my previous post, Unit vs. Acceptance test.

Categories: quality assurance

Performance vs. Stress vs. Load testing

January 9, 2009

These terms are more often than not used interchangeably in QA testing. Just to draw the line, here are the definitions from Goranka Bjedov’s Google TechTalk, “Using Open Source Tools for Performance Testing,” which I agree with:

Performance – measuring how quickly the system responds to a given workload. Response time tells the whole story, and it depends on factors such as the database, the network infrastructure, and so on.

Stress – measuring when and how the system fails and recovers under extreme load conditions.

Load – measuring how the system behaves under a particular load (whether extremely high or low) over a prolonged period of time.
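One way to picture the difference is as three separate test profiles. The sketch below is my own hypothetical illustration (the numbers are arbitrary), not something from the talk:

```python
# Hypothetical test profiles illustrating the three terms; numbers are arbitrary.
test_profiles = {
    "performance": {"virtual_users": 50,   "duration_min": 15,
                    "goal": "measure response time under a normal work load"},
    "stress":      {"virtual_users": 2000, "duration_min": 30,
                    "goal": "find where the system fails and how it recovers"},
    "load":        {"virtual_users": 200,  "duration_min": 720,
                    "goal": "watch behaviour over a prolonged period"},
}

for name, profile in test_profiles.items():
    print(f"{name:12} {profile['virtual_users']:>5} users, "
          f"{profile['duration_min']:>4} min: {profile['goal']}")
```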

Categories: quality assurance