Archive for the ‘quality assurance’ Category

Using Selenium RC to test unsecured HTTPS connections

November 9, 2011

It’s inevitable for software testers to run tests in an environment with self-signed SSL certificates. This became one of my dilemmas when trying to run my Selenium scripts in an HTTPS environment, where I was always prompted with the “This Connection is Untrusted” error.

I have an existing Firefox profile solely for Selenium; if you don’t have one, you can check this post. I was able to resolve this issue by doing the following:

  1. Launch the Profile Manager by typing “firefox -ProfileManager -no-remote” in your terminal (for Linux users)
  2. Select the Selenium profile, then start Firefox
  3. Access your web application URL over HTTPS
  4. Accept the SSL certificate:
      • Click “I Understand the Risks”
      • Click “Add Exception”
      • Click “Get Certificate”
      • Make sure the “Permanently store this exception” tickbox is checked
      • Click “Confirm Security Exception”
  5. After being directed to the web application page, close Firefox
  6. Go to the Selenium profile folder (in my case, /home/girlie/.mozilla/firefox/selenium)
  7. Delete all files except cert_override.txt and cert8.db.

From here on, I reran my Selenium scripts and didn’t encounter the “This Connection is Untrusted” error anymore :D
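For reference, here’s a minimal sketch of what a run against the fixed profile can look like, using the old Selenium RC Python client. It assumes the Selenium server was started with the trimmed profile as a template (java -jar selenium-server.jar -firefoxProfileTemplate /home/girlie/.mozilla/firefox/selenium), and the URL is a hypothetical placeholder for your own application:

    from selenium import selenium  # old Selenium RC Python client

    # "https://your-app.example.com/" stands in for your application under test.
    sel = selenium("localhost", 4444, "*firefox", "https://your-app.example.com/")
    sel.start()             # launches Firefox using the Selenium profile template
    sel.open("/")           # loads the HTTPS page; no certificate prompt this time
    print(sel.get_title())  # quick sanity check that the page actually loaded
    sel.stop()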

Classic Programmer’s Arguments with Testers

April 5, 2011

“It works on my machine”
“No user would ever do that”
“That’s not a bug, it’s a feature”
“It’s supposed to do that.”
“The software is performing as coded.”
“99.9% done, but it’s not ready for QA yet”
“You darn QA guys are slowing us down!”

Are these lines familiar to you? Did I miss something?

For some organizations, the “programmer–tester” blame game has become a familiar scenario, especially during the testing and deployment stages. It’s strange that programmers/developers dislike testers for doing their job.

I feel fortunate to be part of a company (Exist Global) where developers and testers work closely together as a team. In my past project engagements, there was never a time our developers made us feel that we made their lives difficult. Of course there will always be issues encountered, but these were taken objectively by both parties, with one end in mind: to deliver quality.

Software QA quotes

March 3, 2011

Software testing is not all about tasks – test cases to write, scripts to run, issues to file, bugs to verify, test reports, and so on.

It’s fun too =D

Here are some of my favorite quotes as a Software QA. Enjoy!

[Slideshow: Software QA quotes]

Cross Browser compatibility testing tools

April 5, 2010

As software quality assurance engineers, one concern we want to address is how our application looks across different browsers and their different versions. Looking around, here are some ways to do it efficiently.

1. Spoon – http://www.spoon.net/Browsers/

Spoon allows you to run your application in different browsers. All you need to do is install the plugin, and from there you can choose from different versions of Microsoft Internet Explorer, Mozilla Firefox, Apple Safari, Google Chrome, and Opera. This is so far the coolest way I’ve tried, and one more thing: it’s FREE! :D

[Screenshot: Spoon browser selection]

2. Browsershots – http://browsershots.org/

Browsershots is an online service that automatically captures full-page screenshots of your website in various browsers and versions across different OS platforms (Linux, Windows, Mac OS). Tick the particular browsers that you want to test, input your application URL, and submit.

[Screenshot: Browsershots submission form]

3. Multiple IE in one machine

Download the Multiple IE installer and check http://tredosoft.com/Multiple_IE for more information.

4. For IE testers, you can use the Developer Tools (IE’s counterpart of Firefox’s Firebug). From the Tools > Developer Tools menu of your IE browser, find the “Browser Mode” option, where you can switch from IE7 to IE8 mode.

5. Install virtual machines on your PC, then install a different browser on each VM to use for testing.

6. Try proprietary software like BrowserCam, which offers a wide range of services.

Just to keep a balance: while it’s good practice to do cross-browser compatibility testing so we can minimize problems when the application is viewed in other browsers, we can already achieve full test coverage by verifying the application against the supported browsers specified in the project requirements.

Bumper Stickers for Software QA

September 11, 2009

Here are my personal favorites:

* Software Testing: Where failure is always an option.
* Improving the world one bug at a time.
* Software Testing: You make it, we break it.
* Software Testers don’t break software; it’s broken when we get it.
* Software Testers: We break it because we care.
* If developers are so smart, why do testers have such job security?
* Life is too short for manual testing.
* Trust, But Verify.
* The Definition of an Upgrade: Take old bugs out, put new ones in.
* We break software so you don’t have to.
* I used to build software…now I break it! It’s a lot more fun!!
* All code is guilty, until proven innocent.
* It’s Automation, Not Automagic!
* Quality Assurance, we take the blame so you don’t have to.
* In God we trust, and for everything else we test.

Pick yours from the list. =)


Understanding Performance Testing and Its Metrics

July 20, 2009

There are a lot of definitions you could draw out of the concept of “performance testing.” Here is one that I found brief and simple:

Performance testing is the process by which software is tested and tuned with the intent of realizing the required performance.

Regardless of the many terms you could relate to “performance” – load, stress, spike, soak, and so on – there are three major categories that you should focus on when you do a performance test:

Speed — Does the application respond quickly enough for the intended users?

Scalability — Will the application handle the expected user load and beyond?

Stability — Is the application stable under expected and unexpected user loads?

And in order for you to objectively measure the above categories, you need to carefully identify the suitable performance metrics to use. To give you an overview of performance metrics, here is some useful information from RadView Software’s white paper: Test Metrics – Which Are Most Valuable?

During a test session, virtual clients generate result data (metrics) as they run scenarios against an application. These metrics determine the application’s performance, and provide specific information on system errors and individual functions. Understanding these different metrics will enable you to match them to the application function and build a more streamlined test plan.

Scalability and Performance

1. Hits per Second

– a Hit is a request of any kind made from the virtual client to the application being tested. The higher the Hits Per Second, the more requests the application is handling per second. A virtual client can request an HTML page, image, file, etc.

2. Pages per Second

– measures the number of pages requested from the application per second. The higher the Pages per Second, the more work the application is doing per second.

3. Throughput

– this is an important baseline metric and is often used to check that the application and its server connection are working. Throughput measures the average number of bytes per second transmitted from the application being tested to the virtual clients running the test agenda during a specific reporting interval. This metric is the response data size (sum) divided by the number of seconds in the reporting interval.

4. Rounds

– tells you the total number of times the test agenda was executed versus the total number of times the virtual clients attempted to execute the Agenda. The more times the agenda is executed, the more work is done by the test and the application.
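To make these formulas concrete, here is a small sketch in Python with made-up numbers; the record layout is hypothetical and not tied to any particular testing tool:

    # Hypothetical per-request records from one reporting interval.
    # Each tuple is (kind, response_bytes); kind is "page", "image", etc.
    requests = [
        ("page", 48200), ("image", 12400), ("image", 9800),
        ("page", 51000), ("css", 3100),
    ]
    interval_seconds = 2.0  # length of the reporting interval

    hits_per_second = len(requests) / interval_seconds
    pages_per_second = sum(1 for kind, _ in requests if kind == "page") / interval_seconds
    # Throughput: total response bytes divided by the interval length.
    throughput = sum(size for _, size in requests) / interval_seconds

    print(hits_per_second, pages_per_second, throughput)  # 2.5, 1.0, 62250.0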

Responses and Availability

1. Hit Time

– hit time is the average time in seconds it took to successfully retrieve an element of any kind (image, HTML, etc.). The time of a hit is the sum of the Connect Time, Send Time, Response Time, and Process Time. It represents the responsiveness or performance of the application to the end user.

2. Time to First Byte

– this measurement is important because end users often consider a site malfunctioning if it does not respond fast enough.  Time to First Byte measures the number of seconds it takes a request to return its first byte of data to the test software’s Load Generator.

3. Page Time

– page time calculates the average time in seconds it takes to successfully retrieve a page with all of its content.  This statistic is similar to Hit Time but relates only to pages. In most cases this is a better statistic to work with because it deals with the true dynamics of the application.
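Following the same idea, Hit Time is simply the sum of its four components, and Page Time averages over complete pages; again a sketch with hypothetical timings:

    # Hit Time = Connect Time + Send Time + Response Time + Process Time (seconds).
    hits = [
        {"connect": 0.02, "send": 0.01, "response": 0.35, "process": 0.05},
        {"connect": 0.03, "send": 0.01, "response": 0.42, "process": 0.06},
    ]
    hit_times = [sum(h.values()) for h in hits]
    average_hit_time = sum(hit_times) / len(hit_times)  # (0.43 + 0.52) / 2 = 0.475

    # Page Time is analogous, but measured per complete page with all its content.
    page_times = [1.8, 2.3, 2.0]  # seconds to fully retrieve each page
    average_page_time = sum(page_times) / len(page_times)  # ≈ 2.03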

When it comes to choosing performance metrics, you should always consider the type of application you’re testing. For an open, public web application where you expect many concurrent users to hit the system at the same time, Hits per Second would be a valuable metric to use; for an in-house application like an accounting system, where you can tell explicitly how many clients will be using the application, Hits per Second would be irrelevant.

Rest in Peace IE6!

June 30, 2009

As software quality assurance engineers, we have encountered a lot of UI issues when the application under test (AUT) is run in IE6. And as much as our developers struggle to find solutions to these next-to-impossible defects, we stand for the users’ interest in experiencing a user-friendly application. So we are one in spreading this news.

Obituary Notice

IE6 Bugs, Problems, Fixes, Solutions, Tips & Tricks, Hints? NO MORE! .:. RIPIE6.com
