                      sss ssss         rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +===================================================+
         +=======    Quality Techniques Newsletter    =======+
         +=======               May 2000              =======+
         +===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) (Previously Testing Techniques
Newsletter) is E-mailed monthly to subscribers worldwide to support the
Software Research, Inc. (SR), TestWorks, QualityLabs, and eValid WebTest
Services user community and to provide information of general use to the
worldwide software and internet quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the entire
document/file is kept intact and this complete copyright notice appears
with it in all copies.  (c) Copyright 2000 by Software Research, Inc.


========================================================================

   o  13th International Internet & Software Quality Week 2000

   o  Barry Boehm:  Winner of the 2000 Harlan D. Mills Practical
      Visionary Prize

   o  Methods & Tools Newsletter

   o  Automated Software Testing -- A Perspective (Part 2 of 2), by
      Kerry Zallar

   o  Call for Papers/Presentations; QWE2K

   o  Call for Papers: Workshop on Distributed Communities on the Web
      (DCW2000)

   o  QTN Article Submittal, Subscription Information

========================================================================

       13th International Internet & Software Quality Week 2000

                      May 30, 2000 -- June 2, 2000
                       Hyatt Regency Embarcadero
                       San Francisco, California

Time is running out to register for Quality Week 2000 in San Francisco!
Register on-line at:

        

Join your colleagues at the premier software and internet quality
conference.  Gain the latest insights and experiences from the brightest
QA, IT, and Internet professionals.  Learn techniques from over 100
presentations to make your job more effective!

Exciting Highlights at Quality Week 2000:

** KEYNOTES: A superior lineup of Industrial and Academic Keynote
Speakers:

  * Sanjay Jejurikar (Director of Windows 2000 Testing, Microsoft
    Corporation) The Engineering Process of Windows 2000 (10P2)
  * Gene Spafford (CERIAS / Purdue University) Information Security
    Requires Assurance (10P3)
  * Stu Feldman (IBM Corporation) Internet and E-Commerce: Issues and
    Answers (1P)
  * Bill Gilmore (Intel Corporation) The Intel Corporate Software
    Quality Network (1P2)
  * Leon Osterweil (University of Massachusetts) Determining the Quality
    of Electronic Commerce Processes (5P1)
  * Andreas Rudolph (IBM Austria) The Need for Quality -- E-business
    Performance Testing (5P2)
  * Marcelo Dalceggio (Banco Rio de la Plata Argentina) Automated
    Software Inspection Process (10P1) [QWE'99 Best Presentation]

** TUTORIALS: Fourteen Tutorials given by the foremost experts in their
fields:

  * Johanna Rothman (Rothman Consulting Group) Life as a New Test
    Manager (A1)
  * Norman Schneidewind (Naval Postgraduate School) A Roadmap to
    Distributed Client-Server Software Reliability Engineering (B1)
  * Michael Deck (Cleanroom Software Engineering, Inc) Requirements
    Analysis Using Formal Methods (C1)
  * Bill Deibler (SSQC) Making the CMM Work: Streamlining the CMM for
    Small Projects and Organizations (D1)
  * Ross Collard (Collard & Company) Test Planning Workshop (E1) NEW!
  * G. Bazzana & E. Fagnoni (ONION s.r.l.) Testing Web-based
    Applications: Techniques for Conformance Testing (F1) NEW!
  * Edward Kit (Software Development Technologies) Testing In the Real
    World (G1)
  * Robert Binder (RBSC) How to Write a Test Design Pattern (A2) NEW!
  * John Musa (Consultant) Developing More Reliable Software Faster and
    Cheaper (B2)
  * Tom Gilb (Result Planning Limited) Requirements Engineering for
    Software Developers and Testers (C2)
  * Tim Koomen (IQUIP) Stepwise Improvement of the Testing Process using
    TPI (D2)
  * Linda Rosenberg, Ruth Stapko, & Albert Gallo (NASA GSFC) Risk-Based
    Object Oriented Testing (E2) NEW!
  * Adrian Cowderoy (MMHQ) Cool Q -- Quality Improvement for Multi-
    Disciplinary Tasks in WebSite Development (F2)
  * Chris Loosley & Eric Siegel (Keynote) Web Application Performance
    (G2)

** PARALLEL TRACKS: Six Parallel Tracks cover the broad field of
software quality:
  - Technology: From OO Automation to Java and UML methods
  - Applications: Hear solutions from researchers and practitioners
  - Internet: E-commerce experiences, Internet Time and Site Performance
  - Management: Managing Testing, Defect Tracking, Process Innovations
  - 90-minute QuickStarts: Tailored to people new in the industry
  - Vendor Technical Presentations allow you to broaden your tools and
    services information base

** WORKSHOPS: Four Post-Conference Workshops:  NEW!

  * Douglas Hoffmann (Software Quality Methods LLC) Oracle Strategies
    for Automated Testing (W1)
  * Cem Kaner (Attorney at Law) Bug Advocacy Workshop (W2)
  * Edward Miller (Software Research, Inc.) Achieving WebSite Quality
    (W3)
  * Robert Sabourin (Purkinje, Inc) The Effective SQA Manager --
    Getting Things Done (W4)

** BOFSs: Our Birds of a Feather Sessions are an added benefit at Quality
Week 2000.  Meet, talk and debate your favorite topics with your peers
in these informal sessions.  Check out the current topics on our
website:

        .

BOFSs are the perfect opportunity to meet with others who share your
interest.  To become a moderator or to suggest new topics contact Mark
Wiley at: .

** PANELS:  Pick Top Quality Industry Experts' Brains During Three
Special Panel Sessions

  * Ask The Quality Experts!  Moderated by Microsoft's Nick Borelli,
    this special QW2000 panel session works interactively with you to
    get your key questions answered! If you have a burning question
    about any aspect of Software or Internet Quality, click on Ask The
    Quality Experts!

            

    There you will see the current set of questions posed to the Panel
    of Quality Experts, rank ordered based on the number of votes each
    question has received.  Submit your vote for the most important
    questions or post a new Ask The Quality Experts! question today!

  * Protecting Intellectual Property In An Open Source World -- Doug
    Whitney will tell how Intel does it.

  * How Can I Tell When My Project Is In Trouble? -- Brian Lawrence,
    Johanna Rothman and other management experts will explain.

** SPECIAL EVENTS: Don't miss out! The Giants are playing at Pac-Bell
Park!  We have a block of tickets to the Sold-Out Game!  Come join us
for some All American Fun!

Complete data at .

========================================================================

Barry Boehm:  Winner of the 2000 Harlan D. Mills Practical Visionary
Prize

The Ad Hoc Committee for the Harlan D. Mills Practical Visionary Prize
is very pleased to announce that the winner of the Prize for 2000 is

                              Barry Boehm

of the Center for Software Engineering, Computer Science Department,
University of Southern California, Los Angeles, California, USA.

The Prize was established in Harlan D. Mills's name to recognize
researchers and practitioners who have demonstrated long-standing,
sustained, and meaningful contributions to the theory and practice of
the information sciences, focusing on contributions to the practice of
software engineering through the application of sound theory.

The Prize will be awarded with an honorarium, and the winner will give
an invited talk, at the 22nd International Conference on Software
Engineering (ICSE 2000), June 4 - 11, in Limerick, Ireland.

Gene, Chair,
Ad Hoc Committee for the Harlan D. Mills Practical Visionary Prize

Gene F. Hoffnagle
        Director, IBM Technical Journals
          Editor, IBM Systems Journal
          Editor, IBM Journal of Research and Development
        IBM 82-207A, P.O. Box 218, Route 134
Yorktown Heights, New York 10598-0218 USA

+1-914-945-3831, t/l 862-3831
hoffnagl@us.ibm.com or ibmusm11(hoffnagl)
http://www.almaden.ibm.com/journal/

========================================================================

                       Methods & Tools Newsletter

Methods & Tools is a free PDF & HTML newsletter for software developers.
The Spring 2000 issue content includes:

   o  Practical Experience in Automated Testing
   o  Understanding Use Case Modeling
   o  GUI Testing Checklist

To download or read the current issue or to subscribe go to this URL:

        

========================================================================

       Automated Software Testing -- A Perspective (Part 2 of 2)

                                   by

                              Kerry Zallar

                          Work With Developers

The same approach should be applied at each subsequent level of testing.
Apply test automation where it makes sense to do so. Whether homegrown
utilities are used or purchased testing tools, it's important that the
development team work with the testing team to identify areas where test
automation makes sense and to support the long-term use of test scripts.

Where GUI applications are involved the development team may decide to
use custom controls to add functionality and make their applications
easier to use. It's important to determine if the testing tools used can
recognize and work with these custom controls. If the testing tools
can't work with these controls, then test automation may not be possible
for that part of the application. Similarly, if months and months of
effort went into building test scripts and the development team decides
to use new custom controls which don't work with existing test scripts,
this change may completely invalidate all the effort that went into test
automation. In either case, by identifying up front in the application
design phase how application changes affect test automation, informed
decisions can be made which affect application functionality, product
quality and time to market. If test automation concerns aren't addressed
early and test scripts cannot be run, there is a much higher risk of
reduced product quality and increased time to market.

Working with developers also promotes building testability 'hooks' into
the code. With these hooks, testing can sometimes be made more specific
to an area of code, and some tests can be performed which otherwise
could not be performed if the hooks were not built.

Besides test drivers and capture/playback tools, code coverage tools can
help identify where there are holes in testing the code.  Remember that
code coverage may tell you if paths are being tested, but complete code
coverage does not indicate that the application has been exhaustively
tested. For example, it will not tell you what has been 'left out' of
the application.
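
For illustration only, here is a minimal sketch of that caveat (the
function, figures, and tests below are hypothetical, not taken from any
particular project or tool):

    # Hypothetical example: full line coverage, yet a requirement is
    # "left out".  Requirement: orders of $1000 or more get a 10%
    # discount; orders over $5000 get 20%.  The 20% tier was never
    # written.

    def discounted_total(amount):
        if amount >= 1000:
            return amount * 0.90   # the only discount branch written
        return amount

    # These two tests execute every line (100% coverage) and pass, but
    # no coverage tool can flag the missing $5000 tier -- that code
    # simply does not exist.
    assert discounted_total(500) == 500
    assert discounted_total(2000) == 1800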

                            Capture/Playback

Here's just a note on capture/replay. People should not expect to
install the testing tool, turn on the capture function and begin
recording tests that will be used forever and ever. Capturing keystrokes
and validating data captured within the script will make the script hard
to maintain. Higher-level scripts should be designed to be modular,
with options to run several test scripts. The lower level test scripts
that actually perform tests also should be relatively small and modular
so they can be shared and easily maintained. Input data should not be
hard coded into the script, but rather read from a file or spreadsheet,
looping through the module as many times as you wish to test with
variations of data. The expected results should also reside in a file
or spreadsheet and be read in at the time of verification. This method
considerably shortens the test script, making it easier to maintain
and possibly reuse in other test scripts. Bitmap comparisons
should be used very sparingly. The problem with bitmap comparison is
that if even one pixel changes in the application for the bitmap being
compared, the image will compare as a mismatch even if you recognize it
as a desirable change and not a bug. Again, the issue is maintainability
of the test suite.
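
To make the data-driven idea concrete, here is a minimal sketch in
Python (the CSV file layout and the apply_input helper are invented
assumptions, since the article names no particular tool; a real script
would call the testing tool's own API at that point):

    # Minimal data-driven test loop: inputs and expected results live
    # in a file, not in the script, so data can vary without editing
    # code.
    import csv

    def apply_input(value):
        # Hypothetical stand-in for driving the application under test.
        return value.upper()

    def run_data_driven_test(data_file):
        failures = []
        with open(data_file, newline="") as f:
            for row in csv.DictReader(f):      # one test case per row
                actual = apply_input(row["input"])
                if actual != row["expected"]:
                    failures.append((row["input"], row["expected"], actual))
        return failures

A file with the header "input,expected" then drives as many variations
of data as there are rows, and the expected results are read in at the
time of verification, exactly as described above.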

Capture/playback functionality can be useful in some ways.  Even when
creating small modular scripts it may be easier to first capture the
test, then go back and shorten and modify it for easier maintenance. If
you wish to create scripts that provide immediate payback, and you
don't care whether they're maintainable, then using capture/playback
can be a very quick way to create the automated test.
These scripts typically are thrown away and rebuilt later for long term
use. The capture/playback functionality is also good to use during the
design phase of a product if there's a prototype developed.  During
usability testing, which is an application design technique, users sit
at the computer using a mock up of the actual application where they're
able to use the interface, but the real functionality has not yet been
built. By running the capture/playback tool in capture mode while the
users are 'playing' with the application, the recorded keystrokes and
mouse movements show where the users move through the system. Reading
these captured scripts helps the designers understand the level of
difficulty in navigating through the application.

                                Players

Test automation is not just the responsibility of the testers. As noted,
getting developers involved is important as well as getting the
understanding and support of management. Since test automation is an
investment, it's important that management understand the up-front
costs and expected benefits, so that test automation stays around long
enough to show those benefits.  There is a tendency to 'give up' when
results are not shown right away.
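
As a back-of-the-envelope illustration of that up-front-cost argument
(all figures below are hypothetical, chosen only to show the shape of
the calculation):

    # Hypothetical break-even estimate for a test automation investment.
    build_hours     = 200   # one-time cost to design and script the suite
    manual_hours    = 40    # cost of one manual regression pass
    automated_hours = 4     # cost of one automated pass (setup + review)

    saved_per_run = manual_hours - automated_hours
    runs_to_break_even = build_hours / saved_per_run
    print(f"Break-even after {runs_to_break_even:.1f} regression runs")
    # -> about 5.6 regression runs; before that point the investment
    #    shows no visible payback, hence the temptation to 'give up'.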

If the project is just beginning with test automation, then having
someone who can champion the test automation effort is important. This
person should have a background in software development (preferably a
coding background). This 'champion'
is responsible for being the project manager of the test automation
effort. This person needs to interact well with both the testers and the
application developers. Since this person may also be actively involved
with writing scripts as well, good development skills are also
desirable. This person should not be involved with the design of test
cases or manual testing other than to review other team members' work.
Typically there is not enough time to both design test cases and design
test automation, nor is there time for the same person to build test
scripts and run manual tests. Where the testing effort is large, the
distinction between these two roles applies to teams of automators and
testers as well. Too many times test automators are borrowed to
perform manual testing, never to realize the benefits of test
automation in the current or future releases of the application.

This is not to say that the role of testers is reduced. Test planning
still needs to be done by a test lead, test cases still need to be
designed and manual testing will still be performed. The added role for
these testers is that they most likely will begin to run the automated
test scripts.  As they run these scripts and begin to work more closely
with the test automation 'champion' or test automators, they too can
begin to create scripts as the automated test suite matures.

Experience has shown that most bugs are not found by running automated
tests. Most bugs are found in the process of creating the scripts, or
the first time the code is tested. What test automation mostly buys you
is the opportunity to not spend valuable man-hours re-testing code that
has been tested before, but which has to be tested in any case because
the risk is too high not to test it. The other benefit comes from the
opportunity to spend these man-hours rigorously testing new code for the
first time and identifying new bugs. Just as testing in general is not a
guarantee, but a form of insurance, test automation is a method to have
even more insurance.

                          Some Nuts and Bolts

When learning to use testing tools it's common to make mistakes. One way
to mitigate these mistakes is to create scripts that will provide
immediate payback. That is, create scripts which won't take too much
time to create yet will obviously save manual testing effort. These
scripts will be immediately useful and it's all right if they're not
intended to be part of the permanent test suite. Those creating the
scripts will learn more about the tool's functionality and learn to
design even better scripts. Not much is lost if these scripts are thrown
away since some value has already been gained from them. As experience
is gained with the testing tool, a long-term approach to test automation
design can start to be developed.

Again, start off small when designing scripts. Identify the functional
areas within the application being tested. Design at a higher level how
each of these functional areas would be automated, then create a
specific automated test design for one of the functional areas. That is,
what approach will be used to create scripts using test cases as the
basis for automating that function? If there are opportunities to use
common scripting techniques with other testing modules, then identify
these common approaches as potential standards; such standards would
be useful in creating maintainable scripts.
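
One possible shape for such a standard is sketched below (the module
and function names are invented for illustration; any real suite would
substitute its own functional areas):

    # Sketch of a higher-level modular driver: each functional area of
    # the application gets its own small script, registered here.

    def test_login():   ...    # scripts for the 'login' area
    def test_orders():  ...    # scripts for the 'orders' area
    def test_reports(): ...    # scripts for the 'reports' area

    FUNCTIONAL_AREAS = {
        "login":   test_login,
        "orders":  test_orders,
        "reports": test_reports,
    }

    def run_suite(selected=None):
        # Run every area, or only the ones named; the common driver is
        # the shared standard each area's scripts plug into.
        for name, test in FUNCTIONAL_AREAS.items():
            if selected is None or name in selected:
                test()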

Use a similar approach to design and create scripts for some of the
other functional areas of the application. As more experience is
gained, designing and building scripts to test the integration of these
functional areas becomes the next step in building a larger and more
useful testing suite.

Since the purpose of automating testing is to find bugs, validations
should be made as tests are performed. At each validation point there is
a possibility of error. Should the script find an error, logic should be
built into it so that it can not only report the error it found but also
route back to an appropriate point within the automated testing suite so
that the automated testing can continue on. This is necessary if
automated tests are to be run overnight successfully. This part of test
automation is the 'error recovery process'. This is a significant effort
since it has to be designed in for every validation point. It's best to
design and create reusable error recovery modules that can be called
from many validation points in many scripts.  Related to this are the
reports that get generated from running the tests. Most tools allow you
to customize the reports to fit your reporting needs.
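
A reusable error-recovery module might be sketched as follows (the
recover_to_known_state routine is an assumed placeholder; a real one
would navigate the application back to a known base state):

    # Reusable validation-with-recovery routine, callable from any
    # validation point so an overnight run survives individual failures.
    import logging

    def recover_to_known_state():
        # Hypothetical placeholder: close dialogs, log back in, or
        # otherwise return the application to its base state.
        pass

    def validate(description, actual, expected):
        if actual == expected:
            return True
        # Report the error, then route back so the suite continues on.
        logging.error("FAILED: %s (expected %r, got %r)",
                      description, expected, actual)
        recover_to_known_state()
        return False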

It's also important to write descriptive comments in the test scripts
to help those who will maintain them. Write the scripts with the
belief that someone else will be maintaining them.

In the automated test design, or in comments within the test scripts,
also identify any manual intervention necessary to set up the test
environment or test data in order to run the scripts. Perhaps databases
need to be loaded or data has to be reset.

                               Test Data

I know of three ways to have the test data populated so that the test
environment is setup correctly to run automated tests. If complete
control of the test environment is available to testers, then reloading
preset databases can be a relatively quick way to load lots of data.
One danger in having several preset databases is that a future release
may require a reconstruction of the data structures, making the effort
to convert the preset databases to the desired state a large one.

Another method of setting up the data is to create test scripts which
run and populate the database with the necessary data to be used in
automated tests. This may take a little longer to populate, but there's
less dependency on data structures. This method also allows more
flexibility should other data change in the database.

Even though I mention 'databases' specifically, the concepts apply to
other types of data storage as well.
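
As a sketch of the scripted-population method, here Python's built-in
sqlite3 module stands in for whatever data store is actually used, and
the schema and rows are invented for illustration:

    # Scripted test-data population: the script creates what the
    # automated tests need, so it tolerates data-structure changes
    # better than reloading a preset database dump.
    import sqlite3

    def populate_test_data(db_path="test.db"):
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS customers "
                     "(id INTEGER PRIMARY KEY, name TEXT, credit INTEGER)")
        conn.executemany(
            "INSERT INTO customers (name, credit) VALUES (?, ?)",
            [("Test Customer A", 1000), ("Test Customer B", 0)],
        )
        conn.commit()
        conn.close()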

Other people with test automation experience have used randomly
generated data. Personally, I have no experience using randomly
generated data, but this is another option worth looking into if you're
looking for other ways to work with data.
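
For completeness, a minimal sketch of what randomly generated data can
look like (the field choices are invented; the one essential trick is
seeding the generator so a failing run can be reproduced):

    # Minimal random test-data generator.
    import random
    import string

    def random_customer(rng):
        name = "".join(rng.choices(string.ascii_uppercase, k=8))
        credit = rng.randint(0, 10000)
        return (name, credit)

    rng = random.Random(42)      # fixed seed => reproducible data
    customers = [random_customer(rng) for _ in range(100)]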

                            Potential Risks

Some common risks to the test automation effort include management and
team member support fading after not seeing immediate results,
especially when resources are needed to test the current release.
Demanding schedules will put pressure on the test team, project
management and funding management to do what it takes to get the latest
release out. The reality is that the next release usually has the same
constraints and you'll wish you had the automated testing in place.

If contractors are used to help build or champion the test automation
effort because of their experience, there is the risk that much of the
experience and skills will 'walk away' when the contractor leaves. If a
contractor is used, ensure there is a plan to back fill this position
since the loss of a resource most likely will affect the maintenance
effort and new development of test scripts. It's also just as important
that there is a comprehensive transfer of knowledge to those who will be
creating and maintaining the scripts.

Since the most significant payback from running automated tests comes
from future releases, consider how long the application being tested
will remain in its current state. If a rewrite of the application is
planned in the near future or if the interface is going to be
overhauled, then it probably makes sense to use test automation only
for immediate payback. Again, here's where working with application
designers and developers can make a difference, especially if internal
changes are planned which may not appear to affect the testing team, but
in reality can affect a large number of test scripts.

                                Summary

As mentioned earlier, most of the concepts identified here came from
experience, and, as also noted, there are not a lot of hard facts to
back up these ideas. The intent here wasn't to prove that any
particular technique works, but rather to share methods that appear to
be more successful. If nothing else, this information can be used to
look at test automation from a slightly different perspective and to
assist in planning.

If you have had experiences different from these that you've found
successful, or if you've experienced hardships using some of these
recommendations, I'd be grateful to hear from you. Many people,
including myself, are interested in finding out what really works in
creating higher quality software more quickly.

========================================================================

                  Call For Papers/Presentations: QWE2K

    4th Annual International Software & Internet Quality Week/Europe
                          20-24 November 2000
                           Brussels, Belgium
              CONFERENCE THEME: Initiatives For The Future

                  

ABOUT QWE2000

QWE2000 is the fourth in the continuing series of International Software
& Internet Quality Week/Europe Conferences that focus on advances in
internet & software test technology, quality control processes, software
system safety and risk management, WebSite performance and reliability,
and improved software process.

QWE2000 papers are reviewed and selected by a distinguished
International Advisory Board made up of industrial and academic experts
from Europe and North America.  The QWE2000 Conference is sponsored by
SR/Institute, Inc.

The QWE2000 Conference Theme, Initiatives For The Future, focuses
attention on the opportunities for improvement and advancement in the
internet and client/server fields for the coming decades.

The mission of the QWE2000 Conference is to increase awareness of the
importance of internet & software quality and the methods used to
achieve it.  QWE2000 seeks to promote internet & software quality by
providing technological education and opportunities for information and
exchange of experience within the software development and testing
community.

PAPERS/PRESENTATIONS WANTED

QWE2000 is soliciting 45-minute and 90-minute presentations, full &
half-day standard seminar/tutorial proposals, 90-minute mini-tutorial
proposals, and proposals for participation in a panel and "hot topic"
discussions on any area of internet & software testing and automation,
including:

      E-Commerce Quality Technology           Test Data Generation
      CASE/CAST Technology                    Test Documentation Standards
      Client-Server Computing                 Defect Tracking/Monitoring
      GUI Test Technology                     Load Generation and Analysis
      Integrated Environments                 New and Novel Test Methods
      ISO-9000                                Mature Software Processes
      Automated Inspection Methods            Object Oriented Testing
      Cost/Schedule Estimation                Process Assessment/Improvement
      WebSite Capacity Checking               Productivity and Quality Issues
      Real-World Experience                   Real-Time Software
      Reliability Studies                     UML Based Requirements Analysis
      Risk Management                         Test Management Automation
      WebSite Quality and Reliability         Multi-Threaded Systems
      Test Automation                         Test Planning Methods
      Test Policies and Standards             WebSite Testing

IMPORTANT DATES:

        Abstracts and Proposals Due:            30 June 2000
        Notification of Participation:           1 August 2000
        Camera Ready Materials Due:             22 September 2000

FINAL PAPER LENGTH:

Accepted final papers should be limited to 10-20 pages, including Text,
Slides and/or View Graphs (Transparencies).  There is a strict size
limit for the printed Conference Proceedings.  The CD-ROM that will
accompany the printed Conference Proceedings has relaxed limits.

SUBMISSION INFORMATION:

Abstracts and session proposals should be 1-2 pages long, and should
provide enough detail to give the QWE2000 International Advisory Board
an understanding of the final paper/presentation.

Please include with your submission:
   o  The paper title, complete postal mailing and e-mail address(es),
      and telephone and FAX number(s) of each author.
   o  The primary author -- assumed to be the corresponding author --
      should be named first.
   o  Three keywords or key phrases that describe the paper.
   o  A brief biographical sketch of each author.
   o  One photo [of the primary author].

Please indicate if your target audience for your paper/presentation is:
   o  Application Oriented
   o  Management Oriented
   o  Technical or Technology Related
   o  Internet/Web Oriented

Also, please indicate if the basis of your paper/presentation is:
   o  Work Experience
   o  Opinions/Perspectives
   o  Academic Research

You can complete your submission in a number of ways:

   o  Email your abstract and other information to qw@sr-corp.com.  The
      material should be either an ASCII file or in PDF format.  Be sure
      to include all of your contact information.  (This method is
      preferred.)

   o  Mail your abstract (or send any other questions you may have) to:

              Ms. Rita Bral
              Executive Director
              Software Research Institute
              1663 Mission St. Suite 400
              San Francisco, CA  94103  USA

              Email: qw@sr-corp.com

For Exhibits and Vendor Registration for the QWE2000 Conference, please
call or FAX or Email your request to the attention of the QWE2000
Exhibits Manager.  You should contact the QWE2000 team as early as
possible because exhibit space is strictly limited.

========================================================================

                            Call for Papers
        Workshop on Distributed Communities on the Web (DCW2000)
                          Quebec City (Canada)
                            19-21 June 2000

The complete scientific program may be found at:

        

Web communities are groupings of objects and subjects that are capable
of communicating, directly or indirectly, through the medium of a shared
context. To support communities on a wide scale will require
developments at all levels of computing, from low-level communication
protocols supporting transparent access to mobile objects, through to
distributed operating systems, through to high-level programming models
allowing complex interactions between objects.

This workshop will bring together researchers interested in the
technical issues related to the support of web communities.

The Registration Form, travel and accommodation information may be found
at:

        

Contact:

Prof. Gilbert Babin                       Departement d'Informatique
3719 pav. Adrien-Pouliot                  Ph:  +1 (418) 656-3395
Universite Laval, Quebec CANADA G1K 7P4   FAX: +1 (418) 656-2324
    babin@babs.ift.ulaval.ca

========================================================================
      ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------
========================================================================

QTN is E-mailed around the middle of each month to over 9000 subscribers
worldwide.  To have your event listed in an upcoming issue, E-mail a
complete description and full details of your Call for Papers or Call
for Participation to "ttn@sr-corp.com".

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the QTN issue date.  For example,
  submission deadlines for "Calls for Papers" in the January issue of
  QTN On-Line should be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; QTN disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR
logo are trademarks or registered trademarks of Software Research, Inc.
All other systems are either trademarks or registered trademarks of
their respective companies.

========================================================================
          -------->>> QTN SUBSCRIPTION INFORMATION <<<--------
========================================================================

To SUBSCRIBE to QTN, to CANCEL a current subscription, to CHANGE an
address (a CANCEL and a SUBSCRIBE combined) or to submit or propose an
article, use the convenient Subscribe/Unsubscribe facility at:

         .

Or, send E-mail to "qtn@sr-corp.com" as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:

           subscribe your-E-mail-address

   TO UNSUBSCRIBE: Include this phrase in the body of your message:

           unsubscribe your-E-mail-address

   NOTE: Please, when subscribing or unsubscribing, type YOUR email
   address, NOT the phrase "your-E-mail-address".

		QUALITY TECHNIQUES NEWSLETTER
		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Email:     qtn@sr-corp.com
		Web:       <http://www.soft.com/News/QTN-Online>

                               ## End ##